

20201217
0745-0833
Begin
Search “python create a button”
Watch https://www.youtube.com/watch?v=30oWH6yKuB4, “Python 101 – Making a Button” by HowCode [← Useless.]
Search “tkinter”
Read “Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit, and is Python’s de facto standard GUI. Tkinter is included with standard Linux, Microsoft Windows and Mac OS X installs of Python. The name Tkinter comes from Tk interface.”
Try using “from tkinter import *” within a function.
Fail “‘from tkinter import *’ only allowed at module level”.
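For reference, a minimal sketch of the restriction (my example, not from the search results): a star-import is a SyntaxError inside a function, but a plain module import works anywhere.

import tkinter as tk  # fine at module level or inside a function

def make_window():
    # from tkinter import *  # SyntaxError: import * only allowed at module level
    root = tk.Tk()  # qualify names through the module instead
    return root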
Try code as written.
Fail “name ‘exit’ is not defined”.
Watch https://www.youtube.com/watch?v=yuuDJ3-EdNQ, “Creating Buttons With TKinter – Python Tkinter GUI Tutorial #3” by Codemy.com
Try code as written.
Success.

from tkinter import *

root = Tk()  # the main application window

def myClick():
    # Create and display a new label each time the button is clicked.
    myLabel = Label(root, text="Look! I clicked a Button!")
    myLabel.pack()

# padx/pady size the button; fg/bg set text and background colors (hex works).
myButton = Button(root, text="Click Me!", padx=50, pady=50,
                  command=myClick, fg="white", bg="#000000")
myButton.pack()

root.mainloop()  # start the event loop
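
Note: command=myClick passes the function object itself (no parentheses), so Tkinter can call it on each click; command=myClick() would call the function once immediately and pass its return value instead.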

Keywords: button, button color, RGB, button size, button action.

0833-0937
Read some of:
https://realpython.com/python-gui-tkinter/
https://tkdocs.com/tutorial/widgets.html
https://tkdocs.com/tutorial/fonts.html
https://tkdocs.com/tutorial/styles.html
https://tkdocs.com/tutorial/idle.html
https://tkdocs.com/tutorial/morewidgets.html
https://tkdocs.com/tutorial/concepts.html
https://tkdocs.com/tutorial/index.html
https://www.activestate.com/blog/top-10-must-have-python-packages/
https://www.activestate.com/blog/neural-network-showdown-tensorflow-vs-pytorch/

0937-1234
Watched first 2.5 videos of https://developers.google.com/machine-learning/crash-course
Played with:
http://playground.tensorflow.org/#activation=sigmoid&batchSize=3&dataset=spiral&regDataset=reg-plane&learningRate=0.3&regularizationRate=0&noise=5&networkShape=8&seed=0.49245&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=true&xSquared=true&ySquared=true&cosX=false&sinX=true&cosY=false&sinY=true&collectStats=false&problem=classification&initZero=false&hideText=false&dataset_hide=false
Note:
Both network width (neurons per layer) and depth (number of hidden layers) helped with ReLU and Tanh activation. I didn’t find a purpose for linear activation.
With sigmoid activation, extra hidden layers actually killed the learning, likely because sigmoid gradients shrink as layers stack (vanishing gradients). Sigmoid with a single hidden layer seemed best for the spiral dataset.
Sigmoid refers to an s-curve; the logistic sigmoid is σ(x) = 1 / (1 + e^(−x)).
Learning rates of >= 1 were useless. Learning too fast means the weights (think myelination) change too quickly, leading to clunky approximation (see the sketch after these notes).
Learning rates of < 0.01 were unpleasant because the weights took too long to update.
Slower learning led to more elegant weighting (smaller weights overall).
Limiting the number of input features could help decrease calculation time if you knew which ones to choose.
Having too many input features (6) was far superior to having too few (1). Lacking a key input feature hamstrung the learning.
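
A minimal sketch of the learning-rate effect (my addition, not from the playground): plain gradient descent on f(w) = w**2, where each step multiplies w by (1 - 2*lr).

def descend(lr, steps=20):
    # Minimize f(w) = w**2 by gradient descent; the gradient is 2*w.
    w = 1.0
    for _ in range(steps):
        w -= lr * 2 * w
    return w

for lr in (1.1, 0.3, 0.001):
    print("lr=%s: w after 20 steps = %.6f" % (lr, descend(lr)))
# lr=1.1 diverges (step factor -1.2), lr=0.3 converges quickly (factor 0.4),
# lr=0.001 barely moves (factor 0.998) -- matching the playground observations.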

4hrs 50mins, 3hrs 10mins left – 1614 quit if no breaks
1324-1406
Read https://www.activestate.com/blog/neural-network-showdown-tensorflow-vs-pytorch/
Try code as written.
Fail “No module named ‘torch'”.
Read https://pytorch.org/.
Try “conda install pytorch torchvision -c pytorch” in Spyder.
Feedback no visible response after a little while.
Switch to reading.
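
A quick check I could have run to see whether the install had finished (my addition, not something I ran at the time):

import torch
print(torch.__version__)  # raises ModuleNotFoundError until the install completes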

1406-1442
Read “The Pragmatic Programmer” by David Thomas and Andrew Hunt, Ch1.

1443-1457
Success, PyTorch appears to have installed while I was reading.
Try running code again.
Fail “NameError: name ‘data_x’ is not defined”. This will also be true of ‘data_y’.
Try commenting out original code and swapping in x_data and y_data.
Success. Received a message telling me to update my NVIDIA driver.

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autog

# Three of the four OR truth-table rows; [1,1] is not in the training data.
x_data = np.array([[0,0], [0,1], [1,0]])
y_data = np.array([[0,1,1]]).T
# Variable is a legacy wrapper; in PyTorch >= 0.4 it simply returns a tensor.
x_data = autog.Variable(torch.FloatTensor(x_data))
y_data = autog.Variable(torch.FloatTensor(y_data), requires_grad=False)

in_dim = 2   # two boolean inputs
out_dim = 1  # one output logit
epochs = 15000
epoch_print = epochs // 5  # report progress five times
l_rate = 0.001

class NeuralNet(nn.Module):
    def __init__(self, input_size, output_size):
        super(NeuralNet, self).__init__()
        self.lin1 = nn.Linear(input_size, output_size)  # a single linear layer

    def forward(self, x):
        return self.lin1(x)  # raw logit; the loss applies the sigmoid

model = NeuralNet(in_dim, out_dim)
criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy on raw logits
optimizer = optim.Adam(model.parameters(), lr=l_rate)

for epoch in range(epochs):
    optimizer.zero_grad()  # clear gradients from the previous step
    pred = model(x_data)
    loss = criterion(pred, y_data)
    loss.backward()        # backpropagate
    optimizer.step()       # update the weights
    if (epoch + 1) % epoch_print == 0:
        print("Epoch %d Loss %.3f" % (epoch + 1, loss.item()))

# Evaluate only the three training rows; a positive logit means "true".
for x, y in zip(x_data, y_data):
    pred = model(x)
    print("Input", list(map(int, x)), "Pred", int(pred > 0), "Output", int(y))

Note:
The program didn’t deliver on what I understood to be its only task: determining the 4th state of the OR function. Looking again, the final loop only evaluates the three training inputs, so [1,1] is never queried at all; the variable swap isn’t to blame (see the sketch below).
PyTorch appears to be a thing I can now use.
PyTorch is an alternative to TensorFlow (Google’s neural network software package).
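
A minimal follow-up sketch (my addition, assuming the trained model and imports above) that queries the missing fourth row:

missing = torch.FloatTensor([1, 1])         # the OR input absent from x_data
logit = model(missing)
print("Input [1, 1] Pred", int(logit > 0))  # a positive logit means the model says 1

Whether this prints 1 depends on what the single linear layer generalized from the three training rows.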

1457-1614
Read up to http://neuralnetworksanddeeplearning.com/chap1.html. It promises to recognize handwritten digits with 74 lines of code.

20201217 summary
Tkinter is a built-in means of providing a GUI in Python.
Google offers a crash-course in machine learning: https://developers.google.com/machine-learning/crash-course.
More virtual neurons don’t always improve performance. Activation function, learning rate, input features, etc. matter a great deal to ML performance.
I can use PyTorch.


1c4 – Introduction to Algorithms

1 – Asymptotic Complexity, Peak Finding

Teaching Assistant:

Θ (theta) represents a tight bound on the function’s complexity: if you zoom out on a graph of computation time, which term of the function describing computation time dominates? A Θ bound pins the running time from above and below, up to constant factors.
O (big O) represents an upper bound on the function’s computation time. Because programmers care about the worst case rather than the occasional lucky run, this is the primary measure of computational complexity they care about.
Ω (omega) represents a lower bound on the function’s complexity: the smallest amount of time the program will take to run. Note this may be close to constant if the program gets lucky on certain inputs (see the linear-search example below).
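
A small illustration of the lucky-case point (my example, not the TA’s): linear search is O(n) because the worst case scans every element, yet Ω(1) because a lucky input puts the target first. Since the best and worst cases differ, no single Θ bound covers all inputs; Θ(n) describes only its worst case.

def linear_search(items, target):
    # Worst case (target absent or last): n comparisons, so O(n).
    # Best case (target first): 1 comparison, so Ω(1).
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

print(linear_search([7, 3, 9], 7))  # lucky: one comparison
print(linear_search([7, 3, 9], 9))  # unlucky: scans the whole list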

He notes everything in the course will be: