Search “python create a button”
Watch, “Python 101 – Making a Button” by HowCode [← Useless.]
Search “tkinter”
Read “Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit, and is Python’s de facto standard GUI. Tkinter is included with standard Linux, Microsoft Windows and Mac OS X installs of Python. The name Tkinter comes from Tk interface.”
Try using “from tkinter import *” within a function.
Fail “‘from tkinter import *’ only allowed at module level.”
Try code as written.
Fail “name ‘exit’ is not defined”.
Watch, “Creating Buttons With TKinter – Python Tkinter GUI Tutorial #3” by
Try code as written.

from tkinter import *

root = Tk()

def myClick():
    myLabel = Label(root, text="Look! I clicked a Button!")
    myLabel.pack()  # without pack() the label is created but never shown

myButton = Button(root, text="Click Me!", padx=50, pady=50,
                  command=myClick, fg="white", bg="#000000")
myButton.pack()

root.mainloop()  # start the event loop so the window appears and stays open


Keywords: button, button color, RGB, button size, button action.

Read some of:

Watched first 2.5 videos of
Played with:
Adding virtual neurons both vertically (more neurons per layer) and horizontally (more layers) helps with ReLU and Tanh activation. I didn’t find a purpose for linear activation.
With sigmoid activation, horizontal depth (extra layers) actually killed the learning. Sigmoid with a single hidden layer seemed best for the spiral dataset.
Sigmoid refers to an s-curve.
Learning rates of >= 1 were useless. Learning too fast means the weights (think myelinization) increase too quickly leading to clunky approximation.
Learning rates of < 0.01 were unpleasant because the weights took too long to update.
Slower learning led to more elegant weighting (less overall weight).
Limiting the number of input functions could help decrease calculation time if you knew which ones to choose.
Having more input functions (6) was far superior to having too few (1). Lacking a key input function hamstrung the learning.
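The learning-rate observation is easy to reproduce outside the playground. A minimal sketch (my own, not from the course material): plain gradient descent on the toy loss w², whose gradient is 2w. The specific rates are illustrative.

```python
def descend(lr, steps=20, w=1.0):
    """Gradient descent on loss(w) = w**2, whose gradient is 2*w."""
    for _ in range(steps):
        w -= lr * 2 * w  # step against the gradient
    return abs(w)

# A moderate rate shrinks w toward the minimum at 0 each step;
# a rate >= 1 overshoots the minimum by more every step and diverges.
small = descend(0.1)  # converges toward 0
large = descend(1.1)  # blows up
```

Each update multiplies w by (1 − 2·lr), so |1 − 2·lr| < 1 converges and |1 − 2·lr| > 1 diverges — the same “clunky approximation” the playground showed at rates ≥ 1.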

4hrs 50mins, 3hrs 10mins left – 1614 quit if no breaks
Try code as written.
Fail “No module named ‘torch'”.
Try “conda install pytorch torchvision -c pytorch” in Spyder.
Feedback: no visible response after a little while.
Switch to reading.

Read “The Pragmatic Programmer” by David Thomas and Andrew Hunt, Ch1.

Success, pytorch appears to have installed while I was reading.
Try running code again.
Fail “NameError: name ‘data_x’ is not defined”. This will also be true of ‘data_y’.
Try commenting out original code and swapping in x_data and y_data.
Success. Received a message telling me to update my NVIDIA driver.

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autog

x_data = np.array([[0,0], [0,1], [1,0]])
y_data = np.array([[0,1,1]]).T
x_data = autog.Variable(torch.FloatTensor(x_data))
y_data = autog.Variable(torch.FloatTensor(y_data), requires_grad=False)

in_dim = 2
out_dim = 1
epochs = 15000
epoch_print = epochs/5
l_rate = 0.001

class NeuralNet(nn.Module):
    def __init__(self, input_size, output_size):
        super(NeuralNet, self).__init__()
        self.lin1 = nn.Linear(input_size, output_size)
    def forward(self, x):
        return self.lin1(x)

model = NeuralNet(in_dim, out_dim)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=l_rate)

for epoch in range(epochs):
    pred = model(x_data)
    loss = criterion(pred, y_data)
    if (epoch + 1) % epoch_print == 0:
        print("Epoch %d Loss %.3f" %(epoch + 1, loss.item()))

for x, y in zip(x_data, y_data):
    pred = model(x)
    print("Input", list(map(int, x)), "Pred", int(pred > 0), "Output", int(y))

The program didn’t deliver on what I understood to be its only task: determining the fourth state of the OR function. Perhaps I misunderstand, or the variable swap broke it. Looking again, the training loop never calls loss.backward() or optimizer.step(), so the weights never actually update.
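For reference, here is the loop with backpropagation and the optimizer step added (my own sketch, not the tutorial’s code; I shrank the epoch count and raised the learning rate so it converges quickly, and used nn.Linear directly since NeuralNet is just one linear layer):

```python
import torch
import torch.nn as nn
import torch.optim as optim

x_data = torch.tensor([[0., 0.], [0., 1.], [1., 0.]])
y_data = torch.tensor([[0.], [1.], [1.]])

model = nn.Linear(2, 1)                 # same shape as NeuralNet(2, 1)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(), lr=0.1)

for epoch in range(2000):
    pred = model(x_data)
    loss = criterion(pred, y_data)
    optimizer.zero_grad()  # clear gradients left over from the last step
    loss.backward()        # backpropagate to fill in new gradients
    optimizer.step()       # actually update the weights
```

With the updates in place, the three training rows classify correctly, and the learned weights should also give a positive logit for the fourth OR state, [1, 1].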
PyTorch appears to be a thing I can now use.
PyTorch is an alternative to TensorFlow (Google’s neural network software package).

Read up to “It promises to recognize handwritten numbers with 74 lines of code.”

20201217 summary
Tkinter is a built-in means of providing a GUI in Python.
Google offers a crash-course in machine learning:
More virtual neurons don’t always improve performance. Activation type, learning speed, input, etc. matter a great deal to ML performance.
I can use PyTorch.


1c4 – Introduction to Algorithms

1 – Asymptotic Complexity, Peak Finding

Teaching Assistant:

Θ (theta) is a tight bound on the function’s complexity: zoom out on a graph of computation time and ask which term of the function dominates.
O (big O) is an upper bound on the function’s computation time. Since programmers care about the worst case rather than the lucky case, this is the primary measure of complexity they quote.
Ω (omega) is a lower bound on the function’s complexity: the least time the program can take. Note this may stay close to zero if the program gets lucky under certain (or cyclical) conditions.
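The lecture’s peak-finding problem illustrates these bounds. A sketch of the 1-D divide-and-conquer version (my own code, not the lecture’s): an element at least as large as its neighbors is a peak, and halving the search range each step gives Θ(log n) comparisons versus Θ(n) for a straight scan.

```python
def find_peak(a):
    """Return an index i such that a[i] >= each neighbor that exists."""
    lo, hi = 0, len(a) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < a[mid + 1]:
            lo = mid + 1   # rising to the right: a peak must exist on that side
        else:
            hi = mid       # a[mid] >= a[mid + 1]: a peak exists at mid or left
    return lo
```

On [1, 3, 2, 4, 1] it inspects two midpoints and returns index 3 (the 4), never touching the rest of the array.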

He notes everything in the course will be: