01 01 – Starting CS

I want to study computer science because

  • I’d like to write a stock trading program (I’ve been told this is so common that it’s a banned topic in some machine learning classes)
  • I’d like to write a chat bot that helps depressed people
  • I’d like to write a program that analyzes potential legislation by flagging its unique words (a bill that mentions a specific location is likely pork) and reporting the frequency of the other words
  • I’d like to write a program that systematically aggregates and translates from non-English sources
  • I’d like to know programming because it can improve the quality of life for anything else I do
  • I’d like to program for a living because it seems like it would pay more than being an auditor

To this end, I’m going to utilize MIT’s OpenCourseWare. Yesterday I found a list of courses at https://ocw.mit.edu/courses/find-by-topic/#cat=engineering&subcat=computerscience. This list was daunting. I want to take 107 courses, more than twice what a lot of degrees would require. (5 courses per semester * 8 semesters = 40 courses; 6 * 8 = 48 courses.) The amount to be consumed is a problem.

My advantages are manifold. I’m not taking the traditional courses in-person. I can drop a class if I don’t think it will be useful. I can proceed at my own pace. I can retake the tests. In some courses, I can take multiple midterm tests from different semesters if I feel like I need the practice. We will see if the advantages can surmount the number of courses I’d like to take.

***

Zipf’s law: the idea that the 2nd most frequently used word appears 1/2 as often as the most frequently used word, the 3rd most frequently used word appears 1/3 as often as the most frequently used word, etc. Generalized, the eponymous law states that the Nth most frequent word appears 1/N as often as the most frequently used word, for N ≥ 2. As “the” is the most frequent word in English, its frequency would then seem to be important.

According to https://www.english-corpora.org/coca/, the frequency of “the” in the COCA (Corpus Of Contemporary American English) is 50,066,305. https://corpus.byu.edu/coca/compare-bnc.asp lists the millions of words in each genre and these add up to 575 million. 50066305 / 575000000 = 0.0870718347826087. 1 / .0870718347826087 = 11.484770046441414. So about every 12th word is “the”.

The Vsauce video on Zipf’s law says the frequency is more like 6%, so about every 16th word is “the”. That video also states the most frequent 100 words account for about half of all words in a work. A word used only once in a work is a “hapax legomenon”. Quizzaciously was, at the time the video was posted, a Google hapax legomenon.

Hapax legomena are important because they constitute the gist of the message, couched among the most frequent words. Additionally, they can make up a significant portion of a work. It stands to reason that deep conversation might similarly be better defined not by frequency but by infrequency.
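These claims are easy to check on any text by counting word frequencies. Below is a rough sketch; the tokenization is naive, and sample.txt is a hypothetical file you would supply.

```python
import re
from collections import Counter

# Naive tokenization: lowercase letters and apostrophes only.
with open("sample.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
total = len(words)
top_100 = counts.most_common(100)
top_word, top_count = top_100[0]

# How often does the most frequent word appear?
print(f"'{top_word}' is about 1 in every {total / top_count:.1f} words")

# Zipf check: the Nth most frequent word should appear roughly 1/N as often as the top word.
for rank, (word, count) in enumerate(top_100[:10], start=1):
    print(f"{rank:2d}. {word:<12} observed {count:>7}  Zipf predicts {round(top_count / rank):>7}")

# Do the 100 most frequent words cover about half the text?
print(f"top 100 words cover {sum(c for _, c in top_100) / total:.0%} of the text")

# Hapax legomena: words that appear exactly once.
hapax = [w for w, c in counts.items() if c == 1]
print(f"{len(hapax)} hapax legomena, e.g. {hapax[:5]}")
```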

***

The Alt code for capital sigma (signifying summation) is Alt + 228. Σ!

***

Today I am starting with Introduction to Electrical Engineering and Computer Science 1. There is a Python Tutorial section which seems to be an indication I should know a certain amount of Python. I don’t know Python. They point to Think Python: How to Think Like a Computer Scientist by Allen B. Downey. Another resource is the official tutorial. Having read the introduction to both, I’m proceeding with Think Python because it seems more user-friendly.

I read chapter 1 of 19 chapters.

***

Computation Structures, Lecture 1: Basics of Information

Information resolves uncertainty about a fact or circumstance. The more uncertainty resolved, the more information conveyed. Claude Shannon defines the information content of an outcome as I(xi) = log2(1 / pi), where the discrete random variable X has N possibilities with probabilities p1 … pN. Here pi is the probability of the specific outcome observed, and taking log2 expresses the information in bits, that is, the number of bits required to encode it.

Common case: faced with N equally probable cases, you receive data narrowing cases to M choices. The probability such data is sent is M * (1 / N). I(data) = log2( 1 / (M * (1 / N)) ) = log2(N / M) bits.

Common case examples:

  • Coin flip => N=2 & M=1. log2(2/1) = 1 bit.
  • A drawn card’s suit => N=4 & M=1. log2(4/1) = 2 bits.
  • Roll of 2 dice => N=36 (the dice are different colors) & M=1. log2(36/1) = 5.17 bits. This is a lower bound on the average number of bits any encoding scheme needs per roll.
  • [mine] roll of 2 identical dice => N=21 & M=1. log2(21/1) = 4.39 bits (though strictly the 21 outcomes aren’t equally likely, so this slightly overstates the information). I did not find a calculator that worked for this, so I’m using Google to type things like “log2(21)”.
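As a sanity check on these numbers, here is a minimal sketch of the I(data) = log2(N/M) formula applied to the examples above; the identical-dice entry treats all 21 outcomes as equally likely, matching the assumption in my bullet.

```python
from math import log2

def information_bits(n_choices, m_remaining=1):
    """Bits received when N equally likely cases are narrowed down to M."""
    return log2(n_choices / m_remaining)

examples = {
    "coin flip": (2, 1),
    "suit of a drawn card": (4, 1),
    "roll of 2 distinguishable dice": (36, 1),
    "roll of 2 identical dice (outcomes treated as equally likely)": (21, 1),
}

for name, (n, m) in examples.items():
    print(f"{name}: {information_bits(n, m):.2f} bits")
```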

Entropy is the average information conveyed per datum received regarding X. That is, H(X) = E(I(X)) = Σ pi * log2(1 / pi), where H is entropy, E is the expected value (the average), and I is the information. In other words, multiply each datum’s information in bits by the datum’s likelihood and add the products.

Entropy provides a yardstick for the average number of bits an encoding scheme uses. If on average the scheme uses fewer bits than the entropy, it cannot be resolving all the uncertainty. If on average it uses more bits than the entropy, the scheme may be inefficient. The average bits per datum should be just over H(X), the scheme communicating everything it needs to as efficiently as possible.
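A short sketch of the entropy formula applied to the earlier examples. It also shows why my 4.39-bit figure for two identical dice was a slight overestimate: those 21 outcomes are not equally likely.

```python
from math import log2

def entropy(probabilities):
    """H(X) = sum over outcomes of p * log2(1/p), skipping zero-probability outcomes."""
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

print(entropy([1/2, 1/2]))               # 1.0 bit   (coin flip)
print(entropy([1/4] * 4))                # 2.0 bits  (suit of a drawn card)
print(entropy([1/36] * 36))              # ~5.17 bits (two distinguishable dice)

# Two identical dice: 6 doubles at 1/36 each, 15 mixed pairs at 2/36 each.
print(entropy([1/36] * 6 + [2/36] * 15)) # ~4.34 bits, a bit below log2(21) = 4.39
```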

Encoding is an unambiguous mapping between bit strings and the set of possible data. There are fixed length and variable length encodings. A variable length encoding can be represented as a binary tree. An example would be compressing text in which certain words occur more frequently than others.

Fixed length encoding assumes equal probability of the characters (or other pieces of information). Similarly, when the probabilities are unknown and there is no evidence that they differ, the programmer defaults to fixed length encoding. Thus a binary tree for a fixed length encoding scheme has all its termini at the same height; otherwise it would be a variable length encoding scheme. Fixed length encoding allows random access, meaning the user can find the Nth character of a string starting at bit {(N-1) * [binary tree height] + 1}, counting bits from 1. The entropy for fixed length encoding over N equally likely symbols is Σ(1/N)log2( 1 / (1/N) ) = log2(N) bits. So for the 128 ASCII characters, that’s 7 bits. Sure enough, log2(128) = 7.
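A small sketch of that random-access property, using a made-up 3-bit fixed-length code (so the tree height, i.e. the code word width, is 3):

```python
# Hypothetical 3-bit fixed-length code for 8 symbols (tree height = 3).
code = {"a": "000", "b": "001", "c": "010", "d": "011",
        "e": "100", "f": "101", "g": "110", "h": "111"}
decode = {bits: sym for sym, bits in code.items()}
WIDTH = 3

message = "badge"
encoded = "".join(code[ch] for ch in message)

# Random access: the Nth symbol (1-indexed) starts at bit (N-1)*WIDTH + 1,
# which is index (N-1)*WIDTH when counting from 0.
n = 3
start = (n - 1) * WIDTH
print(decode[encoded[start:start + WIDTH]])  # 'd', the 3rd character of "badge"
```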

Hexadecimal is often represented by preceding the number with a 0x, e.g. 0x7D1.

N-bit numbers. The highest order bit is the “sign bit”, with 0 meaning positive and 1 meaning negative. The most negative number is 10…000?! So negative numbers are from opposite land? Yep: to negate a two’s complement value, take its bitwise complement and add 1. The C representation of bitwise complement is ~X, where X is the value you want the bitwise complement of.
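Here is a sketch of that “complement and add 1” rule in Python. Python integers are unbounded, so the example masks to a hypothetical 8-bit width to mimic the fixed-width values the lecture describes.

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF for 8 bits

def negate(x):
    """Negate x by taking its bitwise complement and adding 1, within BITS bits."""
    return (~x + 1) & MASK

def to_signed(x):
    """Interpret a BITS-wide bit pattern as a signed two's complement value."""
    return x - (1 << BITS) if x & (1 << (BITS - 1)) else x

print(bin(negate(5)))        # 0b11111011, the bit pattern for -5
print(to_signed(negate(5)))  # -5
print(to_signed(0b10000000)) # -128, the most negative 8-bit number (10...000)
```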

Huffman’s Algorithm problem: “given a set of symbols and their probabilities, construct an optimal variable length encoding”. Huffman’s Algorithm: (1) starting with the two least probable symbols, create a subtree. (2) Continue to make subtrees using the two least probable remaining symbols or subtrees, evaluating a subtree’s likelihood as the sum of its symbols’ likelihoods. (3) Stop when everything has been combined into a single tree. The result is an optimal encoding, constructed from the bottom up.
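A minimal sketch of the algorithm, using Python’s heapq to repeatedly pull out the two least probable subtrees; the symbol probabilities are made up for illustration.

```python
import heapq

def huffman(probabilities):
    """Build an optimal variable-length code by merging the two least likely subtrees."""
    # Each heap entry: (probability, tie-breaker, {symbol: code-so-far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)   # least probable subtree
        p1, i, right = heapq.heappop(heap)  # second least probable subtree
        # Prefix 0 onto one side and 1 onto the other, then merge them.
        merged = {sym: "0" + code for sym, code in left.items()}
        merged.update({sym: "1" + code for sym, code in right.items()})
        heapq.heappush(heap, (p0 + p1, i, merged))
    return heap[0][2]

# Made-up probabilities for illustration: rarer symbols get longer code words.
print(huffman({"A": 0.45, "B": 0.25, "C": 0.15, "D": 0.10, "E": 0.05}))
```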

Huffman’s Algorithm need not be applied to single symbols. It can be applied to pairs of symbols or any other length desired. It can also apply to words.

The professor mentions the LZW algorithm. It’s a phenomenal algorithm, but I don’t understand it yet. I did take an hour or two to mess around with Huffman’s Algorithm. I think I’ve got it.

Hamming distance is the number of bit positions in which two code words differ, i.e. the number of bits that must be flipped to change one valid code word into another valid code word. For instance, if a coin flip is encoded to a single bit, the Hamming distance between heads and tails is 1, so a single bit error is undetectable. If the Hamming distance between two code words is zero, they are the same. If the minimum Hamming distance in an encoding scheme is 2, a single bit error can no longer go undetected.

A simple way of increasing Hamming distance is to add even or odd parity to the code words. For instance, if coin flips are encoded as 1 for heads and 0 for tails, adding an even parity bit to the end would give 11 and 00. If a parity check is run and the word isn’t even, an error has occurred. A two bit error would go undetected. To detect E errors, the minimum Hamming distance must be E + 1.

To correct E errors, set code words a minimum of 2E+1 Hamming distance apart. This is required so that a corruption of up to E bits in one code word can never look like a corruption of up to E bits in any other code word.
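A sketch tying these ideas together: a Hamming distance function, even parity, and the minimum distance of the tiny coin-flip code before and after the parity bit is added.

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length code words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def add_even_parity(word):
    """Append a parity bit so the total number of 1s is even."""
    return word + ("1" if word.count("1") % 2 else "0")

def min_distance(codewords):
    """Minimum Hamming distance over all pairs of code words."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

plain = ["0", "1"]                                 # heads / tails
with_parity = [add_even_parity(w) for w in plain]  # ["00", "11"]

print(min_distance(plain))        # 1 -> a single bit error is undetectable
print(min_distance(with_parity))  # 2 -> detects any single bit error (E + 1 rule)
```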

I don’t think this was an hour long, but it took hours to digest it and type notes. There are 21 lectures, so this would take about 21 days. At first this seemed way too slow, but if you bank on about 120 days in a semester and divide by 6 courses, that’s 20 days per course.

***

Python commands used

  • help() – brings up a help utility
  • [in help()] “keywords” – lists the keywords the help utility can explain
  • [in help()] quit – quits help utility
  • import random – imports the random module
  • random.randint(70, 80) – provides a random integer between 70 and 80, inclusive
  • import math – imports the math module (you get the idea)
  • math.sqrt(4) – gives the square root of 4
  • import calendar
  • calendar.HTMLCalendar(firstweekday=0) – creates an HTMLCalendar object; the interpreter echoes its repr, which includes a hexadecimal memory address. I mistook that for a file and tried to find it by looking at files created today. Forgot that I reinstalled Python today. I can try again tomorrow.
  • print("Hello, world!") – a traditional first step
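For reference, a sketch collecting those commands into a script. HTMLCalendar only builds an HTML string, so the last part writes it out explicitly; calendar.html and the year/month are just example values.

```python
import calendar
import math
import random

print("Hello, world!")

print(random.randint(70, 80))  # random integer between 70 and 80, inclusive
print(math.sqrt(4))            # 2.0

# HTMLCalendar returns HTML as a string; writing a file is up to us.
cal = calendar.HTMLCalendar(firstweekday=0)
with open("calendar.html", "w", encoding="utf-8") as f:
    f.write(cal.formatmonth(2018, 1))  # example: January 2018
```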
