Posts

Showing posts from September, 2018

3.3, due October 3

Difficult: The hardest part of the section was definitely the proof of Theorem 3.3.10, about the minimal number of nodes in an AVL tree. That proof took some thinking through. Reflective: One question I had while I was reading was: when you re-balance an AVL tree, is there a maximum of two rotations? By the end of the section I understood that there is not -- two is the worst case for one set of three nodes, but rebalancing one subtree can leave an imbalance higher up, so rotations can cascade all the way to the root.
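The recurrence behind that minimal-node count can be checked numerically. This is my own sketch (the exact statement of Theorem 3.3.10 may differ): an AVL tree of height h with the fewest possible nodes has a root plus minimal subtrees of heights h-1 and h-2.

```python
# Minimal node counts for an AVL tree of height h (my sketch; the book's
# exact statement may differ). The sparsest AVL tree of height h is a root
# plus the sparsest subtrees of heights h-1 and h-2, so:
#   N(h) = 1 + N(h-1) + N(h-2)

def min_avl_nodes(h):
    """Minimum number of nodes in an AVL tree of height h
    (a single node counted as height 0)."""
    if h == 0:
        return 1
    if h == 1:
        return 2
    return 1 + min_avl_nodes(h - 1) + min_avl_nodes(h - 2)

print([min_avl_nodes(h) for h in range(6)])  # grows like the Fibonacci numbers
```

The Fibonacci-like growth is exactly what makes the height of an AVL tree O(log n).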

3.2, due October 1

Difficult: I found the idea of trees, and proofs about them in mathematical terms, a little bit difficult. Proposition 3.2.7 took a second to think about. Honestly though, the section didn't seem that bad. Reflective: I thought it was interesting to see the mathematical description of a tree. What is easily recognizable as a tree after working in computer science for so long suddenly gained logical significance when described in mathematical terms. Now we can do proofs on them!
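One standard characterization we can now check computationally (hedged: the exact content of Proposition 3.2.7 may differ) is that a tree on n vertices is connected, acyclic, and has exactly n - 1 edges:

```python
# A quick check of a standard tree fact (my sketch, not necessarily
# Proposition 3.2.7 itself): a tree on n vertices is connected and has
# exactly n - 1 edges.

def is_connected(n, edges):
    """Breadth-first search from vertex 0 over an undirected edge list."""
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    seen = {0}
    frontier = [0]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return len(seen) == n

tree_edges = [(0, 1), (0, 2), (2, 3), (2, 4)]  # a small tree on 5 vertices
print(is_connected(5, tree_edges), len(tree_edges) == 5 - 1)
```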

3.1, due September 28

Difficult: I thought the section was pretty simple, but I did think the proofs of propositions about graphs were a bit difficult, just because they prove things using patterns of logic that are less familiar. Reflective: I thought it was a pretty amazing result that by squaring the adjacency matrix you can find the number of two-step paths from node i to node j. I don't understand how it works (at least not until I prove it in the homework), but I think it is a pretty cool property.
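The property is easy to check on a tiny example (a sketch of my own; the text likely states it for walks of any length k via A^k):

```python
# Checking the adjacency-matrix property on a small graph: the (i, j) entry
# of A^2 counts the two-step paths from node i to node j.

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Path graph 0 - 1 - 2
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
A2 = mat_mult(A, A)
print(A2[0][2])  # one two-step path from 0 to 2, namely 0 -> 1 -> 2
print(A2[1][1])  # two round trips of length 2 from node 1 (via 0 or via 2)
```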

2.1, due on September 26

Reflective: One question I had was, why is it important to find the analytic bounds on the big-O of a function? For example, Example 2.1.2 shows that 22,026! is between 198,235 and 198,245, which "isn't good enough." Why can we not just compute 22,026!? Is it more difficult to compute factorials than the analytic expressions that bound them? Difficult: Yep, the proof of Stirling's approximation got me. In fact, I had a hard time understanding the very first part. How is n log(n) - n + 1 equal to the integral of log(x) from 1 to n? It seems like the integral should evaluate to log(n) - log(1) = log(n).
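Here is a numeric sanity check of the identity I was stuck on (my own sketch, not the book's proof). The antiderivative of log(x) is x*log(x) - x, not log(x), which is where my confusion came from; and since log is increasing, log(n!) = sum of log(k) is squeezed between two such integrals:

```python
import math

# Sanity check (my sketch): integral_1^n log(x) dx = [x*log(x) - x]_1^n
# = n*log(n) - n + 1, and since log is increasing,
#   integral_1^n log(x) dx  <=  log(n!)  <=  integral_1^(n+1) log(x) dx

n = 10
log_factorial = sum(math.log(k) for k in range(1, n + 1))
lower = n * math.log(n) - n + 1
upper = (n + 1) * math.log(n + 1) - n
print(lower <= log_factorial <= upper)  # True
```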

1.10, due on September 24

Difficult: I actually had a hard time understanding the algorithm for multiplying two numbers digit-by-digit at first. But thinking about it more, I was able to understand it. One thing I still do not understand is how eqn. (1.47) only has 3 long multiplications -- it looks like it has 5. Do the multiplications by factors of 10 not count? Reflective: I thought the reworking of equations that they do in order to make the algorithms a little faster was amazing -- for example, Strassen discovered a way to do matrix multiplication using 7 multiplications rather than 8, which reduced the exponent in the big-O of the algorithm by a little less than 0.2 (from n^3 to about n^2.81). That might sound small, but on large matrices it can result in matrix multiplication in half the time! Pretty cool.
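The three-multiplication trick is, I believe, Karatsuba's method (which may be what eqn. (1.47) is doing -- and yes, shifts by powers of 10 are just digit moves, so they don't count as long multiplications). A hedged sketch:

```python
# Karatsuba's trick (my sketch of what I think eqn. (1.47) expresses):
#   (a*10^m + b)(c*10^m + d)
#     = ac*10^(2m) + ((a+b)(c+d) - ac - bd)*10^m + bd
# Only ac, bd, and (a+b)(c+d) are long multiplications; the powers of 10
# are digit shifts.

def karatsuba(x, y):
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** m)
    c, d = divmod(y, 10 ** m)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    middle = karatsuba(a + b, c + d) - ac - bd
    return ac * 10 ** (2 * m) + middle * 10 ** m + bd

print(karatsuba(1234, 5678) == 1234 * 5678)  # True
```

Three recursive multiplications instead of four is what drops the complexity from O(n^2) to O(n^log2(3)) ≈ O(n^1.585).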

1.9, due September 21

Difficult: I had a hard time understanding the proof of Fermat's little theorem, specifically how the theorem applies to the negative integers. It would be nice to spend some more time on that in class. Its corollaries would also be good to spend some time on. Reflective: I found it amazing that you can compute the modulus of a high-powered number so quickly! And the application of that concept to cryptography is pretty cool -- that it is so easy to encrypt and decrypt something given the key, but so hard without it!
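The speed comes from repeated squaring, which halves the exponent at each step. A sketch (Python's built-in three-argument pow does the same thing):

```python
# Repeated squaring: computing a^n mod p takes about log2(n) steps, because
# the exponent is halved on every pass. Python's built-in pow(a, n, p)
# implements the same idea.

def power_mod(a, n, p):
    result = 1
    a %= p
    while n > 0:
        if n % 2 == 1:       # odd exponent: peel off one factor of a
            result = result * a % p
        a = a * a % p        # square the base
        n //= 2              # halve the exponent
    return result

p = 101  # a prime
print(power_mod(7, p - 1, p))  # 1, as Fermat's little theorem predicts
print(power_mod(7, 10**18, p) == pow(7, 10**18, p))  # True
```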

Sec 1.8 Due Sept 19

Difficult: One part of the reading that I found difficult to grasp until the very end was how the GCD could be a linear combination of the numbers that it divides. It was only after the final example that I understood what they meant when they said it was the minimum value d for d = ax + by. It would be good to see an example that clarifies that early on in class. Reflective: It's pretty amazing that we have a simple algorithm over 2000 years old that finds the GCD of 2 numbers remarkably quickly, without factoring, and yet we still don't have an algorithm that finds the factors of a number quickly.
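The extended Euclidean algorithm makes the linear-combination fact concrete: it produces the x and y with gcd(a, b) = a*x + b*y directly, rather than searching for the minimum positive value of ax + by. A sketch:

```python
# Extended Euclidean algorithm: returns the gcd together with the
# coefficients of the linear combination gcd(a, b) = a*x + b*y.

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a % b)*y = a*y + b*(x - (a // b)*y)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)  # gcd is 2, and 240*x + 46*y == 2
```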

1.7, due on September 17

Difficult: I had a hard time following what Theorem 1.7.19 was trying to say, in simple conceptual terms. It would be nice to spend time on that in class. Reflective: I thought the way they described choosing a committee, as opposed to a presidency, was interesting. They expressed it in terms of a presidency, but divided out the number of possible orderings of a presidency. I had never thought of combinations as a subset of permutations.
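The divide-out-the-orderings idea checks out numerically: ordered selections (permutations) overcount each committee by the k! orderings of its members, so C(n, k) = P(n, k) / k!. A quick check:

```python
import math

# Committee vs. presidency: the k! orderings of each committee are divided
# out of the count of ordered selections, so C(n, k) = P(n, k) / k!.

n, k = 10, 3
presidencies = math.perm(n, k)        # ordered: president, VP, secretary
committees = presidencies // math.factorial(k)
print(presidencies, committees)       # 720 ordered vs 120 unordered
print(committees == math.comb(n, k))  # True
```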

1.5, due on September 14

Difficult: I wasn't sure what the 'last' Python operator was. Also, it seems really laborious to go through a program and count FLOPs. Reflective: On the other hand, the code optimization tricks were awesome! In particular, I thought it was really interesting that because Python is interpreted, we miss out on the compiler optimizations for for loops. So that's why Python for loops are so slow! There were also some cool math tricks -- for example, for a value that is added n times, once each cycle, you can instead multiply that value by n outside the for loop and add it just once. Example 1.5.8 was pretty cool too: for a while loop, instead of calculating something based on the iterator variable each time, you can solve the expression for the iterator variable, compute the max once, and then compare the iterator value to that max each time.
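Both tricks can be sketched in a few lines (my own variable names and numbers, not the book's):

```python
# Two loop-optimization tricks from the reading (my sketch, not the book's
# exact examples).

n, c = 1000, 2.5

# Trick 1 -- slow: add c on every pass through the loop.
total_slow = 0.0
for _ in range(n):
    total_slow += c

# Fast: one multiplication outside the loop.
total_fast = c * n
print(total_slow == total_fast)  # True

# Trick 2 -- instead of testing i*i <= limit on every iteration,
# solve i*i = limit for i once and compare against that precomputed max.
limit = 10**6
i_max = int(limit ** 0.5)  # solving i*i <= limit for i
i = 0
while i <= i_max:          # cheap comparison each pass
    i += 1
print(i_max)
```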

1.4, due on September 12

Difficult: I had a hard time understanding the reindexing based on changing the order of summation. I understand changing the order of summation when the range over which you sum is fixed. But when one set of indices depends on another variable you iterate over, I have a hard time understanding how that transfer of indices works. Reflective: I thought the trick (technique?) of using change of variables to get a difficult-to-sum-over expression in the form of a familiar expression was great. That will be very useful for putting difficult expressions in terms of expressions that we know how to sum over.
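The reindexing can be checked numerically, which helped me trust it (a sketch with made-up terms a(i, j)): when the inner range depends on the outer index, both orders run over the same triangular set of pairs (i, j) with j <= i.

```python
# Checking the swap of summation order over a triangular index set:
#   sum_{i=1}^{n} sum_{j=1}^{i} a(i, j)  =  sum_{j=1}^{n} sum_{i=j}^{n} a(i, j)
# Both sides enumerate exactly the pairs with 1 <= j <= i <= n.

def a(i, j):
    return i * i + 3 * j   # arbitrary terms, just for the check

n = 8
lhs = sum(a(i, j) for i in range(1, n + 1) for j in range(1, i + 1))
rhs = sum(a(i, j) for j in range(1, n + 1) for i in range(j, n + 1))
print(lhs == rhs)  # True
```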

1.3, due on September 10

Difficult: I had a hard time understanding the difference in indexing in Theorem 1.3.10 (The Fundamental Theorem of Finite Calculus) -- why is b-1 used in (1.6) and n-1 used in (1.7)? Also, the whole idea of the difference operator and finite calculus is pretty new and it will take some time to develop the intuition for it. Reflective: The proof of the geometric series formula using the FTFC was super cool. I liked the trick where they multiplied the sum by (r - 1) initially and then distributed it, which allowed them to work with a function that had the difference operator applied to it, then use the FTFC to further simplify.
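The telescoping step can be verified numerically (my sketch of the trick, using exact rational arithmetic so the equality check is exact): multiplying by (r - 1) turns each term into the difference r^(k+1) - r^k, which the FTFC collapses.

```python
from fractions import Fraction

# Checking the (r - 1) trick: each term r^k * (r - 1) = r^(k+1) - r^k is a
# forward difference of f(k) = r^k, so the sum telescopes (this is the FTFC):
#   (r - 1) * sum_{k=0}^{n-1} r^k = r^n - 1

r, n = Fraction(3), 7
s = sum(r ** k for k in range(n))
print((r - 1) * s == r ** n - 1)    # True
print(s == (r ** n - 1) / (r - 1))  # the closed-form geometric series
```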

1.1-1.2, due on September 7

1.1 I read the entire assignment. Difficult: The most difficult part of the material for me was understanding the intuition behind little-o. I understood the mathematical definition, but the intuition behind it is harder for me than the intuition behind big-O (which I'm more familiar with). Reflective: There were a couple of things that were interesting to me. One, I find it interesting that even if f(x) = 10*g(x), f(x) is still O(g(x)). It's pretty cool that big-O is more concerned with asymptotic growth as x -> infinity than with constant multiples of growth. Also, I think it's hugely valuable to be able to analytically determine the time complexity of an algorithm, rather than just giving it a rough guess.

1.2 I read the entire assignment. Difficult: The most difficult part of the material for me was understanding to what point we count operations. It seems kind of redundant to count constant-order operations executed once...

Introduction, due on September 7

Year in school and major: Junior, ACME

Post-Calc Math Courses: Math 313, 314, 334, 341

Why did you choose ACME? I had a strong feeling that I should switch to ACME, so I did.

Most effective math professors: Didn't let questions derail the course of the lecture, and covered all of the material in the section in class.

Least effective math professors: In office hours, breezed over difficult content rather than slowing down and checking for understanding. Essentially, in my experience the best professors are efficient with class time, but take their time to explain the more difficult points in office hours.

Something interesting about me: I got in three car accidents as a teenager, but I grew up with a mom with a do-it-yourself mentality, so I got pretty good at fixing our dented cars.

I have classes from 2-4 MW and 3-4 F, so I can only make one office hour/week. The best alternative time would be 1-2 or 4-5 MWF. Thanks for asking!