
Posts from November, 2018

8.4, due Nov. 30

Difficult: I had a hard time following the proof of the fast Fourier transform (FFT) algorithm. I did think it was neat that we were able to take advantage of mathematical symmetry to improve performance from O(n^2) to O(n log n)! Reflective: I thought it was pretty cool that because the DFT can be represented as an orthonormal matrix, it is not only invertible, but very easy to invert!
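To make the symmetry concrete for myself, I wrote a minimal sketch of the radix-2 FFT (my own toy version, not the book's implementation), along with a check that the normalized DFT matrix is unitary, so its inverse is just its conjugate transpose:

```python
import numpy as np

def fft(x):
    """Recursive radix-2 FFT (toy version; len(x) must be a power of 2).
    The symmetry: the length-n DFT splits into DFTs of the even- and
    odd-indexed halves, each reused for both halves of the output."""
    n = len(x)
    if n == 1:
        return x
    even, odd = fft(x[::2]), fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + twiddle, even - twiddle])

# The normalized DFT matrix is unitary, so its inverse is just its
# conjugate transpose -- that's why inversion is so easy.
n = 8
k = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
print(np.allclose(F.conj().T @ F, np.eye(n)))  # True

x = np.random.rand(n)
print(np.allclose(fft(x), np.fft.fft(x)))      # True
```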

8.3, due Nov. 28

Difficult: My lack of understanding in this chapter began when I couldn't understand the first equality in Lemma 8.3.1... and I still hadn't caught back up by the end of the chapter. Looking forward to class tomorrow! Reflective: I didn't understand this section well enough to adequately reflect on it, but the formula for the Dirichlet kernel looked very similar to the truncated Fourier series. Lemma 8.3.6 showed their relationship, but I guess I'm still missing the intuition for what a Dirichlet kernel is and what its purpose is.
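After writing this, I tried to piece together the relationship myself. If I have the standard definitions right (the book's normalization may differ), the Dirichlet kernel packages all 2n+1 frequencies into a single function, and the truncated Fourier series is just a convolution with it:

```latex
D_n(x) = \sum_{k=-n}^{n} e^{ikx}
       = \frac{\sin\left( (n + \frac{1}{2}) x \right)}{\sin(x/2)}

S_n f(x) = \sum_{k=-n}^{n} c_k e^{ikx}
         = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t)\, D_n(x - t)\, dt
```

Maybe that convolution is the "purpose" I was missing.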

8.1, due Nov. 26

Difficult: I had a hard time distinguishing between k and c_k. As far as I understand, the c_k's are the Fourier coefficients and the k's are the indices...? I'm not sure. Some extra explanation on that would be helpful. Reflective: I can already see how powerful this could be. Anywhere you see a periodic signal, a Fourier transform will give extremely valuable information. I saw an interesting application of this in my deep learning class, where the position of a word in a sentence was encoded by modulating its input embedding at a frequency determined by that position. The neural network then learned to perform a Fourier transform to use the word's position as a feature in its learning.
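Writing out the definitions helped me keep the two straight (this is one common convention; the book's normalization may differ). The k's index the frequencies, and each c_k is the weight attached to frequency k:

```latex
f(x) = \sum_{k=-\infty}^{\infty} c_k e^{ikx},
\qquad
c_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-ikx}\, dx
```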

7.3, due Nov. 20

Difficult: The section wasn't too bad; I thought the formula for the probability of a hash collision was difficult to understand, but that's about it. Reflective: I was already aware of hashing, its motivations, and its methods. I hadn't thought of it in terms of math before, though -- a hash is really just a function, and if it is not injective then you will run into hash collisions.
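My guess is that the collision formula is the birthday-problem calculation, so I sketched it to check my reading (this is my own toy function, not the book's):

```python
def collision_probability(n, m):
    """Probability that at least two of n uniformly hashed keys
    collide among m buckets (the birthday-problem calculation --
    my guess at what the section's formula computes)."""
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (m - i) / m
    return 1.0 - p_no_collision

# With only 23 keys and 365 buckets, a collision is already more
# likely than not:
print(collision_probability(23, 365))  # ~0.507
```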

7.2, due Nov. 19

Difficult: This section was tough for me. I had a hard time grasping what they meant when they talked about integrating over a similar distribution to approximate the true distribution, and what it meant when they took the ratio of the pdf of Y over the pdf of X. Reflective: I've been enjoying this course quite a lot, but it would be really nice to have a break in the homework -- just one day to let us catch up, if you'd be willing to consider it.
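After staring at it some more, here's my best reading of the idea (importance sampling; I may even have had the ratio upside down above): to estimate an expectation under X, draw samples from an easier distribution Y and reweight each one by the pdf ratio f_X / f_Y. A minimal sketch of my own:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

h = lambda x: x > 3                 # estimate P(X > 3) for X ~ N(0, 1)
f_X = stats.norm(0, 1).pdf          # target pdf (hard to hit the tail)
f_Y = stats.norm(3, 1).pdf          # proposal pdf centered near the tail
y = rng.normal(3, 1, size=100_000)  # samples from Y

# Reweight samples from Y by f_X / f_Y to get an expectation under X.
estimate = np.mean(h(y) * f_X(y) / f_Y(y))
print(estimate, 1 - stats.norm.cdf(3))  # both ~0.00135
```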

7.1, due Nov. 15

Difficult: I wasn't sure what the [a, b] notation means -- e.g. lambda([a, b]), the integral over [a, b], etc. I also had a hard time seeing what we can take from this section as far as concrete theorems... there seem to be examples, but nowhere does it discuss much of the theory behind MC methods -- just that they work. Reflective: I thought it was pretty cool that we can use Monte Carlo methods when all other methods fail -- they are such a versatile numerical tool.
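My best guess at the notation is that lambda([a, b]) is the Lebesgue measure (i.e. the length) of the interval, b - a, which is exactly the scaling factor in the basic Monte Carlo estimate. A tiny sketch of my own to test that reading:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b]: average f at uniform
    samples, then scale by the measure of the interval, b - a."""
    x = rng.uniform(a, b, size=n)
    return (b - a) * np.mean(f(x))

print(mc_integrate(np.sin, 0, np.pi))  # ~2.0 (exact answer is 2)
```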

6.5, due Nov. 12

Difficult: I could definitely use an explanation of why we would choose a Beta distribution as our prior for a Bernoulli distribution. What determines which distribution we use as a prior? Do we even have to choose a distribution as a prior? Reflective: I feel like I have been able to do the problems in this chapter because I knew which section they pertained to. But if I were given a fresh problem, I wouldn't necessarily know what tools to apply to it. Would it be possible in our review session in class on Wednesday to go through a list of problems and figure out which theorems and known distributions to apply to them?
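After poking around, I think the answer to my first question is conjugacy (this is my reconstruction, not the book's wording): a Beta prior times a Bernoulli likelihood gives back a Beta, so the posterior stays in closed form and the data just shift the parameters:

```latex
% Prior: \theta \sim \mathrm{Beta}(a, b)
p(\theta) \propto \theta^{a-1} (1 - \theta)^{b-1}

% Likelihood of n Bernoulli(\theta) trials with s successes:
p(x \mid \theta) \propto \theta^{s} (1 - \theta)^{n-s}

% Posterior: still a Beta, with updated parameters.
p(\theta \mid x) \propto \theta^{a+s-1} (1 - \theta)^{b+n-s-1}
\quad \Longrightarrow \quad
\theta \mid x \sim \mathrm{Beta}(a + s,\; b + n - s)
```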

6.3, due Nov. 9

Difficult: The hardest part of this assignment for me was building intuition for the difference between the sum of i.i.d. random variables being normally distributed and the distribution from which they were originally sampled being normally distributed. Reflective: I can already see how useful this will be. I thought it was fascinating that in Example 6.3.3 they show us that while the Law of Large Numbers gives us a (not very helpful) upper bound on the probability, with the Central Limit Theorem we can approximate what that probability actually is.
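To convince myself of the distinction, I ran a quick simulation of my own: the individual draws below are exponential (nothing like a normal), but their standardized sums come out approximately standard normal, which is exactly what the Central Limit Theorem claims:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each draw is Exponential(1): mean 1, variance 1, very non-normal.
n, trials = 1000, 5_000
samples = rng.exponential(scale=1.0, size=(trials, n))

# Standardize the row sums: (S_n - n*mu) / sqrt(n*sigma^2).
z = (samples.sum(axis=1) - n) / np.sqrt(n)

print(z.mean(), z.std())  # ~0 and ~1
print(np.mean(z < 1.96))  # ~0.975, matching the standard normal cdf
```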

6.2, due Nov. 7

Difficult: In this (and other) sections, I have had a hard time grasping the jumps they make in their proofs. In this section I didn't understand what they meant in their proof of Lemma 6.2.1. Reflective: It was interesting to me that there exists such a thing as the law of averages... I always thought that wasn't really a law. I look forward to learning the strong version of that law.
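Here's my attempt to reconstruct the usual proof of the weak law (Lemma 6.2.1 may take a different route): apply Chebyshev's inequality to the sample mean, whose variance shrinks like 1/n:

```latex
% Setup: X_1, \dots, X_n i.i.d. with E[X_i] = \mu, Var(X_i) = \sigma^2,
% and sample mean \bar{X}_n = \frac{1}{n} \sum_{i=1}^{n} X_i.
P\left( \left| \bar{X}_n - \mu \right| \ge \varepsilon \right)
\le \frac{\mathrm{Var}(\bar{X}_n)}{\varepsilon^2}
= \frac{\sigma^2}{n \varepsilon^2}
\;\longrightarrow\; 0
\quad \text{as } n \to \infty
```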

6.1, due Nov. 5

Difficult: I had a hard time understanding the bridge between estimators and maximum likelihood estimators. I wasn't sure how the MLE comes out of the general notion of an estimator. Also, the notation L(\theta) is confusing -- is it a function of \theta or of x? Reflective: We've talked a lot about maximum likelihood estimation in my machine learning classes; I'm excited to learn the theory and statistics behind it!
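Writing out the definition answered part of my own question (this is one common convention; the book's notation may differ): the data x_1, ..., x_n are held fixed and \theta varies, so L really is a function of \theta, and the MLE is the particular estimator you get by maximizing it:

```latex
L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta),
\qquad
\hat{\theta}_{\mathrm{MLE}}
= \arg\max_{\theta} L(\theta)
= \arg\max_{\theta} \sum_{i=1}^{n} \log f(x_i \mid \theta)
```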

5.7, due Nov. 2

Difficult: I thought the hardest part of this section was the part on marginal distributions. I was confused by the notation in the p.m.f. for marginal distributions -- are we summing/integrating over just the variable whose marginal distribution we are taking? Or are we summing over all of the variables except that one? Reflective: These are some awesome concepts, and I think it would help a ton to solidify understanding if we had a machine learning algorithm to code up that applied these principles. In Dr. Wingate's CS 401R class we did a lab on the Expectation Maximization algorithm for clustering -- one of the most fundamental machine learning algorithms. That lab applied the concepts of covariance matrices and multivariate normal distributions, and it also showed the numerical instability of covariance matrices. It was a great lab, and I think ACME should cannibalize it. If you're interested, here's t...
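Coming back to my marginal-distribution question above: I believe the answer is the latter -- you sum or integrate out every variable except the one whose marginal you want. For a joint distribution of X and Y:

```latex
f_X(x) = \sum_{y} f_{X,Y}(x, y) \quad \text{(discrete)}
\qquad
f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy \quad \text{(continuous)}
```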