Instructor (Brad Osgood): I wonder how long I can keep this up. All right. So the first
thing – any questions? Any comments? All right. The first thing I want to do is fix a little
mistake I made at the end of last time. I was upset about it for days afterwards, so I just
want to correct it. This was the central limit theorem, which I hope you will agree
was a real treat. I mean, it’s just an amazing fact, and the way that it brings in
convolution, the Fourier transform of the Gaussian and so on is really, I think, a real
highlight of the course, to see those things emerge – the whole spooky nature of
repeated convolutions approaching the Gaussian, I think, is just an amazing thing.
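A quick numerical aside, not from the lecture itself: that convergence of repeated convolutions to the Gaussian is easy to see on a computer. The sketch below, under the assumption of a unit-variance uniform starting density and an arbitrary choice of ten summands, convolves the density with itself, rescales by the square root of the number of summands, and compares the result to the standard Gaussian.

```python
# Sketch (not from the lecture): repeated self-convolution of a density,
# rescaled by sqrt(n), approaches the standard Gaussian (1/sqrt(2*pi)) * exp(-x^2/2).
# The starting density, the grid, and n = 10 are arbitrary illustrative choices.
import numpy as np

x = np.linspace(-20, 20, 4001)          # symmetric odd-length grid keeps 'same' convolution centered
dx = x[1] - x[0]
a = np.sqrt(3.0)                        # uniform on [-a, a] has mean 0, variance 1
p = np.where(np.abs(x) <= a, 1.0 / (2 * a), 0.0)
p /= p.sum() * dx                       # renormalize to correct for discretization

n = 10                                  # number of i.i.d. summands
pn = p.copy()
for _ in range(n - 1):
    pn = np.convolve(pn, p, mode="same") * dx   # density of the running sum on the grid

# Density of (X_1 + ... + X_n)/sqrt(n):  p_n(x) = sqrt(n) * (p * ... * p)(sqrt(n) x)
p_scaled = np.sqrt(n) * np.interp(np.sqrt(n) * x, x, pn)
gauss = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

print("max deviation from the standard Gaussian:", np.max(np.abs(p_scaled - gauss)))
```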
But I did screw it up at the end, so I just wanted to correct that. Everything was fine
except for one final formula that I wrote, so I’m not going to redo it. I just want to point
out where I wrote the wrong thing. It’s correct in the notes. I just wrote the wrong thing
on the board. This was the setup – so X1 through XN are, as they say in the biz,
independent and identically distributed random variables. You don’t have to write this
down. I’m just recalling what the notation was. And P of little X is the distribution that
goes along with all of them because they have the same distribution.
There’s a single function, P of X, which describes how each one of them is distributed.
So it’s a distribution for each. And then I formed P of N of X, which was the distribution for the
sum scaled by square root of N. So it’s the average – excuse me. There was some
assumption we made on the Xs, on normalization, that is. We assume they had mean zero
and we assume they had standard deviation or variance one, and then if you form the
sum, the mean of the sum is zero but the standard deviation
of the sum is the square root of N, so it’s scaled by the square root of N.
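Written out, the setup being recalled is roughly the following (my rendering of the notation; p_n denotes the density of the normalized sum):

```latex
% X_1, ..., X_n i.i.d. with common density p(x), normalized to mean 0 and variance 1.
\[
  \mathbb{E}[X_i] = 0, \qquad \operatorname{Var}(X_i) = 1,
\]
\[
  S_n = \frac{X_1 + X_2 + \cdots + X_n}{\sqrt{n}},
  \qquad
  \mathbb{E}[S_n] = 0, \quad \operatorname{Var}(S_n) = 1,
\]
\[
  p_n(x) = \text{the density of } S_n .
\]
```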
You divide by square root of N to have this sum, SN, I called it, have mean zero and
standard deviation one, and P of N of X was the distribution for this. What we found was
that the Fourier transform – here we go. The Fourier transform of P of N at S was the
Fourier transform of P at S over the square root of N, to the Nth power. And then the
trick of the analysis – the way the proof worked was to compute the Fourier transform of
P at S over the square root of N just using the definition of the Fourier transform. That uses a
very sneaky thing of looking at the Taylor series expansion of the complex
exponential, integrating the terms and so on, and it was really quite clever.
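In symbols, the formula being described reads as follows (my rendering, writing the transform as a calligraphic F): independence turns the n-fold convolution of the densities into a product of transforms, and the one-over-square-root-of-N scaling shows up in the argument.

```latex
% Convolution of the n identical densities becomes a product of Fourier transforms;
% the 1/sqrt(n) scaling of the sum turns into the argument s/sqrt(n).
\[
  \mathcal{F}p_n(s)
  = \Bigl( \mathcal{F}p\bigl(\tfrac{s}{\sqrt{n}}\bigr) \Bigr)^{n}.
\]
```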
What I found was that the Fourier transform of P at S over the square root of N was
approximately, if N is large, one minus two Pi squared S squared over N. That was
fine. That was fine. And then, for some reason, I raised this to the Nth power, so the
Fourier transform of P at S over the square root of N, to the N, was then approximately this
thing raised to the Nth power: one minus two Pi squared S squared over N, to the N. And then,
inexplicably, I had the wrong approximation for this in terms of an exponential, in terms of
a power of E.
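Those two steps, written out (my rendering of the formulas being read aloud):

```latex
% Second-order Taylor approximation of the transform near the origin, then the n-th power.
\[
  \mathcal{F}p\bigl(\tfrac{s}{\sqrt{n}}\bigr)
  \approx 1 - \frac{2\pi^2 s^2}{n}
  \qquad (n \text{ large}),
\]
\[
  \mathcal{F}p_n(s)
  = \Bigl( \mathcal{F}p\bigl(\tfrac{s}{\sqrt{n}}\bigr) \Bigr)^{n}
  \approx \Bigl( 1 - \frac{2\pi^2 s^2}{n} \Bigr)^{n}.
\]
```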
That is, this – one minus two Pi squared S squared over N, raised to the N – is approximately E to the minus
two Pi squared S squared. I’m going to look that up again to make sure I have it right this
time. Sure enough. All right. Then from there – and for some reason I wrote two Pi
squared over S squared last time, but it’s two Pi squared times S squared.
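So the corrected final line, written out, is the standard exponential limit, with S squared multiplying rather than dividing:

```latex
% The classical limit (1 + a/n)^n -> e^a with a = -2*pi^2*s^2:
% the limit is the Fourier transform of the Gaussian, e^{-2 pi^2 s^2}, not e^{-2 pi^2 / s^2}.
\[
  \Bigl( 1 - \frac{2\pi^2 s^2}{n} \Bigr)^{n}
  \;\longrightarrow\;
  e^{-2\pi^2 s^2}
  \qquad (n \to \infty).
\]
```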