262 videos · 9,897,070 views
mathematicalmonk
Joined 24 Mar 2011
Videos about math, at the graduate level or upper-level undergraduate.
Tools I use to produce these videos:
- Wacom Bamboo Fun tablet - medium size (~$150 pen tablet)
- SmoothDraw 3.2.7 (free drawing program)
- HyperCam 2 (free screen capture program)
- Sennheiser ME 3-ew (~$125 headset microphone)
(ML 2.5) Generalizations for trees (CART)
Other "impurity" quantities (entropy and Gini index), and generalizations of decision trees for classification and regression using the CART approach.
A playlist of these Machine Learning videos is available here:
czcams.com/users/my_playlists?p=D0F06AA0D2E8FFBA
Views: 34,314
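The two impurity measures named in this video (entropy and the Gini index) are easy to state concretely. A minimal sketch in Python, computing each from a node's class counts (the function names and example counts are mine, not from the video):

```python
import math

def gini(counts):
    """Gini index: 1 - sum_k p_k^2, where p_k are the class proportions."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """Entropy impurity: -sum_k p_k log2(p_k), with 0*log(0) taken as 0."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

# A pure node has zero impurity; a 50/50 node is maximally impure.
print(gini([10, 0]))    # 0.0
print(gini([5, 5]))     # 0.5
print(entropy([5, 5]))  # 1.0
```

Both measures are zero for a pure node and maximal for a uniform one; CART typically uses Gini for classification splits, but either works in the same splitting criterion.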
(ML 2.4) Growing a classification tree (CART)
37K views · 11 years ago
How to build a decision tree for classification using the CART approach. A playlist of these Machine Learning videos is available here: czcams.com/users/my_playlists?p=D0F06AA0D2E8FFBA
(IC 5.14) Finite-precision arithmetic coding - Decoder
10K views · 12 years ago
Pseudocode for the arithmetic coding decoder, using finite-precision. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.13) Finite-precision arithmetic coding - Encoder
6K views · 12 years ago
Pseudocode for the arithmetic coding encoder, using finite-precision. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.12) Finite-precision arithmetic coding - Setup
6K views · 12 years ago
Pre-defining the quantities that will be needed in the finite-precision algorithm. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.11) Finite-precision arithmetic coding - Rescaling
6K views · 12 years ago
We integrate the rescaling operations into the infinite-precision encoder, as a precursor to the finite-precision encoder. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
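The rescaling operations mentioned here can be paraphrased in code: when the interval sits in the lower half, emit a 0 and double; when it sits in the upper half, emit a 1 and reflect; when it straddles the midpoint inside the middle half, defer a bit. This is my own sketch under those standard rules, not the video's pseudocode:

```python
def rescale(a, b, bits, pending):
    """Apply rescaling to the interval [a, b) in [0, 1) until none applies.

    - [a,b) within [0, 1/2):  emit 0 (plus any pending 1s), map x -> 2x
    - [a,b) within [1/2, 1):  emit 1 (plus any pending 0s), map x -> 2x - 1
    - [a,b) within [1/4, 3/4) straddling 1/2: defer a bit, map x -> 2x - 1/2
    """
    while True:
        if b <= 0.5:
            bits.append(0); bits.extend([1] * pending); pending = 0
            a, b = 2 * a, 2 * b
        elif a >= 0.5:
            bits.append(1); bits.extend([0] * pending); pending = 0
            a, b = 2 * a - 1, 2 * b - 1
        elif a >= 0.25 and b <= 0.75:
            pending += 1
            a, b = 2 * a - 0.5, 2 * b - 0.5
        else:
            break
    return a, b, pending

bits = []
# [0.30, 0.45) is in the lower half, then the rescaled result is in the
# upper half: two bits come out (0, then 1), leaving roughly [0.2, 0.8).
a, b, pending = rescale(0.30, 0.45, bits, 0)
print(bits, a, b, pending)
```

The point of the rescaling is that the interval never gets narrower than a quarter of the unit interval between symbols, which is what makes the later finite-precision version possible.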
(IC 5.10) Generalizing arithmetic coding to non-i.i.d. models
3.8K views · 12 years ago
Arithmetic coding can accommodate essentially any probabilistic model of the source, in a very natural way. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.9) Computational complexity of arithmetic coding
6K views · 12 years ago
Arithmetic coding is linear time in the length of the source message and the encoded message. Since the encoded message length is near optimal on average, the expected time is near optimal. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.8) Near optimality of arithmetic coding
3.8K views · 12 years ago
The expected encoded length of the entire message is within 2 bits of the ideal encoded length (the entropy), assuming infinite precision. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.7) Decoder for arithmetic coding (infinite-precision)
7K views · 12 years ago
Pseudocode for the arithmetic coding algorithm, assuming addition and multiplication can be done exactly (i.e. with infinite precision). Later we modify this to work with finite precision. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.6) Encoder for arithmetic coding (infinite-precision)
7K views · 12 years ago
Pseudocode for the arithmetic coding algorithm, assuming addition and multiplication can be done exactly (i.e. with infinite precision). Later we modify this to work with finite precision. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.5) Rescaling operations for arithmetic coding
8K views · 12 years ago
Certain rescaling operations are convenient for the infinite-precision algorithm, and are critical for the finite-precision algorithm. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.4) Why the interval needs to be completely contained
6K views · 12 years ago
To ensure unique decodability, it's necessary that the interval [a,b) contain the whole interval corresponding to the encoded binary sequence, rather than just the number corresponding to the binary sequence. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.3) Arithmetic coding - Example #2
18K views · 12 years ago
A simple example to illustrate the basic idea of arithmetic coding. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
(IC 5.2) Arithmetic coding - Example #1
85K views · 12 years ago
A simple example to illustrate the basic idea of arithmetic coding. A playlist of these videos is available at: czcams.com/play/PLE125425EC837021F.html
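The basic idea these examples illustrate is interval narrowing: each symbol shrinks [a, b) in proportion to its probability, and any number in the final interval identifies the message. A minimal sketch (the alphabet and probabilities here are made up for illustration, not taken from the video):

```python
def narrow(msg, probs):
    """Narrow the unit interval [a, b) one symbol at a time.

    probs: dict mapping each symbol to its probability (summing to 1).
    Returns the final interval; any number inside it encodes the message.
    """
    # Build the cumulative distribution: symbol -> (low, high) in [0, 1).
    cdf, lo = {}, 0.0
    for s, p in probs.items():
        cdf[s] = (lo, lo + p)
        lo += p
    a, b = 0.0, 1.0
    for s in msg:
        width = b - a
        s_lo, s_hi = cdf[s]
        a, b = a + width * s_lo, a + width * s_hi
    return a, b

# "aab": [0,1) -> [0,0.5) -> [0,0.25) -> [0.125,0.25)
a, b = narrow("aab", {"a": 0.5, "b": 0.5})
print(a, b)  # 0.125 0.25
```

Decoding reverses the process: starting from [0, 1), repeatedly ask which symbol's subinterval contains the encoded number.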
(IC 5.1) Arithmetic coding - introduction
59K views · 12 years ago
(IC 4.12) Optimality of Huffman codes (part 7) - existence
1.8K views · 12 years ago
(IC 4.13) Not every optimal prefix code is Huffman
3.2K views · 12 years ago
(IC 4.11) Optimality of Huffman codes (part 6) - induction
2.4K views · 12 years ago
(IC 4.10) Optimality of Huffman codes (part 5) - extension lemma
2.1K views · 12 years ago
(IC 4.9) Optimality of Huffman codes (part 4) - extension and contraction
2K views · 12 years ago
(IC 4.8) Optimality of Huffman codes (part 3) - sibling codes
1.8K views · 12 years ago
(IC 4.7) Optimality of Huffman codes (part 2) - weak siblings
2.8K views · 12 years ago
(IC 4.6) Optimality of Huffman codes (part 1) - inverse ordering
6K views · 12 years ago
(IC 4.5) An issue with Huffman coding
3.9K views · 12 years ago
(IC 4.4) Weighted minimization with Huffman coding
4K views · 12 years ago
(IC 4.2) Huffman coding - more examples
20K views · 12 years ago
(IC 4.1) Huffman coding - introduction and example
119K views · 12 years ago
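The Huffman construction introduced here (repeatedly merge the two least-probable nodes, prefixing 0 to one side and 1 to the other) can be sketched briefly; this is my own illustration, not the video's example:

```python
import heapq

def huffman(freqs):
    """Build a Huffman code from symbol frequencies.

    Each heap entry is (total frequency, tiebreak counter, partial code
    table); the counter keeps heapq from ever comparing the dicts.
    """
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

# Dyadic probabilities give codeword lengths exactly -log2(p).
code = huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)
```

For the dyadic distribution above the expected length equals the entropy (1.75 bits), the case where Huffman coding is exactly optimal.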
(IC 3.10) Relative entropy as the mismatch inefficiency
5K views · 12 years ago
Practical to follow, thanks.
Why do we need "smaller" sigma-algebras instead of always using the power set as the sigma-algebra?
Man, 12 years ago you did a better job than recent articles and lessons. I swear I understood the algorithm in less than 5 minutes, after spending days looking for descriptive content to understand it. I can't thank you enough; I'm trying to write a new implementation of an image format! If I become Linus Torvalds someday, I'll make sure people know you helped me a lot haha!
why do we need the probability mass function? wouldn't the steps be the same for encoding and decoding for an even distribution?
very nice video. thank you so much!
Hello, I have one question: why are mu and theta not hidden variables?
I pressed the like button only for the sounds you created while drawing the lines
This whole series of lectures is so great and I like it very very much! As you mentioned in one of your lectures, there are three big theorems in information theory: the source coding theorem, the rate distortion theorem, and the channel coding theorem. It is my best wish that you can provide video lectures on the channel coding and rate distortion theorems. Thank you so much for the wonderful lectures!
I have returned to this derivation several times over the past many years. This is a clear calculation of the gradient of the log likelihood function for logistic regression. Thank you!
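The gradient this comment refers to, for labels y in {0, 1}, is grad log L(w) = sum_i (y_i - sigmoid(w·x_i)) x_i. A sketch that checks the closed form against a finite-difference approximation (the data here is random and only for the check):

```python
import numpy as np

def log_likelihood(X, y, w):
    """Log-likelihood of logistic regression with labels y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def log_likelihood_grad(X, y, w):
    """Closed-form gradient: X^T (y - sigmoid(Xw))."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (y - p)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = (rng.random(20) < 0.5).astype(float)
w = rng.normal(size=3)

g = log_likelihood_grad(X, y, w)
# Central finite differences along each coordinate axis.
eps = 1e-6
g_fd = np.array([(log_likelihood(X, y, w + eps * e) -
                  log_likelihood(X, y, w - eps * e)) / (2 * eps)
                 for e in np.eye(3)])
print(np.allclose(g, g_fd, atol=1e-4))  # True
```

The simple form of the gradient comes from the sigmoid's derivative sigma' = sigma(1 - sigma) cancelling against the log terms.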
if C_j is a vector of counts of each category, what does alpha mean?
Sir, your tutorials are amazing. But many students are not benefiting from them. To help them find your tutorials, please first add a profile picture to your youtube channel and add thumbnails to your videos.
This is like the best algorithm ever designed. So many nice properties, from being able to derive the length of the source string from the interval while decoding, to this elegant entropy property. Utterly beautiful.
You had me at “this is overhead”. Number version is bloat, just like Microsoft telemetry
Sorry the way you pronounce theta and data really confuses me 😶🌫
this is so good man
monk
why is the differential equal to the term inside the infinite sum?
Just one example with real values and data would have helped a lot
Fantastic presentation, easy to follow and giving great intuition, thank you.
It's videos like these that make you wish youtube had a 3x playback speed...
The first column of the design matrix should contain only 1s, shouldn't it?
What did he say
super intuitive Thanks
Why B=2?? In example a!
Hello, I have watched the whole series of lectures about arithmetic encoding and I'm very thrilled about this algorithm and your way of explaining things. You are a great professor and mathematician. Thank you for your lectures! Unfortunately my finite-precision implementation works incorrectly on large input data (maybe due to round-off error). So maybe you could share your implementation of the algorithm. It would be a great help! Thank you in advance!
Set the probability distribution so that each p(x_i) has the form 1/2**m with m >= 0. It seems the algorithm works for long sequences in this case. I guess this gets rid of the round-off error issue, but I haven't proved it yet. In addition, any comparisons like "< half" or "> half" or "> quarter and < 3*quarter" should use <= and >= instead.
It was one of the best explanations, so informative and helpful. Thank you!
Your enthusiastic way of teaching is so inspiring. Thank you for sharing this great video!
With regret I have to say, these are hardly good videos to learn this topic from. This is not the first video that leaves me completely confused about where things come from or why certain terms appear in the main or side equations. It's not well organised IMO. I have made a few attempts to come back to the playlist and give it another go. First I thought I was missing basics. Now I am convinced the videos are the problem. The idea is great, and the effort is still appreciated, but sadly it's hard to notice or understand things if they are not explained.
You cite a study but you do not cite the inventor of Random Forests, Leo Breiman.
I like these videos but this one was very confusing!
yes this seems good, i agree on that topic
Thank you, professor 👍
Good explanation of Poisson and Exponential rv
I strongly disagree about the naming you discuss at the beginning. Logistic regression ultimately serves as a classification method, but it fits a logistic (sigmoid) curve to the data, so it could be thought of as a regression process.
Didn't quite follow the step where "the joint was summed over c". What does that mean?
A little disappointed that Arianna Rosenbluth wasn't included in the list of creators. She was also the first to use the method. Great explanation of MCMC!
I like this name "mathematicalmonk" a lot. Because I think you must have a monk's heart to love mathematics...
thank you
Speaking from personal experience, you explain these concepts so much better than some professor with an h-index of 60 at a well-known university.
sir, may you please make a video for gamma random variables expected values
It was a perfect exposition of the subject, many thanks
This vid makes references to watching other vids before proceeding to the next one !
I must admit, I got lost in the details here ... 😢
for the continuous case, of what use is it?
Can you please come back to youtube sir, your videos are great. Sadly I'm 10 years late to see your videos
thanks for ur efforts
Thanks for the video, but in fact you don't need to factor both C and D, just factoring one of them is enough! As Schur did in his paper (1911).
Thanks so much, after looking at many explanation on this topic, I finally got it after watching this video.
In the shell game X and Y are not independent like in your equations. In the shell game there is an additional equation: X + Y + Z = 1.
@mathematicalmonk at 1:26 you write P(A, B, C) =P(A|C) P(B|C) P(C). How do you prove it?