Thank you for posting these! I find your content to be the most educational/entertaining on this subject matter.
Would be amazing if we could have similar quality content on reinforcement learning from you! Thank you for putting the lectures on YouTube!
Kilian + StatQuest = 100 percent learning. Thank You Prof Kilian, absolutely loved the series. Now I'll head over to Deep Learning :)
Really love it, I learn quite a lot each time I watch this video.
"Boosting is gradient descent in function space" - I am gonna steal this line.
Awesome lectures, you made machine learning easy for me; now I have a better grip on the concepts of boosting.
"It begins with a Tay and ends with a Lor" hahaha "Taylor expansion" "That's right! HOW DID YOU KNOW?" i burst out laughing at that.
I have had my fruit and I am ready!! Let's do this!
The idea of putting the labels in a very high dimensional space is pedagogically very, very good! I had some trouble trying to understand AnyBoost from the original paper, but Prof. Weinberger made the principle behind it very easy to understand. Great as usual.
Explained very well. Thanks a lot, sir!
It’s wonderful to revisit these lectures
Your lecture really helped me understand bagging and boosting more clearly. Thanks for sharing.
Waking up from a coma: "What is boosting?" - drunk guy.
Great Lecture Series
Your English is so perfect.
I have to leave another comment. Prof. Weinberger is a teaching genius. Also, who says Germans ain't funny?
Really great lecture
So well and funnily explained, TY!
Really great lectures! Thanks! I have a question: in gradient boosting the algorithm says h* = argmin_h sum_i (h(x_i) - t_i)^2. I don't understand whether one should: 1. go through all possible regression trees (up to a set maximum depth) and find that h*, OR 2. just apply CART with the new labels t_i instead of y_i. My guess is we should go with 2. But doesn't 2 give only an approximate solution to that minimization problem? Am I missing something?
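For what it's worth, here is a minimal sketch of what option 2 looks like in practice, assuming squared loss and scikit-learn's DecisionTreeRegressor as the CART-style weak learner (the function name gradient_boost and its parameters are placeholders, not anything from the lecture). Each round fits a depth-limited regression tree to the current pseudo-residuals t_i, which only approximately solves argmin_h sum_i (h(x_i) - t_i)^2, since CART is greedy rather than an exhaustive search over trees:

```python
# Minimal gradient boosting sketch for squared loss (assumed setup, not the
# lecture's exact pseudocode). Each round fits a CART regression tree to the
# residuals t_i = y_i - F(x_i), i.e. the negative gradient of the squared loss.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, alpha=0.1, max_depth=3):
    """Return the fitted trees and a prediction function for the ensemble."""
    F = np.zeros(len(y))           # current ensemble predictions, initialized at 0
    trees = []
    for _ in range(n_rounds):
        t = y - F                  # pseudo-residuals: negative gradient of squared loss
        h = DecisionTreeRegressor(max_depth=max_depth)
        h.fit(X, t)                # CART greedily approximates argmin_h sum_i (h(x_i) - t_i)^2
        F += alpha * h.predict(X)  # small step in "function space"
        trees.append(h)

    def predict(X_new):
        # Ensemble prediction is the sum of the scaled weak learners.
        return alpha * sum(h.predict(X_new) for h in trees)

    return trees, predict
```

So yes, step 2 only gives an approximate minimizer, but that is fine: boosting only needs each h to have a positive inner product with the negative gradient (a descent direction), not to be the exact argmin.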