@ayushinayak3980

Really loved the drunk walk demo at the start. You are a good one, Professor!

@OliverJanShD

Thank you for posting these! I find your content to be the most educational/entertaining on this subject matter.

@sans8119

Would be amazing if we could have similar-quality content on reinforcement learning from you! Thank you for putting the lectures on YouTube!

@rohit2761

Kilian + StatQuest = 100 percent learning.
Thank You Prof Kilian, absolutely loved the series. 
Now I'll head over to Deep Learning :)

@linxingyao9311

Really love it. I learn quite a lot each time I watch this video.

@HLi-pc4km

"Boosting is gradient descent in function space" - I am gonna steal this line.

@rishabhkumar-qs3jb

Awesome lectures! You made machine learning easy for me, and now I have a better grip on the concepts of boosting.

@vatsan16

"It begins with a Tay and ends with a Lor" hahaha "Taylor expansion" "That's right! HOW DID YOU KNOW?" i burst out laughing at that.

@vatsan16

I have had my fruit and I am ready!! Let's do this!

@andreamercuri9984

The idea of putting the labels in a very high-dimensional space is pedagogically very, very good! I had some trouble trying to understand AnyBoost from the original paper, but Prof. Weinberger made the principle behind it very easy to understand. Great as usual.

@ruman2494

Explained very well. Thanks a lot, sir!

@juliocardenas4485

It’s wonderful to revisit these lectures

@xiaoyu5181

Your lecture really helped me understand bagging and boosting more clearly. Thanks for sharing.

@kirtanpatel797

Waking up from a coma: "What is boosting?" Drunk guy.

@Psingh8077

Great Lecture Series

@KW-fb4kv

Your English is so perfect.

@HLi-pc4km

I have to leave another comment. Prof. Weinberger is a teaching genius. Also, who says Germans ain't funny?

@Ange-ClementAkazan-k5s

Really great lecture.

@yannickpezeu3419

So well and funnily explained, TY!

@aciobanusebi

Really great lectures! Thanks! I have a question: in gradient boosting the algorithm says "h* = argmin_h sum_i (h(x_i) - t_i)^2". I don't understand whether one should:
1. go through all possible regression trees (with a maximum depth set) and find that "h*",
OR
2. just apply CART with the new labels "t_i" instead of "y_i".
My guess is that we should go with 2. But doesn't 2 give only an approximate solution to that minimization problem? Am I missing something?
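
For what it's worth, here is a minimal sketch of option 2 under the usual squared-loss setup, using sklearn's DecisionTreeRegressor as a stand-in for the CART weak learner (the function names and hyperparameters are illustrative, not from the lecture):

```python
# Minimal gradient boosting sketch (squared loss), option 2:
# at each round, fit a shallow regression tree to the pseudo-targets t_i.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost(X, y, n_rounds=100, lr=0.1, max_depth=3):
    """Each round fits a tree to t_i = y_i - F(x_i),
    the negative gradient of the squared loss at the current ensemble F."""
    F = np.full(len(y), y.mean())      # start from a constant prediction
    trees = []
    for _ in range(n_rounds):
        t = y - F                      # pseudo-targets / residuals
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, t)  # CART on (x_i, t_i)
        F = F + lr * h.predict(X)      # small step in "function space"
        trees.append(h)
    return y.mean(), trees

def predict(model, X, lr=0.1):
    base, trees = model
    return base + lr * sum(h.predict(X) for h in trees)
```

And yes, since CART grows trees greedily, the fitted tree is only an approximate solution to the exact argmin over all depth-limited trees. That is usually fine: gradient boosting only needs each h to point roughly along the negative gradient (a descent direction), not to be the exact minimizer.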