Great video series. Don’t stop making them. Maybe take another app/tool/methodology and break it into parts like you did with SHAP. Very digestible.
I really enjoyed such a deep discussion about the clear distinction between correlation and causation!
Amazing work, Conor! Keep them coming. These 6 mins have helped clarify so many topics!
Best YouTuber explaining SHAP I have found!
WOW! Such an amazing explanation of SHAP! I really enjoyed it. Thank you.
Good explanation of the topic, thank you, sir.
Many thanks from Japan!
Great video!!!
AMAZING WORK!
Great video man. Thank you very much.
Great explanation!
Great video. You mentioned that KernelSHAP suffers from extrapolation when features are correlated, like other permutation-based methods. What about TreeSHAP with, e.g., XGBoost?
Amazing video, thank you so much. One question: when explaining KernelSHAP, what do you mean by permuting values? And what do the grey circles in the graph at 2:28 mean? Does permuting refer to changing the order of the features (this is not clear in the graph at 2:28), or to replacing some feature values with random values? Thanks in advance for your response.
I signed up for the newsletter but can't get the free course.
I am confused. You said that machine learning only cares about correlations, not association, but shouldn't that be "only cares about correlations, not causation"?
Is there any way to deal with limitation 2, feature dependencies?
Can you show some code for LIME?