Interpretable Feature Engineering | How to Build Intuitive Machine Learning Features

There are many ways to capture the underlying relationships in your data. Some are easier to explain because they align with your audience's intuition. So we should be doing feature engineering not just for predictive power but also for interpretability.

We’re going to discuss how to reformulate features with interpretability in mind. Along the way, we’ll see how to capture non-linear relationships using polynomial regression, discretization and interactions. The goal is to build intuitive machine learning features, which gives you a model that is easy to explain.
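As a rough illustration of the three techniques (this is a minimal scikit-learn sketch, not the notebook linked below, and the columns "age" and "income" are made up for the example):

import numpy as np
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures, KBinsDiscretizer

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(18, 70, size=100),
    "income": rng.normal(50_000, 10_000, size=100),
})

# Polynomial terms: let a linear model capture a curved effect of age (age, age^2)
poly = PolynomialFeatures(degree=2, include_bias=False)
age_poly = poly.fit_transform(df[["age"]])

# Discretization: replace raw age with easy-to-explain age groups (quantile bins)
binner = KBinsDiscretizer(n_bins=4, encode="onehot-dense", strategy="quantile")
age_groups = binner.fit_transform(df[["age"]])

# Interactions: allow the effect of income to differ across age groups
income_by_group = df[["income"]].to_numpy() * age_groups

# Stack everything into one feature matrix for a linear model
X = np.hstack([age_poly, age_groups, income_by_group])

Each resulting column maps to a single nameable concept (for example "income within an age group"), which is what keeps the final linear model easy to explain.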

🚀 Free Course 🚀
Sign up here: mailchi.mp/40909011987b/signup
XAI course: adataodyssey.com/courses/xai-with-python/
SHAP course: adataodyssey.com/courses/shap-with-python/

🚀 Link to code 🚀
github.com/a-data-odyssey/XAI-tutorial

🚀 Useful playlists 🚀
XAI: Explainable AI (XAI)
SHAP: SHAP
Algorithm fairness: Algorithm Fairness

🚀 Get in touch 🚀
Medium: conorosullyds.medium.com/
Threads: www.threads.net/@conorosullyds
Twitter: twitter.com/conorosullyDS
Website: adataodyssey.com/

🚀 Chapters 🚀
00:00 Introduction
02:05 Linear regression refresher
05:00 Reformulating linear relationships
10:32 Discretization
12:25 Interactions