Session: Hybrid Decision-Making: Black-Box ML ft. Explainable AI
With the wide range of libraries and frameworks available for building ML models, ML has become a black box these days, which makes model interpretability vital. But it is hard to describe a model's decision boundary in simple terms. With LIME (an open-source library), it is easy to produce faithful explanations and demystify any ML model.
In today's business-centric world, there is a renewed focus on model interpretability. With ML finding ever more use cases for elevating businesses, it has become vital to interpret a model, build trust in it (because there's money at stake!), and understand how it behaves on any given data.
The most common question all ML enthusiasts have is: why was this prediction made, and which variables caused it?
Model-evaluation techniques like cross-validation and grid search only answer this question from the perspective of the entire dataset; they make it hard to diagnose specific model predictions.
How many times have we been stuck when our model performs well for some labels but fails on spurious data, leaving us wondering what went wrong during training? That's where the magical library LIME comes into the picture. LIME is a Python library that tackles model interpretability by producing locally faithful explanations, describing the decision boundary of a model in a human-understandable form.
Whether you are a novice or an expert in ML, LIME is a tool for everyone. It explains individual predictions so that even non-experts can compare models, spot an untrustworthy one, and improve it through feature engineering.
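To make this concrete before the talk, here is a minimal sketch of the LIME workflow on tabular data; the dataset and classifier are illustrative assumptions, not the talk's exact example:

```python
# A minimal sketch: explaining one prediction of a black-box classifier
# with LIME. The iris dataset and random forest are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
X, y = data.data, data.target

# Any black-box model works; LIME only needs its predict_proba function.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single instance: LIME perturbs it, queries the model on the
# perturbed samples, and fits a simple, locally faithful surrogate model.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs, readable by anyone
```

Under the hood, explain_instance perturbs the input, queries the black-box model on the perturbed samples, and fits a simple weighted linear model around that one instance, which is what "locally faithful" means.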
What will you learn?
- The talk will primarily highlight the necessity of understanding the outputs of ML models in order to build a deeper understanding of how they work.
- You'll learn a new way of hybrid decision-making, where human judgment works alongside black-box model predictions.
- By the end of the talk, you'll know how to explain your model to just about anyone! This makes it easier for you to sell your business idea and to build transparent, responsible models.
In order to create trust in our model, we need to explain it not only to ML experts but also to domain experts, who require a human-understandable explanation. You'll be equipped to create a model-agnostic, locally faithful explanation set that helps even non-experts understand how the original model makes its decisions.
All this sounds exciting; to add to it, a demonstration on the classic Titanic dataset will build a solid understanding of the concepts discussed.
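As a rough preview of what such a demo might look like (the talk's actual notebook may differ; the feature selection and model below are assumptions), one could train a classifier on the Titanic data and ask LIME why it predicts survival for a particular passenger:

```python
# A hedged preview of a Titanic-style demo; column choices and the model
# are illustrative assumptions, not the talk's exact setup.
import seaborn as sns
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

titanic = sns.load_dataset("titanic")
features = ["pclass", "age", "sibsp", "parch", "fare"]  # numeric columns only
df = titanic.dropna(subset=features)
X, y = df[features].to_numpy(), df["survived"].to_numpy()

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=features,
    class_names=["died", "survived"],
    mode="classification",
)

# Why did the model predict this outcome for this particular passenger?
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for condition, weight in exp.as_list():
    print(f"{condition}: {weight:+.3f}")
```

Each printed pair is a human-readable condition (e.g. a fare range) and its weight toward the prediction, which is exactly the kind of output a domain expert can sanity-check.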
Bio
I am currently working with Goldman Sachs, building all things data @Marcus by Goldman Sachs, India. I am a big-time machine learning aficionado. In the past few years, I have worked on computer vision and music genre analysis. At work I build applications at scale; outside work I build AI- and ML-based applications for social good. I love participating in hackathons and am a serial hackathon winner (Microsoft AI hackathon, Sabre Hack, Amex AI hackathon, Icertis Blockchain and AIML hackathon, Mercedes-Benz), and people often call me "The Hackathon Girl". I am a tech speaker, tech blogger, podcast host, hackathon mentor @MLH hacks, technical content creator at Omdena, and Global Ambassador at WomenTech Network. I believe in hacking my way through life one bit at a time.