How to Explain Models with InterpretML: Deep Dive

Created by Mudabir Qamar Ansari, Modified on Mon, 14 Dec, 2020 at 5:16 PM by Mudabir Qamar Ansari

With the recent popularity of machine learning algorithms such as neural networks and ensemble methods, machine learning models have become more like 'black boxes' that are harder to understand and interpret. To gain stakeholders' trust, there is a strong need for tools and methodologies that help users understand and explain how predictions are made. In this video, you learn about InterpretML, an open-source Machine Learning Interpretability toolkit that incorporates cutting-edge techniques developed by Microsoft and leverages proven third-party libraries. InterpretML introduces a state-of-the-art glass-box model, the Explainable Boosting Machine (EBM), and provides easy access to a variety of other glass-box models and black-box explainers.

For more tips like this, check out the Working Remotely playlist at www.youtube.com/FoetronAcademy.

 

Also, if you need any further assistance, you can raise a support ticket (https://cloud.foetron.com/) and get it addressed.
