
Explaining models

Model interpretability with Azure Machine Learning service

Machine learning interpretability is important in two phases of the machine learning development cycle:

  • During training: Model designers and evaluators require interpretability tools to explain the output of a model to stakeholders to build trust. They also need insights into the model so that they can debug the model and make decisions on whether the behavior matches their objectives. Finally, they need to ensure that the model is not biased.
  • During inferencing: Predictions need to be explainable to the people who use your model. For example, why did the model deny a mortgage loan, or predict that an investment portfolio carries a higher risk?

The Azure Machine Learning Interpretability Python SDK incorporates technologies developed by Microsoft along with proven third-party libraries (for example, SHAP and LIME). The SDK creates a common API across the integrated libraries and integrates with Azure Machine Learning services. Using this SDK, you can explain machine learning models globally on all data, or locally on a specific data point, using state-of-the-art technologies in an easy-to-use and scalable fashion.
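To illustrate the global-versus-local distinction, the sketch below shows how a trained model might be explained with the SDK's TabularExplainer. This is a minimal example, not the lab's exact code: the objects `model`, `x_train`, `x_test`, and `feature_names` are assumed placeholders for a trained scikit-learn model and its data.

```python
# Minimal sketch, assuming a trained scikit-learn model `model`, training
# features `x_train`, evaluation features `x_test`, and `feature_names`.
from interpret.ext.blackbox import TabularExplainer

explainer = TabularExplainer(model, x_train, features=feature_names)

# Global explanation: feature importances aggregated over all of x_test
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: which features drove the prediction for a single data point
local_explanation = explainer.explain_local(x_test[0:1])
print(local_explanation.get_ranked_local_names())
print(local_explanation.get_ranked_local_values())
```

The notebook you run in this lab steps through the same ideas against the actual trained taxi-fare model.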

Overview

In this lab, we will use a subset of the NYC Taxi & Limousine Commission - green taxi trip records available from Azure Open Datasets. The data is enriched with holiday and weather data. We will use data transformations and the GradientBoostingRegressor algorithm from the scikit-learn library to train a regression model that predicts taxi fares in New York City based on input features such as the number of passengers, trip distance, date and time, holiday information, and weather information.
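For orientation, a training pipeline of this general shape might look like the following sketch. The file name and column names here are illustrative placeholders, not the lab's actual dataset; the notebook performs its own data preparation.

```python
# Minimal sketch of the kind of regression training described above.
# The CSV path and column names are placeholders for illustration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

data = pd.read_csv("nyc-taxi-sample.csv")  # enriched green taxi trip records (placeholder path)

feature_columns = ["passengerCount", "tripDistance", "hour_of_day",
                   "day_of_week", "isPaidTimeOff", "temperature"]
x = data[feature_columns]
y = data["totalAmount"]  # taxi fare to predict

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor()
model.fit(x_train, y_train)
print("R^2 on held-out data:", model.score(x_test, y_test))
```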

The primary goal of this quickstart is to explain the predictions made by our trained model using the model interpretability packages of the Azure Machine Learning Python SDK.

Exercise 1: Run the Notebook for this Lab

  1. In the Azure portal, open the available machine learning workspace.

  2. Select Launch now under the Try the new Azure Machine Learning studio message.

    Launch Azure Machine Learning studio.

  3. When you first launch the studio, you may need to set the directory and subscription. If so, you will see this screen:

    Launch Azure Machine Learning studio.

    For the directory, select Udacity and for the subscription, select Azure Sponsorship. For the machine learning workspace, you may see multiple options listed. Select any of these (it doesn't matter which) and then click Get started.

  4. From the studio, navigate to Compute. Next, for the available Compute Instance, under Application URI select Jupyter. Be sure to select Jupyter and not JupyterLab.

    Image highlights the steps to launch Jupyter from the Compute Instance.

  5. From within the Jupyter interface, select New, Terminal.

    Image highlights the steps to launch terminal from the Jupyter interface.

  6. In the new terminal window run the following command and wait for it to finish:

    git clone https://github.com/solliancenet/udacity-intro-to-ml-labs.git

    Image highlights the steps to clone the GitHub repo.

  7. From within the Jupyter interface, navigate to the directory udacity-intro-to-ml-labs/aml-visual-interface/lab-23/notebook and open interpretability-with-AML.ipynb. This is the Python notebook you will step through and execute in this lab.

    Image highlights the steps to open the notebook.

  8. Follow the instructions within the notebook to complete the lab.

Next Steps

Congratulations! You have just learned how to use the Azure Machine Learning SDK to help you explain what influences the predictions a model makes. You can now return to the Udacity portal to continue with the lesson.