This ModelOp Center monitor computes Feature Importance metrics using a pre-trained SHAP Explainer.
| Type | Number | Description |
|---|---|---|
| Baseline Data | 0 | Not required by this monitor |
| Sample Data | 1 | A dataset corresponding to a slice of production data |
- The underlying `BUSINESS_MODEL` being monitored has:
  - a pre-trained SHAP Explainer asset (as a `.pickle` file)
  - a list of predictive features as used by the scoring model (as a `.pickle` file)
- The `init` function loads the list of predictive features and the SHAP explainer from the `.pickle` files.
- The `metrics` function pre-processes the input data to get dummy variables, then computes feature importance by computing SHAP values (see the sketch after this list).
- Test results are appended to the list of `interpretability` tests to be returned by the model.
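
A minimal sketch of how `init` and `metrics` might be structured. The asset file names `explainer.pickle` and `predictive_features.pickle`, the use of a legacy SHAP explainer exposing `.shap_values()`, and the mean-absolute-SHAP aggregation are assumptions for illustration, not the exact implementation:

```python
# Sketch only: asset file names and aggregation details are assumptions.
import pickle

import pandas as pd

EXPLAINER = None
PREDICTIVE_FEATURES = None


# modelop.init
def init():
    """Load the pre-trained SHAP explainer and the list of predictive features."""
    global EXPLAINER, PREDICTIVE_FEATURES
    with open("explainer.pickle", "rb") as pickle_file:
        EXPLAINER = pickle.load(pickle_file)  # requires shap to be installed
    with open("predictive_features.pickle", "rb") as pickle_file:
        PREDICTIVE_FEATURES = pickle.load(pickle_file)


# modelop.metrics
def metrics(df: pd.DataFrame):
    """Compute SHAP feature importance on a slice of production data."""
    # Pre-process input data into dummy variables, keeping only the
    # features used by the scoring model
    data = pd.get_dummies(df)[PREDICTIVE_FEATURES]

    # Compute SHAP values; assuming a single-output model, the result
    # has shape (n_rows, n_features)
    shap_values = EXPLAINER.shap_values(data)

    # Aggregate to one importance per feature: mean absolute SHAP value
    importances = (
        pd.DataFrame(shap_values, columns=PREDICTIVE_FEATURES).abs().mean().to_dict()
    )

    # Append the result to the list of interpretability tests
    yield {
        "interpretability": [
            {
                "test_name": "SHAP",
                "test_category": "interpretability",
                "test_type": "shap",
                "metric": "feature_importance",
                "test_id": "interpretability_shap_feature_importance",
                "values": importances,
            }
        ]
    }
```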
The monitor's output has the following form:

```json
{
    "interpretability": [
        {
            "test_name": "SHAP",
            "test_category": "interpretability",
            "test_type": "shap",
            "metric": "feature_importance",
            "test_id": "interpretability_shap_feature_importance",
            "values": {
                "<feature_1>": <feature_1_importance>,
                "<feature_2>": <feature_2_importance>,
                ...
            }
        }
    ]
}
```
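
Downstream, the `values` map can be sorted to surface the most influential features. A minimal sketch, where `monitor_output` and its feature names and numbers are hypothetical:

```python
# Hypothetical monitor output, for illustration only
monitor_output = {
    "interpretability": [
        {
            "test_type": "shap",
            "values": {"age": 0.42, "income": 0.31, "tenure": 0.08},
        }
    ]
}

# Rank features by importance, highest first
shap_test = monitor_output["interpretability"][0]
ranked = sorted(shap_test["values"].items(), key=lambda kv: kv[1], reverse=True)
for feature, importance in ranked:
    print(f"{feature}: {importance:.2f}")
```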