Model Explainer Dashboard
The Model Explainer Dashboard provides interpretability and transparency for machine learning models. It helps users understand how predictions are made, identify key drivers of model behavior, and test alternative scenarios through interactive exploration.
The dashboard is available once an explainer has been generated for a model; a sketch of that step follows the tab list below. It consists of four main tabs:
Feature Importance
Individual Predictions
What-if Analysis
Feature Dependence
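How the explainer itself is generated depends on your setup; as a minimal, hedged sketch, assuming a SHAP-based backend and a scikit-learn tree model (the dataset, model, and variable names below are illustrative placeholders, not this product's API):

```python
# Illustrative sketch only: one way an explainer could be generated for a fitted
# model. The dashboard's actual backend is not specified here; SHAP is assumed.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Placeholder data and model standing in for your own training pipeline.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The explainer yields per-record, per-feature contributions that could drive
# all four tabs described below.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)  # a shap.Explanation: one row of contributions per record
```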
Feature Importance
The Feature Importance tab shows which input features contribute most to the model’s predictions.
Global Importance: Aggregates feature influence across all predictions.
Visualization: Bar charts or ranked lists display features in descending order of impact.
Use cases:
Identify the most influential drivers of model behavior.
Detect irrelevant or low-impact features that could be dropped when retraining.
Validate whether important features align with domain knowledge.
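For illustration, a global ranking of this kind can be approximated with permutation importance from scikit-learn; this is only an analogue of what the tab aggregates, and the dashboard's own method (for example, mean absolute SHAP values) may differ:

```python
# Rough analogue of global feature importance via permutation importance.
# Dataset and model are illustrative placeholders, not this product's API.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the model's score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
importance = pd.Series(result.importances_mean, index=X.columns)

# Features in descending order of impact, as in the tab's bar chart or ranked list.
print(importance.sort_values(ascending=False).head(10))
```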
Individual Predictions
The Individual Predictions tab provides explanations for single prediction instances.
Local Explanation: Shows feature contributions for the selected record.
Positive/Negative Influence: Indicators highlight whether a feature increased or decreased the predicted outcome.
Navigation: Users can browse through records or search for a specific instance ID.
Use cases:
Debug unusual or unexpected predictions.
Provide transparency when explaining predictions to end users.
Compare explanations across records to check for fairness or consistency.
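A local explanation like the one shown on this tab can be sketched with SHAP; the record, model, and library choice below are assumptions for illustration, not the dashboard's own API:

```python
# Local explanation sketch for a single record: signed per-feature contributions,
# mirroring this tab's positive/negative influence indicators. Illustrative only.
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
record = X_test.iloc[[0]]        # one prediction instance
explanation = explainer(record)

# Positive values pushed this prediction up; negative values pushed it down.
contributions = pd.Series(explanation.values[0], index=X.columns)
print(contributions.sort_values(key=abs, ascending=False).head(10))
```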
What-if Analysis
The What-if Analysis tab allows interactive testing of hypothetical input scenarios.
Manual Input Modification: Adjust feature values (e.g., income, age, transaction amount) for a selected record.
Real-time Prediction Updates: The dashboard recalculates the prediction with the new inputs.
Sensitivity Testing: Explore how small changes in features affect model output.
Use cases:
Evaluate fairness by simulating changes to demographic features.
Test thresholds (e.g., at what point a loan application is approved).
Assess the robustness of predictions under varying conditions.
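Mechanically, a what-if scenario amounts to copying a record, editing one or more inputs, and re-scoring it. A minimal sketch with a placeholder model and feature (not the dashboard's API):

```python
# What-if sketch: copy a record, modify one input, and compare predictions.
# Model, dataset, and the edited feature are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

record = X_test.iloc[[0]]
baseline = model.predict_proba(record)[0, 1]

# Hypothetical scenario: increase one feature by 20% and re-score the record.
scenario = record.copy()
scenario["mean radius"] *= 1.2
what_if = model.predict_proba(scenario)[0, 1]

print(f"baseline={baseline:.3f}  what-if={what_if:.3f}  change={what_if - baseline:+.3f}")
```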
Feature Dependence
The Feature Dependence tab shows how the model’s predictions vary with changes in a specific feature, often using partial dependence plots.
X-axis: Represents the selected feature.
Y-axis: Represents the predicted outcome or probability.
Trend Visualization: Highlights non-linear relationships, thresholds, or feature interactions.
Use cases:
Identify critical thresholds (e.g., credit score cutoff).
Understand monotonic or non-monotonic effects of features.
Guide feature engineering and business rule definition.
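A one-feature partial dependence curve of the kind plotted on this tab can be computed with scikit-learn; the model and feature below are placeholders for illustration:

```python
# Partial dependence sketch: average predicted probability (y-axis) across a grid
# of values for one selected feature (x-axis). Illustrative placeholders only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

pd_result = partial_dependence(model, X_test, features=["mean radius"], kind="average")
for x_val, y_val in zip(pd_result["grid_values"][0], pd_result["average"][0]):
    print(f"mean radius = {x_val:7.3f} -> average predicted probability = {y_val:.3f}")
```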
Key Benefits
Transparency: Provides visibility into how the model arrives at predictions.
Trust: Builds confidence among stakeholders by surfacing interpretable results.
Compliance: Supports explainability requirements in regulated industries.
Optimization: Helps data scientists refine models by understanding feature behavior.