Welcome to Machine Learning Model Validation.
Below you will see the agenda for the Masterclass. Please note that, due to the interactive nature of these sessions, the timings may change slightly.
8:00 Registration and Breakfast | 8:50 Chair's opening remarks
SESSION 1
9:00 Introduction and machine learning explainability
- Elements of machine learning validation: Conceptual soundness and outcome analysis
- Introduction to key concepts: explainability, robustness, reliability and fairness
- Post-hoc explainability tools
- Local explainability: LIME and SHAP (see the sketch below)
- Global explainability: Variable Importance, Partial Dependence and Accumulated Local Effects
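To make the local/global distinction above concrete, here is a minimal sketch assuming the `shap` and `scikit-learn` packages; the synthetic dataset and random-forest model are illustrative placeholders, not materials from the session.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Illustrative data and model (placeholders, not session materials).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local explainability: per-row SHAP attributions. Each prediction
# decomposes into a baseline plus one contribution per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # layout varies by shap version

# Global explainability: permutation importance averages the effect of
# scrambling each feature across the whole dataset.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```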
SESSION 2
Designing inherently interpretable models
- Limitations of post-hoc explainability
- Introduction to building inherently interpretable models
- Explainable Boosting Machines (see the sketch below)
- GAMI Neural Networks
- RuleFit
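As a concrete reference point for this session, here is a minimal sketch of fitting an Explainable Boosting Machine, assuming InterpretML's `interpret` package; the synthetic data is an illustrative placeholder.

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Illustrative data (placeholder).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# An EBM is a generalized additive model trained with boosting: each
# feature gets its own learned shape function, so the model is
# interpretable by construction rather than post hoc.
ebm = ExplainableBoostingClassifier(random_state=0).fit(X, y)

# The global explanation exposes the per-feature shape functions
# directly; no surrogate explainer is needed.
global_exp = ebm.explain_global()
print(global_exp.data(0))  # shape-function data for the first feature
```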
10:20 Morning refreshment break and networking
SESSION 3
Deep ReLU networks as interpretable models
- Local partitions and linear models (see the sketch below)
- Model interpretation and diagnostics
- Complexity control through regularization
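The "local partition" idea admits a compact demonstration: a ReLU network is exactly linear on each activation region, so the local linear model at any input can be read off from its active units. Below is a minimal NumPy sketch with random placeholder weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=4)  # layer 1 (illustrative)
W2, b2 = rng.normal(size=(4, 1)), rng.normal(size=1)  # output layer

def local_linear_model(x):
    """Return (w, b) such that the network equals z @ w + b for every z
    in the same ReLU activation region as x."""
    pre = x @ W1 + b1
    D = np.diag((pre > 0).astype(float))  # mask of active units at x
    w = W1 @ D @ W2                       # effective linear weights
    b = b1 @ D @ W2 + b2                  # effective intercept
    return w, b

x = rng.normal(size=8)
w, b = local_linear_model(x)
f_x = np.maximum(x @ W1 + b1, 0) @ W2 + b2  # actual network output at x
assert np.allclose(x @ w + b, f_x)          # the local linear model is exact
print(w.ravel())  # per-feature slopes, valid throughout x's region
```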
SESSION 4
Outcome testing
- Identification of performance weaknesses through slicing
- Reliability evaluation through conformal prediction (see the sketch below)
- Robustness evaluation for covariate/distribution drift
- Fairness testing
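For the conformal prediction item, here is a minimal split-conformal sketch for regression, assuming `scikit-learn`; the model, synthetic data, and 90% coverage target are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Illustrative data, split into training and calibration sets.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
# Finite-sample-corrected quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction interval for a new point: [pred - q, pred + q].
pred = model.predict(X[:1])[0]
print(f"90% prediction interval: [{pred - q:.1f}, {pred + q:.1f}]")
```

Split conformal trades some training data for a distribution-free guarantee: under exchangeability, the resulting interval covers the true value with probability at least 1 − α, regardless of the underlying model.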
5:30 Chair’s closing remarks
Don’t miss the insights from ModelOp at 1:00, where they will share expert views on:
Utilizing automation techniques to efficiently execute risk assessment tests
- Develop tests that span model implementation types
- Automate execution of tests in a predictable manner
- Incorporate information about your models into standardized tests
- Automate evaluation of results, incorporating model information
- Increase the efficiency of your model risk management assets
Jim Olsen, Chief Technology Officer, ModelOp

