Shubham Malpani — Quantitative Credit Risk Researcher

PD/LGD/EAD Model Development & Validation · WoE Scorecards · IFRS 9 & Basel III/IV · 📍 Open to Relocation

Credit Risk professional with 2+ years of industry experience, specialising in PD/LGD/EAD model development, IFRS 9 staging, Basel III/IV IRB validation, and model risk governance.

2+ Years Experience · 4+ Models & Projects · London → India International Exposure · IFRS 9 & Basel Frameworks

Rigorous models, real-world impact

I'm a Credit Risk professional with 2+ years of industry experience at Infospectrum Ltd (London, acquired by Lloyd's List Intelligence), delivering counterparty credit risk assessments, financial analysis, and regulatory reporting across global shipping, energy, and commodities markets. As an independent researcher, I have since built and validated a series of Python credit risk projects — spanning end-to-end EL model development, IFRS 9 staging, Basel III IRB validation, and documented model risk analysis.

"From 460,000-record EL frameworks to two-stage LGD modelling and documented failure analysis — each project benchmarked against IFRS 9 and Basel III IRB standards."

In independent research, I have built an end-to-end Expected Loss framework on 460,000+ Lending Club records (AUROC 0.702); a production-style EL pipeline on 307,511 Home Credit applications across 5 relational tables with two-stage LGD and IFRS 9 staging (Gini 0.481, Basel III IRB validated); and a documented model failure analysis examining experimental design flaws applicable to SR 11-7 model risk governance.

I'm open to Quantitative Risk Analytics roles, with a strong interest in leading financial institutions across Europe and India.

Get in Touch

Current Status

Quantitative Credit Risk Researcher — Independent. Actively building production-grade credit risk frameworks in Python across PD/LGD/EAD development, IFRS 9, Basel III IRB validation, and model risk governance. Open to full-time roles.

Industry Experience

2+ years at Infospectrum Ltd (London) in counterparty credit risk, due diligence, and regulatory reporting.

Technical Stack

Python (scikit-learn, pandas, statsmodels, numpy, NetworkX) · SQL · Excel/VBA · Git

Location

Currently based in India · Open to Relocation · Available immediately

Skills & technical depth

📊 Credit Risk Modelling

End-to-end PD, LGD, EAD model development using WoE-based feature engineering, interpretable scorecard scaling, and logistic regression.

WoE / IV · Scorecards · Logistic Regression · ECL
⚖️ IFRS 9 Frameworks

ECL calculation, staging criteria design, and lifetime PD estimation. Alignment of model outputs with IFRS 9 accounting requirements.

Staging Rules · Lifetime PD · ECL Calc · SICR
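The ECL mechanics above reduce to PD × LGD × EAD with the PD horizon switched by stage. A minimal sketch, where every parameter value is assumed purely for illustration:

```python
# Single-exposure ECL under IFRS 9 staging; all numbers here are assumed.
PD_12M, PD_LIFETIME = 0.02, 0.08   # 12-month vs lifetime probability of default
LGD, EAD = 0.45, 100_000           # loss given default, exposure at default

def ecl(stage: int) -> float:
    """Stage 1 uses the 12-month PD; Stages 2 and 3 use the lifetime PD."""
    pd_used = PD_12M if stage == 1 else PD_LIFETIME
    return pd_used * LGD * EAD

print(round(ecl(1), 2))  # 900.0  -- 12-month ECL
print(round(ecl(2), 2))  # 3600.0 -- lifetime ECL after an SICR trigger
```

The jump from 900 to 3600 on a Stage 1 → Stage 2 transfer is exactly why staging criteria design matters: the provision quadruples on the same exposure.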
🏛️ Basel III / IV

IRB approach modelling, parameter estimation, back-testing, and regulatory documentation for supervisory review processes.

IRB Approach · Back-testing · RWA · ICAAP · COREP/FINREP
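The quantitative core of the retail IRB approach can be sketched with only the standard library. The 2% PD, 45% LGD, and 0.15 residential-mortgage asset correlation below are assumed inputs, and the corporate maturity adjustment is omitted:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf and N.inv_cdf

def irb_capital(pd_, lgd, corr):
    """Retail IRB capital requirement K (unexpected loss only, no maturity adjustment)."""
    # PD conditional on the 99.9th-percentile draw of the systematic risk factor
    cond_pd = N.cdf((N.inv_cdf(pd_) + sqrt(corr) * N.inv_cdf(0.999)) / sqrt(1 - corr))
    return lgd * (cond_pd - pd_)

# Assumed example: 2% PD, 45% LGD, residential-mortgage correlation 0.15
k = irb_capital(pd_=0.02, lgd=0.45, corr=0.15)
rwa_per_unit_ead = k * 12.5  # RWA = K * 12.5 * EAD
```

Subtracting PD at the end strips the expected loss out of K, since EL is covered by provisions rather than capital.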
🔬 Model Validation

Discriminatory power, calibration, stability analysis. CAP curves, Gini, KS, AUROC, and PSI diagnostics across retail and corporate portfolios.

AUROC / Gini · KS Statistic · CAP Curve · PSI
📈 Derivatives Pricing

Binomial tree (American & European options) and Black-Scholes implementations using live market data and dynamically estimated volatility.

Black-Scholes · Binomial Tree · NetworkX · Nifty 50
🐍 Python Implementation

Production-ready quantitative code with strong emphasis on interpretability, reproducibility, and clean model documentation.

scikit-learn · statsmodels · pandas · numpy
01 WoE & Scorecard Development

Weight of Evidence transformation converts variables into interpretable, monotonic bins. I use Information Value for feature selection and fine/coarse classing to build scorecards scaled to a 300–850 range, aligning with regulatory expectations for interpretability.

WoE Binning · Information Value · Fine / Coarse Classing · Score Scaling · Odds-to-Score Mapping
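A minimal sketch of the WoE/IV step on an already-binned toy feature, plus the odds-to-score mapping. The column names, the 600-points-at-19:1-odds anchor, and the 50-point PDO are assumptions for illustration, not values from my scorecards:

```python
import numpy as np
import pandas as pd

def woe_iv(df: pd.DataFrame, feature: str, target: str):
    """Weight of Evidence per bin and total Information Value.
    Convention: target 1 = default ('bad'), 0 = non-default ('good')."""
    g = df.groupby(feature)[target].agg(bad="sum", total="count")
    g["good"] = g["total"] - g["bad"]
    eps = 1e-6  # guards against log(0) in bins with no goods or no bads
    dist_good = g["good"] / g["good"].sum() + eps
    dist_bad = g["bad"] / g["bad"].sum() + eps
    g["woe"] = np.log(dist_good / dist_bad)
    iv = float(((dist_good - dist_bad) * g["woe"]).sum())
    return g[["woe"]], iv

# Toy binned feature: bin "A" carries a higher default rate than bin "B"
toy = pd.DataFrame({"grade": ["A"] * 40 + ["B"] * 60,
                    "default": [1] * 10 + [0] * 30 + [1] * 10 + [0] * 50})
woe_table, iv = woe_iv(toy, "grade", "default")  # riskier bin "A" gets negative WoE

# Odds-to-score mapping: 600 points at 19:1 good:bad odds, 50 points to double the odds
pdo, base_score, base_odds = 50, 600, 19
factor = pdo / np.log(2)
offset = base_score - factor * np.log(base_odds)
score_at_5pct_pd = offset + factor * np.log(1 / 0.05 - 1)  # 5% PD -> 19:1 odds -> 600
```

Because WoE is the log-odds of goods to bads per bin, the resulting score is linear in the model's log-odds, which is what makes the scorecard fully interpretable.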
02 PD Modelling & Calibration

Probability of Default models built using logistic regression on WoE-transformed features, with calibration to long-run default rates. Validation covers discriminatory power (Gini, AUROC, KS), calibration (Hosmer-Lemeshow), and population stability (PSI).

Logistic Regression · Long-Run Calibration · AUROC / Gini / KS · Hosmer-Lemeshow · PSI Monitoring
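The modelling-plus-calibration loop above can be sketched as follows. The synthetic data stands in for a WoE-transformed design matrix, and the 4% long-run central tendency is an assumed anchor:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for WoE-transformed features, with 9:1 class imbalance
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                           weights=[0.9], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pd_hat = np.clip(model.predict_proba(X_te)[:, 1], 1e-6, 1 - 1e-6)

auroc = roc_auc_score(y_te, pd_hat)
gini = 2 * auroc - 1  # Gini is a linear transform of AUROC

# Intercept shift toward an assumed long-run central tendency of 4%
ct, sample_rate = 0.04, y_tr.mean()
odds_shift = np.log(ct / (1 - ct)) - np.log(sample_rate / (1 - sample_rate))
pd_cal = 1 / (1 + np.exp(-(np.log(pd_hat / (1 - pd_hat)) + odds_shift)))
```

The intercept shift rescales every PD in log-odds space, so the rank ordering (and hence AUROC/Gini) is untouched while the portfolio-average PD moves to the long-run rate.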
03 LGD & EAD Estimation

Loss Given Default modelled using two-stage methodology separating zero-loss from positive-loss borrowers, with workout LGD and recovery rate analysis. EAD via credit conversion factors. Both aligned with IFRS 9 and Basel IRB requirements.

Two-Stage LGD · Workout LGD · CCF Estimation · IFRS 9 · Basel IRB
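A compact sketch of the two-stage structure on simulated workout data; the 60% cure rate, the Beta-distributed losses, and the CCF figures are all assumptions for the example, not results from my models:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 4))            # stand-in borrower features
cured = rng.random(n) < 0.6            # ~60% of defaults fully cure (zero loss)
lgd_obs = np.where(cured, 0.0, rng.beta(2, 3, size=n))

# Stage 1: probability that a defaulted loan produces any loss at all
stage1 = LogisticRegression().fit(X, (lgd_obs > 0).astype(int))
p_loss = stage1.predict_proba(X)[:, 1]

# Stage 2: expected LGD conditional on a positive loss
stage2 = LinearRegression().fit(X[lgd_obs > 0], lgd_obs[lgd_obs > 0])
lgd_given_loss = np.clip(stage2.predict(X), 0.0, 1.0)

expected_lgd = p_loss * lgd_given_loss  # combined two-stage estimate

# EAD through a credit conversion factor on the undrawn limit
drawn, undrawn, ccf = 80_000, 20_000, 0.75
ead = drawn + ccf * undrawn  # EAD = drawn + CCF * undrawn = 95,000
```

Splitting the zero-loss mass out first is what lets the second stage model a clean continuous loss distribution instead of one with a large spike at zero.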
04 Model Validation & Diagnostics

Independent validation covering discriminatory strength via CAP and ROC curves, calibration via back-testing against observed defaults, and stability via PSI and CSI. Results documented in regulatory-grade validation reports.

CAP Curve · ROC Analysis · Back-testing · CSI / PSI · SR 11-7 Governance
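Two of the diagnostics above, sketched in plain numpy (the decile binning choice is an assumption; CAP/Gini follow the same cumulative-distribution pattern):

```python
import numpy as np

def ks_statistic(y, scores):
    """KS: max gap between cumulative score distributions of bads and goods."""
    order = np.argsort(scores)
    y = np.asarray(y)[order]
    cum_bad = np.cumsum(y) / y.sum()
    cum_good = np.cumsum(1 - y) / (1 - y).sum()
    return float(np.max(np.abs(cum_bad - cum_good)))

def psi(expected, actual, bins=10):
    """Population Stability Index between development and monitoring samples,
    using decile edges taken from the development sample."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range monitoring scores
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
dev = rng.normal(size=10_000)
stable = psi(dev, dev)         # ~0: identical populations
shifted = psi(dev, dev + 1.0)  # large: clear population shift
```

A common rule of thumb reads PSI below 0.1 as stable and above 0.25 as a material shift warranting model review.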
05 Derivatives Pricing

Option pricing using binomial tree (backward induction, risk-neutral valuation) for American and European options, and Black-Scholes with dynamically estimated annualised volatility from historical returns, applied to live Nifty 50 market data.

Black-Scholes · Binomial Tree · Risk-Neutral Valuation · Volatility Estimation · NetworkX
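The backward-induction core can be sketched under Cox-Ross-Rubinstein assumptions; the NetworkX visualisation layer and the live-data feed from the project are omitted, and the inputs below are illustrative:

```python
import numpy as np

def crr_price(S, K, T, r, sigma, steps=500, call=True, american=False):
    """Cox-Ross-Rubinstein binomial tree priced by backward induction."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))     # up factor
    d = 1 / u                           # down factor
    p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices (highest first) and option payoffs
    ST = S * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    value = np.maximum(ST - K, 0.0) if call else np.maximum(K - ST, 0.0)

    # Roll back through the lattice one step at a time
    for n in range(steps - 1, -1, -1):
        value = disc * (p * value[:-1] + (1 - p) * value[1:])
        if american:  # early exercise: compare continuation with intrinsic value
            Sn = S * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
            intrinsic = np.maximum(Sn - K, 0.0) if call else np.maximum(K - Sn, 0.0)
            value = np.maximum(value, intrinsic)
    return float(value[0])

euro_call = crr_price(100, 100, 1.0, 0.05, 0.2)  # converges to the Black-Scholes value
amer_put = crr_price(100, 100, 1.0, 0.05, 0.2, call=False, american=True)
euro_put = crr_price(100, 100, 1.0, 0.05, 0.2, call=False)
```

The American put prices above its European counterpart, which is the early-exercise premium the extra `np.maximum` step captures.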
06 Python for Quantitative Risk

Production-oriented Python for credit risk: data pipelines in pandas, statistical modelling in statsmodels and scikit-learn, visualisation with matplotlib. Strong emphasis on clean, documented, reproducible code structured for regulatory review.

pandas / numpy · scikit-learn · statsmodels · matplotlib · Git / GitHub

Selected projects

02 Production-Style Pipeline

Home Credit Default Risk — Full Expected Loss Pipeline

Production-style end-to-end Expected Loss framework across 307,511 loan applications and 5 relational tables. Built a WoE/IV pipeline from scratch across 91 candidate features (43 selected), a two-stage LGD model separating zero-loss from positive-loss borrowers, and a CCF-based EAD model. IFRS 9 staging applied across the full portfolio. Basel III validation confirms model Gini of 0.481 exceeds the 0.35 IRB minimum threshold.

0.741 Test AUROC · 0.481 Test Gini · 0.364 KS Statistic · 0.77% Portfolio EL Rate

No overfitting: Gini gap of −0.005 · Mean LGD 43.1% · Total ECL Provision: 133,872,222 · IFRS 9 — Stage 1: 213 · Stage 2: 58,239 · Stage 3: 3,051

Python · WoE / IV · Two-Stage LGD · CCF / EAD · IFRS 9 Staging · Basel III IRB · 307k Records · 5 Relational Tables
View on GitHub →
03 Model Risk Analysis

PD Modelling — Documented Failure Analysis

Built a complete PD modelling pipeline on the German Credit dataset (1,000 borrowers), then systematically documented why the model was always going to fail — not at the surface level of weak features, but at the level of experimental design. The target variable was simulated using the same features used to model it, creating a self-referential relationship that no amount of tuning could fix. The pipeline is technically correct throughout; the experiment is not.

This distinction — between pipeline correctness and experimental validity — is directly relevant to model validation, SR 11-7 compliance, and internal model risk frameworks.

0.548 Test AUROC · 0.096 Test Gini · 21 pts Train-Test Gap

Python · WoE / IV · Logistic Regression · Scorecard Scaling · Bernoulli Simulation · Model Risk
View on GitHub →
04 Binomial Tree Option Pricing (Nifty 50)

Implemented a multi-step binomial tree framework using backward induction and live market data. Priced both American and European options using risk-neutral valuation. Integrated NetworkX for interactive graphical tree visualisation of the pricing lattice.

Python · NetworkX · Risk-Neutral Valuation · American Options · European Options · Nifty 50
View on GitHub →
05 Black-Scholes Option Pricing (Nifty 50)

Developed a Python implementation of the Black-Scholes model using dynamically estimated annualised volatility derived from historical returns — moving away from fixed volatility assumptions toward a data-driven approach with real-time market data integration.

Python · Black-Scholes · Volatility Estimation · Real-Time Data · Nifty 50
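The pricing core plus the volatility estimator can be sketched as below; the return series is simulated as a stand-in for the live Nifty 50 data the project uses:

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    return 0.5 * (1 + erf(x / sqrt(2)))

def black_scholes(S, K, T, r, sigma, call=True):
    """Black-Scholes price for a European option on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

def annualised_vol(prices, trading_days=252):
    """Annualised volatility from daily log returns."""
    rets = np.diff(np.log(prices))
    return float(rets.std(ddof=1) * np.sqrt(trading_days))

# Simulated daily closes with ~2% daily volatility (stand-in for market data)
rng = np.random.default_rng(7)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.02, size=5000)))

sigma_hat = annualised_vol(prices)              # ~0.02 * sqrt(252), roughly 0.32
call = black_scholes(100, 100, 1.0, 0.05, 0.2)
put = black_scholes(100, 100, 1.0, 0.05, 0.2, call=False)
```

Feeding `sigma_hat` into `black_scholes` is the data-driven step described above: volatility is estimated from the return history rather than fixed by assumption, and put-call parity gives a quick sanity check on the two prices.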
View on GitHub →

How I can help you

Beyond full-time roles, I take on short-term engagements where I can add genuine value. Whether you need a credit risk model built, an IFRS 9 framework designed, or guidance navigating a finance qualification — I am open to a conversation.

For Professionals
Need a credit risk model built from scratch?
I develop end-to-end PD, LGD, and EAD frameworks in Python — from raw data through WoE/IV feature engineering, scorecard development, and full validation. Aligned with IFRS 9 and Basel III/IV.
Want your existing model independently reviewed?
I provide independent model validation covering discriminatory power (AUROC, Gini, KS), calibration, population stability, and regulatory alignment — with a written findings report.
Building an IFRS 9 ECL framework?
I can help design or review staging criteria, lifetime PD estimation, macro-economic overlays, and ECL calculation methodology for your portfolio.
For Students
Struggling with CFA Level I or II?
Having passed CFA Level II, I can help you build a study plan, work through quantitative methods, fixed income, and derivatives, and develop the exam discipline that actually works.
Applying for an MSc in Finance or Banking?
I completed my MSc Banking & International Finance at Bayes Business School, City, University of London. I can help with school selection, personal statement, and interview preparation.
Want to break into credit risk analytics?
I can help you understand what hiring managers look for, how to build a relevant Python project portfolio, and how to position yourself for quantitative risk roles.

Not sure if what you need fits here? Reach out anyway.

Let's Talk →

Let's connect

I'm actively exploring Quantitative Risk Analytics roles. Whether you're a recruiter, hiring manager, or fellow quant — I'd love to hear from you.

Available for opportunities — open to relocation