PD/LGD/EAD Model Development & Validation · IFRS 9 & Basel III/IV
Credit Risk professional with 2+ years of industry experience, specialising in PD/LGD/EAD model development, IFRS 9 staging, Basel III/IV IRB validation, and model risk governance.
I'm a Credit Risk professional with 2+ years of industry experience at Infospectrum Ltd (London, acquired by Lloyd's List Intelligence), delivering counterparty credit risk assessments, financial analysis, and regulatory reporting across global shipping, energy, and commodities markets. As an independent researcher, I have since built and validated a series of Python credit risk projects — spanning end-to-end EL model development, IFRS 9 staging, Basel III IRB validation, and documented model risk analysis.
In independent research, I have built an end-to-end Expected Loss framework on 460,000+ Lending Club records (AUROC 0.702); a production-style EL pipeline on 307,511 Home Credit applications across 5 relational tables with two-stage LGD and IFRS 9 staging (Gini 0.481, Basel III IRB validated); and a documented model failure analysis examining experimental design flaws applicable to SR 11-7 model risk governance.
I'm open to Quantitative Risk Analytics roles, with a strong interest in leading financial institutions across Europe and India.
Quantitative Credit Risk Researcher — Independent. Actively building production-grade credit risk frameworks in Python across PD/LGD/EAD development, IFRS 9, Basel III IRB validation, and model risk governance. Open to full-time roles.
2+ years at Infospectrum Ltd, London in counterparty credit risk, due diligence, and regulatory reporting.
Python (scikit-learn, pandas, statsmodels, numpy, NetworkX) · SQL · Excel/VBA · Git
Currently based in India · Open to Relocation · Available immediately
End-to-end PD, LGD, EAD model development using WoE-based feature engineering, interpretable scorecard scaling, and logistic regression.
ECL calculation, staging criteria design, and lifetime PD estimation. Alignment of model outputs with IFRS 9 accounting requirements.
IRB approach modelling, parameter estimation, back-testing, and regulatory documentation for supervisory review processes.
Discriminatory power, calibration, stability analysis. CAP curves, Gini, KS, AUROC, and PSI diagnostics across retail and corporate portfolios.
Binomial tree (American & European options) and Black-Scholes implementations using live market data and dynamically estimated volatility.
Production-ready quantitative code with strong emphasis on interpretability, reproducibility, and clean model documentation.
The Weight of Evidence transformation maps fine/coarse-classed bins of each variable onto an interpretable, monotonic scale. I use Information Value for feature selection and build scorecards scaled to a 300–850 range, aligning with regulatory expectations for interpretability.
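A minimal sketch of the WoE and IV calculation for a single coarse-classed feature; the column names and sample data below are illustrative stand-ins, not taken from the project datasets.

```python
import numpy as np
import pandas as pd

# Minimal sketch of WoE / IV computation for one coarse-classed feature.
# Column names ("grade", "default_flag") and the sample data are illustrative
# assumptions, not values from the original projects.
df = pd.DataFrame({
    "grade":        ["A", "A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "default_flag": [0,   0,   0,   1,   0,   1,   0,   1,   1,   0],
})

grouped = df.groupby("grade")["default_flag"].agg(["count", "sum"])
grouped["bads"] = grouped["sum"]
grouped["goods"] = grouped["count"] - grouped["sum"]

# Distribution of goods and bads per bin (small constant avoids log(0)).
eps = 1e-6
dist_good = grouped["goods"] / grouped["goods"].sum()
dist_bad = grouped["bads"] / grouped["bads"].sum()

grouped["woe"] = np.log((dist_good + eps) / (dist_bad + eps))
iv = ((dist_good - dist_bad) * grouped["woe"]).sum()

print(grouped[["goods", "bads", "woe"]])
print(f"Information Value: {iv:.3f}")
```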
Probability of Default models built using logistic regression on WoE-transformed features, with calibration to long-run default rates. Validation covers discriminatory power (Gini, AUROC, KS), calibration (Hosmer-Lemeshow), and population stability (PSI).
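A condensed sketch of the PD estimator and its discrimination checks; synthetic data stands in here for the WoE-transformed design matrix, but the real pipeline feeds WoE features into the same estimator.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced sample standing in for WoE-transformed features.
X, y = make_classification(n_samples=20_000, n_features=15, weights=[0.9], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

pd_model = LogisticRegression(max_iter=1000)
pd_model.fit(X_train, y_train)

scores = pd_model.predict_proba(X_test)[:, 1]
auroc = roc_auc_score(y_test, scores)
gini = 2 * auroc - 1                                        # Gini = 2 * AUROC - 1
ks = ks_2samp(scores[y_test == 1], scores[y_test == 0]).statistic

print(f"AUROC {auroc:.3f} | Gini {gini:.3f} | KS {ks:.3f}")
```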
Loss Given Default modelled using two-stage methodology separating zero-loss from positive-loss borrowers, with workout LGD and recovery rate analysis. EAD via credit conversion factors. Both aligned with IFRS 9 and Basel IRB requirements.
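A minimal sketch of the two-stage LGD logic plus a CCF-based EAD on simulated defaulted-loan data; the feature set, CCF value, and data-generating assumptions are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Simulated defaulted-loan data: features, cured cases, and observed LGDs.
rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 6))                       # borrower / facility features
zero_loss = rng.random(n) < 0.35                  # cured / fully recovered cases
lgd_obs = np.where(zero_loss, 0.0, rng.beta(2, 3, n))

# Stage 1: probability that a defaulted exposure produces any loss at all.
stage1 = LogisticRegression(max_iter=1000).fit(X, (lgd_obs > 0).astype(int))
p_loss = stage1.predict_proba(X)[:, 1]

# Stage 2: expected LGD conditional on a positive loss.
mask = lgd_obs > 0
stage2 = LinearRegression().fit(X[mask], lgd_obs[mask])
lgd_given_loss = np.clip(stage2.predict(X), 0, 1)

lgd_hat = p_loss * lgd_given_loss                 # unconditional expected LGD

# EAD via a credit conversion factor applied to the undrawn limit.
drawn = rng.uniform(1_000, 20_000, n)
undrawn = rng.uniform(0, 10_000, n)
ccf = 0.75                                        # illustrative CCF assumption
ead_hat = drawn + ccf * undrawn

print(f"Mean modelled LGD: {lgd_hat.mean():.3f} | Mean EAD: {ead_hat.mean():,.0f}")
```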
Independent validation covering discriminatory strength via CAP and ROC curves, calibration via back-testing against observed defaults, and stability via PSI and CSI. Results documented in regulatory-grade validation reports.
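A compact sketch of the PSI check between a development sample and a monitoring sample of scores; the score distributions here are simulated, and the stability thresholds quoted are the conventional rule-of-thumb bands.

```python
import numpy as np

def psi(expected, actual, n_bins=10):
    """PSI over quantile bins of the development (expected) score distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range scores
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)           # avoid log(0)
    act_pct = np.clip(act_pct, 1e-6, None)
    return np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct))

rng = np.random.default_rng(1)
dev_scores = rng.normal(600, 50, 10_000)             # development-sample scores
mon_scores = rng.normal(585, 55, 8_000)              # shifted monitoring sample

value = psi(dev_scores, mon_scores)
print(f"PSI = {value:.3f}  (<0.10 stable, 0.10-0.25 monitor, >0.25 shift)")
```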
Option pricing using binomial tree (backward induction, risk-neutral valuation) for American and European options, and Black-Scholes with dynamically estimated annualised volatility from historical returns, applied to live Nifty 50 market data.
Production-oriented Python for credit risk: data pipelines in pandas, statistical modelling in statsmodels and scikit-learn, visualisation with matplotlib. Strong emphasis on clean, documented, reproducible code structured for regulatory review.
Developed a full Expected Loss framework on a 460,000+ record dataset (17 features selected from 74 via WoE/IV analysis). Built an interpretable WoE-based scorecard (300–850 range) for PD estimation, with LGD and EAD modelling completing the regulatory-grade pipeline. Model validated out-of-sample using Gini, KS, and CAP curve diagnostics — benchmarked against Basel III IRB standards.
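A minimal sketch of the scorecard scaling step; the PDO, base score, and base odds shown are illustrative assumptions rather than the parameters used in the project.

```python
import numpy as np

# Illustrative scaling parameters: points-to-double-odds, base score and base odds.
pdo = 50.0            # points needed to double the good:bad odds
base_score = 600.0    # score assigned at the base odds
base_odds = 20.0      # good:bad odds at the base score

factor = pdo / np.log(2)
offset = base_score - factor * np.log(base_odds)

def probability_to_score(pd_hat):
    """Map predicted default probability to a score, clipped to the 300-850 range."""
    odds_good = (1.0 - pd_hat) / pd_hat
    return np.clip(offset + factor * np.log(odds_good), 300, 850)

print(probability_to_score(np.array([0.02, 0.10, 0.40])))
```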
View on GitHub →
Production-style end-to-end Expected Loss framework across 307,511 loan applications and 5 relational tables. Built a WoE/IV pipeline from scratch across 91 candidate features (43 selected), a two-stage LGD model separating zero-loss from positive-loss borrowers, and a CCF-based EAD model. IFRS 9 staging applied across the full portfolio. Basel III validation confirms the model Gini of 0.481 exceeds the 0.35 IRB minimum threshold.
No overfitting: Gini gap of −0.005 · Mean LGD 43.1% · Total ECL Provision: 133,872,222 · IFRS 9 — Stage 1: 213 · Stage 2: 58,239 · Stage 3: 3,051
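A simplified sketch of the ECL aggregation and staging logic behind figures like these; the staging thresholds, portfolio, and numbers below are illustrative and do not reproduce the Home Credit criteria or results.

```python
import numpy as np
import pandas as pd

# Simulated portfolio with 12-month and lifetime PDs, LGD, EAD and staging drivers.
rng = np.random.default_rng(7)
n = 1_000
portfolio = pd.DataFrame({
    "pd_12m":        rng.uniform(0.01, 0.20, n),
    "pd_lifetime":   rng.uniform(0.05, 0.60, n),
    "lgd":           rng.uniform(0.20, 0.70, n),
    "ead":           rng.uniform(5_000, 50_000, n),
    "days_past_due": rng.choice([0, 0, 0, 45, 120], n),
    "sicr_flag":     rng.random(n) < 0.15,   # significant increase in credit risk
})

# Illustrative staging rule: 90+ DPD -> Stage 3, SICR or 30+ DPD -> Stage 2.
conditions = [
    portfolio["days_past_due"] >= 90,
    portfolio["sicr_flag"] | (portfolio["days_past_due"] >= 30),
]
portfolio["stage"] = np.select(conditions, [3, 2], default=1)

# 12-month ECL for Stage 1, lifetime ECL for Stages 2 and 3.
pd_used = np.where(portfolio["stage"] == 1, portfolio["pd_12m"], portfolio["pd_lifetime"])
portfolio["ecl"] = pd_used * portfolio["lgd"] * portfolio["ead"]

print(portfolio.groupby("stage")["ecl"].agg(["count", "sum"]).round(0))
```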
View on GitHub →
Built a complete PD modelling pipeline on the German Credit dataset (1,000 borrowers), then systematically documented why the model was always going to fail — not at the surface level of weak features, but at the level of experimental design. The target variable was simulated using the same features used to model it, creating a self-referential relationship that no amount of tuning could fix. The pipeline is technically correct throughout; the experiment is not.
This distinction — between pipeline correctness and experimental validity — is directly relevant to model validation, SR 11-7 compliance, and internal model risk frameworks.
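A generic illustration of that failure mode, not the exact simulation used on the German Credit data: when the target is generated from the same features later used to model it, out-of-sample metrics look excellent purely by construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 10))

# "Simulated" default flag driven directly by the same features plus a little noise.
logits = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=1_000)
y = (logits > np.median(logits)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# The pipeline is technically correct, yet the near-perfect AUROC only reflects
# the self-referential target, not any real-world signal.
print(f"Out-of-sample AUROC: {auroc:.3f}")
```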
Implemented a multi-step binomial tree framework using backward induction and live market data. Priced both American and European options using risk-neutral valuation. Integrated NetworkX for interactive graphical tree visualisation of the pricing lattice.
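A condensed sketch of the backward-induction step; the spot, strike, rate and volatility inputs are illustrative rather than live market data, and the NetworkX visualisation layer is omitted.

```python
import numpy as np

def binomial_option(S0, K, r, sigma, T, steps=200, kind="put", american=True):
    """Cox-Ross-Rubinstein tree priced by backward induction under risk-neutral valuation."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))                  # up factor
    d = 1.0 / u                                      # down factor
    p = (np.exp(r * dt) - d) / (u - d)               # risk-neutral up probability
    disc = np.exp(-r * dt)

    # Terminal asset prices and payoffs.
    j = np.arange(steps + 1)
    prices = S0 * u**j * d**(steps - j)
    payoff = np.maximum(prices - K, 0) if kind == "call" else np.maximum(K - prices, 0)

    # Step backwards through the lattice, checking early exercise for American options.
    for step in range(steps - 1, -1, -1):
        payoff = disc * (p * payoff[1:] + (1 - p) * payoff[:-1])
        if american:
            j = np.arange(step + 1)
            prices = S0 * u**j * d**(step - j)
            exercise = np.maximum(prices - K, 0) if kind == "call" else np.maximum(K - prices, 0)
            payoff = np.maximum(payoff, exercise)
    return payoff[0]

print(f"American put: {binomial_option(100, 105, 0.05, 0.25, 1.0):.4f}")
```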
View on GitHub →
Developed a Python implementation of the Black-Scholes model using dynamically estimated annualised volatility derived from historical returns — moving away from fixed volatility assumptions toward a data-driven approach with real-time market data integration.
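A minimal sketch of Black-Scholes pricing with annualised volatility estimated from historical closes; the price series below is simulated in place of the live Nifty 50 feed used in the project.

```python
import numpy as np
from scipy.stats import norm

def historical_vol(closes, trading_days=252):
    """Annualised volatility from daily log returns of a close-price series."""
    log_returns = np.diff(np.log(closes))
    return log_returns.std(ddof=1) * np.sqrt(trading_days)

def black_scholes_call(S0, K, r, sigma, T):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Simulated close-price series standing in for downloaded market data.
rng = np.random.default_rng(3)
closes = 22_000 * np.exp(np.cumsum(rng.normal(0.0004, 0.01, 250)))

sigma_hat = historical_vol(closes)
price = black_scholes_call(S0=closes[-1], K=closes[-1] * 1.02, r=0.065, sigma=sigma_hat, T=0.25)
print(f"Estimated annualised vol: {sigma_hat:.2%} | Call price: {price:,.2f}")
```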
View on GitHub →
Beyond full-time roles, I take on short-term engagements where I can add genuine value. Whether you need a credit risk model built, an IFRS 9 framework designed, or guidance navigating a finance qualification — I am open to a conversation.
Not sure if what you need fits here? Reach out anyway.
Let's Talk →
I'm actively exploring Quantitative Risk Analytics roles. Whether you're a recruiter, hiring manager, or fellow quant — I'd love to hear from you.