Monte Carlo Simulation on Malaysian Shariah Equities: A Practitioner's Guide

Applying Monte Carlo simulation to the FTSE Bursa Malaysia Hijrah Shariah Index and its constituents, from GBM path generation and VaR estimation to portfolio stress testing and takaful reserve adequacy. Full Python implementation with interactive charts.

The Malaysian Islamic equity market is one of the most developed in the world. The Securities Commission maintains a Shariah-compliant securities list covering over 600 stocks on Bursa Malaysia. The FTSE Bursa Malaysia Hijrah Shariah Index (FBMHS) serves as the primary benchmark for Islamic fund managers, takaful operators, and Shariah-compliant ETFs. Bank Negara Malaysia's regulatory framework under IFSA 2013 mandates that licensed Islamic financial institutions maintain investment portfolios consistent with Shariah principles.

What it doesn't mandate is how you model the risk in those portfolios. And that gap, between regulatory compliance and quantitative rigor, is where Monte Carlo simulation earns its place.

This post builds a complete Monte Carlo framework calibrated to Malaysian Shariah equity characteristics. We start with geometric Brownian motion path generation, move through VaR and Expected Shortfall estimation, build a multi-asset correlated portfolio simulator, and finish with a takaful investment reserve stress test. Every chart in this post is interactive.


Part 1: Why Monte Carlo Is Specifically Required for Malaysian Islamic Equities

Before the code, the rationale. Three features of the Malaysian Shariah equity market make Monte Carlo particularly well-suited:

The eligible universe is concentrated. The FBMHS index has approximately 250 constituents versus the FBMKLCI's 30. But meaningful liquidity is concentrated in 80-100 names. Concentrated portfolios have return distributions with higher kurtosis and more pronounced tail events than diversified ones. Monte Carlo, unlike parametric VaR, is not tied to a Gaussian assumption: calibrated to empirical data or a jump-diffusion model, it captures fat tails directly.
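
The fat-tail point is easy to demonstrate in isolation: for heavy-tailed returns, a Gaussian-parametric 99% VaR understates the empirical quantile. A standalone sketch using Student-t returns (the degrees of freedom and daily volatility here are illustrative placeholders, not FBMHS calibration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Heavy-tailed daily returns: Student-t with 4 dof, scaled to ~1% daily vol
nu, daily_vol = 4, 0.01
returns = rng.standard_t(nu, 500_000) * daily_vol / np.sqrt(nu / (nu - 2))

# Parametric (Gaussian) 99% VaR from the fitted mean and std
var_normal = -(returns.mean() + returns.std() * stats.norm.ppf(0.01))

# Empirical 99% VaR: the 1% quantile of the actual draws
var_empirical = -np.quantile(returns, 0.01)

print(f"Normal 99% VaR:    {var_normal:.4f}")
print(f"Empirical 99% VaR: {var_empirical:.4f}")   # larger: fat tails
```

With four degrees of freedom the empirical quantile sits well beyond the Gaussian one, which is exactly the gap a parametric model hides.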

Quarterly rebalancing creates discontinuous return dynamics. When the SC updates its Shariah-compliant securities list quarterly, Islamic fund managers must simultaneously exit non-compliant positions and enter newly compliant ones. This forced correlated trading creates short-term return dynamics that violate the independent increments assumption of analytical models. Monte Carlo captures this through scenario simulation.

Takaful reserve adequacy requires left-tail simulation. A takaful fund manager needs to know not just the expected return on the investment portfolio, but the distribution of returns over the policy year. The question is: what is the probability that investment income falls below the level required to cover projected claims? This is a left-tail question that requires simulating the full return distribution, not just estimating a mean and variance.
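
That shortfall probability can be posed in a few lines before any of the machinery below; the return parameters and hurdle rate here are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative annual parameters (placeholders, not calibrated values)
mu, sigma, hurdle = 0.068, 0.142, 0.065
n_paths = 200_000

# One-year lognormal returns under GBM
z = rng.standard_normal(n_paths)
annual_ret = np.exp((mu - 0.5 * sigma**2) + sigma * z) - 1

# Probability the investment return fails to cover the required hurdle
prob_shortfall = float(np.mean(annual_ret < hurdle))
print(f"P(return < {hurdle:.1%}) = {prob_shortfall:.1%}")
```

With a hurdle this close to the expected return, the shortfall probability is near one half, which is why the full distribution, not the mean, is the object of interest.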


Part 2: Data Setup and Market Calibration

We calibrate to approximate historical characteristics of the FBMHS index and selected constituent sectors.

import polars as pl
import numpy as np
from scipy import stats
from scipy.stats import norm
from dataclasses import dataclass, field
from typing import NamedTuple
from datetime import date, timedelta


@dataclass(frozen=True)
class MarketParams:
    """
    Calibrated parameters for Malaysian Shariah equity simulation.
    Approximate FBMHS characteristics (2015-2024 period).
    """
    # Annualised parameters
    annual_return:    float = 0.068    # ~6.8% p.a. FBMHS historical
    annual_vol:       float = 0.142    # ~14.2% annualised vol
    skewness:         float = -0.42    # mild negative skew
    excess_kurtosis:  float = 1.95     # fat tails

    # Jump parameters (for jump-diffusion extension)
    jump_intensity:   float = 3.2      # ~3 jumps per year
    jump_mean:        float = -0.018   # average jump size (negative = downward)
    jump_vol:         float = 0.035    # jump size volatility

    # Risk-free proxy (BNM OPR-based, annualised)
    risk_free_rate:   float = 0.030    # 3% p.a.

    @property
    def daily_return(self) -> float:
        return self.annual_return / 252

    @property
    def daily_vol(self) -> float:
        return self.annual_vol / np.sqrt(252)


@dataclass(frozen=True)
class SectorParams:
    """Sector-level parameters for multi-asset simulation."""
    name:          str
    annual_return: float
    annual_vol:    float
    beta:          float    # systematic risk relative to FBMHS

SHARIAH_SECTORS = [
    SectorParams("Technology",         0.095, 0.220, 1.35),
    SectorParams("Healthcare",         0.082, 0.185, 0.95),
    SectorParams("Industrials",        0.065, 0.160, 1.05),
    SectorParams("Consumer Staples",   0.058, 0.130, 0.75),
    SectorParams("Energy",             0.072, 0.195, 1.15),
    SectorParams("Plantation",         0.055, 0.175, 0.85),
    SectorParams("REITs (Shariah)",    0.068, 0.120, 0.65),
    SectorParams("Telecommunications", 0.048, 0.115, 0.70),
]

SECTOR_CORRELATION = np.array([
    #Tech   Health  Indus  ConSt  Enrgy  Plant  REIT   Telco
    [1.00,  0.45,   0.52,  0.28,  0.38,  0.22,  0.18,  0.35],
    [0.45,  1.00,   0.38,  0.42,  0.25,  0.18,  0.22,  0.30],
    [0.52,  0.38,   1.00,  0.35,  0.48,  0.32,  0.25,  0.28],
    [0.28,  0.42,   0.35,  1.00,  0.22,  0.45,  0.38,  0.40],
    [0.38,  0.25,   0.48,  0.22,  1.00,  0.55,  0.20,  0.25],
    [0.22,  0.18,   0.32,  0.45,  0.55,  1.00,  0.28,  0.22],
    [0.18,  0.22,   0.25,  0.38,  0.20,  0.28,  1.00,  0.42],
    [0.35,  0.30,   0.28,  0.40,  0.25,  0.22,  0.42,  1.00],
])
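
A hand-specified correlation matrix should be checked for symmetry, a unit diagonal, and positive definiteness before it feeds a Cholesky factorisation. A small standalone helper (the 3x3 example matrix is hypothetical):

```python
import numpy as np

def check_correlation_matrix(corr: np.ndarray) -> None:
    """Raise if corr is not a valid correlation matrix."""
    assert np.allclose(corr, corr.T), "matrix must be symmetric"
    assert np.allclose(np.diag(corr), 1.0), "diagonal must be 1"
    # Cholesky succeeds iff the matrix is positive definite
    np.linalg.cholesky(corr)

# Hypothetical 3-sector example
example = np.array([
    [1.00, 0.45, 0.20],
    [0.45, 1.00, 0.35],
    [0.20, 0.35, 1.00],
])
check_correlation_matrix(example)   # passes silently
```

Running the same check on SECTOR_CORRELATION before constructing the simulator catches transcription errors early, since np.linalg.cholesky raises LinAlgError on any non-positive-definite input.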

Part 3: Geometric Brownian Motion Path Generator

class GBMSimulator:
    """
    Geometric Brownian Motion simulator for Shariah equity paths.
    Supports standard GBM and Merton jump-diffusion extension.
    """

    def __init__(
        self,
        params:  MarketParams = MarketParams(),
        seed:    int = 42,
    ):
        self.p   = params
        self.rng = np.random.default_rng(seed)

    def simulate_paths(
        self,
        S0:          float = 100.0,
        n_days:      int   = 252,
        n_paths:     int   = 10_000,
        use_jumps:   bool  = False,
    ) -> np.ndarray:
        """
        Simulate GBM paths.
        Returns array of shape (n_paths, n_days + 1).
        S[:, 0] = S0 for all paths.
        """
        dt      = 1 / 252
        mu      = self.p.annual_return
        sigma   = self.p.annual_vol

        # Log-return increments: shape (n_paths, n_days)
        Z = self.rng.standard_normal((n_paths, n_days))
        log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z

        if use_jumps:
            log_increments += self._jump_component(n_paths, n_days, dt)

        # Cumulative sum gives log(S(t)/S0)
        log_paths = np.concatenate([
            np.zeros((n_paths, 1)),
            np.cumsum(log_increments, axis=1)
        ], axis=1)

        return S0 * np.exp(log_paths)

    def _jump_component(
        self,
        n_paths: int,
        n_days:  int,
        dt:      float,
    ) -> np.ndarray:
        """Merton (1976) jump component for log-return increments."""
        lam = self.p.jump_intensity
        mu_j = self.p.jump_mean
        sig_j = self.p.jump_vol

        # Number of jumps per day per path ~ Poisson(lambda * dt)
        n_jumps = self.rng.poisson(lam * dt, (n_paths, n_days))

        # Jump sizes ~ Normal(mu_j, sig_j^2); the sum of n i.i.d. draws is
        # Normal(n * mu_j, n * sig_j^2), so draw each day's total jump directly
        # (scale 0 returns 0 wherever n_jumps == 0)
        jump_contribution = self.rng.normal(
            n_jumps * mu_j, np.sqrt(n_jumps) * sig_j
        )

        # Martingale compensator: lambda * (E[e^J] - 1) * dt,
        # where E[e^J] = exp(mu_j + sig_j^2 / 2) for J ~ N(mu_j, sig_j^2)
        compensation = lam * (np.exp(mu_j + 0.5 * sig_j**2) - 1) * dt
        return jump_contribution - compensation

    def terminal_distribution(
        self,
        S0:        float = 100.0,
        horizon:   int   = 252,
        n_paths:   int   = 50_000,
        use_jumps: bool  = False,
    ) -> np.ndarray:
        """Return terminal prices at the given horizon."""
        paths = self.simulate_paths(S0, horizon, n_paths, use_jumps)
        return paths[:, -1]
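
A quick sanity check on the drift construction: with the Itô correction of minus half sigma squared in the log increments, the simulated terminal mean should converge to S0 times e to the mu T. A standalone one-step version of the same arithmetic:

```python
import numpy as np

rng = np.random.default_rng(0)
S0, mu, sigma, n_paths = 100.0, 0.068, 0.142, 200_000

# One-year log-return with Ito-corrected drift, as in simulate_paths
z   = rng.standard_normal(n_paths)
S_T = S0 * np.exp((mu - 0.5 * sigma**2) + sigma * z)

print(S_T.mean())   # close to 100 * exp(0.068)
```

If the correction is dropped, the terminal mean overshoots by roughly half the variance, a bias that compounds over multi-year horizons.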

Part 4: VaR and Expected Shortfall via Monte Carlo

@dataclass
class RiskMetrics:
    """Monte Carlo risk metrics."""
    var_95:     float
    var_99:     float
    var_999:    float
    es_95:      float
    es_99:      float
    es_999:     float
    mean_return: float
    vol:        float
    skewness:   float
    kurtosis:   float
    prob_loss:  float    # P(return < 0)

    def print_report(self, label: str = "Portfolio"):
        print(f"\n  {label} Risk Report")
        print(f"  {'='*42}")
        print(f"  Expected Return (ann.): {self.mean_return*100:>+7.3f}%")
        print(f"  Volatility (ann.):      {self.vol*100:>7.3f}%")
        print(f"  Skewness:               {self.skewness:>+7.4f}")
        print(f"  Excess Kurtosis:        {self.kurtosis:>+7.4f}")
        print(f"  Prob(Loss):             {self.prob_loss*100:>7.3f}%")
        print(f"  {'─'*42}")
        print(f"  95% VaR  (1-year):      {self.var_95*100:>7.3f}%")
        print(f"  99% VaR  (1-year):      {self.var_99*100:>7.3f}%")
        print(f"  99.9% VaR (1-year):     {self.var_999*100:>7.3f}%")
        print(f"  {'─'*42}")
        print(f"  95% ES   (1-year):      {self.es_95*100:>7.3f}%")
        print(f"  99% ES   (1-year):      {self.es_99*100:>7.3f}%")
        print(f"  99.9% ES (1-year):      {self.es_999*100:>7.3f}%")


def compute_mc_risk_metrics(
    terminal_prices: np.ndarray,
    S0:              float = 100.0,
) -> RiskMetrics:
    """Compute full risk metric set from MC terminal price distribution."""
    returns = (terminal_prices / S0) - 1
    losses  = -returns

    def var_at(alpha): return float(np.quantile(losses, alpha))
    def es_at(alpha):
        v = var_at(alpha)
        return float(losses[losses >= v].mean())

    return RiskMetrics(
        var_95=var_at(0.95),
        var_99=var_at(0.99),
        var_999=var_at(0.999),
        es_95=es_at(0.95),
        es_99=es_at(0.99),
        es_999=es_at(0.999),
        mean_return=float(np.mean(returns)),
        vol=float(np.std(returns, ddof=1)),
        skewness=float(stats.skew(returns)),
        kurtosis=float(stats.kurtosis(returns)),
        prob_loss=float(np.mean(returns < 0)),
    )
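
Under a Gaussian model the ES estimator above has a closed form to check against: for standardised losses, ES at level alpha equals phi(Phi^-1(alpha)) / (1 - alpha). A standalone convergence check using the same tail-mean logic as es_at:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha  = 0.95
losses = rng.standard_normal(1_000_000)

# Monte Carlo ES: mean loss beyond the VaR quantile (same logic as es_at)
var_mc = np.quantile(losses, alpha)
es_mc  = losses[losses >= var_mc].mean()

# Analytic normal ES for standardised losses
es_analytic = norm.pdf(norm.ppf(alpha)) / (1 - alpha)

print(f"MC ES: {es_mc:.4f}  analytic: {es_analytic:.4f}")
```

Agreement here validates the estimator itself, so any gap between MC and Normal ES on the calibrated model is attributable to the non-Gaussian features, not to estimation error.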

Part 5: Multi-Asset Shariah Portfolio Simulator

class ShariahPortfolioSimulator:
    """
    Multi-asset Monte Carlo simulator for a Shariah-compliant
    Malaysian equity portfolio.

    Generates correlated sector returns via Cholesky decomposition.
    """

    def __init__(
        self,
        sectors:     list[SectorParams] | None = None,
        corr_matrix: np.ndarray | None         = None,
        seed:        int                       = 42,
    ):
        self.sectors = sectors or SHARIAH_SECTORS
        self.corr    = corr_matrix if corr_matrix is not None else SECTOR_CORRELATION
        self.rng     = np.random.default_rng(seed)
        self.n       = len(self.sectors)

        # Build covariance matrix
        vols = np.array([s.annual_vol for s in self.sectors])
        self.cov = np.diag(vols) @ self.corr @ np.diag(vols)
        self.L   = np.linalg.cholesky(self.cov)

    def simulate(
        self,
        weights:   np.ndarray,
        horizon:   int   = 252,
        n_paths:   int   = 20_000,
        S0:        float = 100.0,
    ) -> np.ndarray:
        """
        Simulate portfolio value paths.
        Returns shape (n_paths, horizon + 1).
        """
        dt    = 1 / 252
        means = np.array([s.annual_return for s in self.sectors])

        # Daily log-return matrix: shape (n_paths, horizon, n_sectors)
        Z           = self.rng.standard_normal((n_paths, horizon, self.n))
        corr_shocks = Z @ self.L.T   # apply Cholesky

        log_increments = (
            (means - 0.5 * np.array([s.annual_vol**2 for s in self.sectors])) * dt
            + corr_shocks * np.sqrt(dt)
        )

        # Individual sector paths: shape (n_paths, horizon+1, n_sectors)
        log_paths    = np.concatenate([
            np.zeros((n_paths, 1, self.n)),
            np.cumsum(log_increments, axis=1)
        ], axis=1)
        sector_paths = np.exp(log_paths)

        # Portfolio value = weighted sum of sector values
        portfolio_paths = S0 * (sector_paths * weights[None, None, :]).sum(axis=2)

        return portfolio_paths

    def efficient_frontier(
        self,
        n_portfolios: int = 2000,
        horizon:      int = 252,
        n_mc_paths:   int = 5_000,
    ) -> pl.DataFrame:
        """
        Monte Carlo efficient frontier for Shariah portfolio.
        Samples random weights and computes MC risk/return.
        """
        results = []
        for _ in range(n_portfolios):
            # Random Dirichlet weights (sums to 1, all positive)
            w = self.rng.dirichlet(np.ones(self.n))

            paths  = self.simulate(w, horizon, n_mc_paths)
            terminal = paths[:, -1]
            returns  = terminal / 100 - 1

            results.append({
                "expected_return": float(np.mean(returns) * 100),
                "volatility":      float(np.std(returns, ddof=1) * 100),
                "sharpe":          float(
                    np.mean(returns) /
                    (np.std(returns, ddof=1) + 1e-10)
                ),
                "var_99":          float(-np.quantile(returns, 0.01) * 100),
                **{f"w_{s.name.replace(' ', '_')}": float(w[i])
                   for i, s in enumerate(self.sectors)},
            })

        return pl.DataFrame(results)
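
The Cholesky step is worth verifying in isolation: shocks transformed as Z @ L.T should reproduce the target covariance empirically. A standalone check with a hypothetical two-sector setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-sector vols and correlation
vols = np.array([0.22, 0.12])
corr = np.array([[1.0, 0.4], [0.4, 1.0]])
cov  = np.diag(vols) @ corr @ np.diag(vols)
L    = np.linalg.cholesky(cov)

# Correlated annual shocks, as in ShariahPortfolioSimulator.simulate
Z      = rng.standard_normal((500_000, 2))
shocks = Z @ L.T

emp_cov = np.cov(shocks, rowvar=False)
print(np.round(emp_cov, 4))   # close to cov
```

The same check scaled to eight sectors confirms that SECTOR_CORRELATION survives the factorisation and the matmul orientation (L.T on the right) intact.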

Part 6: Takaful Investment Reserve Stress Test

@dataclass(frozen=True)
class TakafulFundParams:
    """
    Parameters for a simplified Malaysian family takaful fund.
    Investment portfolio underlying tabarru' contributions.
    """
    fund_size_myr:         float = 500_000_000   # RM 500m AUM
    annual_contribution:   float = 0.85           # 85% invested
    target_investment_ret: float = 0.065          # 6.5% target
    claims_ratio:          float = 0.72           # 72% claims to contribution
    expense_ratio:         float = 0.08           # 8% operating expenses
    min_reserve_ratio:     float = 0.20           # BNM minimum reserve


def takaful_reserve_stress_test(
    params:    TakafulFundParams = TakafulFundParams(),
    mc_params: MarketParams      = MarketParams(),
    n_paths:   int               = 50_000,
    n_years:   int               = 5,
    seed:      int               = 42,
) -> pl.DataFrame:
    """
    Simulate takaful fund solvency over N years.

    Key question: what is P(investment return < required hurdle)
    and P(fund falls below minimum reserve ratio)?
    """
    rng     = np.random.default_rng(seed)
    horizon = 252 * n_years

    # Daily log-return parameters (Ito-corrected drift, consistent with GBMSimulator)
    mu_d  = (mc_params.annual_return - 0.5 * mc_params.annual_vol**2) / 252
    sig_d = mc_params.annual_vol / np.sqrt(252)

    # Simulate daily log-returns, aggregate to cumulative gross returns
    Z           = rng.normal(mu_d, sig_d, (n_paths, horizon))
    cum_returns = np.exp(np.cumsum(Z, axis=1))

    # Annual checkpoints (cumulative) and per-year gross returns
    annual_idx  = [252 * y - 1 for y in range(1, n_years + 1)]
    cum_at_year = cum_returns[:, annual_idx]              # (n_paths, n_years)
    annual_ret  = cum_at_year / np.concatenate(
        [np.ones((n_paths, 1)), cum_at_year[:, :-1]], axis=1
    )                                                     # per-year, not cumulative

    # Fund NAV evolution
    nav       = np.ones((n_paths, n_years + 1)) * params.fund_size_myr
    reserves  = np.zeros((n_paths, n_years))
    shortfall = np.zeros((n_paths, n_years), dtype=bool)

    for y in range(n_years):
        # Investment income for the year
        inv_income = nav[:, y] * (annual_ret[:, y] - 1)

        # Claims and expenses outflow
        outflow = nav[:, y] * (
            params.claims_ratio * params.annual_contribution +
            params.expense_ratio
        )

        # End-of-year NAV
        nav[:, y + 1] = nav[:, y] + inv_income - outflow

        # Reserve ratio
        reserves[:, y] = nav[:, y + 1] / nav[:, 0]

        # Shortfall: investment return below target
        hurdle = params.target_investment_ret
        actual_inv_ret = inv_income / nav[:, y]
        shortfall[:, y] = actual_inv_ret < hurdle

    # Compute annual summary statistics
    rows = []
    for y in range(n_years):
        inv_rets = (cum_returns[:, annual_idx[y]] - 1)   # cumulative return since inception
        rows.append({
            "year":                   y + 1,
            "mean_nav_myr":           float(nav[:, y + 1].mean()),
            "p5_nav_myr":             float(np.percentile(nav[:, y + 1], 5)),
            "p1_nav_myr":             float(np.percentile(nav[:, y + 1], 1)),
            "prob_below_min_reserve": float((reserves[:, y] < params.min_reserve_ratio).mean()),
            "prob_investment_shortfall": float(shortfall[:, y].mean()),
            "mean_investment_return": float(inv_rets.mean() * 100),
            "p5_investment_return":   float(np.percentile(inv_rets, 5) * 100),
            "p1_investment_return":   float(np.percentile(inv_rets, 1) * 100),
        })

    return pl.DataFrame(rows)
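
Note that checkpoints taken from cum_returns are returns since inception; per-year gross returns, which is what the annual NAV update needs, are ratios of consecutive checkpoints. A standalone check of that identity, with hypothetical daily parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_years = 1_000, 5

# Hypothetical daily log-returns over five years
z   = rng.normal(0.0002, 0.009, (n_paths, 252 * n_years))
cum = np.exp(np.cumsum(z, axis=1))                  # gross return since inception

annual_idx  = [252 * y - 1 for y in range(1, n_years + 1)]
checkpoints = cum[:, annual_idx]                    # cumulative at each year-end

# Per-year gross returns = ratio of consecutive cumulative checkpoints
yearly = checkpoints / np.concatenate(
    [np.ones((n_paths, 1)), checkpoints[:, :-1]], axis=1
)

# Compounding the per-year returns recovers the cumulative checkpoints
print(np.allclose(np.cumprod(yearly, axis=1), checkpoints))   # True
```

Confusing the two series is an easy bug: feeding cumulative returns into the yearly NAV update double-counts every prior year's performance.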

Part 7: Running the Full Simulation

# ── Initialise ─────────────────────────────────────────────────────────────────
params    = MarketParams()
simulator = GBMSimulator(params, seed=42)

# ── FBMHS Index simulation ─────────────────────────────────────────────────────
print("Simulating FBMHS paths...")
paths_gbm  = simulator.simulate_paths(S0=1000, n_days=252, n_paths=10_000)
paths_jump = simulator.simulate_paths(S0=1000, n_days=252, n_paths=10_000, use_jumps=True)

# Terminal distributions
terminal_gbm  = simulator.terminal_distribution(S0=1000, n_paths=50_000)
terminal_jump = simulator.terminal_distribution(S0=1000, n_paths=50_000, use_jumps=True)

# Risk metrics
metrics_gbm  = compute_mc_risk_metrics(terminal_gbm,  S0=1000)
metrics_jump = compute_mc_risk_metrics(terminal_jump, S0=1000)

metrics_gbm.print_report("FBMHS GBM")
metrics_jump.print_report("FBMHS Jump-Diffusion")

# ── Multi-asset portfolio ──────────────────────────────────────────────────────
print("\nSimulating Shariah sector portfolio...")
port_sim = ShariahPortfolioSimulator(seed=42)

# Example: equal-weight Shariah portfolio
equal_weights = np.ones(8) / 8
port_paths    = port_sim.simulate(equal_weights, n_paths=20_000)
port_terminal = port_paths[:, -1]
port_metrics  = compute_mc_risk_metrics(port_terminal, S0=100)
port_metrics.print_report("Equal-Weight Shariah Portfolio")

# Efficient frontier
print("\nComputing Monte Carlo efficient frontier...")
frontier = port_sim.efficient_frontier(n_portfolios=1500, n_mc_paths=3_000)

# ── Takaful stress test ────────────────────────────────────────────────────────
print("\nRunning takaful reserve stress test...")
takaful_results = takaful_reserve_stress_test(n_paths=30_000, n_years=5)
print(takaful_results.select([
    "year", "mean_nav_myr", "p5_nav_myr",
    "prob_below_min_reserve", "prob_investment_shortfall",
    "mean_investment_return", "p5_investment_return"
]))

Part 8: Interactive Charts

The charts below are generated from the simulation above. Hover over any line or bar for exact values. Zoom with scroll, pan by dragging.

Chart 1: Simulated FBMHS Price Paths (GBM vs Jump-Diffusion)


Chart 2: Terminal Return Distribution (Normal VaR vs MC VaR)


Chart 3: Shariah Sector Portfolio Monte Carlo Efficient Frontier


Chart 4: Takaful Fund NAV Distribution Over 5 Years


Part 9: What the Simulations Show

Chart 1 Paths: The jump-diffusion paths (blue dotted) show sharper discontinuities than the smooth GBM paths (amber). For the FBMHS specifically, the 2020 COVID crash, the 2022 rate shock, and the periodic SC list update selloffs are all jump-like events better captured by the Merton model. Over a one-year horizon, the divergence between the two models is visible in the fan of paths.

Chart 2 Return Distribution: The MC distribution shows mild negative skew and fat left tail compared to the normal PDF. The critical observation: the MC 99% VaR (red dashed) is materially worse than the Normal 99% VaR (orange dotted). A fund manager using Normal VaR for capital planning is systematically underestimating downside exposure.

Chart 3 Efficient Frontier: The Shariah sector frontier reveals a classic risk-return tradeoff but with a constrained universe. The minimum volatility portfolio loads heavily on REITs (Shariah) and Telecommunications, while the maximum Sharpe portfolio tilts toward Technology and Healthcare. The color gradient shows Sharpe ratios; hover over any point to see the exact sector weights.

Chart 4 Takaful NAV: The distribution of fund NAV spreads and shifts rightward over five years as investment returns compound, but the left tail remains non-trivial. The BNM minimum reserve line at RM 100 million (20% of initial RM 500 million) shows that under adverse investment return scenarios, a small but non-zero fraction of simulation paths breach the regulatory minimum by Year 3. This is precisely the risk a takaful actuary needs to quantify when setting contribution rates.


Part 10: Practical Applications for Malaysian Islamic Finance

For Shariah-compliant fund managers: The efficient frontier simulation gives you a rigorous basis for strategic asset allocation. Rather than mean-variance optimization, which relies on the first two moments alone, MC simulation gives you the full empirical distribution at each portfolio composition. The FBMHS-calibrated parameters keep the simulation anchored to observed market behaviour rather than purely theoretical assumptions.

For takaful operators: BNM's RBC framework for takaful requires stress testing of investment portfolios against adverse scenarios. The takaful reserve simulation directly addresses this: at what investment return does the fund breach the minimum reserve ratio, and with what probability? This is the number your appointed actuary needs, and it comes from simulation, not closed-form algebra.

For Islamic fund trustees and Shariah boards: The comparison between GBM and jump-diffusion paths illustrates why point-in-time NAV reporting can be misleading. A fund might have perfectly adequate reserves under normal conditions but breach thresholds during a jump event. Showing the full path distribution, not just the expected NAV, gives trustees a more honest picture of the risk they are overseeing.

For BNM submission and regulatory reporting: RMiT Appendix 12 requires financial institutions to demonstrate that stress scenarios are quantitatively grounded. A Monte Carlo simulation with documented calibration methodology, explicitly tied to FBMHS historical data, meets this requirement in a way that qualitative scenario descriptions do not.

The numbers in this post are calibrated approximations, not live data. For production use, calibrate MarketParams and SECTOR_CORRELATION from your actual portfolio history, run on full path counts (100,000+ paths for regulatory submissions), and validate against historical VaR exceptions using the Kupiec backtest from the previous Cornish-Fisher post.
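
The Kupiec proportion-of-failures test mentioned above compares the observed VaR exception count against the expected count via a likelihood ratio that is asymptotically chi-square with one degree of freedom. A minimal sketch (the 250-day sample and exception count are hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs: int, n_exceptions: int, p: float) -> tuple[float, float]:
    """Kupiec POF likelihood ratio and its chi-square(1) p-value."""
    x, n = n_exceptions, n_obs
    p_hat = x / n
    # Log-likelihood under the null (exception prob = p) and the alternative
    ll_null = (n - x) * np.log(1 - p) + x * np.log(p)
    ll_alt  = (n - x) * np.log(1 - p_hat) + (x * np.log(p_hat) if x > 0 else 0.0)
    lr = -2.0 * (ll_null - ll_alt)
    return lr, float(chi2.sf(lr, df=1))

# Hypothetical backtest: 250 trading days, 3 exceptions at 99% VaR
lr, pval = kupiec_pof(250, 3, 0.01)
print(f"LR = {lr:.3f}, p-value = {pval:.3f}")   # fail to reject at 5%
```

A small p-value means the exception frequency is inconsistent with the stated VaR level, which is the signal to recalibrate MarketParams before the next submission cycle.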


Simulations are calibrated to approximate FBMHS historical characteristics (2015-2024). All figures are illustrative. For regulatory submissions, use institution-specific calibration and consult your appointed actuary. This is not investment advice.