Quantitative Risk Scoring Methodology

Plain-English Summary: Every potential collateral asset is put under the microscope of our Risk Framework. We quantify each of the above risk categories using data-driven metrics, and then combine them into a single Risk Score (R) for that asset on a scale from 0 to 10. A score of 0 means virtually no risk (an idealized case), while 10 indicates extremely high risk. In practice, most real collaterals will fall somewhere in between. This scoring methodology allows us to compare very different assets on an apples-to-apples risk basis. The higher an asset’s R score, the more conservative we are in allowing it to be used in the system (i.e. we impose stricter parameters or lower exposure). Conversely, a low R score means we can be more permissive because the asset is safer. The Risk Scoring Flow illustrates how we go from raw data to final risk score to collateral parameters.

Risk Factor Metrics

For each risk category, we define one or more quantitative metrics to evaluate the asset. We gather historical data and current on-chain data to score the asset in that dimension. Here are the key factors and examples of how we quantify them:

  • Market Volatility: We measure the asset’s price fluctuations over various time horizons. For example, we calculate the 30-day price volatility (σ₃₀d) and compare it to a benchmark (σ_bench) such as a major reference asset’s volatility. An example metric could be VolatilityScore = σ₃₀d / σ_bench. An asset that’s twice as volatile as our benchmark gets a higher score (indicating more risk). We also look at historical max drawdowns (how much the price fell from peak to trough) to understand tail risk. (A minimal sketch of this calculation appears after this list.)

  • Liquidity & Slippage: We assess the asset’s trading depth both on decentralized exchanges (DEXs) and centralized exchanges (CEXs). Two metrics we use are daily trading volume and slippage. For slippage, we simulate selling a standard amount (for instance, an amount equal to a typical large liquidation size) and see how much the price would move. The LiquidityScore is higher (riskier) for assets that show big slippage or have low volume. We might say Asset A has $100M/day volume and <1% slippage for a $1M sale (score = 1, very liquid), whereas Asset B has $1M/day volume and 10% slippage for the same sale (score = 9, very illiquid).

  • Smart-Contract Risk: This is harder to quantify with pure numbers, but we use proxies to score it. We consider the number of security audits, time since deployment, the amount of value locked in the contract, and whether there have been past incidents. For example, an asset running on a contract that’s been live for 2+ years with billions in value locked and multiple audits might get a low risk score (1 or 2). A new protocol’s token with complex code and perhaps only one audit could score high (8+). We normalize these inputs into a ContractRisk score from 1 (safe) to 10 (very risky code).

  • Oracle & Data Reliability: We examine how the asset’s price is determined in our system. If there’s a battle-tested oracle network (like Chainlink) providing a reliable price feed with adequate updates, the OracleRisk is low. If we must rely on a single exchange’s price or a less proven oracle mechanism, risk is higher. We also simulate scenarios like oracle delays or manipulations (e.g. what if the oracle price lags actual market by 5 minutes during a crash?). An asset that could be subject to price manipulation (perhaps low liquidity asset where an attacker could pump price and trick the oracle) will get a high Oracle & Data risk score.

  • Concentration & Governance: To quantify this, we look at metrics like top holder concentration (what percent of the token supply is held by the top 10 addresses), and the asset’s governance structure (if applicable). For example, if a token’s top 3 holders own 50% of supply, that’s a high concentration risk score. We also factor in any centralization in governance: e.g. if an admin key or multisig can change the token’s rules or freeze transfers, that boosts the risk score. On the flip side, a widely distributed token (no single holder over say 5%) with community-driven governance (no unilateral control) would score very low in this category.

  • Peg & Systemic Factors: For a pegged asset (like a stablecoin or a staked derivative), we quantify peg deviation risk by looking at historical price deviation from the peg and the mechanisms supporting the peg. For instance, we might check the largest deviation of a stablecoin from $1 over the past 2 years (e.g. did it ever drop to $0.95? how long did it take to recover?). That informs a PegRisk score. Systemic risk is a bit more qualitative, but we attempt to gauge it by how correlated the asset is with other collateral (using correlation coefficients) and by scenario analysis: if this asset fails, how much would it impact the rest of the system? For example, ETH has systemic importance (if ETH crashed, many other collaterals likely crash too), so we treat that as a consideration in stress tests (though we might not directly give ETH a high “score” for that since it’s more about planning worst-case scenarios). For something like a tri-pool LP token composed of major stablecoins, the peg risk of each underlying stablecoin contributes to this asset’s overall risk.

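To make the Market Volatility metric concrete, here is a minimal sketch in Python. The function names and the choice of daily log returns are our illustrative assumptions, not a prescribed implementation:

```python
import numpy as np


def volatility_score(asset_prices, bench_prices, window=30):
    """Ratio of the asset's 30-day volatility to a benchmark's: sigma_30d / sigma_bench.

    Both inputs are daily closing prices, oldest first. The ratio of daily
    log-return standard deviations is scale-free, so no annualization is needed.
    """
    def recent_sigma(prices):
        log_returns = np.diff(np.log(np.asarray(prices, dtype=float)))
        return log_returns[-window:].std(ddof=1)

    return recent_sigma(asset_prices) / recent_sigma(bench_prices)


def max_drawdown(prices):
    """Largest peak-to-trough decline as a fraction (0.35 means a 35% drawdown)."""
    prices = np.asarray(prices, dtype=float)
    running_peak = np.maximum.accumulate(prices)
    return ((running_peak - prices) / running_peak).max()
```

For intuition: an asset whose 30-day volatility is twice the benchmark’s would come out with VolatilityScore ≈ 2.0, which is then mapped onto the 1-to-10 sub-score scale described below.
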
Normalization and Weighting: Each of the above factors is scored on a comparable 1 to 10 scale (where 1 = very low risk, 10 = very high risk) relative to our portfolio of assets. We then apply a weighting to each category to reflect its importance. For example, we might weight Market Volatility at 20% of the total score, Liquidity at 20%, Smart Contract risk at 15%, Oracle at 10%, Concentration/Governance at 15%, and Peg/Systemic at 20%. (These weights are set by our risk team based on experience and can be adjusted as we learn more or as market conditions change.)

Once we have all the sub-scores, we compute the composite Risk Score R as a weighted sum:

R = w_vol · VolatilityScore + w_liquidity · LiquidityScore + w_contract · ContractRisk +
    w_oracle · OracleRisk + w_concentration · ConcentrationRisk + w_peg · PegRisk

We then normalize R to a 0–10 scale for simplicity. This final R is the primary number we use to compare assets. For intuition, a very safe asset like ETH might end up with R around ~2–3, a somewhat riskier one like a complex LP token might be ~6–7, and we generally would not list an asset if its R is extremely high (above, say, ~8–9) unless there are extraordinary mitigations or strategic reasons – it would be put on hold until risk is reduced.
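
As an illustration of the weighted sum above, here is a minimal sketch using the example weights from this section; the sub-score values are hypothetical:

```python
# Example category weights from this section (they sum to 1.0); the live
# values are maintained by the risk team and may differ.
WEIGHTS = {
    "volatility": 0.20,
    "liquidity": 0.20,
    "contract": 0.15,
    "oracle": 0.10,
    "concentration": 0.15,
    "peg_systemic": 0.20,
}


def composite_risk_score(sub_scores):
    """Weighted sum of the six sub-scores (each on a 1-10 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[name] * sub_scores[name] for name in WEIGHTS)


# Hypothetical sub-scores for a mid-risk asset:
example = {
    "volatility": 6, "liquidity": 5, "contract": 4,
    "oracle": 3, "concentration": 5, "peg_systemic": 6,
}
print(composite_risk_score(example))  # 5.05 -> a mid-range R on the 0-10 scale
```

Since the weights sum to 1 and each sub-score lies between 1 and 10, the raw weighted sum already lands on the 0–10 scale; any further normalization is a presentation choice.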

From Risk Score to Parameters

The beauty of having a numerical Risk Score is that it directly drives the collateral parameters in a rules-based way. In StableUnit, each collateral asset has four key risk parameters: Minimum Collateral Ratio, Liquidation Threshold, Stability Fee, and Debt Ceiling. We use the risk score R to set each of these, ensuring higher risk gets tighter limits. In broad terms:

  • Minimum Collateral Ratio (MCR): This is the minimum required collateralization for a loan using that asset. A safer asset (low R) can have a lower MCR (meaning you can borrow more against it). Risky assets (high R) require a higher MCR (you must over-collateralize by a larger amount). For example, an extremely stable asset might be allowed at 110% (almost 1:1 collateral to debt), whereas a very volatile token could need 200% or more (two dollars of collateral per dollar of debt). This ratio is usually set such that even under large price swings, there’s a cushion. We also align with industry norms – e.g. large-cap assets like ETH often end up around 150% in similar systems, and we wouldn’t go below that for something that volatile without strong justification.

  • Liquidation Threshold: This is the collateral ratio at which the protocol will consider the position undercollateralized and trigger liquidation. In practice, we often set the Liquidation Threshold equal to or slightly above the MCR. For instance, if MCR is 150%, we might set the liquidation threshold at 150% as well (meaning if you dip even a hair below 150%, you’re eligible for liquidation), or we might give a small buffer (say liquidation at 145% while requiring new loans at 150% – giving a bit of grace so that a position isn’t instantly liquidated the moment it’s opened at exactly 150%). Higher risk assets tend to have liquidation thresholds very close to their MCR; we don’t allow much wiggle room because we want to liquidate early if things go south. Lower risk assets might afford a slightly looser gap. These choices are informed by R and by stress tests – we ask “how fast can this asset drop or how quickly can we liquidate it?” and set thresholds accordingly.

  • Stability Fee: The Stability Fee is the interest rate borrowers pay (in our stablecoin) for borrowing against that collateral. We use this to price the risk – riskier assets carry a higher Stability Fee (a risk premium) to compensate the system and to discourage leveraging very risky positions. The fee is annualized (for example, 5% per year). An asset with a very low R might have a low fee, even approaching 0% for something ultra-safe or strategically important (for example, some protocols set 0% fee on certain stablecoin collaterals to encourage usage). On the other hand, an asset with a high R could have a fee in the high single digits or more. For instance, a blue-chip like ETH might have a 2–4% stability fee, whereas a DeFi LP token with much greater risk might have an 8% or 10% fee. We calibrate this such that the fees collected also help build the Insurance Fund buffers for that asset’s risk (more on that later). The exact fee vs R mapping can be linear or tiered – e.g. every point of R might add, say, 1% to the base rate, but we also consider market rates (if similar platforms charge X% for that asset, we remain competitive unless risk dictates otherwise).

  • Debt Ceiling: This is the maximum amount of stablecoins that can be minted using that asset as collateral across the whole system. Even a relatively safe asset shouldn’t have an infinite ceiling – we cap exposure to manage concentration risk at the protocol level. Assets with low risk scores (and large, liquid markets) get high debt ceilings (tens or hundreds of millions of USD equivalent). For example, we might allow up to $100 million of stablecoin debt against ETH if R is low. Assets with higher R get a much smaller cap, maybe a few million, to limit the absolute impact if that asset fails. If an asset is new or unproven, we start with a very conservative ceiling (like $1M or $5M) and only raise it after observing performance over time. The debt ceiling is a direct lever to contain risk: even if an asset has a bad event, the losses are bounded by the ceiling. We revisit and adjust ceilings periodically (especially if the asset grows in market size or proves stability, we might raise it, or if concerns grow, we might freeze or lower it).

These four parameters are interrelated: a high Stability Fee can partially compensate for risk, so an asset might occasionally be given a slightly lower collateral ratio than it otherwise would. In general, though, we err on the side of caution on all parameters when R is high. The combination of MCR, Liquidation Threshold, Stability Fee, and Debt Ceiling determined by the risk score ensures that each collateral asset operates within safe boundaries. If an asset’s risk increases, we adjust these parameters (the process is outlined in the Parameter Adjustment section below).

To summarize: the higher the Risk Score (R), the more restrictive the parameters. A low-risk asset enjoys more flexibility (lower collateral requirements, lower fees, higher usage limits), whereas a high-risk asset is kept on a tight leash (lots of collateral required, higher borrowing cost, and a hard cap on exposure).
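
As a purely illustrative sketch of this rules-based mapping from R to the four parameters, the code below uses invented breakpoints and slopes (the real values are set and periodically reviewed by the risk team); it only demonstrates the monotone relationship, higher R meaning stricter parameters:

```python
from dataclasses import dataclass


@dataclass
class CollateralParams:
    min_collateral_ratio: float   # e.g. 1.50 means 150%
    liquidation_threshold: float  # e.g. 1.45 means 145%
    stability_fee: float          # annualized, e.g. 0.04 means 4% per year
    debt_ceiling_usd: float       # max stablecoin debt mintable against this asset


def params_from_risk_score(r: float) -> CollateralParams:
    """Illustrative monotone mapping: higher R -> stricter parameters."""
    # Required collateralization rises from ~110% at R = 0 toward ~240% at R = 10.
    mcr = 1.10 + 0.13 * r
    # Small grace buffer below the MCR for low-risk assets, essentially none at high R.
    buffer = max(0.0, 0.05 - 0.005 * r)
    liquidation_threshold = mcr - buffer
    # Roughly one percentage point of annual fee per point of R, capped at 10%.
    fee = min(0.10, 0.01 * r)
    # Debt ceiling shrinks sharply as risk grows (hypothetical tiers, in USD).
    if r <= 3:
        ceiling = 100_000_000
    elif r <= 6:
        ceiling = 25_000_000
    elif r <= 8:
        ceiling = 5_000_000
    else:
        ceiling = 1_000_000
    return CollateralParams(mcr, liquidation_threshold, fee, ceiling)


print(params_from_risk_score(2.5))
# -> ~143% MCR, ~139% liquidation threshold, 2.5% fee, $100M ceiling
```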

Objective: Analyze the Probability Distribution of Collateral-caused Bad Debt Accrual Events.

Purpose: To model how the distribution of bad-debt losses that a specific collateral asset can inflict on StableUnit (which consists of n underlying collateral assets) responds to changes in the correlation between those assets’ probabilities of causing Bad Debt Accrual Events.

Underlying math behind the modeling of the loss-probability distribution:

  • The distribution is built on the principle that each collateral asset may cause a bad debt accrual event and, if such an event occurs, exposes the system to a loss. In this framing, the probability of default (PD) represents the likelihood of an insolvency event, and the loss given default (LGD) accounts for the unrecovered portion (bad debt) of the asset exposure.

  • Various approaches exist to model the losses. Some rely on closed-form formulas, but the more commonly used technique for deriving such a distribution is Monte Carlo simulation. For each simulation g = 1, …, G, the simulated portfolio loss is computed.

  • Repeating this random process many times yields a distribution of losses, which can be plotted as the curve shown above for humans to read and base recommendations on.

  • This curve highlights a typical asymmetric profile where small losses have a higher probability of occurrence than large ones.

  • At the same time, this chart also points out the distinction between expected loss and unexpected loss.

  • The latter is obtained as the difference between a given quantile of the distribution (i.e., the credit value at risk, VaR, at confidence level 1 − α) and the expected loss.
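
A minimal sketch of the Monte Carlo approach described above, in Python/NumPy with entirely hypothetical inputs: n = 3 collateral assets, each with a probability of causing a bad debt accrual event (PD), a loss given default (LGD), and an exposure, plus a single correlation parameter rho that couples the assets’ default drivers through a one-factor Gaussian copula (our modeling assumption here, not a prescribed method). Each simulation g = 1, …, G produces a portfolio loss; the expected loss, the (1 − α) credit VaR, and the unexpected loss (VaR minus expected loss) are then read off the simulated distribution:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical inputs for n = 3 collateral assets.
pd_ = np.array([0.02, 0.05, 0.10])       # probability of causing a bad debt event
lgd = np.array([0.40, 0.50, 0.60])       # loss given default (unrecovered fraction)
exposure = np.array([100e6, 25e6, 5e6])  # stablecoin debt against each asset, USD
rho = 0.30                               # correlation of the assets' default drivers
G = 100_000                              # number of Monte Carlo simulations
alpha = 0.01                             # tail probability, i.e. 99% confidence VaR

# One-factor Gaussian copula: each asset's latent driver mixes a shared systemic
# shock with an idiosyncratic one, so rho controls how often assets default together.
systemic = rng.standard_normal((G, 1))
idiosyncratic = rng.standard_normal((G, len(pd_)))
latent = np.sqrt(rho) * systemic + np.sqrt(1 - rho) * idiosyncratic

# An asset defaults in a simulation when its latent driver falls below its PD quantile.
defaults = latent < norm.ppf(pd_)

# Portfolio loss for each simulation g = 1, ..., G.
losses = (defaults * lgd * exposure).sum(axis=1)

expected_loss = losses.mean()
value_at_risk = np.quantile(losses, 1 - alpha)   # credit VaR at the (1 - alpha) quantile
unexpected_loss = value_at_risk - expected_loss  # UL = VaR - EL

print(f"EL = ${expected_loss:,.0f}  VaR(99%) = ${value_at_risk:,.0f}  UL = ${unexpected_loss:,.0f}")
```

Re-running the sketch with different values of rho shows how the tail of the loss distribution, and hence the unexpected loss, grows as defaults become more correlated, which is the sensitivity this analysis is meant to capture.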