Smart contract cover pricing algorithm

Hi all. First wanted to say thank you to Hugh for sharing the notes on the general direction for smart contract cover in this thread.

I wanted to follow up specifically on the cover pricing algorithm. It’s good to hear a new pricing model is under development, and I think it is useful to explore the problems with the current model openly with the community, such that any new ideas surfaced can be incorporated into the thinking around the new model.

The recent acceleration in DeFi bug exploits and Nexus’ claim payouts have resulted in losses on claims paid being greater than premiums earned. While the sample size is small, this suggests NXM holders are not adequately being compensated for the risk we are underwriting. As Hugh mentioned in the thread linked above, part of the reason for this is the sensitivity of the current model to staking, but there are additional factors worth considering as well.

If we assume all constants in the pricing algo stay fixed, we can simplify the pricing algorithm to the following:

Cover cost (% of cover amount, p.a.) = 1.3% / E

where E is the exponentiated, weighted combination of the risk factors (staked NXM, contract age, number of transactions, average ETH held, and gas complexity). E must be between 0.01 and 1, yielding a minimum cover cost of 1.3% for the least risky contracts and 130% for the most risky contracts.

Currently, most (if not all?) contracts are priced at 1.3% of the cover amount, implying that all contracts covered are equally risky, which we know is not the case.

To illustrate graphically, here’s a chart that shows the change in price that corresponds to a change in each of these parameters alone, while keeping the rest fixed at their starting value. I chose the starting values to reflect what I would consider an extremely risky contract and the end values match those of Uniswap.


For context, the risky contract is 1 month old, has 500 NXM staked against it, has processed 1 transaction, holds an average of 10 ETH over the last 30 days, and is roughly 50% more complex based on gas units consumed on deployment than MakerDAO’s MCD.

Under the current pricing algorithm, staking an additional 1,500 NXM ($4,500 at $3/NXM) reduces the cost of cover to 1.3% – without changing any other parameters. Similarly, if the average amount of ETH in the contract were increased by 7,350 (almost $1.5m) that alone would also reduce the price to 1.3%. Gas complexity changes and # of transactions have a more limited impact.
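To make the sensitivity concrete, here is a rough sketch of the simplified relationship in code. The clamp bounds and the 1.3% floor come from the formula above; the weights and the offset in the risk score are made-up placeholders, not the actual parameters Nexus uses, so the exact numbers it prints are purely illustrative.

```python
import math

def cover_cost(staked_nxm, age_days, tx_count, avg_eth, gas_complexity,
               weights=(0.004, 0.001, 0.0001, 0.0005, -0.5)):
    """Annual cover cost as a fraction of the cover amount (simplified)."""
    w_stake, w_age, w_tx, w_eth, w_gas = weights
    # Hypothetical risk score: more stake, age, usage and balance push the
    # exponentiated value E towards 1 (cheaper cover); complexity pushes it down.
    score = (w_stake * staked_nxm + w_age * age_days + w_tx * tx_count
             + w_eth * avg_eth + w_gas * gas_complexity)
    e = min(1.0, max(0.01, math.exp(score - 5)))   # clamp E to [0.01, 1]
    return 0.013 / e                               # 1.3% floor, 130% cap

# The "risky contract" from above: 1 month old, 500 NXM staked,
# 1 transaction, 10 ETH average balance, ~1.5x MCD's deployment gas.
print(cover_cost(500, 30, 1, 10, 1.5))    # a high price
# Staking an additional 1,500 NXM (2,000 total), all else equal, pulls the
# price down to the 1.3% floor with these placeholder weights.
print(cover_cost(2000, 30, 1, 10, 1.5))   # 0.013
```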

This analysis highlights a few issues:

  • Adding ETH to the contract doesn’t make it inherently less risky.
  • Average ETH held by the contract can be gamed to lower the cost of cover (not sure exactly how Nexus measures this).
  • It is way too cheap to stake and lower the price.
  • A contract that gets older isn’t necessarily less risky. You could argue that age implies the contract has avoided hacks so far and is therefore likely safer, but an old contract with no ETH in it can suddenly start receiving deposits and usage after a year; the algorithm would treat it as safe when it clearly would not be.
  • Number of txs is more reflective of use case than risk.
  • Gas complexity is directionally correct but low gas doesn’t mean low risk. A simple contract can be written just as poorly as a complicated one, though of course it is more likely that there are bugs the more complex the code.

Hugh has said that a goal of the new pricing model will be to eliminate non-stake variables. I strongly support that idea, as each variable exposes an attack vector. This pushes all the burden of pricing onto risk assessors and means that if someone wants to dramatically reduce the cost of cover for a risky contract, they have to assume that risk by putting skin in the game and adding capital to the pool. It also makes it easier for the mutual to calibrate the pricing algorithm appropriately, something that is difficult to do with 5 variables of different weights.

Interestingly, a change to a model where only NXM stake is considered would also increase demand for NXM, thus helping to capitalize the mutual. Contracts that want to self-insure would have to acquire a decent NXM stake. Because of the mutual’s capital efficiency and pooled risk model, this NXM stake should still be significantly more attractive than self-insurance. This would also be a step towards putting a greater emphasis on Nexus Mutual’s other (often overlooked) product: NXM itself. A larger purchase of NXM could in theory be considered both an investment and a vehicle for self-insurance via a mutualized model.

A question for the community would be whether there is good reason to keep any of the other variables outside of NXM stake. I am in favor of simplicity, but if we were to keep any variables, my preference would be for it to be the product of contract age and avg ETH held, as these together could be viewed as a bug bounty that hasn’t been collected – a sign of relative security similar to the argument that Bitcoin is a $200B bug bounty that no one can figure out how to collect.
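For illustration, that combined variable could be as simple as the product below; the scaling and variable names are assumptions on my part, not anything from the current algorithm.

```python
def uncollected_bounty_proxy(age_days: float, avg_eth_held: float) -> float:
    """Rough 'uncollected bug bounty' signal: the longer a contract has held
    meaningful value without being exploited, the stronger the (weak) evidence
    of relative security. The product and its scaling are purely illustrative."""
    return age_days * avg_eth_held

# A contract that has averaged 10,000 ETH for two years scores far higher than
# one holding 10 ETH for a month.
print(uncollected_bounty_proxy(730, 10_000))  # 7,300,000
print(uncollected_bounty_proxy(30, 10))       # 300
```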

I’m sure I’m off on some of my assumptions here and these are just initial thoughts – Hugh & team are much closer to this so let me stop here and open the thread up to feedback and discussion.


Thanks for writing this up @aleks. I also tend to favor simplicity, and intuitively NXM staked should be the best proxy for perceived risk and therefore dictate pricing, at least in the long term. I do wonder, though, whether it might price new systems out of the market as they come online, and what a pricing curve vs. staked NXM would look like.

This would obviously be reliant on an active and lucrative staking market, which hopefully will emerge with the new pooled staking model.


That’s a very good point. The main risk of pricing out new systems is that they take their business to other insurance providers and Nexus loses by being overly conservative.

If NXM were the only variable, it would have some parameter that determines its impact on price and that parameter could in theory be voted on by NXM holders on some regular interval – perhaps similarly to MakerDAO’s stability fee. This parameter would likely be impossible to get right out of the gate, and would probably need to be calibrated over a longer period of time, with the goal of establishing a healthy balance between profitability and growth.
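As a concrete illustration of what I mean by a governance-tuned parameter, here is a minimal sketch; the parameter name, the 50,000 ETH default, and the per-interval cap on moves are all hypothetical, not a proposal.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PricingParams:
    # Hypothetical single governed parameter: the staked NXM value (in ETH
    # terms) at which cover on a contract reaches the minimum price.
    low_risk_stake_threshold_eth: float = 50_000.0

def apply_governance_vote(params: PricingParams, voted_threshold: float,
                          max_step: float = 0.25) -> PricingParams:
    """NXM holders vote the parameter up or down each interval (similar in
    spirit to MakerDAO's stability fee); moves are capped per interval so
    pricing can be calibrated gradually rather than swung abruptly."""
    lo = params.low_risk_stake_threshold_eth * (1 - max_step)
    hi = params.low_risk_stake_threshold_eth * (1 + max_step)
    new_value = min(hi, max(lo, voted_threshold))
    return replace(params, low_risk_stake_threshold_eth=new_value)
```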

The key to this would be to ensure that growth (in terms of new contracts underwritten) is sustainable, e.g. we don’t take excessive risk that ends up bankrupting the mutual. In the event that we find that we’re pricing out new systems that actually are good candidates for insurance, then we should be able to ease pricing.

My preference is to start on the conservative side because at this stage, Nexus needs capital pool growth more than cover growth and the biggest risk for NXM’s performance is that it takes excessive risk. If risk is curtailed and managed carefully in a more dynamic manner going forward, I think that makes the asset more attractive for investors.


Thanks @aleks ! Great post and it’s something I’ve been thinking about for a while as well.

Before getting into details it’s worth highlighting my broader goals for the pricing mechanic:

  1. Simplicity - a bare minimum number of factors, pushing complexity off-chain to risk assessors (who can build complex risk models as they wish)
  2. Decentralisation - simple enough, and reliant only on on-chain info, so that it can run entirely on-chain (vs the current off-chain quote engine)
  3. Flexibility - the ability to price not only smart contract cover but any type of risk, so we can roll out new products quickly

Also, it’s worth noting that the pricing mechanic has to solve two problems:

  1. Price the Risk
  2. Determine how much capacity to offer on the risk

Right now pricing is based on the factors in the original post and is supplemented by staking. Staking isn’t necessary to offer cover or a price; e.g. if nobody were staking on Compound, Nexus would still offer cover, as Compound has been sufficiently “battle-tested”. However, for new contracts staking is required, and we’re seeing lots of potential demand here that staking would need to enable.

Note: this also plays into reward levels for staking. If staking isn’t actually required then rewards should be lower, but if staking is required then they need to be higher (currently this aspect isn’t differentiated in the reward mechanism).

Capacity is actually a bit trickier to get right. We are trying to balance two items: a) using staking as a mechanism to indicate when it is worthwhile deploying the mutual’s pool to a particular risk and taking advantage of the pooled liquidity model, vs. b) stopping the attack of staking a small amount on a buggy contract, taking out a large cover, sacrificing the stake, and collecting the claim payout.

To achieve all of the above goals we’ve come up with a new pricing framework that is entirely based on staked NXM and doesn’t rely on anything else. We’re currently tweaking some parameters and doing some more testing, so we will share more details soon. It’s also worth noting that our new pricing framework only really works with pooled staking, so we have to release that first.

Overview of New Pricing
Staked NXM is the only factor that influences both price and capacity.

Price
Simple linear interpolation between a max price (TBA, ~25% p.a.?) and a min price (1.3%) based on how much value is staked. The min price is achieved once a certain amount of NXM value (in ETH terms) has been staked.
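In code, that interpolation might look roughly like this; the ~25% ceiling is the tentative figure mentioned above, while the stake threshold at which the floor kicks in is a placeholder since it hasn’t been announced.

```python
MAX_PRICE = 0.25               # tentative maximum price (~25% p.a., TBA)
MIN_PRICE = 0.013              # minimum price (1.3% p.a.)
LOW_RISK_STAKE_ETH = 50_000.0  # placeholder: staked NXM value (in ETH terms)
                               # at which the minimum price is reached

def annual_price(staked_value_eth: float) -> float:
    """Linear interpolation between MAX_PRICE (nothing staked) and MIN_PRICE
    (stake at or above the low-risk threshold)."""
    fraction = min(1.0, staked_value_eth / LOW_RISK_STAKE_ETH)
    return MAX_PRICE - (MAX_PRICE - MIN_PRICE) * fraction
```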

Capacity
When someone stakes, the capacity will initially be the staked NXM value. This restricts the attack, as loss of stake = claim payout. Then over time the mutual will release more capacity, up to a maximum multiple of the stake, e.g. 3x, which proxies for the battle-tested aspect.

If a stake is withdrawn, all capacity, including the additional mutual pool capacity, is immediately withdrawn. This allows the mutual to respond quickly to changes in risk: if stakers withdraw, the price should increase and capacity should be taken away.

Conceptually it looks something like this (block time on the x-axis):
[chart omitted]
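A rough sketch of the capacity side under the same assumptions; the ramp length is a placeholder and the 3x multiple is the example figure from above.

```python
def available_capacity(stake_value_eth: float, blocks_since_stake: int,
                       ramp_blocks: int = 1_000_000, max_multiple: float = 3.0,
                       stake_withdrawn: bool = False) -> float:
    """Capacity starts at the staked value and is released linearly over time
    up to max_multiple * stake (the battle-tested proxy). If the stake is
    withdrawn, all capacity (including the mutual's extra capacity) goes to
    zero immediately."""
    if stake_withdrawn:
        return 0.0
    progress = min(1.0, blocks_since_stake / ramp_blocks)
    return stake_value_eth * (1.0 + (max_multiple - 1.0) * progress)
```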

A note on staking rewards:
Under this new model Risk Assessors are providing 25% of the capital/capacity (if we use a 3x factor) and 100% of the expertise (pricing is fully reliant on stake). Currently, Risk Assessors provide a varying amount of capital (quite often none is actually required) and a varying amount of expertise (new protocols → lots, older protocols → not much).

Currently Risk Assessors get 20% of the cover cost, but in the new model the mutual is much more reliant on them to be successful. 25% of the capital and 100% of the expertise indicates rewards should be around 50% of cover cost.
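For a rough sense of what that difference means in absolute terms, here is a hypothetical example; the cover size and price are made up purely for illustration.

```python
# Hypothetical example: a 100 ETH cover bought for one year at 2.6% p.a.
cover_cost = 100 * 0.026             # 2.6 ETH of premium
current_reward = 0.20 * cover_cost   # 0.52 ETH to Risk Assessors today (20%)
proposed_reward = 0.50 * cover_cost  # 1.30 ETH under a ~50% share
```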

We haven’t come to a view on the starting factors yet but hopefully this provides more insight on the structure and thinking behind our proposed next steps.


Thanks for sharing Hugh, really helpful. I think the new pricing model looks elegant and solves a lot of the issues with the old one.

One question - to your point on balancing between relying on stake vs stopping the stake attack, does pooled staking eliminate that attack vector?

My sense is that if it is possible to write an intentionally buggy contract, stake X on it and then take out cover for 3X some months later, exploit the bug and collect the claim, then people will do it until there are no funds left to take. It seems like the way to mitigate that risk would be by taking into consideration the diversity of stakes, e.g. if you want to perform such an attack, you will need to control a certain number of stakers. That number could be picked in a way that an attack is deemed prohibitively difficult from a coordination perspective.
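To illustrate the kind of diversity condition I mean, here is a minimal sketch; the threshold, the address-based notion of “distinct stakers”, and the idea of gating the mutual’s extra capacity on it are all just illustrative.

```python
from typing import Dict

MIN_DISTINCT_STAKERS = 5  # illustrative threshold, not a concrete proposal

def extra_capacity_unlocked(stakes_by_address: Dict[str, float],
                            min_stakers: int = MIN_DISTINCT_STAKERS) -> bool:
    """Only release capacity beyond the raw staked value once enough distinct
    stakers (addresses used here as a rough, sybil-imperfect proxy) back the
    contract, so a single party can't cheaply stake-and-exploit."""
    return sum(1 for s in stakes_by_address.values() if s > 0) >= min_stakers
```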

(Are there any materials or discussions floating around that go over plans around Pooled Staking?)

One question - to your point on balancing between relying on stake vs stopping the stake attack, does pooled staking eliminate that attack vector?

I don’t think it can ever be completely eliminated but we need to get it to controllable levels. The proposed approach is very similar to how the current model works. If you write an intentionally buggy contract then you have to expose funds for a period of time before additional capacity is released (effectively battle-testing the contract), so someone else could hack it first and the would-be attacker would lose.

There is also a clause in the cover wording which means claims assessors can deny claims if the contract has been used to intentionally claim on the cover. This is obviously hard to assess but it does give a social backstop against clear “fraud”.

(Are there any materials or discussions floating around that go over plans around Pooled Staking?)

See here; it gives a reasonable amount of detail. Some minor items may have changed now based on implementation but the general structure is the same.


Will the stake withdrawal period be dependent on the duration of covers that have been purchased for a specific contract? The scenario that I’m worried about is an attack where someone stakes to increase capacity on a buggy contract, buys cover against it, and then unstakes, waits the 90 days and redeems their stake for ETH. At that point, would it not be possible to have little or no NXM, but still retain a solid amount of cover on a contract?


Will the stake withdrawal period be dependent on the duration of covers that have been purchased for a specific contract? The scenario that I’m worried about is an attack where someone stakes to increase capacity on a buggy contract, buys cover against it, and then unstakes, waits the 90 days and redeems their stake for ETH. At that point, would it not be possible to have little or no NXM, but still retain a solid amount of cover on a contract?

Good pick-up. We’ve been discussing this point as a team, as it is relevant for both pricing and the interaction with the pooled staking implementation. It’s a careful balance:

  • We want a fixed withdrawal period for the simplicity of pooled staking and from a user perspective.
  • We don’t want to open the mutual up to too much risk.
  • If we’re too restrictive and only offer cover where it is fully backed by staking it will be too capital intensive.

Our suggested approach is therefore to stop the worst cases and have a balanced approach for the rest, where settings can be tweaked if necessary.

The biggest issue is where someone buys a very long cover period but the stake gets withdrawn very quickly. With the new pricing implementation we’re very likely to cap the length of cover at 1 year, which would address the worst case. The other lever we have is the ramp-up period on capacity, which caps the amount of cover; the longer it is, the smaller the potential issue.

Based on user feedback I’m of a relatively strong opinion that the withdrawal period needs to stay at 90 days. From a risk perspective we want this as long as possible but it is quite a long lock-up relative to other protocols, so any longer feels like a hard sell to stakers.

Overall, I’m wary of this issue but not overly concerned about it, as the stakers must still take on material risk (90 days).