Flexible Pricing

I’ve been doing a lot of thinking on pricing recently. Since launch we’ve tried a few different models and have learnt a lot. We also need to shift the pricing engine on-chain to make integrations easier and increase decentralisation.

Ideally we’d like a pricing (and capacity) mechanism that achieves the following:

  1. Flexibility - the pricing result is independent of the capacity offered, so we can offer high price/high capacity as well as low price/low capacity. Bootstrapping new covers can be difficult today, as we have to reach material staking volume (50,000 NXM for the minimum price).

  2. Dynamic - pricing automatically increases when capacity is in short supply.

  3. Simple - as simple and intuitive as possible, so that complicated risk models can be run off-chain with simple representations on chain.

  4. Gas Efficient - to reduce gas costs as much as possible.

I’ll address the first two points in more detail, as 3 and 4 are rather self-explanatory.

Flexibility

Right now stakers provide one input (the number of NXM staked) and the system needs two outputs: price and capacity. This means we are limited in the option space of price vs capacity, as we have to fix the relationship via a price curve. What we need is the ability for stakers to signal the price at which they would like to offer capacity.

This can be achieved by taking a weighted average of the price signals, using stake as the weight. But it opens up a potential issue: a staker can set an arbitrarily high (or low) price to push the weighted average up (or down) to exactly their desired level.
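As a minimal sketch (numbers are illustrative only, not proposed parameters), the stake-weighted average and the issue it creates look like this:

```python
def weighted_average_price(stakes):
    """Stake-weighted average of staker price signals.

    stakes: list of (nxm_staked, price_signal) tuples, prices as fractions.
    """
    total_stake = sum(nxm for nxm, _ in stakes)
    if total_stake == 0:
        return None  # no stake, no price
    return sum(nxm * price for nxm, price in stakes) / total_stake

# 900 NXM honestly signalling 2%, plus 100 NXM deliberately signalling 20%,
# drags the cover price from 2% up to 3.8%:
print(weighted_average_price([(900, 0.02), (100, 0.20)]))  # 0.038
```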

Related to this, there are going to be an increasing number of cases where one, or relatively few, stakers are likely to set the price for a particular risk. For example, consider a business that would like to distribute its own white-labelled product on Nexus (eg earthquake cover) and wants to be able to set prices efficiently at the level of 1km areas. We want the protocol to be able to cater for this future use case, which is potentially very large.

Suggested Approach

Have two categories of stakers: 1) regular stakers and 2) long-term aligned (LTA) stakers.

Regular stakers can choose price, but with restrictions: for example, when staking they choose their price input as either up, down, or same. So they have some input into price setting but cannot arbitrarily shift things to precisely where they want without substantial weight (NXM staked).

LTA stakers have more freedom: within certain bounds (eg max price 100%, min price 1%) they can choose the price as they see fit. We need a mechanism to ensure long-term alignment, so to become an LTA staker you need to have NXM locked for a long time, eg 12 months or more.
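A minimal sketch of the distinction, using the bounds mentioned above and an entirely hypothetical step size for regular stakers:

```python
MIN_PRICE = 0.01  # 1% floor, per the bounds above
MAX_PRICE = 1.00  # 100% cap, per the bounds above
STEP = 0.001      # hypothetical per-period step for regular stakers

def regular_price_input(current_price, direction):
    """Regular stakers can only nudge the price: 'up', 'down' or 'same'."""
    delta = {"up": STEP, "down": -STEP, "same": 0.0}[direction]
    return min(MAX_PRICE, max(MIN_PRICE, current_price + delta))

def lta_price_input(desired_price):
    """LTA stakers set the price freely, clamped to the protocol bounds."""
    return min(MAX_PRICE, max(MIN_PRICE, desired_price))
```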

For our current products LTA stakers aren’t actually required, but it’s something we should be building the protocol for. Nexus requires diversification of risks to be successful, which means we need to handle the long tail of markets, and we’d be quite foolish to expect deep, liquid staking markets on the long tail. The liquidity fragmentation is too great, so we need to build for a future where experts on particular business lines handle the majority of pricing.

Dynamic

The concept here is that supply and demand push prices up and down automatically, like yields on Compound/Aave. Conceptually this is quite appealing, but it also suffers from the liquidity fragmentation issue described above. Supply/demand can only fully drive price where liquidity is sufficient, which is unlikely on the long tail of risks and would result in volatile or nonsensical prices. However, there is one area where dynamic pricing makes a lot of sense: when capacity on Nexus is near maximum for a particular risk, the mutual should automatically charge more for its capacity.

Suggested Approach

Apply a formulaic pricing loading to the price determined via staking when capacity on Nexus is near the global maximum. This should work like a bonding curve, so that the loading is determined by how much capacity is taken rather than the spot price.

Conceptually it would look like this:
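A rough sketch, assuming a hypothetical surge threshold at 90% utilisation and a linear loading above it; averaging the loading over the utilisation range a cover actually consumes (bonding-curve style) makes the result depend on how much capacity is taken, not on the spot value:

```python
SURGE_START = 0.9  # hypothetical: loading kicks in at 90% utilisation
SURGE_SLOPE = 2.0  # hypothetical steepness of the loading

def surge_loading(util_before, util_after, steps=1000):
    """Average the surge loading over the utilisation range consumed."""
    width = util_after - util_before
    total = 0.0
    for i in range(steps):
        u = util_before + width * (i + 0.5) / steps  # midpoint rule
        total += SURGE_SLOPE * max(0.0, u - SURGE_START)
    return total / steps

def cover_price(staking_price, util_before, util_after):
    """Base price determined via staking plus the formulaic surge loading."""
    return staking_price + surge_loading(util_before, util_after)

# A cover pushing utilisation from 85% to 95% pays a loading averaged over
# the whole range, even though the spot loading at 85% is zero:
print(cover_price(0.02, 0.85, 0.95))  # ~0.045
```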

Summary

After experimenting a lot with Gaussian curves and other more structured approaches, a simple weighted average of price inputs, using stake as the weight, is likely to be the most flexible and the simplest to implement. When combined with a pricing surge factor we can capture the benefits of dynamic supply/demand pricing where there is deeper liquidity, without being hampered by the need for liquidity on the long tail.

Next steps:

  • Work through details and scenarios in terms of sensitivity of up/down inputs.
  • Confirm whether integration with the current staking system can work or if a more ground-up revamp of staking is required.
  • More detail on how the LTA group can be selected.

Thanks for the write-up - really interesting topic. A few initial thoughts:

  • With white-labelling and re-insurance for new cover types in mind, it makes perfect sense to have informed stakers in the driver’s seat for setting a price range, with regular stakers then voting up/down within that range.
  • Especially for cover types that have sufficient claim history, it would be good to give stakers the opportunity to assess a new opportunity for the mutual by studying this history and coming up with a price range at which they think new members would be interested in buying cover. Depending on the cover type it might also make sense to require that the historical data and the approach from which the price is derived are published.
  • On the other hand, I’m still struggling to believe that even the most informed stakers on our smart contract cover can reasonably predict how many exploits of what magnitude will occur in a certain portfolio to define an individual price for any listed protocol.
  • I would think this remains the case until we have a certain amount of statistics, which allows us to price more individually to write more cover. I suppose this will also be needed once protocols get more mature and yields get lower.
  • It might also be the case that probability and magnitude of exploits become less individually pronounced once certain minimum standards for audits, etc. become more universally distributed.

In short: if there is enough data to come up with an individual price, it’s great to price more appropriately to open up new cover types and support the growth of the mutual and the areas it’s active in. But if not, I would think a mutually agreed and initially conservative pricing and capacity function might be a better temporary option than having a few significant stakers take a guess on behalf of other stakers.

This allows different stakers to run their different scenarios (which in the absence of statistics can be very different) to assess whether they feel comfortable staking or not. The clearer the picture gets as we collect more evidence over time, as with the smart contract cover, the more the formulas can be fine-tuned and subsequently replaced by the individual pricing approach.


Hey Hugh – love hearing your thoughts as always. I definitely agree that the pricing mechanism should generally try to optimize for all four of the priorities you listed.

I think the ability to dynamically offer staking capacity at various yields would significantly increase the overall depth of cover capacity available. I don’t agree as much with having different optionality / functionality for long-term vs short-term holders, as I think it will hamper adoption and create confusion, though I agree with the underlying goal of preventing so much fluidity in staking that pricing and capacity become volatile.

I think a way to circumvent any gaming of the system with false bids is to incorporate elements of a reverse dutch auction or the bidding system the US Treasury uses to auction off new debt issues. A simple scenario could look like this:

  • For a given protocol, stakers can indicate quantities of NXM they would stake at a given price (either multiple separate blocks of NXM @ each rate or, if ever possible, allow them to construct their own supply curves on a simple UI)
    • Prices & stake amounts can only be changed every X {3-14} days
  • As cover is purchased, the lowest cost NXM is effectively treated as the seller of the cover (see the sketch after this list)
    • Some/the majority of the premium can potentially be given to the staker(s) whose NXM are effectively used to write the cover (incentivizing competitive pricing)
    • Alternatively, for any given batch of NXM staked on a protocol, the rewards earned can be scaled based on the distance the ‘reserve price’ entered on that NXM is away from the price of the cover written (relative to all staked NXM on that protocol)
  • As cover is written and capacity used up, the cover supply / price adjust based on the remaining signaled capacity. Covers can be priced as a simple weighted average of the reserve prices of the NXM ‘used’ to write such cover.
  • As cover is written, prices will adjust in line with demand, drawing in new supply when prices increase above ‘fair market’ levels and, conversely, dropping when available supply is too far above demand levels.
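A minimal sketch of the matching idea (names and numbers are illustrative only, not a proposed implementation):

```python
def fill_cover(order_book, cover_amount):
    """Match a cover buy against staked NXM, cheapest reserve price first.

    order_book: list of (nxm_staked, reserve_price) tuples.
    Returns (total_premium, fills), where fills shows whose NXM wrote the cover.
    """
    premium, fills = 0.0, []
    for nxm, price in sorted(order_book, key=lambda o: o[1]):
        if cover_amount <= 0:
            break
        used = min(nxm, cover_amount)
        premium += used * price
        fills.append((used, price))
        cover_amount -= used
    if cover_amount > 0:
        raise ValueError("insufficient staked capacity")
    return premium, fills

# 1000 NXM offered at 5% plus 100 NXM at 1%; a 1000 NXM cover fills the
# cheap block first: 100 * 1% + 900 * 5% = 46 NXM premium.
print(fill_cover([(1000, 0.05), (100, 0.01)], 1000))
```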

Open to questions / thoughts on this (still a working idea), but very interested in driving us towards more dynamic, supply/demand based pricing and a more optimal allocation of assets overall.


With the right parameters on up/down/same plus the surge pricing I think we can achieve the dynamic pricing we’re after. In combination with unstaking causing a price increase (in more cases) I’m confident price will become more dynamic and differentiated between different risks. In general I think we are under-pricing in parts so this will allow us to capture more premium without impacting the low rates we need for the more battle-tested protocols.

If we wanted more direct formulaic pricing this would only work for a handful of risks, eg Curve, Aave, Compound, Uniswap etc. Maybe the top 5-10 if we’re lucky. After that there isn’t enough liquidity in the markets to reliably set rates. This conflicts quite heavily with the need for the mutual to diversify risks, that’s how we get capital efficiency. Can’t have deep liquid markets on the long tail.

Ongoing dutch auctions feel like over-engineering to me, and they’re also quite challenging in the current gas environment, which is a big part of why I’ve tried to keep this very simple.

As a related aside, we are exploring the idea of a limit order for cover buys. Still early stages but this would help stakers by demonstrating cover demand they can meet if they stake and create a more efficient and dynamic market.


A reverse dutch auction makes the most sense @voteless. It ensures that customers get the lowest price possible, versus one that’s influenced by stakers who would push the average price higher.

For low liquidity markets like the obscure earthquake example you gave, it also allows any risk assessor to come in and instantly dispute the price. Whereas a weighted system would mean they have a relatively small impact.

I might have misunderstood what you’re saying @Hugh and below could be what you’re proposing… but:

In a reverse dutch system there could be an earthquake example where one whale staker is offering 5% APR for 1000 NXM. But I could come in and offer 1% APR for 100 NXM. The customer would pay (100 × 1%) + (900 × 5%) = 46 NXM. I think the price would be the same with a weighted system.

The difference versus the weighted system you suggested seems to be that the rewards for the lower and higher staker would differ more, which seems to be the correct incentive. With the weighted system both risk assessors would earn the same reward relative to their stake, i.e. 900 NXM gets you a 41.4 NXM reward and 100 NXM gets you a 4.6 NXM reward. With the reverse dutch auction the reward is also affected by the APR bid, so the 900 NXM bidder would earn 45 NXM and the 100 NXM bidder would earn 1 NXM. This should remove the incentive to massively under- or overbid in an attempt to skew pricing.
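A quick check of the arithmetic above (using the filled amounts, 900 and 100 NXM, from the earlier example):

```python
premium = 46.0                      # total premium on the 1000 NXM cover
fills = [(900, 0.05), (100, 0.01)]  # (nxm_used, apr_bid) per staker
total = sum(nxm for nxm, _ in fills)

# Weighted system: rewards pro-rata to stake, regardless of the bid.
print([(nxm, premium * nxm / total) for nxm, _ in fills])  # 41.4 and 4.6

# Reverse dutch: each staker earns their own bid on the NXM actually used.
print([(nxm, nxm * apr) for nxm, apr in fills])            # 45.0 and 1.0
```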

Bringing it back to a real life example, skewed pricing based on volume doesn’t encourage me to stake much more. Whether it’s 2.6% or 3.5%, I’m still not staking on Aave v2, because I don’t know enough. But if I could contribute the final 100 NXM to an individual’s cover at 9% APR… well, that might be good enough for me.

This might blur too close to a pure marketplace, but I thought I would offer my 2 cents.

Again, I want to emphasize that I’m not an expert in this area and am just trying to stimulate discussion… even if it’s only to point out why my idea is wrong :joy:


One additional comment on my first point - I do think L2 solutions need to be explored across the nexus mutual ecosystem to make usage more frictionless (e.g. staking, cover purchase / expiry, etc.) and my idea would be predicated on there being de minimis transaction costs associated with the ‘bids’.

In general, I do think a more free market-based system, with layers of liquidity at various ‘bid’ and ‘ask’ prices for purchasing and writing covers respectively, could bring in more liquidity than other systems. Alternatively, we could add this as an additional layer on top of the current staking method, whereby users of a limit buy/sell order for cover pay a small transaction fee directly to the mutual for using an ‘advanced’ order type. This would prevent cannibalizing existing staking and only draw in additional layers of marginal liquidity.


Using more complex market based mechanics is theoretically appealing (to me as well) but it has some fundamental challenges when applied to insurance pricing.

Firstly, insurance works by diversifying across lots of different risks, so you need deep, liquid markets on lots of risks. The assumption of an even somewhat efficient market in each of them therefore becomes quite challenging.

Secondly, insurance risk is all about the extremes of the probability distribution, not the middle. It’s hard enough to get efficient markets in the middle, but pricing the extremes is harder still, and small differences in likelihood make material differences in price. Eg a +/- 1% view when the expected result is 50% is small (49%-51%), but when we’re talking 2% pricing it’s a large relative difference (1%-3%, i.e. the high estimate is triple the low one).

On auctions in particular, we’d need ongoing auctions for each risk, to capture price moving over time and new stakers entering/leaving. This seems very complex to me as well as being challenging from a UX perspective.

On L2, we’re keeping this in mind as we enhance different aspects of the protocol. But I did want to highlight that optimising gas on L1 is incredibly important regardless; simple design is still required, and we can’t rely on L2 to solve all of this. I’d also be careful of assuming L2 will solve everything for Nexus: we have relatively few transactions but require lots of data on-chain, which means we are less impacted by L1 congestion than others (still not great), but it also means L2 will benefit us less than others.