Tokenomics Proposal: Replacing the Bonding Curve

Thank you, @Rei, for putting this proposal together.

As stated in Discord, we are on board with tokenomics modifications in line with the prioritization presented there:

  1. Fix MCR first
  2. Identify/develop mechanisms for encouraging capital
  3. Buying/selling NXM

Of the presented solutions in the proposal, we prefer option A. However, we would like to emphasize that there needs to be clarity on:

  1. The anticipated budget and rollout plan. This is a new mechanism and it's fairly complex. Ideally, we start with a small initial budget and increase over time rather than funding immediately with e.g. 10k ETH and potentially overlooking something. We're not sure how this could be done in practice, but it warrants exploration to avoid a potentially expensive lesson.
  2. A contingency plan for the possibility that anticipated liquidity needs are exceeded. We can envision scenarios in which sell pressure might still exceed the budget even when we allocate a large amount to it, as mentioned above. Possible causes for this sell pressure include 1) the surge in price itself, 2) the anticipated decrease of book value (e.g. related to claims), and 3) the increase in risk if the active cover is not decreased in parallel (i.e. if the supply is decreased by x%, the expected loss per NXM will increase by x/(1-x)%). It is not clear if we should increase the budget in those cases or stop the ratcheting altogether. Ideally, we understand how the risk profile (e.g. slashing insurance) and supply profile (e.g. longer-term staking) evolve in v2, as they likely directly impact the potential sell pressure. Overall, it is crucial to address these potential scenarios to safeguard both the protocol and long-term investors.
  3. A mechanism to avoid frontrunning/gamification. Frontrunning could be an inherent problem of the ratcheting system (incl. withdrawals on anticipated claims mentioned in 2). Mechanisms such as adding a fee and/or adjusting slippage should ideally be developed to reduce these kinds of activities.
  4. A replacement for the current MCR. Lifting the current MCR floor is reasonable, but it would be good to know plans/timelines on potential adjustments of the MCR-estimation in place then.
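To make point 3 above concrete, here is a small arithmetic check of the x/(1-x) claim: if the expected loss stays fixed while the NXM supply shrinks by a fraction x, the loss borne per token scales by 1/(1-x), i.e. rises by x/(1-x). The supply and loss figures below are purely illustrative.

```python
# Worked check of point 3: expected loss held constant, supply shrinks by x,
# so loss per NXM rises by x / (1 - x). All numbers are hypothetical.

def loss_per_token_increase(x: float) -> float:
    """Relative increase in expected loss per token when supply shrinks by fraction x."""
    return x / (1 - x)

supply = 1_000_000          # hypothetical NXM supply
expected_loss = 50_000      # hypothetical total expected loss (ETH), held constant

for x in (0.10, 0.25, 0.50):
    before = expected_loss / supply
    after = expected_loss / (supply * (1 - x))
    # The observed per-token increase matches x / (1 - x)
    assert abs(after / before - 1 - loss_per_token_increase(x)) < 1e-12
    print(f"supply -{x:.0%}: loss per NXM up {loss_per_token_increase(x):.1%}")
```

For example, a 50% supply reduction doubles the expected loss per token (an increase of 0.5/0.5 = 100%).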

We understand that 1 and 2 above require additional data following the launch of v2, and that the current signaling vote is solely to help provide clarity on direction. However, we were hoping to have some additional clarity around these data points before voting and/or at least want to check that all of the points are part of parameters that we can decide on once this has been put in code. If any of the above points have been addressed elsewhere, we would love to take a look.


Hey @Justin-1kx,

Just wanted to note that the Snapshot signaling vote is only to gauge sentiment on where the community would like to see development resources allocated post-v2 launch.

No changes can be made to the protocol without an all-member on-chain vote. In the Technical specifications section of the Snapshot proposal, I've highlighted what comes after this Snapshot vote:

The next stage is dependent on the outcome of this signaling vote. If Option A or Option B receives the majority vote, then the next stages will involve:

Technical development. The Engineering team would develop the smart contracts necessary to implement the chosen proposal. This would include development, testing, and audits of the design.

Determine parameters of the chosen design. Members will discuss and select the initial parameters for the chosen design and approve the implementation through an on-chain vote for the final protocol improvement proposal.

Members will need to discuss #1 after the Snapshot signaling vote closes, as well as the other points that you've noted. @Rei did include an estimate on timing in the original post above:

Below is a timeline from my perspective going forward, provided there is agreement on the mechanism. As always, it is way more important to be thorough and ensure the code & mechanism are rock-solid rather than shipping something with holes in it.

Q1 Feedback on mechanism and parameter discussion. If no major red flags and nothing significant needs to be redesigned, engineering work begins to convert tokenomics spec to solidity code after v2 is live.

Q2 Code finalised, appropriate testing, audits.

Q3/Q4 Governance process; new mechanism live.

I'm looking forward to members discussing the parameters going forward once there's clarity on which direction members signal support for.


Thanks for the comments. Discussing the below for the Ratcheting AMM design only. Generally, I believe points 1-3 can indeed be addressed through parameterisation and smaller tweaks to the system.

1. Anticipated Budget & rollout - agree that getting this right is crucial and should be a big part of the launch process design. Ideally we'd run a small-scale experiment to see what happens as suggested.
Have to note that the work so far suggests setting the right amount of initial/ongoing liquidity is key to achieving desired outcomes, so it might not be as simple as just going in with e.g. 10% of what we actually want to start with to test the waters, because the outcome would be entirely different. However, there is plenty to play around with here to give ourselves the most certainty of a good outcome.

2. Contingency Plan - Yep, we need to plan for both higher and lower sell pressure, but I think the real concern here comes back to ensuring that the ability to enter and exit works well when operating at capital efficiency, i.e. capital can/wants to come in and can leave reasonably while ensuring liabilities are appropriately provided for.

In the long term, we should be operating stably and profitably, with the ability to grow, while holding a capital pool close to 100% of cover-driven MCR.

Whether that comes sooner or later depends largely on how quickly cover grows, and in what profile. The best outcome is that cover increases by 5x tomorrow and we run on cover-driven MCR at or near Capital Pool = MCR at the current level. The other outcome is that covers don't grow, but eventually we will still be running at Capital Pool = MCR.

To address the possible causes you mentioned:

  1. Price surge above book value implies all sell pressure is then routed through wNXM and the protocol doesn't lose any capital. We're only losing capital below book value. Liquidity provision is the key parameter to how quickly that can happen.
  2. Expecting large claims will indeed lower the book value, but frontrunning this in the short term can be mitigated by slippage, and in the long term book value should go up as a result of cover fees > claims.
  3. Unless I'm misunderstanding the point, this is around the Capital Pool = MCR consideration I mentioned above.

Talking through and getting comfortable with these scenarios and more will indeed be a key part of parameter setting and implementation.

3. Frontrunning. Agree - one of the aims is minimising scenarios where someone can turn a profit without contributing anything to the system. One of the benefits of the design is that frontrunning can be mitigated by slippage, which then gets reversed by the ratchet, compared to the current system where sequences such as e.g. sell high → MCR% reduced by claims → buy back at lower price are much more predictable and easily gamed (although there hasn't been much history of this happening in practice in advance of claims).
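To illustrate the slippage-then-ratchet dynamic described above, here is a toy Python sketch. This is not the actual Nexus Mutual design - the constant-product slippage rule, the liquidity figure, and the ratchet speed are all hypothetical stand-ins, chosen only to show how a large sell moves the quoted price down via slippage and the ratchet then walks it back toward book value.

```python
# Toy model (not the real implementation): sells incur slippage that lowers
# the quoted price; a ratchet then drifts the price back toward book value.
# Every parameter below is hypothetical.

book_value = 1.00            # NXM book value in ETH (held constant here)
price = 0.95                 # current quoted price, below book value
liquidity = 10_000.0         # virtual ETH liquidity governing slippage depth
ratchet_per_step = 0.002     # fraction of book value recovered per time step

def sell(amount_nxm: float, quoted_price: float) -> float:
    """Price after a sell, using constant-product style slippage."""
    return quoted_price * liquidity / (liquidity + amount_nxm * quoted_price)

# A large sell pushes the price down...
price_after_sell = sell(2_000, price)

# ...and the ratchet then walks it back toward book value over time.
p = price_after_sell
for _ in range(50):
    p = min(book_value, p + ratchet_per_step * book_value)
```

The point of the sketch is only the shape of the dynamic: a frontrunner who sells ahead of an anticipated event pays slippage immediately, and the ratchet's gradual recovery removes the predictable round-trip profit.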

Once the direction is decided, look forward to the community coming together to put forward any unaddressed scenarios of concern so that they can be incorporated during development.

4. MCR - As per the discord post you referenced, there is no intention to change the current on-chain implementation of Cover Amount * X.
Mainly this is due to technical reasons - posting actuarial calculations of the type
Capital Requirement (CR) = √( Σ_{i,j} Corr(i,j) · Exp(i) · Exp(j) )
where
Exp(i) - losses in extreme events, and
Corr(i,j) - correlations
is prohibitive due to computational costs, and Cover Amount * X is a proxy that moves in the same direction as the exposure changes.
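To illustrate why the full calculation is avoided on-chain, the sketch below compares the double sum over all exposure pairs (O(n²) terms) against the one-multiplication proxy. The exposures, correlation matrix, and multiplier X are all made up for illustration.

```python
# Sketch of the off-chain capital requirement vs the on-chain proxy.
# Exposures and correlations are invented for illustration only.
import math

exposures = [120.0, 80.0, 200.0, 50.0]      # Exp(i): losses in extreme events
corr = [[1.0, 0.3, 0.2, 0.1],               # Corr(i, j): correlation matrix
        [0.3, 1.0, 0.4, 0.2],
        [0.2, 0.4, 1.0, 0.3],
        [0.1, 0.2, 0.3, 1.0]]

# CR = sqrt( sum_{i,j} Corr(i,j) * Exp(i) * Exp(j) ) -- O(n^2) terms
n = len(exposures)
cr = math.sqrt(sum(corr[i][j] * exposures[i] * exposures[j]
                   for i in range(n) for j in range(n)))

# The on-chain proxy: total cover amount times a multiplier X -- O(1)
X = 0.5                                      # hypothetical multiplier
total_cover = 1_000.0                        # hypothetical active cover
mcr_proxy = total_cover * X
```

Since correlations are at most 1, the aggregated CR is always below the simple sum of exposures; the proxy gives up that diversification detail in exchange for a computation cheap enough to run on-chain.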

However, one of my other projects within the DAO R&D team is to establish a more consistent updating process between these off-chain calculations and the X multiplier of the Cover Amount proxy that's posted on-chain. Since most of this work will be done off-chain, it can run in parallel to the tokenomics implementation work. Exact timelines are TBD, but I'm expecting to start working on the actuarial side at the end of March and to have a process in place before the current DAO funding period ends at the beginning of August.


Following on from the snapshot vote concluding yesterday, wanted to briefly mention how I see the immediate next steps.

As per the Snapshot vote, and as mentioned by @BraveNewDefi above, to get to launch we need to complete two parallel strands:

  • Technical development. The Engineering team would develop the smart contracts necessary to implement the chosen proposal. This would include writing the smart contracts, testing, and audits of the design.
  • Determine parameters of the chosen design. Members will discuss and select the initial parameters for the chosen design and approve the implementation through an on-chain vote for the final protocol improvement proposal.

The next milestone will be to create a specification document that can be handed to the engineering team - hoping to do this within a month.

Some aspects of fine-tuning the chosen design will influence the design spec, e.g.

  • any transition mechanisms that require their own code,
  • any edge cases that may force a design change, and/or
  • any technical limitations discovered by the engineering team themselves

Other aspects will not, e.g. setting the numeric parameters of the system, like opening price, liquidity and ratchet speed.

Therefore, my initial focus will be on nailing down those aspects that will influence the design, so that what's handed over to the engineers is as robust as possible.

Some examples of the sorts of things I mean:

  • Currency of redemptions. Currently everything is modelled/calculated in ETH, but the capital pool is denominated in more currencies, with stETH chief among them, and it may be diversified into further investment assets in the future. Should we enable withdrawals in multiple currencies according to the current split of the capital pool, or should we set aside a pool of ETH specifically for withdrawals?
  • NXM price transition. Detaching the price from the current bonding curve level to a market-consistent level would create a capacity shock. Should we trend it down over time and, if so, what's the best approach?
  • Oracle safety. There is the possibility that book value as per the system is slightly off due to oracle errors/delays. Is this a concern in the range around book value and, if so, should we introduce some sort of buffer around book value?
  • TWAP for system price: how it is defined, over what time frame, etc.
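On the TWAP question, a minimal sketch of one possible definition - a step-interpolated time-weighted average over a fixed window - might look like the following. The window length, sampling scheme, and interpolation rule are exactly the parameters left open above; nothing here is the protocol's actual design.

```python
# Minimal TWAP sketch: time-weighted average of (timestamp, price)
# observations over a trailing window, with each price assumed to hold
# until the next observation (step interpolation). Illustrative only.

def twap(observations, window: float, now: float) -> float:
    """Time-weighted average price over [now - window, now].

    observations: list of (timestamp, price), sorted ascending.
    """
    start = now - window
    total, weight = 0.0, 0.0
    for (t0, p), (t1, _) in zip(observations, observations[1:] + [(now, None)]):
        lo, hi = max(t0, start), min(t1, now)   # overlap with the window
        if hi > lo:
            total += p * (hi - lo)
            weight += hi - lo
    return total / weight

# Three 10-minute price segments over a 30-minute window
prices = [(0, 1.00), (600, 1.02), (1200, 0.98)]
avg = twap(prices, window=1800, now=1800)       # -> 1.0
```

A longer window makes the system price harder to move with a single trade but slower to reflect genuine repricing; that trade-off is presumably what the parameter discussion needs to settle.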

Meeting some of the foundation team next week to brainstorm the next level of detail on the points above and more, and intending to collate a list of suggestions and discussion points for the wider community + post them in a separate forum post, so that we can get a wide range of interested brainpower working on this.

In the meantime (ideally by end of Tuesday 24th Feb), I'd encourage everyone here to highlight anything that jumps out to you in the current design that gives you concern.
