
A formal approach towards better hard fork management

Seeing as there are going to be things that will have to be introduced via a hard fork in the future, it stands to reason that we have to look at better ways of doing this than the current "fork and pray" approach. The scheme outlined below does not force miners / nodes to adopt a fork if we add something stupid in, but it creates a more fluid network that can robustly handle changes.

Basic bottom line: every 6 months there's a hard fork. You get 1 hard fork's grace before you have to update or be left behind.

Details

Every 6 months, either on March 15 + September 15 or on April 15 + October 15, the Monero network will have a hard fork. 30 days before the fork we will have a code freeze + tag + release; even if there are no major changes, we'll at a minimum have an increase in the protocol version. A similar fork system to Bitcoin will apply, whereby a rollover to the new code after the trigger block will only occur if a sufficient number of miners are running the new code.

Anything that is more of a soft fork will kick in immediately (as long as it doesn't drop pre-fork clients off the network). Anything on the p2p layer (ie. hard forkable) will be kept in the wings until the next fork date (as roughly estimated from block height) and then enabled.
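
In rough sketch form (the names, window size, and threshold below are illustrative assumptions, not the actual daemon code), the gating could look something like this:

```cpp
// Rough sketch only - illustrative names, window size and threshold,
// not the actual daemon code.
#include <cstddef>
#include <cstdint>
#include <deque>

constexpr uint8_t     NEW_VERSION    = 2;       // protocol version introduced at this fork
constexpr uint64_t    FORK_HEIGHT    = 1000000; // rough height estimate for the fork date
constexpr std::size_t VOTE_WINDOW    = 1000;    // number of recent blocks to inspect
constexpr double      VOTE_THRESHOLD = 0.80;    // fraction of recent blocks that must carry the new version

// Major versions of the most recent blocks, newest at the back.
std::deque<uint8_t> recent_versions;

void on_block_added(uint8_t block_major_version) {
    recent_versions.push_back(block_major_version);
    if (recent_versions.size() > VOTE_WINDOW)
        recent_versions.pop_front();
}

// Hard-forkable (consensus / p2p) changes stay dormant until the scheduled
// fork height AND sufficient miner uptake; soft-fork-safe changes simply
// don't gate on this and are active immediately.
bool new_rules_enabled(uint64_t chain_height) {
    if (chain_height < FORK_HEIGHT || recent_versions.size() < VOTE_WINDOW)
        return false;
    std::size_t votes = 0;
    for (uint8_t v : recent_versions)
        if (v >= NEW_VERSION)
            ++votes;
    return static_cast<double>(votes) / VOTE_WINDOW >= VOTE_THRESHOLD;
}
```

The window and threshold above are placeholders; whatever numbers we agree on would have to be fixed before the first scheduled fork.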

The upshot of this is that you can run a client that is a year old, but pretty much after that 1 year anniversary you'll be dropped off the network (even if there have been no "real" changes in that year).

There is no goal or aim to continue to support clients running older versions of Monero, and just like you have to update your OS security software we have to create an environment where people are "forced" to update their full node.

Discuss

Thoughts? Suggestions? Changes? Let's discuss them before we make this indelible.

Replies: 46
Reply to: fluffypony rocco rocco
Kazuki edited 8 years ago Weight: -429 | Link [ + ]

6 months is a good period; e.g. the first hard fork could happen next September 18th (6 months after the b-day), which would also coincide with the "halving". Also a suggestion: having all core devs GPG-sign a document saying they will never change the emission or inflation would put most concerns at rest (the 'Monero constitution' or something like that)

ArticMine edited 8 years ago Replies: 3 | Weight: -427 | Link [ + ]

My initial reaction to this is that, while this makes a lot of sense over the next 18 months, with the impact felt over the next 30 months, I would advise extreme caution after that. I believe there are important lessons here from Bitcoin and also from Ripple and even Dash.

The reason why Bitcoin is hard to fork can best be summarized as a result of maturity. Satoshi did not have any problem implementing the 1MB limit in 2010; now compare this with the debate over the 1MB limit that has been raging since at least 2013. There were clear warnings back in 2010 with Bitcoin, as can be seen from the following thread: https://bitcointalk.org/index.php?topic=1347.0;all The following quote from caveden in November 2010 is prophetic.

“Only recently I learned about this block size limit.

I understand not putting any limit might allow flooding. On the other hand, the smaller your block, the faster it will propagate to network (I suppose.. or is there "I've got a block!" sort of message sent before the entire content of the block?), so miners do have an interest on not producing large blocks.

I'm very uncomfortable with this block size limit rule. This is a "protocol-rule" (not a "client-rule"), what makes it almost impossible to change once you have enough different softwares running the protocol. Take SMTP as an example... it's unchangeable.

I think we should schedule a large increase in the block size limit right now while the protocol rules are easier to change. Maybe even schedule an infinite series of increases, as we can't really predict how many transactions there will be 50 years from now.

Honestly, I'd like to get rid of such rule. I find it dangerous. But I can't think of an easy way to stop flooding without it, though.”

The reality is that for a true Decentralized Virtual Currency (I am using the FinCEN Definition http://fincen.gov/statutes_regs/guidance/html/FIN-2013-G001.html ) it becomes very hard to hard fork after two years, unless the hard fork is driven by an emergency or it fundamentally does not change any of the economic aspects of the currency. This is the key lesson from Bitcoin.

We must consider that as more and more services are built on top of Monero it will become harder and harder to hard fork. Even something like changing the number of blocks per hour in Monero has already become harder simply because of the advent of XMR.TO. XMR.TO relies on confirming a Monero transaction and having Bitcoin sent to a Bitcoin address within the 10-15 min period that many Bitcoin payment processors use. Now change the block time from 1 min to say 2 min and the economics of XMR.TO as a business are fundamentally changed, since only half as many confirmations fit inside that window. Just take a look at how hard it is for the Internet to change from IPv4 to IPv6. This change will only happen when the pain of not changing becomes real. Hard forks are possible in Bitcoin now but they must be driven by significant pain, such as for example the hard fork that occurred in the Spring of 2013. I believe Bitcoin will hard fork over the 1MB, but only after the pain becomes obvious, as will the Internet "hard fork" to IPv6, but again only after the pain is felt.

The alternative is what Ripple has done and to a large degree what Dash is doing. While it is simple for Ripple Labs to "hard fork" Ripple, this comes at the huge cost of becoming a Centralized Virtual Currency. This makes the developers MSBs and gives a regulator the leverage they need to push through the protocol changes they desire. This is precisely what has happened to Ripple. Dash will likely provide an important test case because it literally straddles the Decentralized Virtual Currency / Centralized Virtual Currency Definition. How regulators deal with Dash will provide critical precedents and lessons not just for Monero but for many Virtual Currencies. The lesson here is: make hard forks easy and efficient, and one runs the very real risk of becoming a Centralized Virtual Currency with the developers as the "Administrators".

On a related note, the idea of forcing software upgrades every 6 months to a year as an ongoing policy is doomed to failure. One only needs to take a look at the fierce resistance most businesses, big and small, have to Microsoft's software upgrade cycle. Who can blame them? Upgrading Windows means significant costs with minuscule if any productivity gains. The proper response from any business is to delay the upgrade as much as possible in order to minimize these costs. This resistance is also manifested, for example, in the Internet IPv4 to IPv6 upgrade for exactly the same reason.

In conclusion, I say what is being proposed will work very well for only two, maybe three, six-month cycles; after that Monero will become like SMTP, IPv4/IPv6, etc., with hard forks only possible in emergencies and/or when the pain of not hard forking is patently obvious. The alternative is a degree of centralization and regulatory risk that most members of the community will find unacceptable.

Edit: Fork and pray keeps the regulator away.

Reply to: ArticMine
Kazuki edited 8 years ago Weight: -429 | Link [ + ]

The problem with Ripple and others is not that they fork, it's that they were premined/instamined; the Monero devs did not issue tokens for themselves, nor are they selling any. But to address your concern, just create a window of 1 year where any number of hard forks can happen, until XMR's 2nd b-day (2016) or until Jan 2017. Call it a stability period. After that, hard fork only in emergencies or worse - situations that threaten the end of Monero; I don't think that after a certain point it could fully recover from severe attacks or bugs (like Bitcoin had in the past), so automatic hard forking would become just an upgrading sore point.

sylviaplathlikestobake edited 8 years ago Replies: 1 | Weight: -427 | Link [ + ]

Isn't this going to create a debate on which hard-fork to implement every six months? Could this cause a schism? Is there a possibility of a hardfork tie? Or too many hardfork possibilities leading to a break?

I guess my main question is who decides on the hardfork attributes, how is their position determined, and will it be more than a one-person decision? If it is more than one person, how will consensus be achieved on the hardfork's attributes? And if there isn't consensus, will it come down to miners' voting, and what prevents a tie or multiple forks from occurring?

I'm worried about the human factors more than computer factors here; people lusting after power or money have always had unintended consequences, and the more Monero grows, the more these factors will take effect.

Reply to: ArticMine
rocco edited 8 years ago Weight: -427 | Link [ + ]

thank you for this post ArticMine! i somehow share this opinion too and i think the devs are fully aware of it too. we have to use this short time window very wisely! not rush, but also we should not hesitate.

things like the min. mixing branch should maybe go live without this asap, with a real, immediate hard fork, update or die

Reply to: fluffypony rocco
rocco edited 8 years ago Weight: -427 | Link [ + ]

this is very true what you say, but i think there are features that everyone (Mr "Unknown" included) wants and they need a hard fork. i am sure it/he/she would come out and tell us if they are not ok with it. everyone involved in mining/running a node is following monero very closely, so i think for the moment hard forks will go through very smoothly. (assuming software quality is good :-)

"no news good news"

Reply to: sylviaplathlikestobake
fluffypony edited 8 years ago Replies: 1 | Weight: -427 | Link [ + ]

> Isn't this going to create a debate on which hard-fork to implement every six months?

If there isn't general consensus on a change then it gets pushed forward to the next fork. The point is not to make a breaking change every 6 months, the point is simply to be able to.

> I guess my main question is who decides on the hard fork attributes

That process doesn't change, but with planned hardforks it becomes easier.

Let me use a practical example to illustrate. Let's pretend we were facing the same situation as Bitcoin, with a polarised decision that has to be made. It's very unlikely that 1 of the 7 core team members could act like Gavin, because the division would be clear (not so with Bitcoin, where there is this misconception that "only Blockstream devs disagree with Gavin" or "everyone that matters agrees with Gavin" when meanwhile, back at the ranch, he's the only one of the 5 core maintainers that wants the 20MB limit right now). But even so, let's imagine it's 3 vs 3 on the decision with 1 abstention, and the community is torn.

Now here's the clincher: with planned hard forks we could very easily put the code in and then measure full node uptake over 6 months. Since the code only kicks in after the 6-month window, and only once the node observes a sufficient percentage of blocks on the new version, it simply wouldn't be enabled if there is not sufficient uptake. This could be modified to consider both the % of mined blocks and the % of connected nodes for the sake of fairness. We therefore let the miners and users decide, and we can always nuke that code in the next hard fork update.
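
As a rough sketch (the thresholds and names here are assumptions, nothing final), combining the two measures could look like this:

```cpp
// Sketch of the "% of mined blocks + % of connected nodes" idea; the
// thresholds and names are assumptions for illustration only.
#include <cstddef>

struct UptakeSnapshot {
    std::size_t blocks_on_new_version;   // within some recent window of blocks
    std::size_t blocks_total;
    std::size_t peers_on_new_version;    // connected peers advertising the new version
    std::size_t peers_total;
};

// Enable the contentious change only if both miners (by block share) and
// ordinary full nodes (by peer share) show sufficient uptake.
bool enable_contentious_change(const UptakeSnapshot& s,
                               double block_threshold = 0.80,
                               double peer_threshold  = 0.60) {
    if (s.blocks_total == 0 || s.peers_total == 0)
        return false;
    const double block_share = static_cast<double>(s.blocks_on_new_version) / s.blocks_total;
    const double peer_share  = static_cast<double>(s.peers_on_new_version)  / s.peers_total;
    return block_share >= block_threshold && peer_share >= peer_threshold;
}
```

Requiring both shares means neither miners alone nor node operators alone can push a contentious change through.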

Thus, to answer your questions, the "consensus" is system wide and dependent on the system's users, the core team would merely be the instigators based on feedback from the community.

Reply to: ArticMine
fluffypony edited 8 years ago Replies: 1 | Weight: -427 | Link [ + ]

> On a related note, the idea of forcing software upgrades every 6 months to a year as an ongoing policy is doomed to failure

I can't agree with this statement. We are not talking about an operating system where you want the latest whiz-bang features, we are talking about the safety of money in a global arms race. If you knew that your passwords stored on your computer were protected by a 64-bit encryption key, and you heard that cryptographers were close to cracking that key length, would you go "well it's a standard, I can't change it"? Now what about those threats you don't hear about because they're found preemptively?

More to the point: your privacy and security depend not only on you, but on a majority of peers in the network being good actors and obeying the rules (although on an individual node basis you don't have to be connected to a majority of good actors, because of PoW). If a supermajority of the peers in the system are sluggish to update, they don't only put themselves at risk, they put you at risk too (from a privacy leak or DoS perspective, for instance).

This is a consensus system, it is decentralised, distributed security software. Every part of the system needs to be in sync, and the system is intolerant of alternate implementations (as they may not emulate everything, including bugs, which could lead to a consensus fork). On the other hand, TCP/IP or SMTP will still work as implementations even if there are thousands of bad implementations. Consensus among active implementations / users is not required, and so the standard can be ratified and then left alone. Monero's situation is incomparable, and given that the safety of large quantities of funds depends on people upgrading it stands to reason that this is a different scenario to a networking standard or to operating system releases.

fluffypony edited 8 years ago Replies: 1 | Weight: -425 | Link [ + ]

Booze works:

Awesome Ale
Beautiful Brandy
Charming Champagne
Delicate Drambuie

Etc.

Reply to: fluffypony ArticMine
ArticMine edited 8 years ago Replies: 1 | Weight: -426 | Link [ + ]

The privacy and security argument in reality simply does not apply here. If a security vulnerability, and yes that includes weakened or weak cryptography, is discovered in Monero then it makes no sense to wait for up to six months to release the fix and up to a year for final implementation, regardless of whether a hard fork is needed or not. During that period of time, with the vulnerability out in the wild, Monero could easily be destroyed if prompt action was not taken. This is also a case where getting community consensus to implement the fix is easy; in fact, the community would be clamouring for the required security hard fork, not resisting its implementation.

The next scenario is a hard fork that can wait somewhat but is otherwise required. A good example is the implementation of MRL-0004. In this case once the software is completed and tested to the required standard then why wait for up to six months before the launch? There is also the risk of rushing the development and testing process in order to meet an artificial deadline. I would argue that in the very near future Monero may have to hard fork with a greater frequency than six months.

The scenario that is left is that the deadline comes up with no code changes requiring a hard fork. At this point a hard fork is implemented for no other reason than that a certain date has passed. I fail to see the point of this. This is the situation that is closely related to my operating system example. There are no real gains in productivity (for example: security or privacy enhancements) but an upgrade is still required. Furthermore, how will this be enforced? If there are no changes to the protocol then any compatible client should work, both for nodes and miners. My real fear here is that this will lead to a situation where only "approved" clients will be able to run full nodes or mining nodes on the Monero network. If this actually occurs then whoever controls the approval keys is subject to regulation as an MSB. Then we run the real risk of the equivalent of a "64-bit encryption key" finding its way into the Monero network at the behest of some government regulator. By the way, deliberately crippling and / or weakening cryptography is a favourite tactic of many a government agency around the world.

The more I look at this rigid six month cycle the less I like it. It would delay a hard fork when it is needed or desirable, while at the same time imposing a hard fork when it is not needed or even available. With a de-centralized POW virtual currency one by its very nature trusts the wisdom of the majority of the network, be it mining nodes and/or network nodes, over the wisdom of any particular group of people.

Reply to: ArticMine fluffypony ArticMine
fluffypony edited 8 years ago Replies: 1 | Weight: -426 | Link [ + ]

> then it makes no sense to wait for up to six months to release the fix and up to a year for final implementation regardless of whether a hard fork is needed or not

We're not talking about emergency, reactive forks. We're talking about the continuous improvements required to preemptively avoid attacks and to improve performance.

Let's think about some practical examples of changes that would benefit from this 6-month fork window, changes that aren't politicised or hard to justify but also aren't "emergency" changes:

  • changing the per-kb fee because the price has increased
  • changing the PoW to something with similar properties but faster verification
  • changing the difficulty algorithm because the current one is obfuscated and messy
  • enforcing a particular mixin selection distribution
  • defaulting nodes to i2p/tcp hybrid connections to increase i2p's size
  • fixing a bug in difficulty / emission that will only be encountered in a couple of years time
  • switching from boost serialisation to protobuf on a p2p layer
  • changing the default display units from XMR to mXMR or whatever
  • improving the mempool handling for large transactions
  • fixing a DoS vector that could only be exploited when the median block size is much higher
  • overhauling and replacing the p2p layer
  • completely changing the sync strategy to serve headers first etc.

Some of those are not necessarily hard forks, but they would benefit from there being more participants in the network with that functionality (eg. i2p and sync strategy changes).

> It would delay a hard fork when it is needed or desirable, while at the same time imposing a hard fork when it is not needed or even available

It won't delay an emergency hard fork. And for something like MRL-0004 we've already been unable to attend to it for a few months as we've been doing a bunch of other core stuff (blockchainDB etc.) so this could be no worse. Even if we had the resources to write the code in January there's literally no way to communicate with everyone and get them to update. Even worse - clients have no way of knowing that a hard fork has happened (if they haven't updated), so users and merchants and even exchanges that are forked won't understand what's happening and will think that they're still connected to the network (even though they aren't). Having a hard fork cycle fixes that problem.
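
With versioned, scheduled forks an out-of-date client can at least detect that it has been left behind. A rough sketch (the names here are illustrative, not the real code):

```cpp
// Rough sketch (illustrative names, not the real code): once forks are
// versioned and scheduled, an old client that sees block versions above
// what it understands can warn that it has probably been left behind.
#include <cstdint>
#include <iostream>

constexpr uint8_t HIGHEST_SUPPORTED_VERSION = 1;

// Called for every block header received from peers, including ones this
// client would reject because it predates the new rules.
void check_for_unknown_fork(uint8_t observed_major_version) {
    if (observed_major_version > HIGHEST_SUPPORTED_VERSION) {
        std::cerr << "Warning: peers are mining blocks with version "
                  << int(observed_major_version)
                  << ", but this node only understands version "
                  << int(HIGHEST_SUPPORTED_VERSION)
                  << ". You are probably on an abandoned fork - please update.\n";
    }
}
```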

> With a de-centralized POW virtual currency one by its very nature trusts the wisdom of the majority of the network

No, you don't, that's precisely what PoW fixes. If you are connected to 12 nodes how many of them need to be truthful for you to sync up to the real network? 6? 7? The correct answer is 1. You only need to trust 1 of those nodes. The entire network can be filled with bad actors, and you can still weed out the honest peers just by starting with 1 honest peer.
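
To illustrate with a toy sketch (assumed types, not real code): the syncing node validates every chain it is offered and keeps the valid one with the most cumulative work, so bad peers can withhold the honest chain but can never out-vote it:

```cpp
// Toy sketch (assumed types, not real code): among the chains offered by
// peers, keep the one that is fully valid and has the most cumulative
// proof-of-work. Dishonest peers can offer invalid or lighter chains,
// but they cannot out-vote a single honest peer's heavier, valid chain.
#include <cstddef>
#include <cstdint>
#include <vector>

struct CandidateChain {
    bool     all_blocks_valid;        // every block passes the consensus rules
    uint64_t cumulative_difficulty;   // total work embedded in the chain
};

// Returns the index of the chain to sync from, or -1 if none is valid.
int choose_chain(const std::vector<CandidateChain>& offers) {
    int best = -1;
    uint64_t best_work = 0;
    for (std::size_t i = 0; i < offers.size(); ++i) {
        if (!offers[i].all_blocks_valid)
            continue;                              // a majority cannot fake validity
        if (best == -1 || offers[i].cumulative_difficulty > best_work) {
            best = static_cast<int>(i);
            best_work = offers[i].cumulative_difficulty;
        }
    }
    return best;
}
```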

> be it mining nodes and /or network nodes over the wisdom of any particular group of people

Well, that's contradictory: trusting mining nodes means trusting miners (or trusting pool operators, really). It's also problematic, as not only do miners have plenty of perverse incentives (eg. the fewer miners there are, the more they'll earn), but they are typically more focused on profit than on the greater good of the network. Ultimately we have to trust miners to upgrade to all reasonable forks, but that trust should be tempered - which is what we're trying to solve with this.

Reply to: fluffypony
Kazuki edited 8 years ago Weight: -426 | Link [ + ]

awe

Reply to: fluffypony ArticMine fluffypony ArticMine
sylviaplathlikestobake edited 8 years ago Replies: 1 | Weight: -426 | Link [ + ]

Maybe we can re-frame this debate in the context of what won't ever change, no matter what, never. I'd assume no Dev would ever be able to include a backdoor, no Dev would be able to limit fungibility, no Dev would be able to change the emissions rate, no Dev would be able to increase initial supply.....

Reply to: fluffypony sylviaplathlikestobake
papa_lazzarou edited 8 years ago Replies: 1 | Weight: -426 | Link [ + ]

Overall I think it's a good strategy. It reduces the disparity of versions running in the network.

>Since the code only kicks in after the 6 month window and the node observes a sufficient percentage of blocks on the new version, it simply wouldn't enable that code if there is not sufficient uptake.

What would happen if there was a need for an emergency update during the assessment period for a controversial fork? Since people would be forced to change in that situation, the results of the "suffrage" would be skewed.

This is really an edge case, but would it be practical/safe to include a user-controlled switch for the code during the trial period?

Reply to: papa_lazzarou fluffypony sylviaplathlikestobake
fluffypony edited 8 years ago Weight: -426 | Link [ + ]

An emergency patch could be backported to the previous version, so both would still have the fix, and people could choose either to update to a patched build of their current version or to move to a more recent one.