
A formal approach towards better hard fork management

Seeing as there are going to be things that will have to be introduced via a hard fork in the future, it stands to reason that we have to look at better ways of doing this than the current "fork and pray" approach. The scheme outlined below does not force miners/nodes to adopt a fork if we add something stupid in, but it creates a more fluid network that can robustly handle changes.

Basic bottom line: every 6 months there's a hard fork. You get 1 hard fork's grace before you have to update or be left behind.

Details

Every 6 months, either on March 15 + September 15 or on April 15 + October 15, the Monero network will have a hard fork. 30 days before the fork we will have a code freeze + tag + release; even if there are no major changes, there will at a minimum be an increase in the protocol version. A fork system similar to Bitcoin's will apply, whereby a rollover to the new code after the trigger block will only occur if a sufficient number of miners are running the new code.

Anything that is more of a soft fork will kick in immediately (as long as it doesn't drop pre-fork clients off the network). Anything on the p2p layer (i.e. hard-forkable) will be kept in the wings until the next fork date (as roughly estimated from block height) and then enabled.
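As a rough illustration of keeping hard-forkable changes "in the wings" until a fork height, a height-gated version table could look like this (a minimal sketch; the table, heights, and function names are hypothetical, not the actual daemon code):

```python
# Hypothetical mapping of protocol version -> activation height.
# The real daemon would keep an equivalent table in its own code.
FORK_HEIGHTS = {
    1: 0,        # version 1 active from genesis
    2: 1009827,  # hypothetical trigger height for version 2
}

def protocol_version_at(height):
    """Return the highest protocol version whose trigger height has passed."""
    return max(v for v, h in FORK_HEIGHTS.items() if height >= h)

def feature_enabled(height, required_version):
    """Hard-forkable features stay dormant until their fork height arrives."""
    return protocol_version_at(height) >= required_version
```

A new release would ship with the version-2 code path included but inert; only once the chain passes the trigger height does the gated feature switch on.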

The upshot of this is that you can run a client that is a year old, but after that one-year anniversary you'll be dropped off the network (even if there have been no "real" changes in that year).

There is no goal or aim to continue to support clients running older versions of Monero; just as you have to keep your OS and security software up to date, we have to create an environment where people are "forced" to update their full node.

Discuss

Thoughts? Suggestions? Changes? Let's discuss them before we make this indelible.

Replies: 46
ArticMine posted 2 years ago Replies: 1 | Weight: -241 | Link [ - ]

Are we still looking now at a September 15th code freeze+tag+release with an October 15th hard fork?

Reply to: ArticMine
fluffypony posted 2 years ago Replies: 2 | Weight: -238 | Link [ - ]

Yes we're hoping to - although it'll be September 15 code freeze, and then the actual roll-over is March 15 (for this first one) after which it continues as normal. We're busy writing it up more formally and will update this thread.

Reply to: fluffypony ArticMine
ArticMine edited 2 years ago Replies: 1 | Weight: -141 | Link [ - ]

Now that both the September 15th, 2015 code freeze and October 15th, 2015 release dates have passed, are we still looking at a March 15th, 2016 hard fork date, or has this also been postponed? I see a real danger here: since most people will likely be running a version more recent than the official 0.8.8.6 release, the network could fork because different versions fork at different times, or not at all.

Edit: My previous comment on this was posted on September 04, 2015. Now over 6 weeks have passed with no news on this.

Reply to: ArticMine fluffypony ArticMine
fluffypony posted 1 year ago Weight: -124 | Link [ - ]

Yes - fork date will stay the same, we'll just code freeze a little later than expected. We can take a bit of a shorter freeze period on this first fork because EVERYONE will want to update anyway :)

Reply to: fluffypony ArticMine
ArticMine posted 2 years ago Weight: -233 | Link [ - ]

Yes, of course. For a code freeze in September, the hard fork would be next year, after the six-month upgrade window is over.

smooth edited 2 years ago Weight: -418 | Link [ + ]

One comment on the concept of "voting" being problematic

In many cases it is largely irrelevant whether miners have upgraded. What matters more is whether the larger community of participants (I'll call these users, but I include in that group major commercial participants such as exchanges, etc.) have upgraded.

If most miners upgrade but nearly all users don't, what will happen come fork time is the majority of miners will find their blocks rejected. The difficulty will drop, which (assuming the coin retains value) in turn will attract new miners (and/or encourage existing miners to switch over).

Likewise if nearly all users do upgrade but most miners don't, again most of those miners' blocks will be rejected, the difficulty will drop on the chain users accept, and miners will show up there in response to the lower difficulty.
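The difficulty mechanics behind both scenarios can be sketched with a naive single-window retarget (illustrative only; real difficulty algorithms average over many blocks):

```python
def retarget(old_difficulty, target_seconds, actual_seconds):
    """Scale difficulty by observed vs. target block time.

    If most miners abandon a chain, its blocks slow down and difficulty
    falls, which draws miners back to the chain users accept."""
    return max(1, old_difficulty * target_seconds // actual_seconds)
```

For example, if three quarters of the hashrate leaves, blocks take roughly 4x the target time and the next retarget cuts difficulty to about a quarter, restoring the incentive to mine that chain.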

Now if you have a situation where say, half the user community wants to upgrade and half doesn't, then you will have chaos because people won't agree on the state of the blockchain and won't be able to transact, again regardless of what miners are doing. Both sets of users will see a blockchain (with different difficulties) but they won't be able to agree on which is the correct one.

If the user community doesn't unify behind one fork, then the coin will likely be destroyed by the resulting chaos. But if it does unify, then it doesn't really matter what the miners do. As long as there are some miners on the users' fork, then blocks will continue to be produced, difficulty will adjust if necessary, and miners (possibly the same ones, possibly different ones) will come.

Unfortunately, the problem is that it is much harder to measure users' adoption decisions than miners', so in many cases this idea of miner voting is overemphasized because of the http://en.wikipedia.org/wiki/Streetlight_effect

Miner voting is most useful for simple technical corrections where there is little controversy and upgrading is unlikely ever to be an issue for users beyond an "oops" when they see a message that their client is out of date. It has been proposed as part of a plan to address transaction malleability in Bitcoin, and is probably good for that. It is also very nice, certainly ideal, for any hard fork to have strong support from both users and miners, and of course that requires miners' support as a subset. Miner voting can tell you that at least.

It would be a terrible way to handle the bitcoin blocksize debate, though.

opennux edited 2 years ago Replies: 1 | Weight: -425 | Link [ + ]

Where'd ArticMine's reply go?

Reply to: opennux
Kazuki edited 2 years ago Replies: 2 | Weight: -425 | Link [ + ]

the default view sux and makes posts "disappear"; set it to oldest or latest and you'll find his comment.

Reply to: Kazuki opennux
fluffypony edited 2 years ago Weight: -424 | Link [ + ]

It wraps up replies you've already read so you can see the ones you haven't...possibly we've broken something in Latest/Oldest with this change, will have to investigate

Reply to: Kazuki opennux
opennux edited 2 years ago Replies: 1 | Weight: -425 | Link [ + ]

It is set on "latest". His first post is showing, but his second post is not showing. I can only see the very start of it by hovering his name in the "Reply to: sylviaplathlikestobake fluffypony ArticMine fluffy pony ArticMine" above the posts.

Reply to: opennux Kazuki opennux
fluffypony edited 2 years ago Replies: 1 | Weight: -424 | Link [ + ]

@opennux can you send me a screenshot of what you're seeing so we can investigate?

Reply to: fluffypony opennux Kazuki opennux
opennux edited 2 years ago Replies: 1 | Weight: -424 | Link [ + ]

@fluffypony - Give me a sec. Specifically it is post 1266. I can't even click it, like other replies. So "go to #post-1266" doesn't bring anything. Maybe ArticMine just deleted it?

I'm aware of the fold out option that exists also.

Reply to: opennux fluffypony opennux Kazuki opennux
fluffypony edited 2 years ago Weight: -424 | Link [ + ]

@opennux this is what I see: http://i.imgur.com/Fq8wCR2.png - if I click on that "fluffypony and 3 others have replied" then I can see #1266

ArticMine edited 2 years ago Replies: 3 | Weight: -427 | Link [ + ]

My initial reaction to this is that while this makes a lot of sense over the next 18 months, with the impact felt over the next 30 months, I would advise extreme caution after that. I believe there are important lessons here from Bitcoin and also from Ripple and even Dash.

The reason why Bitcoin is hard to fork can best be summarized as a result of maturity. Satoshi did not have any problem implementing the 1MB limit in 2010; now compare this with the debate over the 1MB limit that has been raging since at least 2013. There were clear warnings back in 2010 with Bitcoin, as can be seen from the following thread: https://bitcointalk.org/index.php?topic=1347.0;all The following quote from caveden in November 2010 is prophetic.

“Only recently I learned about this block size limit.

I understand not putting any limit might allow flooding. On the other hand, the smaller your block, the faster it will propagate to network (I suppose.. or is there "I've got a block!" sort of message sent before the entire content of the block?), so miners do have an interest on not producing large blocks.

I'm very uncomfortable with this block size limit rule. This is a "protocol-rule" (not a "client-rule"), what makes it almost impossible to change once you have enough different softwares running the protocol. Take SMTP as an example... it's unchangeable.

I think we should schedule a large increase in the block size limit right now while the protocol rules are easier to change. Maybe even schedule an infinite series of increases, as we can't really predict how many transactions there will be 50 years from now.

Honestly, I'd like to get rid of such rule. I find it dangerous. But I can't think of an easy way to stop flooding without it, though.”

The reality is that for a true Decentralized Virtual Currency (I am using the FinCEN definition http://fincen.gov/statutes_regs/guidance/html/FIN-2013-G001.html ) it becomes very hard to hard fork after two years, unless the hard fork is driven by an emergency or it fundamentally does not change any of the economic aspects of the currency. This is the key lesson from Bitcoin. We must consider that as more and more services are built on top of Monero it will become harder and harder to hard fork. Even something like changing the number of blocks per hour in Monero has already become harder simply because of the advent of XMR.TO. XMR.TO relies on confirming a Monero transaction and having Bitcoin sent to a Bitcoin address within the 10-15 min period that many Bitcoin payment processors use. Now change the blocktime from 1 min to, say, 2 min and the economics of XMR.TO as a business are fundamentally changed. Just take a look at how hard it is for the Internet to change from IPv4 to IPv6. This change will only happen when the pain of not changing becomes real. Hard forks are possible in Bitcoin now but they must be driven by significant pain, such as for example the hard fork that occurred in the Spring of 2013. I believe Bitcoin will hard fork over the 1MB limit, but only after the pain becomes obvious, as will the Internet "hard fork" to IPv6, but again only after the pain is felt.

The alternative is what Ripple has done and to a large degree what Dash is doing. While it is simple for Ripple Labs to "hard fork" Ripple, this comes at the huge cost of becoming a Centralized Virtual Currency. This makes the developers MSBs and gives a regulator the leverage they need to push through the protocol changes they desire. This is precisely what has happened to Ripple. Dash will likely provide an important test case because it literally straddles the Decentralized Virtual Currency / Centralized Virtual Currency definition. How regulators deal with Dash will provide critical precedents and lessons not just for Monero but for many Virtual Currencies. The lesson here is: make hard forks easy and efficient, and one runs the very real risk of becoming a Centralized Virtual Currency with the developers as the "Administrators".

On a related note, the idea of forcing software upgrades every 6 months to a year as an ongoing policy is doomed to failure. One only needs to take a look at the fierce resistance most businesses, big and small, have to Microsoft's software upgrade cycle. Who can blame them? Upgrading Windows means significant costs with minuscule if any productivity gains. The proper response from any business is to delay the upgrade as much as possible in order to minimize these costs. This resistance is also manifested, for example, in the Internet IPv4 to IPv6 upgrade for exactly the same reason.

In conclusion, I say what is being proposed will work very well for only two, maybe three, six-month cycles; after that Monero will become like SMTP, IPv4/IPv6, etc., with hard forks only possible in emergencies and/or when the pain of not hard forking is patently obvious. The alternative is a degree of centralization and regulatory risk that most members of the community will find unacceptable.

Edit: Fork and pray keeps the regulator away.

Reply to: ArticMine
rocco edited 2 years ago Weight: -427 | Link [ + ]

thank you for this post ArticMine! i somehow share this opinion too and i think the devs are fully aware of it too. we have to use this short time window very wisely! not rush, but also not hesitate.

things like the min. mixing branch should maybe go live without this asap, with a real, immediate hard fork, update or die

Reply to: ArticMine
fluffypony edited 2 years ago Replies: 1 | Weight: -427 | Link [ + ]

> One a related note the idea of forcing software upgrades every 6 months to a year as an ongoing policy is doomed to failure

I can't agree with this statement. We are not talking about an operating system where you want the latest whiz-bang features, we are talking about the safety of money in a global arms race. If you knew that your passwords stored on your computer were protected by a 64-bit encryption key, and you heard that cryptographers were close to cracking that key length, would you go "well it's a standard, I can't change it"? Now what about those threats you don't hear about because they're found preemptively?

More to the point: your privacy and security depends, not only on you, but on a majority of peers in the network being good actors and obeying the rules (although on an individual node basis you don't have to be connected to a majority of good actors because of PoW). If a supermajority of the peers in the system are sluggish to update they don't only put themselves at risk, they put you at risk too (from a privacy leak or DoS perspective, for instance).

This is a consensus system, it is decentralised, distributed security software. Every part of the system needs to be in sync, and the system is intolerant of alternate implementations (as they may not emulate everything, including bugs, which could lead to a consensus fork). On the other hand, TCP/IP or SMTP will still work as implementations even if there are thousands of bad implementations. Consensus among active implementations / users is not required, and so the standard can be ratified and then left alone. Monero's situation is incomparable, and given that the safety of large quantities of funds depends on people upgrading it stands to reason that this is a different scenario to a networking standard or to operating system releases.

Reply to: fluffypony ArticMine
ArticMine edited 2 years ago Replies: 1 | Weight: -426 | Link [ + ]

The privacy and security argument in reality simply does not apply here. If a security vulnerability, and yes that includes weakened or weak cryptography, is discovered in Monero then it makes no sense to wait for up to six months to release the fix and up to a year for final implementation, regardless of whether a hard fork is needed or not. During that period of time, with the vulnerability out in the wild, Monero could easily be destroyed if prompt action was not taken. This is also a case where getting community consensus to implement the fix would be easy; in fact the community would be clamouring for the required security hard fork, not resisting its implementation.

The next scenario is a hard fork that can wait somewhat but is otherwise required. A good example is the implementation of MRL-0004. In this case once the software is completed and tested to the required standard then why wait for up to six months before the launch? There is also the risk of rushing the development and testing process in order to meet an artificial deadline. I would argue that in the very near future Monero may have to hard fork with a greater frequency than six months.

The scenario that is left is that the deadline comes up with no code changes requiring a hard fork. At this point a hard fork is implemented for no other reason than that a certain date has passed. I fail to see the point of this. This is the situation that is most closely related to my operating system example. There are no real gains in productivity (for example: security or privacy enhancements) but an upgrade is still required. Furthermore, how will this be enforced? If there are no changes to the protocol then any compatible client should work both for nodes and miners. My real fear here is that this will lead to a situation where only "approved" clients will be able to run full nodes or mining nodes on the Monero network. If this actually occurs then whoever controls the approval keys is subject to regulation as an MSB. Then we run the real risk of the equivalent of a "64-bit encryption key" finding its way into the Monero network at the behest of some government regulator. By the way, deliberately crippling and/or weakening cryptography is a favourite tactic of many a government agency around the world.

The more I look at this rigid six-month cycle the less I like it. It would delay a hard fork when it is needed or desirable, while at the same time imposing a hard fork when it is not needed or even available. With a decentralized PoW virtual currency one by its very nature trusts the wisdom of the majority of the network, be it mining nodes and/or network nodes, over the wisdom of any particular group of people.

Reply to: ArticMine fluffypony ArticMine
fluffypony edited 2 years ago Replies: 1 | Weight: -426 | Link [ + ]

> then it makes no sense to wait for up to six months to release the fix and up to a year for final implementation regardless of whether a hard fork is needed or not

We're not talking about emergency, reactive forks. We're talking about the continuous improvements required to preemptively avoid attacks and to improve performance.

Let's think about some practical examples of changes that would benefit from this 6-month fork window, changes that aren't politicised or hard to justify but also aren't "emergency" changes:

  • changing the per-kb fee because the price has increased
  • changing the PoW to something with similar properties but faster verification
  • changing the difficulty algorithm because the current one is obfuscated and messy
  • enforcing a particular mixin selection distribution
  • defaulting nodes to i2p/tcp hybrid connections to increase i2p's size
  • fixing a bug in difficulty / emission that will only be encountered in a couple of years time
  • switching from boost serialisation to protobuf on a p2p layer
  • changing the default display units from XMR to mXMR or whatever
  • improving the mempool handling for large transactions
  • fixing a DoS vector that could only be exploited when the median block size is much higher
  • overhauling and replacing the p2p layer
  • completely changing the sync strategy to serve headers first etc.

Some of those are not necessarily hard forks, but they would benefit from there being more participants in the network with that functionality (eg. i2p and sync strategy changes).

> It would delay a hard fork when it is needed or desirable, while at the same time imposing a hard fork when it is not needed or even available

It won't delay an emergency hard fork. And for something like MRL-0004 we've already been unable to attend to it for a few months as we've been doing a bunch of other core stuff (blockchainDB etc.) so this could be no worse. Even if we had the resources to write the code in January there's literally no way to communicate with everyone and get them to update. Even worse - clients have no way of knowing that a hard fork has happened (if they haven't updated), so users and merchants and even exchanges that are forked won't understand what's happening and will think that they're still connected to the network (even though they aren't). Having a hard fork cycle fixes that problem.

> With a de-centralized POW virtual currency one by its very nature trusts the wisdom of the majority of the network

No, you don't, that's precisely what PoW fixes. If you are connected to 12 nodes how many of them need to be truthful for you to sync up to the real network? 6? 7? The correct answer is 1. You only need to trust 1 of those nodes. The entire network can be filled with bad actors, and you can still weed out the honest peers just by starting with 1 honest peer.
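A toy model of why one honest peer suffices (the chain representation and validity check here are simplified stand-ins for full consensus verification):

```python
# A chain is modeled as a list of per-block difficulties (work units).
def cumulative_work(chain):
    return sum(chain)

def is_valid(chain):
    # Stand-in for full validation: PoW checks, signatures, consensus rules.
    # Dishonest peers cannot forge work, so their fake chains fail here.
    return all(d > 0 for d in chain)

def select_chain(peer_chains):
    """Reject invalid chains outright, then take the most cumulative work.

    Even if 11 of 12 peers lie, the one honest peer's chain wins: forged
    chains fail validation, and valid-but-weak chains lose on work."""
    valid = [c for c in peer_chains if is_valid(c)]
    return max(valid, key=cumulative_work) if valid else None
```

The point of the sketch: the honest chain is identified by validation plus work, not by counting how many peers vouch for it.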

> be it mining nodes and /or network nodes over the wisdom of any particular group of people

Well that's contradictory, trusting mining nodes means trusting miners (or trusting pool operators, really). It's also problematic, as not only do miners have plenty of perverse incentives (eg. the fewer miners there are, the more they'll earn), but they are typically more focused on profit than on the greater good of the network. Ultimately we have to trust miners to upgrade to all reasonable forks, but that trust should be tempered - which is what we're trying to solve with this.

Reply to: fluffypony ArticMine fluffypony ArticMine
sylviaplathlikestobake edited 2 years ago Replies: 1 | Weight: -426 | Link [ + ]

Maybe we can re-frame this debate in the context of what won't ever change, no matter what, never. I'd assume no Dev would ever be able to include a backdoor, no Dev would be able to limit fungibility, no Dev would be able to change the emissions rate, no Dev would be able to increase initial supply.....

Reply to: sylviaplathlikestobake fluffypony ArticMine fluffypony ArticMine
fluffypony edited 2 years ago Replies: 1 | Weight: -426 | Link [ + ]

It's an open source project, so that's not quite the way it works (right now and in the future). I'll put up another post later about changes we want to make to the way contributors and code work, but suffice it to say that nobody will download a new version with a code snippet that gives fluffypony 50 billion XMR. Attempts to hide stuff in the code will be spotted. Attempts to change constants / algorithms without discussion will be rejected. So this isn't a free-for-all where everyone is then forced to update. Anything controversial will be pushed back and discussed (in fact it'll be discussed publicly before code is even written).

Reply to: fluffypony sylviaplathlikestobake fluffypony ArticMine fluffypony
sylviaplathlikestobake edited 2 years ago Replies: 1 | Weight: -424 | Link [ + ]

I hope the full presentation alleviates my concerns. I'm just leery of the Devs having too much influence when more power (in all its forms) gets introduced into the system.

Reply to: sylviaplathlikestobake fluffypony sylviaplathlikestobake fluffypony ArticMine
ArticMine edited 2 years ago Weight: -423 | Link [ + ]

fluffypony did address, in the IRC dev channel, my key concern of unofficial node and / or mining clients, that otherwise meet the protocol requirements, being blocked from the Monero network by the regular “forced” upgrades. This is not going to happen. It is critical that these unofficial clients not be blocked in order to prevent the concentration of power in the devs and the regulatory risk that this would entail.

The case for using the regular schedule for proactive / preventive hard forks is actually very strong. Yes, there is on average an additional 3 month delay; however this has to be counterbalanced by removing the requirement to advise everyone that a hard fork is coming, and the time the devs would have to spend advising everyone.

There is both an upside and a downside to this approach. The upside is that surprise hard forks will not happen except in an emergency, so this means that the community will have plenty of notice. The downside is that this places an additional requirement of vigilance on the part of the community since in some respects this is more of an opt out rather than opt in approach.

With the above in mind I am prepared to support this proposal for at least 3 years subject of course to a material change in the full presentation.

Reply to: ArticMine
Kazuki edited 2 years ago Weight: -429 | Link [ + ]

The problem with Ripple and others is not that they fork, it's that they were premined/instamined; the Monero devs did not issue tokens to themselves, nor are they selling them. But to address your concern, just create a window of 1 year where any number of hard forks can happen, until XMR's 2nd b-day (2016) or until Jan 2017. Call it a stability period. After that, only in emergencies - or worse, it's the end of Monero. I don't think that after a certain period it can fully recover from severe attacks or bugs (like Bitcoin had in the past), so hard forking automatically will become just an upgrading sore.

sylviaplathlikestobake edited 2 years ago Replies: 1 | Weight: -427 | Link [ + ]

Isn't this going to create a debate on which hard-fork to implement every six months? Could this cause a schism? Is there a possibility of a hardfork tie? Or too many hardfork possibilities leading to a break?

I guess my main question is who decides on the hardfork attributes, how is their position determined, will it be more than a one person decision? If it is more than one person, how will consensus be achieved on the hardfork's attributes? And if there isn't consensus, will it be miner's voting, and what prevents a tie or multiple forks from occurring?

I'm worried about the human factors more than computer factors here; people lusting after power or money has always had unintended consequences, and the more Monero grows, the more these factors will take effect.

Reply to: sylviaplathlikestobake
fluffypony edited 2 years ago Replies: 1 | Weight: -427 | Link [ + ]

> Isn't this going to create a debate on which hard-fork to implement every six months?

If there isn't general consensus on a change then it gets pushed forward to the next fork. The point is not to make a breaking change every 6 months, the point is simply to be able to.

> I guess my main question is who decides on the hard fork attributes

That process doesn't change, but with planned hardforks it becomes easier.

Let me use a practical example to illustrate. Let's pretend we were facing the same situation as Bitcoin, with a polarised decision that has to be made. It's very unlikely that 1 of the 7 core team members could act like Gavin, because the division would be clear (not so with Bitcoin where there is this misconception that "only Blockstream devs disagree with Gavin" or "everyone that matters agrees with Gavin" when meanwhile, back at the ranch, he's the only one of the 5 core maintainers that wants the 20mb limit right now). But even so, let's imagine it's 3 vs 3 on the decision with 1 abstain, and the community is torn.

Now here's the clincher: with planned hard forks we could very easily put the code in and then measure full node uptake over 6 months. Since the code only kicks in after the 6 month window and the node observes a sufficient percentage of blocks on the new version, it simply wouldn't enable that code if there is not sufficient uptake. This could be modified to be both % of mined blocks + % of connected nodes for the sake of fairness. We therefore let the miners and users decide, and we can always nuke that code in the next hard fork update.

Thus, to answer your questions, the "consensus" is system wide and dependent on the system's users, the core team would merely be the instigators based on feedback from the community.
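The uptake measurement described above could be sketched as a rolling vote over recent block versions (the 80% threshold and the window are assumptions for illustration, not fixed parameters):

```python
def fork_activates(recent_block_versions, new_version, threshold=0.8):
    """Enable the new rules only if enough recent blocks voted for them.

    Each block carries the version its miner runs; dormant fork code
    switches on only once the supermajority threshold is met."""
    votes = sum(1 for v in recent_block_versions if v >= new_version)
    return votes / len(recent_block_versions) >= threshold
```

The same counter could be extended with a percentage of connected node versions, as suggested, to weigh users as well as miners.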

Reply to: fluffypony sylviaplathlikestobake
papa_lazzarou edited 2 years ago Replies: 1 | Weight: -426 | Link [ + ]

Overall I think it's a good strategy. It reduces the disparity of running versions in the network.

>Since the code only kicks in after the 6 month window and the node observes a sufficient percentage of blocks on the new version, it simply wouldn't enable that code if there is not sufficient uptake.

What would happen if there was the need for an emergency update during the controversial fork assessment period? Since people would be forced to change in that situation the results of the "suffrage" would be skewed.

This is really an edge case but, would it be practical/safe to include a user controlled switch for the code during the trial period?

Reply to: papa_lazzarou fluffypony sylviaplathlikestobake
fluffypony edited 2 years ago Weight: -426 | Link [ + ]

An emergency patch could be backported to a previous version, so both would still have the fix, and people could choose to update their current version with the same version or a more recent one

binaryFate edited 2 years ago Replies: 2 | Weight: -428 | Link [ + ]

One can see how difficult it is to hard fork in Bitcoin. People are legitimately afraid of it because it's handled in such an ad-hoc way every time it must happen.

If Monero can have such an organized hard-fork-release schedule right from the beginning, all users and developers will feel at ease with these inevitable events in the future. So I completely support this idea!

Reply to: binaryFate
dEBRUYNE edited 2 years ago Weight: -430 | Link [ + ]

This perfectly describes my thoughts on this subject. So I support this as well!

Reply to: binaryFate
Kazuki edited 2 years ago Weight: -430 | Link [ + ]

"Nobody panics when things go "according to plan.""

cAPSLOCK edited 2 years ago Replies: 1 | Weight: -429 | Link [ + ]

Overall I agree with the general sentiment that this is better than the other alternative listed here.

My initial reaction also included a small bit of recoil. My nature is conservative (to a fault at times) and I am inclined to "leave well enough alone". Sometimes budgets scare me since some people approach a budget like: "So.. we have M10,000, but only see M7,000 worth of stuff to spend it on... what else should we buy".

In this way I would hate to think this would foster an environment of change for change's sake.

"Well the next fork is coming... what should we use it on?"

Another way of saying it is this: Bitcoin is struggling somewhat against its own constraints. And many of us can see the negative in this. However, there is a sort of stability that comes with these constraints. We know fundamental change is less likely with the way Bitcoin is built and guided by its core community.

OK. All that said... In the final analysis I see this is a generally positive idea. I just wanted to shoot out that one caveat.

Reply to: cAPSLOCK
fluffypony edited 2 years ago Weight: -430 | Link [ + ]

> In this way I would hate to think this would foster an environment of change for change's sake.

I think that the majority of the forks will be "benign", just protocol version bumps, so this is instead just a way of creating an environment that is more robust. As I mentioned to @GingerAle, Bitcoin has had a major fork because of a database change, so keeping everyone current "by force" has the added bonus of not leaving us open to this sort of issue.

rocco edited 2 years ago Replies: 2 | Weight: -429 | Link [ + ]

i want to talk mostly about quality assurance and change management in this post. writing english takes me a lot of energy so i try to make it short :-) first of all, i have no experience in open source projects and their change/release management methods, but i know how big banks and payment processors develop financial software. also i do not know anything about the deployment process and policy of the monero testnet.

new branch means new risk. to be on the safe side and flexible enough with this kind of branching i would recommend having at least one more testnet instance. also i do not know how far you can run the trunk/your own branch locally, so it's difficult to give advice without really knowing how you prefer to work.

as long as open bugs can still be fixed, one month of freeze sounds fine, maybe a little long, but that depends on the testing efficiency too.

you devs know best if 6 months is enough or if 3 would be better (and off after 6), there is no need to punish yourself with this. emergency changes are excluded of course (it should be possible to completely disable this feature and hard fork immediately). i personally would welcome 3 months, but i do not know if this is possible while still keeping quality high and risks low.

in general i would not make too many rules, or in the end you have to make a lot of exceptions. as light as possible is best. my feeling is that 1 year is too long for now, but if we mature more it will be very useful for users to have this (as we already know, usability is key). even non-functional qualities like the reliability and robustness of the network profit from it.

maybe you should also think about a method to estimate how many nodes/blocks already use the new branch, so you can lower the praying even more once the real hard fork kicks in.
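[Editor's note: a minimal sketch of the adoption estimate rocco suggests, in the style of Bitcoin's block-version supermajority signalling mentioned in the OP. The function name, window size, and 80% threshold are illustrative assumptions, not Monero's actual code.]

```python
# Estimate upgrade adoption by counting how many of the last `window`
# blocks carry the new protocol version in their header.
# All names and thresholds here are hypothetical, for illustration only.

def upgrade_ready(recent_block_versions, new_version, window=1000, threshold=0.80):
    """Return True if enough recent blocks signal the new version."""
    window_versions = recent_block_versions[-window:]
    signalling = sum(1 for v in window_versions if v >= new_version)
    return signalling / len(window_versions) >= threshold

# Example: 850 of the last 1000 blocks signal version 2
versions = [2] * 850 + [1] * 150
print(upgrade_ready(versions, new_version=2))  # True
```

A node could publish this ratio so that, as rocco says, the "praying" part of the fork is replaced by a measurable readiness number.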

all in all, i like it. if i have more info i can give better advice regarding quality assurance.

Reply to: rocco
rocco edited 2 years ago Replies: 1 | Weight: -429 | Link [ + ]

if i think about it a little bit, part of the testing could be "easily" automated. imagine automatic deployment on 3-4 virtual machines. then we put a simple synthetic database inside them and let them sync. this would make regression testing much easier. later make a few transactions from node to node and validate. this could all be done with shell scripts i think, but would still be a lot of work, no question; maybe too much effort for the gain.
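[Editor's note: a sketch of the convergence check at the heart of rocco's automated regression idea: after the test nodes have synced the synthetic chain, verify they all agree on the same tip. The dicts below stand in for RPC responses from hypothetical local testnet daemons; nothing here talks to a real node.]

```python
# Verify that a set of test nodes converged on the same chain after syncing.
# Each node state is a stand-in for a daemon RPC response (hypothetical fields).

def nodes_converged(node_states):
    """True when every node reports the same height and top block hash."""
    tips = {(n["height"], n["top_hash"]) for n in node_states}
    return len(tips) == 1

# Simulated responses from four freshly synced nodes
synced = [{"height": 5000, "top_hash": "abc123"} for _ in range(4)]
diverged = synced[:3] + [{"height": 4998, "top_hash": "def456"}]

print(nodes_converged(synced))    # True
print(nodes_converged(diverged))  # False
```

The VM deployment and transaction steps would wrap around a check like this; the assertion itself is the cheap part.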

Reply to: rocco rocco
fluffypony edited 2 years ago Replies: 1 | Weight: -429 | Link [ + ]

OH! I forgot about your comment about testnet.

The problem is that 99% of the nodes don't run testnet (at least not on the same machine). I've noticed that if I stop my testnet miners then testnet stalls entirely, so it's not a particularly strong component at the moment. That will get better in future, and it will be a good proving ground, but even there we still have to have a rolling hard fork schedule (maybe 3 months before for experimental changes?) otherwise everyone gets stuck with the old version.

Reply to: fluffypony rocco rocco
Kazuki edited 2 years ago Weight: -429 | Link [ + ]

6 months is a good period: e.g. next September 18th (6 months after the b-day) the first hard fork could happen, and it will also coincide with the "halving". Also a suggestion: having all core devs GPG sign a document saying they will never change the emission or inflation would put most concerns at rest (the 'Monero constitution' or something like that).

Reply to: rocco
fluffypony edited 2 years ago Replies: 1 | Weight: -429 | Link [ + ]

I agree with you that we can move to a 1 year cycle in future, and I also agree that 3 months is probably going to be more of a nuisance than a positive force. I think the most important thing is to get people used to "update-or-die". I'd rather have 100 nodes all over the world that are ALWAYS current than 1000 nodes where we can't make changes because we somehow have to keep them connected;)

The biggest issue with this is that, unlike commercial software, we don't know the people running the nodes. We might know some of the vocal ones, but honestly even with Bitcoin the loudest talkers don't represent the supermajority. So we have to make changes without knowing who our "clients" are or even if they'll upgrade. This means we have to reduce the set to those that will upgrade, and that is why we have this proposal.

PS. Your English is WAY better than my grasp of any-language-other-than-English!

Reply to: fluffypony rocco
rocco edited 2 years ago Weight: -427 | Link [ + ]

what you say is very true, but i think there are features that everyone (Mr "Unknown" included) wants, and they need a hard fork. i am sure it/he/she would come out and tell us if they are not ok with it. everyone involved in mining/running a node is following monero very closely, so i think for the moment hard forks will go through very smoothly. (assuming software quality is good :-)

"no news good news"

Kazuki edited 2 years ago Weight: -430 | Link [ + ]

It puts Monero right on track as a scalable cryptocurrency, as long as the emission and other key economic aspects that were already agreed upon by the community are maintained. From the record I can see most improvements will be made towards better privacy and more efficiency, so I support this idea as well.

Pandher edited 2 years ago Weight: -430 | Link [ + ]

Yes, dates with GMT times for clients dropping off the network: a very elegant approach, fluffy. I support it.

rpietila edited 2 years ago Weight: -430 | Link [ + ]

Without going into details why it is so, I believe that the benefits of the stated approach reasonably overweigh the alternative, which is to "fork and pray" as the OP put it.

Hueristic edited 2 years ago Replies: 1 | Weight: -430 | Link [ + ]

My only concern would be if this method opens an avenue for attack. GA makes a good point as well.

Reply to: Hueristic
farfiman edited 2 years ago Replies: 1 | Weight: -430 | Link [ + ]

"Well the next fork is coming... what should we use it on?"

Other things to think of: if the hard fork dates are locked in, what happens if one is needed in an emergency (security, attack, etc.)? Or, knowing that a hard fork date is coming up, there might be a rush to take a change and "stuff it in" before it's really ready, so as not to have to wait another 6 months. I'm not implying that the current team would ever do it, but who knows what the future will bring.

Reply to: farfiman Hueristic
fluffypony edited 2 years ago Weight: -430 | Link [ + ]

Emergency hardforks aren't precluded by this policy, they can still be deployed as normal.

Also remember that even though there's a "code freeze" at the 1 month pre-fork point, if it truly is "hard fork" code it'll actually only kick in a year later (to allow for the grace period). For instance, the MRL-0004 minimum mixin could be added in such a freeze, such that a month later it kicks in for your own transactions that you broadcast (and blocks, if you're a miner), but transactions and blocks that don't conform are still allowed for another 6 months until the subsequent hard fork, after which they are banned even at the p2p level.
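[Editor's note: a hedged sketch of the staged rollout described above, using the MRL-0004 minimum mixin as the example. The fork heights and the minimum value are illustrative assumptions: at fork N the rule is only a local policy for what you create, and only at fork N+1 does it become a consensus rule rejecting non-conforming transactions and blocks from peers.]

```python
# Two-stage rule enforcement across successive scheduled hard forks.
# Heights and the minimum mixin below are hypothetical placeholders.

FORK_N_HEIGHT = 1_000_000    # fork where the rule becomes local policy
FORK_N1_HEIGHT = 1_200_000   # subsequent fork where it becomes consensus
MIN_MIXIN = 2                # illustrative MRL-0004-style minimum

def passes_policy(tx_mixin, height):
    """Applied only to transactions this node creates (and blocks it mines)."""
    if height >= FORK_N_HEIGHT:
        return tx_mixin >= MIN_MIXIN
    return True

def passes_consensus(tx_mixin, height):
    """Applied to everything seen on the network, even at the p2p level."""
    if height >= FORK_N1_HEIGHT:
        return tx_mixin >= MIN_MIXIN
    return True

# A 0-mixin tx between the two forks: rejected locally, still relayed by peers
print(passes_policy(0, 1_100_000), passes_consensus(0, 1_100_000))  # False True
# After the subsequent fork it is banned network-wide
print(passes_consensus(0, 1_300_000))  # False
```

The point of splitting the check in two is exactly the grace period: old clients keep working for one fork cycle before non-conforming data is dropped outright.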