Ethereum’s biggest-ever upgrade — the move to a proof-of-stake consensus mechanism — is right around the corner. But while the Merge should add security and sustainability, it doesn’t include sharding, the long-anticipated method of scaling the network. 

In Part I of our conversation with Ethereum Foundation (EF) researcher Danny Ryan, who’s helped coordinate the upgrade process, we discussed what the Merge is designed to bring in terms of security and stability.

In Part II, Ryan talks about upgrades users can expect in the future, including danksharding, stateless Ethereum, and security updates that grapple with the rise of miner extractable value (MEV). He also explains how this years-long effort resulted in new methods for researching and testing future upgrades.


Coordination on a decentralized network

FUTURE: You alluded to the possibility that miners will fork and continue trying to use the old chain. But for the most part, this process has gotten everybody on board. What is your role in that as an Ethereum Foundation researcher? How does such a massive move get coordinated?

DANNY RYAN: I started getting involved in proof-of-stake stuff in around 2017, and even then it felt like a foregone conclusion. That was five years ago. And the Ethereum community has been very willing to not stagnate and to do it right, and construct a protocol that doesn’t just work today but works, hopefully, for 100 years or more. 

So, early in its ethos, when there was a hunch that proof of stake could be done securely and better than proof of work, people were very excited about that. And by the time 2016, 2017 rolled around, people were not only excited about it, but they were anxious for it to happen. It seems like it's very deep in the Ethereum community's ethos that this is going to happen.

There are more sensitive issues. There are fewer foregone conclusions, where the EF, the research team, and the clients that are outside of the EF are all trying to come up with solutions to problems and keep things moving. Sometimes the solutions are in a bit more of a gray zone — is this the right solution? Do we do it now? Do we do it later? That ends up being tough, and the EF attempts to help coordinate in those areas: help do some R&D to vet solutions, help facilitate conversations to decide on timelines, priorities, and orders.

But at the end of the day, on most items, the EF agenda is to help make the protocol more sustainable, secure, and scalable while being decentralized — and not to ship one particular feature over another. So, a lot of what we are focused on when it comes to both technical work and social coordination is around facilitating good information, good research, and good dialogue so that the many participants involved in the R&D, the engineering, and the community can keep things moving and come to decisions.

In the last five years there have been a lot more voices added to the community, and after the Merge, it’s theoretically going to become more decentralized. What thoughts do you have about the future process for upgrades? Is it possible that we’ll be looking at some sort of layer-one DAO to coordinate upgrades?

As I understand it, the Ethereum community is not into on-chain voting — or any sort of plutocratic voting on upgrades — and the protocol is the one the users decide to run. Generally, there's broad consensus. Sometimes there are schisms — for example, Ethereum vs. Ethereum Classic. But at the end of the day, it's your right and the community's right and users' rights to figure out what software they want to run. Generally, we agree because people are trying to make Ethereum better, and there's not a lot of conflict in some of the core stuff there.

So I don’t expect a formal technical mechanism. I do expect the process to continue to grow and change and evolve in this kind of loose governance, where there’s researchers, there’s developers, there’s community members, there’s dapps, and things like that. 

I would say that — and I think you alluded to it — there are more and more people at the table, and it's getting harder and harder to make decisions and ship things. I personally believe that that's a feature. I do think, both from a reliability standpoint for applications and users and from avoidance of capture in the long run, that it's probably important for a lot of the Ethereum protocol to ossify. So although it is increasingly difficult to be in the maelstrom of governance and try to ship — and sometimes it feels like I'm trying to run with a weighted vest and weights on my ankles, and now I've got weights on my wrists — I think we have some key stuff to get done over the next few years. But I think it's going to be harder and harder to get things done. And I think that's a good thing.

Vitalik calls it “functional escape velocity.” Let’s get Ethereum to a place where it has sufficient scale and functionality that it can be extended and utilized in an infinite multitude of ways in the next layer of the stack. Have the EVM have minimum sufficient functionality, have there be enough data availability to handle massive amounts of scale, and then applications can extend it in smart contracts. Layer twos can experiment with new VMs inside of their layer-two constructions; you can scale Ethereum and so on and so forth.

I think it’s going to be harder and harder to get things done. And I think that’s a good thing.

Shadow forks

One of the things that came out of this specific testing process was shadow forks, the process of copying real Ethereum data to a testnet to simulate a mainnet testing environment. Was that always in the plan? And how do you think that might change the R&D process for future upgrades?

We should have been doing shadow forks for the past four years. They're great; they're really cool. We essentially take a number of nodes that we control — call it 10, 20, 30 — and they think a fork's coming. They're on mainnet or one of these testnets, and then at some fork condition, like a block height, they all go, "Okay, we're on the new network." They fork and then hang out in their own reality, but they have the mainnet-size state.

And for a while you can pipe transactions from mainnet onto this forked reality to get a reasonable amount of what looks like organic user activity, which is really good. It allows us to test what ended up being highly organic processes that are hard to simulate. And that’s been great. Pari [Jayanthi] and others who work on the DevOps team at EF have been orchestrating these, and we learned so much from them. I think if you ask anyone, they’d be like, “Well, yeah, it would have been great if we were doing this three years ago, four years ago on every upgrade.”
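Conceptually, a shadow fork is just a fork condition plus replayed mainnet traffic. A minimal sketch of that idea in Python — all names here are illustrative, and this is a toy simulation, not how the EF's DevOps tooling (which operates on real client software) actually works:

```python
from dataclasses import dataclass, field

@dataclass
class ShadowForkNode:
    """A node that follows mainnet until a fork height, then diverges
    into its own 'shadow' network while keeping mainnet-size state."""
    fork_height: int
    shadow_chain: list = field(default_factory=list)  # blocks built after the fork

    def on_mainnet_block(self, height: int, txs: list) -> str:
        # Before the fork condition triggers, the node tracks mainnet as usual.
        if height < self.fork_height:
            return "following-mainnet"
        # At and after the fork height, the node builds blocks in the shadow
        # network instead, replaying ("piping") mainnet transactions so the
        # forked reality sees organic-looking user activity.
        self.shadow_chain.append({"height": height, "txs": list(txs)})
        return "shadow-fork"

node = ShadowForkNode(fork_height=100)
assert node.on_mainnet_block(99, ["tx1"]) == "following-mainnet"
assert node.on_mainnet_block(100, ["tx2", "tx3"]) == "shadow-fork"
assert node.shadow_chain[0]["txs"] == ["tx2", "tx3"]
```

The useful property the sketch captures is that only the small set of controlled nodes crosses the fork condition, so the shadow network inherits real state and real traffic without touching mainnet itself.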

But I will say another thing. I've been saying it [since] a year ago, and now we're in the long tail of security and testing: It's really pummeling this thing, making sure all the edge cases are correct, making sure that when the time comes, we take one shot at it and it works. And it turns out, the way that the software is constructed with consensus- and execution-layer clients, there's just a lot to build in terms of testing. Shadow forks are one piece of that. Utilizing other simulation environments that can test these two layers together — like Kurtosis, Antithesis, and others — is another.

There's some other stuff we need to do, like rewiring Hive, our nightly integration-test framework, so that it can handle both of these types of clients and so that you can write tests where different complexities are happening on both sides of the aisle. All that had to happen. First, the frameworks had to be developed or modified. Then a lot of the tests had to be written. So the nice thing with the Merge is we've really enhanced the tools in our toolbelt to be able to test upgrades in such a way that the next upgrade will be much more about writing the tests rather than thinking about how to even test it and writing the frameworks to test it.

What’s after proof of stake?

This has been going on for a long time — initially, sharding was going to come first, but ecosystem developments meant you could move to proof of stake first. Were there other ecosystem developments that popped up during this process that might shift your approach toward future upgrades?

First of all, there are probably a number of reasons the proof-of-stake shift was prioritized. One was to stop overpaying for security with proof of work. And the other was that scale was beginning to come through these layer-two constructions. So, maybe if you have 10-100x scale coming from that, you can focus on this other thing and finish the job and unify these two disparate systems: the beacon chain and the current mainnet. 

There are some other things that have affected how we think about timelines and priorities. I mentioned earlier that the whole MEV world has thrown a wrench into some things. There are centralization and other security concerns that emerge when you start thinking about where MEV might go. And there's been a whole lot of research over the past 12-plus months on how to mitigate some of these concerns with layer-one modifications. Depending on the analysis of threats coming from the MEV world, that might prioritize certain security features and security additions to L1 over something else that maybe was expected to be the priority.

I think something that is interesting is the sharding roadmap and the current expected construction, which is called danksharding, named after Dankrad [Feist], our researcher at the EF. The whole construction is actually simplified when you assume these highly incentivized MEV actors exist. Not only have some of these external actors altered how we think about security, but they also alter how we can think about the construction of these protocols. If you assume MEV exists, if you assume these highly incentivized actors are willing to do certain things because of MEV, then all of a sudden you have this third-party participant in the consensus that maybe you can offload things to, which in many ways can be simplifying. So there’s not only bad things that come, but there’s also new types of designs that open up.

We’ve really enhanced the tools in our toolbelt to be able to test upgrades in such a way that the next upgrade will be much more about writing the tests rather than thinking about how to even test it.

Is stateless Ethereum still being actively discussed and researched? 

Yes. The state — all of the accounts and contracts and balances and stuff — that's the state of Ethereum. Given where you are in the blockchain, there's a state of reality. That thing grows over time, grows linearly. And if you increase the gas limit, it grows even faster. So this is a concern. If it grows faster than the memory and hard-drive space of consumer machines, then you have issues with actually being able to run nodes on home computers and consumer hardware, which raises security and centralization concerns. Also, if you talk to some of the Geth [client] team members, the fact that the state keeps growing means that they have to keep optimizing stuff. So it's hard.

Stateless Ethereum and things in that research direction are a potential solution path for this, where to execute a block I don't actually need the entire state; there's kind of this hidden input to executing the function of a block. I need the pre-state, I need the block, and then I get the post-state to know if the block is valid. Whereas with stateless Ethereum, the state requisites — the accounts and other things that you need to execute that particular block — are embedded in the block, along with proofs that they are the correct state. Now executing a block and checking the validity of Ethereum becomes just a matter of having the block, which is really good. Now we can have full nodes that don't necessarily have full state. It opens up a whole spectrum of how to construct nodes. So I might have a node that fully validates and doesn't have the state, I might have a node that just keeps state relevant to me, or I might have very full nodes that have all the state and that kind of stuff.
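To make the stateless idea concrete, here is a deliberately tiny sketch in Python. Everything is illustrative: the "commitment" is a plain hash, the witness is the whole (tiny) state rather than just the touched accounts, and real designs would use Merkle or Verkle proofs instead — this only shows the shape of the idea that validity checking can need nothing but the block.

```python
import hashlib
import json

def commit(state: dict) -> str:
    # Toy stand-in for a state root: hash of the canonical JSON encoding.
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_block(pre_state: dict, txs: list) -> dict:
    # "Stateful" execution: full pre-state in, post-state out.
    state = dict(pre_state)
    for sender, receiver, amount in txs:
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

def validate_stateless(block: dict, pre_state_root: str, post_state_root: str) -> bool:
    # A stateless node holds no state of its own. The block carries a
    # "witness" (the accounts its transactions touch) plus evidence that
    # the witness matches the committed pre-state. In this toy, the witness
    # is the entire state, so the "proof" is a direct hash check.
    assert commit(block["witness"]) == pre_state_root, "witness doesn't match pre-state"
    return commit(apply_block(block["witness"], block["txs"])) == post_state_root

pre = {"alice": 10, "bob": 0}
block = {"witness": pre, "txs": [("alice", "bob", 3)]}
assert validate_stateless(block, commit(pre), commit({"alice": 7, "bob": 3}))
```

The point of the sketch: `validate_stateless` never reads any state outside the block itself, which is what lets a node fully validate without storing the ever-growing state.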

This is actively being worked on. There is, I believe, currently a testnet up, along with all the other fun stuff that needs to happen to make this work. My current assessment is that the demand for sharding and L1 scale is higher than the imminent threat of state growth. So it's very likely that, if one is prioritized over the other, scale will be prioritized.

That said, it's hard to say. There's "proto-danksharding," which is kind of like a stepwise way to get a bit more scale. Maybe that happens and then stateless happens and then full sharding happens, depending on the needs and assessment of what's going on and the threats involved. I think the general thought on state growth is that we must have a path and we must fix it, but [that] the imminent fires have been put out and that this isn't a thing that will cripple Ethereum in the next couple of years. But it's a thing that must be fixed.

Walk me through the upgrades that we do know for after the Merge. Will there be a cleanup upgrade? Is that separate from the Shanghai upgrade? And when does sharding get introduced?

Shanghai is likely to be the name of whatever the fork is after the Merge. Actually withdrawing the funds that you've been staking for almost two years now — that does not get enabled at the Merge. Withdrawals were initially expected to be included, but given the complexity of the Merge, over time it was decided to really strip it down, just get the Merge done, and not add the extra functionality of withdrawals. I would very, very, very much expect that withdrawals are enabled in Shanghai — so, the first upgrade after the Merge. This has been promised to many, many people who have a lot of capital on the line, and I don't expect any issue with that. These are generally specified, there are tests written, and that kind of thing.

There's a number of other EVM [Ethereum Virtual Machine] improvements that I think could make it into this upgrade — different mathematical operations, some different extensibility things, a bit better versioning within the EVM, and other features. It's a bit of a pressure-release valve on EVM improvements, which have been put to the side for multiple years now to do the Merge and other upgrades. And people really want to see some sort of minor scalability upgrade here. So it could be either proto-danksharding, which lays some of the foundation for full sharding and gets a little bit more scale, or potentially calldata gas-price reductions, which are very easy but aren't really a sustainable solution. So that's what we expect, hopefully, in Shanghai: withdrawals and a bit of scale.

Then the question is: What’s after that? And that’s hard to say. If we do get a bit of scale there, and it’s complementing the L2s really nicely and things are pretty good, then maybe there’s a demand to do stateless at that point. Or if L2s have an insatiable need for more scale, then maybe that sets up the stage for the full danksharding.

Read the first part of our conversation with Danny Ryan to learn how the Merge gives rise to new types of network actors.

This interview has been edited and condensed.