Scaling Issues & Comparing Blockchain L1s and L2s
Are you ready to finally get your hands dirty and experience using decentralized applications (DApps) for the first time? Or, do you want to expand your understanding of what’s available outside your go-to blockchain/ecosystem? Maybe you are looking for the next technology trend within the blockchain space for potential investment opportunities.
With so many blockchains today, you probably feel overwhelmed about where to start. Even if you understand the blockchain trilemma of layer-one (L1) chains and the basics of the layer-twos (L2s) built on top of them, their technical complexity and vast differences make them hard to evaluate.
Without understanding the system you’re potentially using or investing in, you cannot manage the risk and take advantage of the benefits, as each chain will have different pros and cons.
Stripping away the complexity to provide a more streamlined user experience is eventually the goal for all DApps, but it needs to be done in a way that remains transparent. That way, those who wish to see behind the curtain can make better-informed decisions about which blockchains and DApps they interact with.
In this article, we hope to give you a peek behind the curtain and look at what problems blockchain developers have been working hard to solve. You’ll learn the differences between L1 blockchains and their L2 scaling solutions, including the tradeoffs, risks, and features to consider.
The Blockchain Trilemma
There are three things a blockchain can optimize for: scalability, security, and decentralization. So far, we have yet to see a proven design that achieves all three without compromise within a layer-one blockchain. We will cover blockchain layers, but first, we should understand the blockchain trilemma.
Scalability is the ability to handle a high number of transactions and users, often measured by the transactions per second (TPS) the chain can process. It's worth mentioning that TPS is not a holistic metric for scalability. Security is the ability to defend against attacks, bugs, and exploits, and it reflects the chain's overall resilience. Lastly, decentralization involves distributing power so that there is no central point of control over the blockchain.
Decentralization is not a simple yes-or-no property. It resides on a spectrum, and aspects of it overlap with the security and resilience of the chain; more on this later. It's the balance of these three characteristics that allows for peer-to-peer transactions without knowing the identity on the other side and without a middleman. However, the use cases go far beyond just that.
Blockchain layer-one transaction speeds and gas fees suffer when the network gets busier. The fees increase because users aim to outbid each other to get their transactions included in the block and verified ahead of others.
This congestion and competition drive up the cost of interacting with the network. Hence the need for greater scalability, but the tricky part is doing so without compromising the decentralization, accessibility, and security of the layer-one chain.
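To give a feel for the mechanics, here's a minimal toy fee auction in Python. The numbers and fee distribution are made up, and real fee markets (like Ethereum's EIP-1559) work differently in the details, but the core pressure is the same: block space is fixed, so as the mempool fills up, the bid required to make it into the next block climbs.

```python
# Toy fee auction: validators fill limited block space with the highest bidders.
# Numbers are arbitrary; real fee markets (e.g. EIP-1559) are more nuanced.

import random

def clearing_fee(pending_txs: int, block_capacity: int) -> float:
    """Lowest bid that still makes it into the block."""
    bids = sorted((random.lognormvariate(0, 1) for _ in range(pending_txs)), reverse=True)
    included = bids[:block_capacity]
    return included[-1]            # everyone else must roughly outbid this to get in

random.seed(7)
for demand in (500, 2_000, 10_000):   # quiet vs. busy vs. congested mempool
    print(f"{demand:>6} pending txs -> clearing fee ≈ {clearing_fee(demand, 500):.2f} (arbitrary units)")
```

The only way to bring that clearing fee down without shrinking demand is to add block space, which circles right back to the trilemma.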
How much data can we reasonably fit into a block, and how fast can we produce blocks? That is the heart of the trilemma. It is a foundational question that every blockchain answers in its own way.
Understanding how these two levers affect whether outsiders can realistically validate those blocks is a critical component of decentralizing a blockchain.
The relationship works like this: the more transaction data an L1 blockchain packs into each block, the longer each block takes to propagate and validate, and when demand exceeds the available block space, gas fees rise. Therefore, each blockchain design limits block size and block time as part of the trilemma's balancing act.
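To make the levers concrete, here's a rough back-of-the-envelope sketch with purely illustrative numbers that don't describe any real chain: raising block size and lowering block time buys throughput, but it multiplies the data every node must process each day.

```python
# Back-of-the-envelope throughput model with made-up, illustrative numbers.
# None of these figures describe a specific chain; they only show the lever mechanics.

def throughput_tps(block_size_bytes: int, avg_tx_size_bytes: int, block_time_s: float) -> float:
    """Transactions per second = transactions per block / seconds per block."""
    txs_per_block = block_size_bytes / avg_tx_size_bytes
    return txs_per_block / block_time_s

def node_data_per_day_gb(block_size_bytes: int, block_time_s: float) -> float:
    """Data every full node must download, validate, and store each day."""
    blocks_per_day = 86_400 / block_time_s
    return blocks_per_day * block_size_bytes / 1e9

# A small, slow chain vs. a chain that raises both levers for more TPS.
for label, (size, time_s) in {"conservative": (1_000_000, 600),
                              "high-throughput": (10_000_000, 2)}.items():
    print(label,
          f"TPS={throughput_tps(size, 250, time_s):,.0f}",
          f"node load={node_data_per_day_gb(size, time_s):,.1f} GB/day")
```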
Another result of trying to fit more data into each block and producing blocks faster is fewer node operators. What is a node? Let's use the Ethereum definition, as it applies pretty much everywhere else. A node is a running instance of client software.
The client, in this case, is an implementation of Ethereum that verifies all transactions in each block, keeping the network secure and the data accurate. So a node operator is someone who runs the Ethereum software, and that software is the mechanism that produces Ethereum's public ledger in the form of a blockchain. You can also think of node operators as accountants/validators for the network.
Having a large set of node operators is a core component of decentralization, as well as of how resilient or secure the network is. Node operators are generally limited by these three constraints:
- Processing power (CPU)
- Bandwidth (Internet speed and uptime/reliability)
- Storage (disk space on the machine the node is running on)
So when a blockchain raises its data throughput, it raises the requirements for node operators. The increased cost and reduced accessibility of maintaining a node result in fewer network participants checking the ledger for accuracy.
The fewer nodes there are, the easier it is to manipulate the blockchain, which could look like small parties of bad actors collaborating to produce and validate blocks however they see fit. This corrupts the nature of a blockchain, which is to have a distributed network work together to reach a consensus about data. The blockchain is worthless if that data can be manipulated and is not trusted.
Data immutability falls under the security aspect of the trilemma. This is especially relevant for blockchains that feature smart-contract functionality, since the added complexity of smart-contract-enabled blockchains creates more opportunities for hackers to find exploits.
While the ledger of blocks on the main chain will be secure and continue to function accurately, hackers can find loopholes in the code of the smart contracts that run on the chain.
There are blockchains built from the ground up to be completely centralized (controlled by one party) while still prioritizing security and, of course, scalability, but they serve different purposes (usually enterprise use) and aren't what we will be discussing today.
Layer-One’s
L1 blockchains are the base network where the validation and final settlement occur. All L1 blockchains are limited by the blockchain trilemma. Bitcoin is widely considered the most decentralized and secure L1. Bitcoin is much less complex than many other chains due to not having native smart-contract capabilities.
Ethereum is arguably the most decentralized and secure L1 out of those that have smart contracts. Other chains like Cardano and Polkadot also strive to maintain a high degree of these two attributes, but with different designs.
Some L1 chains, however, have chosen to compromise their decentralization to varying degrees and through various methods, mainly in exchange for improved processing capabilities. The market has clearly shown high demand for such chains. Two that fit this description are the BNB Chain (AKA Binance Smart Chain) and Solana.
Whether these design compromises will be a long-lasting product-market fit remains to be seen. The likelihood of these chains becoming further decentralized is not only a question of will from developers and community voting participants but also a technical one. However, they have captured much of the market share and established loyal user bases, as can be seen in the data below.
Scaling with Layer-Twos
Layer-twos are solutions designed to help scale blockchain applications by handling transactions off the main, layer-one chain. This is what Lyn Alden had to say in her recent deep dive into the Lightning Network (an L2 for Bitcoin).
“With only a few tens of millions of payments possible per month, how can Bitcoin potentially scale to a billion users? The answer is layers. Every successful financial system uses a layered approach, with each layer being optimal for a certain purpose.”
L2 blockchain scaling solutions enable better end-user experiences via higher throughput and speeds, lower gas fees, and the assurance that all transactions are eventually and irreversibly recorded to the mainnet.
By allowing the mainnet to handle the critical aspects of decentralization, data availability, and security, L2 solutions are able to de-congest the L1 chain by taking on the transactional burden with their parallel network (especially in the case of small transactions).
This begins to solve the scaling problem for L1 blockchains like Bitcoin and Ethereum while ensuring sufficient decentralized security standards are accessible to a wide range of applications.
L1s do not depend on L2s to function, but most L2s rely on the L1 in certain ways so that their scaling solution is not subject to the same blockchain trilemma as the main chain. Because they act as third-party integrations, they have far more design options available that do not affect the structural integrity of the main chain.
To explain in detail how these L2s work would get too technical for most of us. Not to mention there are a lot of different types of L2 scaling solutions. However, here are links to a relatively beginner-friendly overview of several popular L2 scaling solutions from Binance Academy. Rollups | Sidechains | State Channels | Nested Blockchains
Layer-Two Challenges
There are several challenges with prominent L2s today, although these issues are being worked on, and some already have solutions on paper that are not far off on their roadmaps.
Generally, some aspects of today's L2s are operated by trusted entities to varying degrees. This often means settling for more trust in the entity that records and executes transactions, even more so than with L1s like the BNB Chain or Solana.
A significant difference is that L2 transactions are finalized back on Ethereum. This means the greater degree of trust is only required for a window of time, until the transactions are verified on the L1.
This isn't the same as holding assets on the L2, which exposes those assets to the risk of the L2 chain breaking. As long as you keep funds on the L2, they rely on its bridge mechanism to the L1 continuing to function.
Therefore, as long as the ability to move assets to and from the L1 is intact, L2s can improve the trade-offs of the blockchain trilemma. Users get close to the same security and confirmation assurances as the main chain once the 'sequencer' sends the batch of transactions to finish verification on the L1. This is what it means when people describe L2s as inheriting the security of the L1.
Another con to using L2s is how long it takes to exchange tokens from the L2 version back to the L1 version of that token. The initial step of going from L1 to L2 is a matter of minutes. However, Arbitrum and Optimism withdrawals (back to the L1) can take up to seven days.
The sidechain Polygon uses a PoS bridge that allows withdrawals in around three hours. (Sidechains differ from L2s primarily in that they use their own consensus algorithms instead of relying on the L1's.)
Cutting-edge crypto development is much like an innovative company with high growth potential that needs investor funding to keep building despite operating at a loss (though only some protocols operate at a loss). Because continuous funding, or even revenue, for development is not guaranteed, this is a risk factor to watch, especially if the macro environment continues to deteriorate or the technology hits bigger-than-expected setbacks and challenges.
Choosing a suitable chain for your DApp needs
Amongst industry experts and builders, it’s generally believed that the future of blockchain networks will consist of at least several L1 chains and many L2 chains coexisting with different strengths and weaknesses. We will likely see networks specializing in certain use cases or niches.
The most obvious example is Bitcoin. Bitcoin is optimized for decentralization and security over scalability because its original design lends itself to the concept of “sound money” or digital gold. Its identity has evolved, but its stance on addressing the blockchain trilemma has stood the test of time (this is essentially what the Bitcoin “fork wars” in 2017 were all about).
On a different note, gaming, social community systems, and web3 applications require high throughput, speed, and very cheap fees. They can also function well despite compromising on decentralization, as long as they still offer enough improvement over traditional web2 options, for example regarding users' ownership rights.
Alternatively, institutionally minded DeFi protocols would want a chain with a greater focus on security and would be less concerned about gas costs (blockchain network fees are not percentage-based). Right now, that's Ethereum's L1.
If you intend to move more than testing amounts of funds over, you should consider the chain's liquidity, whether your desired use case is economically achievable and available, and its security and decentralization. Understanding these factors is key to your own research process when choosing any L1 or L2.
Liquidity and Utility
Liquidity is reasonably straightforward: it's about whether the network has enough participants and volume to operate efficiently. For L2s, this also involves being able to move assets between their network and the mainnet without causing problems.
Liquidity issues occur when moving large amounts of crypto relative to the amount in the 'liquidity pool.' The resulting price fluctuations can affect your ability to exit a position in a coin, or move that coin to mainnet, without losing some of its value. This can affect even smaller traders if big holders decide to make big moves within low-volume protocols or assets that you hold funds in.
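As a rough illustration, here's a minimal sketch of the constant-product formula (x * y = k) that many automated market makers use. The pool sizes are hypothetical, and real DEXs add fees and other details, but it shows why a trade that is large relative to the pool loses value to price impact.

```python
# Constant-product AMM sketch (x * y = k). Pool sizes are hypothetical.

def swap_out(pool_in: float, pool_out: float, amount_in: float) -> float:
    """Tokens received when selling amount_in into the pool (fees ignored)."""
    k = pool_in * pool_out
    new_pool_in = pool_in + amount_in
    new_pool_out = k / new_pool_in
    return pool_out - new_pool_out

pool_usdc, pool_token = 1_000_000.0, 1_000_000.0   # a hypothetical $1M-per-side pool
for trade in (1_000.0, 100_000.0, 500_000.0):
    received = swap_out(pool_usdc, pool_token, trade)
    slippage = 1 - received / trade
    print(f"sell {trade:>9,.0f} USDC -> {received:,.0f} tokens ({slippage:.1%} price impact)")
```

At 0.1% of the pool the impact is negligible; at half the pool it is ruinous. The same math is why it pays to check liquidity before committing funds.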
Blockchains can be thought of as shopping malls, with the stores representing the DApps built on the chain. The options available to us users (shoppers) are only as wide as there are developers willing to come and set up shop. (L2s are like adding a second floor to the mall to make room for more visitors).
For newer networks, it might take time to develop into a bustling shopping mall; some flop and end up looking more like ghost chains. Before interacting with a new chain, it is a good idea to look at which DApps are available and interesting to you. Good early signs include metrics like a growing number of developers building there.
Privacy is another aspect of utility that is something to consider. This refers to privacy for users’ transactions, which is not inherently present in Bitcoin, Ethereum, and most blockchains (they are pseudonymous). This can be a problem; for example, if you transfer crypto from any KYC (ID required) exchange to one of your wallets, they can now link your real-world identity to that wallet address, and nothing stops them from viewing your wallet’s past and future activity.
Another scenario would be for payments. Suppose you paid for a coffee with a cryptocurrency on a public blockchain. In that case, that coffee business now has access to your wallet’s activity, and you can probably see how that would potentially be seen as valuable information for them to analyze their customer’s spending behavior. Technically speaking, there are ways around these privacy issues, but protocols designed with privacy features introduce regulatory risk.
Fees and Speed (TPS)
This is the cost for validators to process your transaction, including the 'gas fees' paid for any smart-contract interactions. For example, on a DApp like Uniswap, the first time you use each token you have to sign an approval transaction allowing the DApp's smart contract to spend that token from your wallet address (for instance, to provide it to a liquidity pool). Then you pay another fee for the actual swap or deposit.
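As a rough sketch of how those two fees add up, the figures below are assumptions, not quotes from any network: ballpark gas units for an ERC-20 approval and a swap, plus an assumed gas price and ETH price.

```python
# Rough fee estimate for the two-step approve-then-swap pattern.
# Gas units, gas prices, and the ETH price below are assumptions for illustration only.

APPROVE_GAS = 46_000      # ballpark for an ERC-20 approval
SWAP_GAS    = 150_000     # ballpark for a DEX swap
GWEI        = 1e-9        # 1 gwei = 1e-9 ETH

def fee_usd(gas_units: int, gas_price_gwei: float, eth_price_usd: float) -> float:
    return gas_units * gas_price_gwei * GWEI * eth_price_usd

for gas_price in (15, 60, 200):   # calm vs. busy vs. congested network
    total = fee_usd(APPROVE_GAS + SWAP_GAS, gas_price, 1_800)
    print(f"{gas_price:>3} gwei -> approve + swap ≈ ${total:,.2f}")
```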
If you are dealing with large amounts of funds, paying $100 or even a few hundred dollars in fees to take out a loan with your crypto as collateral isn't really an issue, especially if it isn't a high-frequency event. For the average person, however, fees like that make the network unusable. Granted, gas fees on Ethereum aren't that high now compared to most of 2021. Still, even during a bear market, gas prices spike as congestion spikes, and most of us assume a bull market will come eventually.
Because sharding (the major L1 scaling upgrade for Ethereum) is still a way out on the roadmap, it’s just a matter of time before we experience gas prices becoming a bigger problem again.
Even at Ethereum's current relatively low costs, an application like a blockchain game is often not economically feasible there; it needs a chain that can keep up with the high volume of smart-contract interactions the game generates.
Once you find a DApp that you want to use (utility-wise), figure out whether the fees and speeds make economic sense for you. If they don't, there's no need to look further; instead, you can try to find the same or a similar service on a different L1 blockchain or an Ethereum L2. Then you can move on to evaluating the security and decentralization aspects.
Security and Decentralization
Users demand some security assurances to transfer their funds onto a new blockchain platform. The same goes for DApps or exchanges integrating support for new chains and developers building applications native to a specific chain.
The way these L2 networks validate transactions needs to be evaluated based on the level of security required for its use case and the possibility of validators on the L2 chain engaging in fraud. Some L2 chains try to be all-purpose scaling solutions, while others might focus on their niche.
Most of the top L2s currently have centralized 'sequencers' as operators. Sequencers are responsible for batching the transactions recorded on the L2 and then committing them to the L1 for final settlement (at the speed of the L1, but usually at a much lower cost per transaction thanks to data compression).
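A minimal sketch of the economics (all figures hypothetical) shows why batching matters: the fixed cost of one L1 settlement transaction gets split across every transaction in the batch.

```python
# Sketch of how a sequencer amortizes L1 settlement cost across a batch.
# The cost figures and batch sizes are hypothetical, for illustration only.

def cost_per_tx_usd(l1_settlement_cost_usd: float, batch_size: int,
                    compressed_bytes_per_tx: int, l1_cost_per_byte_usd: float) -> float:
    """Each user pays a slice of the fixed settlement cost plus their own compressed data."""
    fixed_share = l1_settlement_cost_usd / batch_size
    data_cost = compressed_bytes_per_tx * l1_cost_per_byte_usd
    return fixed_share + data_cost

for batch in (10, 500, 5_000):
    print(f"batch of {batch:>5}: ≈ ${cost_per_tx_usd(50.0, batch, 12, 0.002):.3f} per transaction")
```

The bigger the batch, the closer each user's fee gets to just the cost of their compressed data on the L1.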
Just as scalability and decentralization pull on opposite ends of the rope for L1 blockchains, L2 chains that obtain a high degree of security from the mainnet (i.e., Ethereum) combined with their scaling capabilities struggle to achieve decentralization.
This is mainly due to the tech being so new and untested, which requires the developers to have greater control. Across the board, blockchain developers recognize the need for their platforms to be upgradeable until the product matures.
If you’re familiar with the pros and cons of DAOs, you will see the similarities. Distribution of power (decentralized) runs into efficiency issues that are not yet conducive to any fast-paced, innovative industry, especially when users’ funds are on the line.
When bugs, exploits, failures, or DDoS attacks occur, quick and decisive action is required to contain the damage and fix the issue. In the case of a repeatable bug, exploit, or hack, the problem and its solution need to be kept private so as not to draw in more abuse before the fix is implemented. Reaching consensus in a democratic and distributed manner is not practical in such scenarios.
The Lightning Network
The Lightning Network (LN) is an L2 for the Bitcoin blockchain. It’s a peer-to-peer (P2P) communication and transaction protocol that transfers bitcoin between nodes (users) by merely signing transactions and updating channel balances without recording to the blockchain.
It routes similar to the internet, but in a P2P manner where a payment might take many hops to reach its destination. Eventually, these channel states get broadcasted back to Bitcoin’s L1 for final settlement when a user wants to close the channel.
Within these state channels, bitcoin can be exchanged in 1–3 seconds for fees well under $0.01. Think of state channels like opening a bar tab, with the net balance of all the transactions that occurred getting settled when the tab is closed.
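Here's a toy model of that bar-tab idea in Python. It is not the actual Lightning protocol, which adds routing, HTLCs, and penalty mechanisms; it only shows the core intuition: open a channel on-chain, update signed balances off-chain as often as you like, and settle only the final state.

```python
# Toy payment-channel model illustrating the "bar tab" idea.
# This is not the Lightning protocol itself (no routing, HTLCs, or penalties).

class Channel:
    def __init__(self, alice_sats: int, bob_sats: int):
        # Opening the channel is the first on-chain transaction.
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.offchain_updates = 0

    def pay(self, sender: str, receiver: str, sats: int) -> None:
        """Both parties sign a new balance state; nothing touches the blockchain."""
        assert self.balances[sender] >= sats, "insufficient channel balance"
        self.balances[sender] -= sats
        self.balances[receiver] += sats
        self.offchain_updates += 1

    def close(self) -> dict:
        """Only the final state is broadcast to the L1 for settlement."""
        return dict(self.balances)

channel = Channel(alice_sats=100_000, bob_sats=100_000)
for _ in range(250):                      # 250 coffee-sized payments, zero on-chain txs
    channel.pay("alice", "bob", 300)
print(channel.offchain_updates, "off-chain updates, final settlement:", channel.close())
```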
The network’s total value-locked (TVL) peaked at $212M and is currently $113M. The Lightning Network made it feasible for El Salvador to implement Bitcoin as a legal tender, as it provides scaling to bitcoin and is designed for micropayments.
The important aspect of the Lightning Network is that you can send and receive payments without having price exposure to bitcoin the asset. If it were just a way to scale the transfer of bitcoin, it wouldn’t be as powerful a tool since bitcoin’s volatility makes for a poor day-to-day currency. Here’s another excerpt from Lyn Alden explaining how it’s done.
“The idea is that bitcoin is an increasingly liquid asset that trades in most large currencies. Someone can exchange dollars for bitcoin, send bitcoin over the Lightning network to another custodian in some other country, and then exchange it back into dollars, all within a couple of seconds. This allows someone to use the payments aspect of Lightning quite separately from using bitcoin the volatile asset.
This can be done with other currencies as well. Someone can exchange pound sterling for bitcoin, send the bitcoin over the Lightning network, and then exchange that bitcoin for euros within seconds.
That fiat-to-bitcoin-to-fiat method can eliminate tax issues associated with Lightning payments for the end user while making use of the fact that Lightning is more cost-efficient than most payment networks such as Visa and Mastercard.”
The tradeoff in the examples above is that they require a third party to have the liquidity and facilitate the transfer. This third-party liquidity provider can offer services while you maintain ownership of your private keys, but more often than not, it is a company taking custody.
One reason the LN hasn't seen faster adoption is that there is little incentive to run a Lightning node as a service, due to the minimal revenue from fees. This isn't a complicated fix to come up with, but changes in the Bitcoin ecosystem take time. A still-active threat that will need to be solved if the TVL continues to grow is the channel-jamming attack, which is similar to a DDoS attack.
Arbitrum
Arbitrum is an L2 scaling solution built on top of Ethereum that uses optimistic rollups. Optimistic rollups post transaction data to the chain under the assumption that it is correct (hence the name) and allow a grace period during which the data can be challenged. Users can submit 'fraud proofs' during this window to show that the data is incorrect.
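Here's a heavily simplified sketch of that flow. It is hypothetical and not Arbitrum's actual implementation; real rollups verify fraud proofs on-chain through an interactive protocol, while here the "proof" is just re-execution.

```python
# Heavily simplified sketch of an optimistic rollup's challenge window.
# Real systems verify fraud proofs on-chain; here the "proof" is just re-execution.

from dataclasses import dataclass

@dataclass
class Batch:
    txs: list                 # transactions the sequencer claims to have executed
    claimed_root: str         # state root the sequencer posts to the L1
    challenged: bool = False

def execute(txs: list) -> str:
    """Stand-in for re-executing the batch and computing the true state root."""
    return f"root({sum(txs)})"

def submit_fraud_proof(batch: Batch) -> None:
    """Anyone can re-execute the batch during the window and flag a mismatch."""
    if execute(batch.txs) != batch.claimed_root:
        batch.challenged = True   # in a real rollup: revert the batch, penalize the sequencer

honest = Batch(txs=[1, 2, 3], claimed_root="root(6)")
fraud  = Batch(txs=[1, 2, 3], claimed_root="root(999)")  # sequencer lies about the result
for b in (honest, fraud):
    submit_fraud_proof(b)
print("honest batch challenged?", honest.challenged)    # False -> finalizes after the window
print("fraudulent batch challenged?", fraud.challenged) # True  -> rejected
```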
Arbitrum is designed with decentralization in mind, and according to them, everything they need to move to decentralization is in the design. However, they admit that they have training wheels still in place with the goal of gradually rolling out further decentralization as they and the community gain more confidence in the technology’s maturity.
It’s a matter of responsibly developing a product with many users, operators, and investors relying on it working as intended. Arbitrum currently has the most TVL (Total Value Locked) of any L2 at $2.69B and peaked at $4.1B.
Arbitrum's sequencer publishes a feed of each transaction, which anyone can view and which is usually produced within a second. Arbitrum co-founder Ed Felten calls this a soft guarantee. Every few minutes, the sequencer batches the ordered feed of transactions and sends it in a compressed format to Ethereum.
It’s worth mentioning that Arbitrum is working on making its sequencer role a distributed one to improve decentralization significantly.
You can transfer assets from L1 Ethereum to L2 Arbitrum through the Arbitrum Bridge. Deposits will arrive in your wallet within 5 minutes. Withdrawing funds back to L1 takes seven days or more to process through the standard bridge.
Conclusion
Across high-throughput L1 blockchains, L2 scaling solutions, and others not mentioned in this article, like sidechains, there are many ways to improve scalability. However, no matter which method is used, you lose some security and especially decentralization in the process. This is why users trust networks like Ethereum and Bitcoin for their resilience and secure track record.
There’s a good chance Ethereum will continue to remain at the top of smart-contract-enabled chains due to its massive user and developer community. On top of that, its many validators and network effects create a solid base for L2 solutions to build on.
In the long haul, we might see L1s focus on security and immutability, with some also emphasizing decentralization and censorship resistance, basically everything except scaling. L2 networks can then tailor their services to specific use cases and to scaling the L1s. This future has the potential to realize a self-sufficient financial ecosystem: a robust digital economy.
Bitcoin's strong identity, and a design to match, is a good model for other blockchains to replicate according to their own application. Many builders aim to evolve these scaling solutions to the point that they run themselves through proper incentives for distributed contributors.
However, the experimentation process to reach such goals and solve difficult technical challenges requires some centralization for a time. With the understanding gained here, you should now have a basic framework for evaluating the trade-offs, choosing networks that complement your desired activities, and maybe even spotting the shifting trends early.