this post was submitted on 19 Sep 2024

Nano Currency

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/nanocurrency by /u/EnigmaticMJ on 2024-09-18 22:53:52+00:00.


One of the biggest talking points in cryptocurrency is "scalability." But what does this really mean?

Many cryptocurrency advocates boast about the scalability of their favorite coin without much real understanding of what the term means. In most cases, the numbers touted as a network's maximum transactions per second (TPS) are nothing more than artificial constraints imposed by protocol limitations. In reality, these "maximum TPS" figures don't describe a network's scaling capabilities, but rather its scaling limitations.

If our goal is really to create a monetary foundation for a new global economy, it's critical that we establish a clear understanding of the meaning of scalability, and what is required to achieve scalability capable of serving the demand of a global monetary system.

Scalability isn't just about "max TPS" and protocol thresholds. It's a multifaceted challenge involving network design, resource management, and real-world performance considerations.

Let's take a look at some of the core principles of scalability for distributed ledger networks.

⎯⎯⎯⎯⎯⎯⎯⎯

Purpose-Driven Architecture

Perhaps the most fundamental principle of scalability is purpose-driven architecture. In the same spirit as "keep it simple, stupid!" (KISS), popularized by Lockheed engineer Kelly Johnson, and "do one thing, and do it well" (DOTADIW), popularized by Unix developer Doug McIlroy, purpose-driven architecture emphasizes optimizing a system for its primary function. For the sake of this discussion, that primary function is monetary payments.

Imagine using a Swiss Army knife as your sole tool for driving screws, cutting, etc. While versatile, it's not the most efficient tool for any specific job. Similarly, many distributed ledger networks aim to be all-encompassing, offering functionalities such as smart contracts and decentralized applications. While this versatility can be attractive and induce demand and investment, it often comes at the expense of efficiency, having a detrimental effect on the processing of monetary payments.

By concentrating solely on payments, a network can allocate resources more effectively, reduce operational costs, and handle a higher volume of transactions without incurring prohibitive expenses.

When a network supports non-monetary use cases, monetary transactions must compete for network resources and priority. Unfortunately, monetary payments are often less profitable compared to just about every alternative use case. This competition results in monetary transactions being deprioritized, leading to higher fees, slower processing times, and an overall degraded user experience.

Support for non-monetary use cases can even be unintentional. Networks that allow storage of arbitrary data can be exploited for non-monetary purposes. This misuse increases resource consumption (computation and storage) and operational costs, which are ultimately passed on to users through higher fees, inflation, or degraded network performance. This has been observable even in Bitcoin, where "NFT" exploits for storing arbitrary data - such as Ordinal Inscriptions, Bitcoin Stamps, and BRC-20 tokens - have caused dramatic surges in fees and confirmation times.

⎯⎯⎯⎯⎯⎯⎯⎯

Asynchronous Data Structures and Consensus Protocols

Traditional blockchains process transactions sequentially, creating a linear chain of blocks. Because every transaction must wait for the transactions ahead of it to be processed first, even completely unrelated transactions can bottleneck one another. This design inherently limits scalability, as all transactions are processed one after another.

Asynchronous data structures, like Directed Acyclic Graphs (DAGs), allow for parallel processing of transactions that aren't dependent on each other. Multiple transactions can be processed simultaneously, significantly increasing throughput and reducing confirmation times. By enabling asynchronous processing, networks can better handle the high transaction volumes required in a global economy.
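To make the contrast concrete, here's a toy Python sketch (purely illustrative, not any network's actual implementation) that groups transactions into "waves" by dependency. Transactions with no dependencies between them can be confirmed in the same parallel wave, whereas a linear blockchain would force all of them into a single sequential queue:

```python
# Toy DAG ledger: each transaction lists the transactions it depends on.
# Transaction names are hypothetical examples.
deps = {
    "tx_a": [],          # independent payment from account A
    "tx_b": [],          # independent payment from account B
    "tx_c": ["tx_a"],    # spends the output of tx_a
    "tx_d": ["tx_b"],    # spends the output of tx_b
}

def confirmation_waves(deps):
    """Group transactions into waves that can be processed in parallel."""
    remaining = dict(deps)
    confirmed = set()
    waves = []
    while remaining:
        # A transaction is ready once all of its dependencies are confirmed.
        ready = [tx for tx, d in remaining.items()
                 if all(p in confirmed for p in d)]
        if not ready:
            raise ValueError("cycle detected - not a DAG")
        waves.append(sorted(ready))
        confirmed.update(ready)
        for tx in ready:
            del remaining[tx]
    return waves

print(confirmation_waves(deps))
# [['tx_a', 'tx_b'], ['tx_c', 'tx_d']] - two parallel waves,
# where a strictly linear chain would need four sequential steps.
```

The point is structural: in a DAG, the dependency graph alone determines what must be sequential, rather than an artificial global ordering.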

The type of consensus protocol also plays a crucial role in scalability. Leader-based consensus protocols, such as Bitcoin's Nakamoto Consensus, rely on a single node (the "leader") to propose the next block of transactions. Miners compete to solve a cryptographic puzzle, and the first to solve it adds the next block to the chain. This is a synchronous process that forms bottlenecks in the system's overall performance.

In contrast, leaderless consensus protocols, especially those utilizing vote propagation in Byzantine Fault Tolerant (BFT) systems, distribute the consensus process across multiple nodes without a central authority. Nodes collaborate to reach agreement on the order and validity of transactions through weighted voting mechanisms. This can be done asynchronously, ensuring that no transaction processing is blocked by the processing of other unrelated transactions.

This leaderless approach reduces single points of failure and allows for more efficient processing of transactions. By not relying on a single leader, the network can achieve lower latency and higher throughput, as multiple nodes contribute to consensus simultaneously. This method is particularly effective when combined with asynchronous data structures, further enhancing the network's ability to scale and handle global transaction volumes.
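A minimal sketch of the weighted-voting idea described above (the representative names, weights, and 67% quorum threshold are illustrative assumptions, not any specific network's parameters): a transaction is confirmed once the voting weight behind it crosses a quorum, with no single leader involved.

```python
# Illustrative quorum threshold: fraction of total voting weight required.
QUORUM = 0.67

# Hypothetical representatives and their voting weights.
voting_weight = {"rep_1": 0.30, "rep_2": 0.25, "rep_3": 0.20,
                 "rep_4": 0.15, "rep_5": 0.10}

def is_confirmed(votes_for_tx):
    """Return True once the weight voting for a transaction reaches quorum."""
    weight = sum(voting_weight[rep] for rep in votes_for_tx)
    return weight >= QUORUM * sum(voting_weight.values())

print(is_confirmed({"rep_1", "rep_2", "rep_3"}))  # True  (0.75 of weight)
print(is_confirmed({"rep_4", "rep_5"}))           # False (0.25 of weight)
```

Because each transaction's tally is independent, many such tallies can run concurrently, which is what allows this style of consensus to pair naturally with asynchronous data structures.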

⎯⎯⎯⎯⎯⎯⎯⎯

Vertical vs. Horizontal Scaling and Decentralization

In traditional computing, scaling is often achieved by adding more servers (horizontal scaling) or enhancing existing ones (vertical scaling). However, in distributed ledger networks requiring consensus among nodes, these concepts don't apply in quite the same way.

Adding more nodes to a distributed ledger network doesn't necessarily improve throughput. In fact, it can introduce additional latency, because more nodes need to communicate and agree on the network's state. For distributed ledger networks, real-world throughput is actually inversely correlated with the level of decentralization: as the number of nodes increases, so does the time required to reach consensus.
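A rough back-of-the-envelope model shows why adding voting nodes raises latency rather than throughput: if every voting node must exchange messages with every other, message count grows roughly quadratically (the per-message cost below is an arbitrary illustrative constant):

```python
# Rough model: all-to-all vote exchange among n voting nodes.
# Message count grows ~O(n^2); per_message_ms is an illustrative constant.
def consensus_latency_ms(n_nodes, per_message_ms=0.01):
    messages = n_nodes * (n_nodes - 1)  # each node sends to every other node
    return messages * per_message_ms

for n in (10, 100, 1000):
    print(n, round(consensus_latency_ms(n), 1))
# Tenfold more nodes -> roughly a hundredfold more messaging overhead.
```

Real networks mitigate this with delegation and smarter propagation, but the underlying tension between node count and coordination cost remains.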

While decentralization is foundational to blockchain technology, it presents a scalability challenge. The more decentralized a network is, the more complex and time-consuming the consensus process becomes. This complexity can hinder the network's ability to process transactions quickly and efficiently, which is crucial for a global-scale monetary system.

⎯⎯⎯⎯⎯⎯⎯⎯

Optimized Data Dissemination

Even if a network can theoretically process thousands of transactions per second, real-world throughput depends on how quickly data can be propagated and disseminated across the network. The latency for data dissemination - the time it takes for transaction data to reach all nodes - is a critical factor in network performance.

Efficient data propagation ensures that all nodes receive transaction data promptly, facilitating quicker consensus and higher throughput. Implementing optimized communication protocols, such as modern gossip protocols, can help minimize latency and improve the network's ability to handle a large volume of transactions.

Unfortunately, the majority of distributed ledger networks use relatively naive mechanisms for data dissemination, such as traditional gossip protocols, causing network latency to be orders of magnitude greater than necessary.
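As a sketch of why gossip-style dissemination scales well in principle, the following simulation (node count, fanout, and the random-peer model are all illustrative assumptions) shows push gossip reaching every node in a number of rounds that grows only logarithmically with network size:

```python
import random

def gossip_rounds(num_nodes, fanout, seed=0):
    """Simulate push gossip: each informed node relays the message to
    `fanout` randomly chosen peers per round. Returns rounds needed
    until every node has the message."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    informed = {0}             # node 0 originates the transaction
    rounds = 0
    while len(informed) < num_nodes:
        new = set()
        for _ in informed:
            new.update(rng.sample(range(num_nodes), fanout))
        informed |= new
        rounds += 1
    return rounds

# Reaches all 10,000 nodes in only a handful of rounds,
# on the order of log(num_nodes) / log(fanout + 1).
print(gossip_rounds(10_000, fanout=4))
```

The informed set grows by up to a factor of (fanout + 1) per round, which is why even large networks can be covered quickly; naive flooding or poorly tuned relaying, by contrast, wastes bandwidth and adds latency without improving coverage.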

⎯⎯⎯⎯⎯⎯⎯⎯

Node Synchronization and Quality of Service (QoS)

As networks scale up to handle more transactions, node synchronization becomes crucial for maintaining efficiency. This means ensuring that all network nodes agree on the order in which they process incoming transactions - commonly referred to as prioritization for quality of service (QoS).

Under normal conditions, nodes can easily stay synchronized because they have enough time to communicate and align on transaction ordering. However, when the network reaches maximum capacity (i.e. "saturation"), keeping nodes in sync becomes much more challenging. If nodes start processing transactions in different orders due to timing differences or delays, it can create compounding backlogs and increased latency. This misalignment results in severely degraded network performance.

To prevent this, it's essential for networks to establish a common protocol for transaction ordering, especially under heavy load. By following standardized rules, nodes can maintain synchronization and process transactions efficiently, ensuring smooth network performance even when demand is high.
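The "standardized rules" idea can be sketched as a shared, deterministic priority key: if every node sorts its backlog by the same key, they all drain a saturated queue in the same order. The key below (longest-idle account first, then arrival time, then ID as a tiebreaker) is an illustrative example, not any specific network's actual QoS rule:

```python
# Toy QoS ordering: all nodes apply the same deterministic key, so a
# saturated backlog is processed in the same order everywhere.
# Field names and the rule itself are hypothetical.
def priority_key(tx):
    return (tx["last_account_activity"], tx["arrival_time"], tx["id"])

backlog = [
    {"id": "tx1", "last_account_activity": 500, "arrival_time": 3},
    {"id": "tx2", "last_account_activity": 100, "arrival_time": 5},
    {"id": "tx3", "last_account_activity": 100, "arrival_time": 1},
]

ordered = sorted(backlog, key=priority_key)
print([tx["id"] for tx in ordered])  # ['tx3', 'tx2', 'tx1']
```

The essential property is determinism: any rule works for synchronization purposes as long as every node computes the same ordering from the same inputs, without needing extra rounds of coordination under load.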

⎯⎯⎯⎯⎯⎯⎯⎯

Minimal Protocol-Level Constraints

Most cryptocurrencies have protocol-level constraints - such as block size and block time - that effectively create a maximum theoretical throughput. While these limitations are often in place for security and stability, they can become bottlenecks as network demand grows.
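The arithmetic behind such a ceiling is simple. Using Bitcoin-like parameters (the average transaction size is an approximation), block size divided by transaction size gives transactions per block, and dividing by block time gives the hard throughput cap:

```python
# Back-of-the-envelope: how protocol constants cap theoretical throughput.
# Bitcoin-like parameters; avg_tx_bytes is a rough approximation.
block_size_bytes = 1_000_000   # ~1 MB base block size
block_interval_s = 600         # ~10 minute target block time
avg_tx_bytes = 250             # rough average transaction size

max_tps = block_size_bytes / avg_tx_bytes / block_interval_s
print(f"{max_tps:.1f} TPS")  # ~6.7 TPS - a ceiling set by the protocol itself
```

No amount of hardware or network investment can push real throughput past a limit baked into the protocol constants themselves.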

To achieve true scalability, as many throughput constraints as possible should be removed from the protocol. This approach allo...


Content cut off. Read original on https://old.reddit.com/r/nanocurrency/comments/1fk61vh/the_core_principles_of_cryptocurrency_scalability/
