Maximum Gini coefficient: a rough measure of how decentralized a blockchain is

Can we combine these per-subsystem measures of decentralization into a measure of the decentralization of the system as a whole? One simple approach is to take the maximum Gini coefficient across all of the basic subsystems, as follows:
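In other words, the combined measure is simply the largest Gini coefficient found in any basic subsystem. As a minimal sketch of that calculation (the subsystem names and share values below are made up purely for illustration), it might look like this:

```python
# A minimal sketch of the "maximum Gini coefficient" idea.
# All share values below are hypothetical, for illustration only.

def gini(shares):
    """Gini coefficient of a list of shares (0 = perfect equality, 1 = maximal inequality)."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    # Standard closed form over the sorted values
    # (2x the area between the equality line and the Lorenz curve).
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical share distributions for a few basic subsystems of one chain.
subsystems = {
    "client codebases": [9000, 300, 200, 100],          # e.g. node counts per client
    "mining pools":     [25, 20, 15, 10, 10, 8, 7, 5],  # e.g. % of hashpower
    "exchanges":        [30, 25, 20, 15, 10],           # e.g. % of trading volume
}

ginis = {name: gini(shares) for name, shares in subsystems.items()}
bottleneck = max(ginis, key=ginis.get)
print({name: round(g, 2) for name, g in ginis.items()})
print("maximum Gini coefficient:", round(ginis[bottleneck], 2), "(bottleneck:", bottleneck + ")")
```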

By this metric, the maximum Gini coefficient for both Bitcoin and Ethereum is about 0.92, because both systems have a high concentration of clients in a single codebase (Bitcoin Core and Geth, respectively).

Crucially, these values differ depending on which basic subsystems you choose. For example, one might argue that a single dominant codebase is not an obstacle to system decentralization. In that case, Bitcoin's maximum Gini coefficient improves to 0.84, and the new decentralization bottleneck becomes the distribution of nodes across countries.

Minimum Satoshi coefficient: an improved measure of blockchain decentralization

The maximum Gini coefficient raises an obvious question, however: while a higher Gini coefficient matches our intuition that a system is more centralized, each Gini coefficient is bounded between 0 and 1, which means the coefficient does not directly measure the number of individuals or entities that must be compromised in order to damage the system.

To be specific, for a blockchain, suppose you have an exchange subsystem with 1,000 participants and a Gini coefficient of 0.8, and a miner subsystem with 10 miners and a Gini coefficient of 0.7. It may turn out that compromising only three miners, rather than 57 exchanges, is enough to break the decentralization of the system, which means that using the maximum Gini coefficient to find the bottleneck would point to the exchanges rather than to the actual weak point, the distribution of miners.

There are several ways around this difficulty. For example, we could assign a principled set of weights to the Gini coefficients of the different subsystems before combining them.

An alternative is to define a metric that is similar in spirit, based on the same Lorenz curve used to calculate the Gini coefficient; we call it the "Satoshi coefficient". It is shown in the figure below: in this example, the Satoshi coefficient for the given subsystem is 8, because eight entities are required to reach 51% control.

In other words, we define the Satoshi coefficient as the minimum number of entities required to gain control of 51% of a subsystem's total capacity. Combining this with the measure above, by taking the minimum of the Satoshi coefficients across all of a system's subsystems, we get the "minimum Satoshi coefficient": the number of entities that would need to be compromised in order to compromise the system as a whole.

The Satoshi coefficient, then, is the minimum number of entities required to compromise a given subsystem, and the minimum Satoshi coefficient is the smallest Satoshi coefficient across all subsystems.

If the threshold for subsystem failure is not 51%, we can also define a "modified Satoshi coefficient". For example, it might take compromising 75% of exchanges to seriously degrade a system, but only 51% of miners.
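To make these definitions concrete, here is a minimal sketch, assuming each subsystem is described by a list of capacity shares. The 51% control threshold is a parameter, so the "modified" Satoshi coefficient is just the same calculation with a different threshold per subsystem; all names and numbers below are hypothetical.

```python
# Minimal sketch of the Satoshi coefficient: the fewest entities whose combined
# share reaches the control threshold (51% by default). All data is hypothetical.

def satoshi_coefficient(shares, threshold=0.51):
    xs = sorted(shares, reverse=True)   # largest entities first
    total = sum(xs)
    running = 0.0
    for count, share in enumerate(xs, start=1):
        running += share
        if running / total >= threshold:
            return count
    return len(xs)  # only reached if threshold > 1

def minimum_satoshi_coefficient(subsystems, thresholds=None):
    """Minimum over all subsystems; `thresholds` optionally overrides 51% per subsystem."""
    thresholds = thresholds or {}
    return min(
        satoshi_coefficient(shares, thresholds.get(name, 0.51))
        for name, shares in subsystems.items()
    )

# Hypothetical example: degrading the system via exchanges takes 75%, but miners only need 51%.
subsystems = {
    "exchanges": [30, 25, 20, 15, 10],        # % of trading volume
    "miners":    [30, 25, 15, 10, 10, 5, 5],  # % of hashpower
}
print(minimum_satoshi_coefficient(subsystems, thresholds={"exchanges": 0.75}))  # -> 2 (via miners)
```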

We can now use the Lorenz curves from the previous section to calculate the Satoshi coefficients for Ethereum and Bitcoin. Here is an example using the Lorenz curve of Geth, Ethereum's standard client: as we can see, it takes only two developers to control 51% of the code contributions to Geth, so the Satoshi coefficient is 2.
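As a small usage example, reusing the `satoshi_coefficient` sketch above with made-up commit counts (not actual GitHub data), the Geth-style calculation would be:

```python
# Hypothetical per-contributor commit counts for a single client codebase
# (illustrative numbers only, not real Geth data).
contributions = [4200, 3100, 900, 500, 300, 200, 150, 100, 80, 60]
print(satoshi_coefficient(contributions))  # -> 2: the top two contributors exceed 51% of commits
```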

The figure above illustrates this concept. Here’s a chart of all the subsystems of Bitcoin and Ethereum, this time calculating the Satoshi coefficient:

In the table below, we gather the Satoshi coefficients of all subsystems:

As we can see, with these basic subsystems, both Bitcoin and Ethereum have a minimum Satoshi coefficient of 1. Specifically, compromising the Bitcoin Core or Geth codebase would compromise more than 51% of clients, which would damage the respective networks.

Improving Ethereum on this measure would require other clients, such as Parity, to gain a larger market share; after that, centralization of developers or mining becomes the next bottleneck. Similarly, improving Bitcoin would require wider adoption of clients like BTCD and BCoin.

The minimum Satoshi coefficient depends on the subsystem definition

We recognize that some may argue that Bitcoin's heavy concentration in a single standard client does not affect its decentralization, or that this degree of centralization is inevitable. We take no position on that question here; the point is simply that different definitions of the basic subsystems yield different measures of decentralization.

For example, if one considers "founders and spokespeople" to be a basic subsystem, then Ethereum's minimum Satoshi coefficient is 1, because Vitalik Buterin and Ethereum are, for practical purposes, inseparable.

Conversely, if one considers "the number of different countries with substantial mining capacity" to be a basic subsystem, then Bitcoin's minimum Satoshi coefficient would again be 1, because compromising China alone would compromise more than 51% of mining power.

How to choose the basic subsystems that best represent a particular kind of decentralized system will be a matter of some debate, which we think is beyond the scope of this post. It is worth observing, however, that compromising "founders and spokespeople" and compromising "Chinese miners" are two different attacks on two different chains. As such, if one wants to compare the minimum Satoshi coefficients of different cryptocurrencies, diversification within the ecosystem can, to some extent, quantifiably increase decentralization.

Conclusion

Many say decentralization is the most important property of systems like Bitcoin and Ethereum. If that is true, quantifying decentralization is crucial. One such measure is the minimum Satoshi coefficient: as this coefficient increases, so does the minimum number of entities needed to compromise the system. We think this corresponds to the intuitive concept of decentralization.

Clear measures to quantify decentralisation are important for three reasons.

  1. To measure. First, such quantitative measures can be explicitly calculated, recorded over time, and displayed on a dashboard. This gives us the ability to track decentralization over time, at the level of both individual subsystems and the system as a whole.

  2. To improve. Second, just as we measure performance, measures such as the Satoshi coefficient allow us to see whether decentralization is increasing or decreasing. This lets us start attributing changes in decentralization to individual code deployments or other kinds of network activity. For example, in a resource-constrained situation, we could measure whether deploying 1,000 nodes or hiring two new client developers would produce the larger increase in decentralization.

  3. To optimize. Last but not least, a quantifiable objective function (in the mathematical sense) determines the outcome of any optimization process, and seemingly similar objective functions can yield vastly different solutions. If our goal is to optimize decentralization across systems as well as within them, we will need quantitative measures like the Lorenz curve, the Gini coefficient, and the Satoshi coefficient.

We acknowledge that there is plenty of room for debate about which basic subsystems a decentralized system requires. However, given a proposed set of basic subsystems, we can now generate a Lorenz curve and a Satoshi coefficient for each, and determine whether any one of them is really the decentralization bottleneck for the system as a whole.

In this light, we consider the minimum Satoshi coefficient to be a useful first step towards quantifying decentralization.

Vitalik’s comment:

There are two things I think these 4 charts miss about the developer and client decentralization of Bitcoin and Ethereum:

  • Many of the so-called "other Bitcoin client options" are actually forks of the same codebase as Bitcoin Core, whereas all of the Ethereum implementations are completely separate codebases built from scratch. It is therefore debatable whether Core and BU should count as two completely separate clients.

  • Ethereum doesn't really have the concept of a "standard client." If you take that literally as "the client people consult to deepen their understanding of the protocol rules," then in many cases that is actually Pyethereum, because Python is easier to read; and the C++ client is the one that generates the test suite. So in my opinion, merely counting code contributions to Geth is a poor representation of the decentralization of the ecosystem.

This just illustrates how slippery the concept of a subsystem can be. During last year's denial-of-service (DoS) attacks, Geth was unusable for a while and most people switched to Parity, so some subsystems are not mission-critical, yet they are not negligible either.

At the same time, I think you have overlooked the real reason the Gini coefficient is not entirely appropriate here: in the real world, the Gini coefficient is usually used to measure wealth inequality among the full-time residents of a country, so it measures an outcome of inequality. In the crypto world, inequality between accounts can arise from two sources: (1) differences in how successful different users are within the system at a given level of participation; and (2) differences in their level of participation.

The Gini index for cello production worldwide may be above 0.99, but apparently no one cares. The problem with mining and wealth is that there is such a long tail of amateurs with tiny stakes that the Gini coefficient here ends up measuring the length of that tail (i.e. how many users are counted) rather than anything else. (Note: because of the large number of amateur users, the Gini coefficient comes out high whenever "professional users" account for any appreciable share of mining or wealth.) So it is surely preferable to focus on the Satoshi coefficient, or on similar measures such as the share held by the top 100 holders.

Original author's reply:

Of course, I agree with all of the above. I still think the general idea of listing subsystems and using things like the Satoshi coefficient makes our intuition about decentralization clear.

As an analogy, it's a bit like the benchmarks game site: benchmarksgame.alioth.debian.org/why-measure…

Each subsystem we picked was like a different benchmark. Any single benchmark is flawed, but a set of benchmarks helps us determine where a given language tends to be fast or slow.

More detailed reply:

Many of the so-called "other Bitcoin client options" are actually forks of the same codebase as Bitcoin Core, whereas all of the Ethereum implementations are completely separate codebases built from scratch.

Agreed, and that is a possible refinement of this subsystem metric ("truly independent codebases"). If we used that definition, then Ethereum's clients would come out more decentralized than Bitcoin's, since truly independent codebases like BTCD and BCoin have a smaller share of Bitcoin than Parity has of Ethereum. Still, both systems would look largely centralized by this metric.

Ethereum doesn't really have the concept of a "standard client." If you take that literally as "the client people consult to deepen their understanding of the protocol rules," then in many cases that is actually Pyethereum, because Python is easier to read; and the C++ client is the one that generates the test suite. So in my opinion, merely counting code contributions to Geth is a poor representation of the decentralization of the ecosystem.

Of course. In the sense of "the distribution of developer code contributions to Ethereum's most popular client versus Bitcoin's most popular client," this is basically a like-for-like comparison. One could use another definition, such as the distribution of code contributions across all independent codebases used in production. Or one could argue that counting code contributions doesn't really matter at all.

Still, as with the benchmarks game, the discussion has at least begun to crystallise concrete, quantitative measures of what decentralisation means.

The problem with mining and wealth is that there is such a long tail of amateurs with tiny stakes that the Gini coefficient here ends up measuring the length of that tail (i.e. how many users are counted) rather than anything else.

We did think about this — you’re right, if we calculate across all ETH or BTC addresses, then this is a problem, and the Gini coefficient is very close to 1.0 (since most addresses have 0 BTC/ETH, as do most of the world’s inhabitants).

In this case, we limited the wealth/address calculation to the top N addresses for ETH and BTC, so that we at least have a measure of the degree of wealth decentralization among those top N addresses. We are not arguing that this is a key metric, just using it as an illustration. And although you wouldn't want the Gini coefficient of BTC or ETH to be 1.0 (because then a single person would own all of the currency and nobody else would have an incentive to help improve the network), in practice a fairly high degree of wealth centralization is still compatible with the operation of a decentralized protocol.
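As a small sketch of this point, reusing the `gini` helper from the earlier sketch with made-up balances: including the long tail of near-empty addresses pushes the Gini coefficient toward 1.0, while restricting to the top N addresses gives a more informative number.

```python
# Made-up balances: a few large holders plus a long tail of near-empty addresses.
top_balances = [50_000, 30_000, 20_000, 10_000, 5_000, 2_000, 1_000, 500, 200, 100]
dust = [0.001] * 100_000                       # long tail of dust addresses

all_addresses = top_balances + dust
N = 10
print(round(gini(all_addresses), 3))                             # ~1.0, dominated by the tail
print(round(gini(sorted(all_addresses, reverse=True)[:N]), 3))   # Gini among the top N only
```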

For the mining/block-reward calculations there is a natural limit on the time window, so we did not delve into that here.

So it is surely preferable to focus on the Satoshi coefficient, or on similar measures such as the share held by the top 100 holders.

Yes. The Satoshi coefficient has a useful intuitive reading ("the minimum number of entities needed to compromise the system"), whereas the Gini coefficient is not nearly as concrete.

Original link: https://news.earn.com/quantifying-decentralization-e
Original authors: Balaji S. Srinivasan and Leland Lee
Translation & proofreading: Zhang Ling & Elisa, for EthFans (Ethereum enthusiasts): https://ethfans.org/posts/quantifying-decentralization-part-2