PPT 2
Image courtesy: http://beetfusion.com/
• Stale blocks are created when a block is solved while other miners are still working on that same block, trying to find their own solution to the hash puzzle.
• Orphan blocks, often also referred to as stale blocks, are blocks that are not accepted into the blockchain due to a time lag in their propagation and acceptance. Orphan blocks are valid, verified blocks, but they have been rejected by the chain.
• Orphan blocks are also called detached blocks: they were accepted by the network as valid blocks at one point in time, but were later rejected when a provably longer chain was created that did not include them.
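The longest-chain rule behind orphaning can be sketched as follows. This is a minimal illustration with hypothetical data (plain lists of block labels, not real Bitcoin structures): a node tracks competing forks, keeps the longest one, and the valid blocks left on the shorter fork become orphan (detached) blocks.

```python
def pick_main_chain(chains):
    """Return (main_chain, orphaned_blocks) under the longest-chain rule."""
    main = max(chains, key=len)
    orphans = [b for chain in chains if chain is not main
               for b in chain if b not in main]
    return main, orphans

# Two forks share a common prefix; the network later extends fork_b.
fork_a = ["genesis", "block1", "block2a"]            # initially accepted
fork_b = ["genesis", "block1", "block2b", "block3"]  # proven longer chain

main, orphans = pick_main_chain([fork_a, fork_b])
print(main)     # the longer fork wins
print(orphans)  # "block2a" was valid, but is now an orphan block
```

Note that "block2a" was a perfectly valid block when it was mined; it is orphaned only because the competing fork grew longer.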
NETWORK FORKS
[Figure: a network fork evolving over time steps T1 through T4]
BLOCK HEADER (REFERENCE: BITCOIN)
• Block identifier – the hash of the current block header (hash algorithm: double SHA-256)
• The previous block's hash is used to compute the current block's hash
• If an attacker changes any transaction, the Merkle root changes, so the corresponding block hash changes, and with it the hash stored in the next block's header
• Ultimately, the attacker would need to recompute the entire blockchain, which is practically infeasible
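The double-SHA256 linkage described above can be sketched as follows. The field layout here is deliberately simplified (a real Bitcoin header is an 80-byte binary structure, not a formatted string); the point is only that each header embeds the previous block's hash, so a change cascades forward.

```python
import hashlib

def block_hash(prev_hash: str, merkle_root: str, nonce: int) -> str:
    # Double SHA-256 over a simplified "header" string (illustrative only).
    header = f"{prev_hash}{merkle_root}{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()

h1 = block_hash("00" * 32, "root1", nonce=7)
h2 = block_hash(h1, "root2", nonce=9)   # block 2 commits to block 1's hash

# Tampering with block 1 changes h1, which breaks block 2's stored link.
tampered = block_hash("00" * 32, "root1-modified", nonce=7)
print(tampered != h1)  # True: the change cascades into every later block
```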
TRANSACTIONS IN A BLOCK (REFERENCE: BITCOIN)
(Annotation on the example block: the first entry is not an ordinary payment. It is the coinbase, through which the miner receives BTC from the mining procedure.)
• The Block contains two parts – the header and the data (the
transactions)
• The header of a block commits to the transactions – any change in any transaction results in a change in the block header
• The headers of subsequent blocks are connected in a chain – the entire blockchain would need to be updated to make a change anywhere
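The header-to-transactions commitment works through the Merkle root. The toy implementation below assumes pairwise SHA-256 with the last leaf duplicated when a level has an odd count (as Bitcoin does); editing any transaction changes the root, and the root lives in the header that later blocks chain to.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(txs):
    # Hash the leaves, then repeatedly hash adjacent pairs up to the root.
    level = [sha256(tx.encode()) for tx in txs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the odd leaf
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()

txs = ["coinbase: reward to miner", "alice->bob 1 BTC", "bob->carol 2 BTC"]
root = merkle_root(txs)

txs[1] = "alice->bob 100 BTC"     # attacker edits one transaction
print(merkle_root(txs) != root)   # True: the header would have to change too
```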
THE BLOCKCHAIN REPLICAS
• Replicas ensure that different nodes in the network see the same data at nearly the same point in time.
• All nodes in the network need to agree, or give their consent on a regular basis, that the data stored by them is the same. The algorithm that achieves this is called a Consensus Algorithm (CA).
• A CA ensures that there is no single point of failure – the data is decentralized
• The system can keep providing service even in the presence of failures, unless the network gets disconnected.
THE NOTION OF DISTRIBUTED CONSENSUS
• Starting from the early 90's, a large number of works have been devoted to the development of consensus algorithms over a network
• The basic philosophy is based on message passing – inform others of your current state so that everyone can match their state with the rest of the network and validate their local data.
• You can then check that you have the latest data that your peers have.
• However, this philosophy requires that the participants in the consensus algorithm know each other, because you need to find out which nodes you must validate or match your data against.
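The message-passing philosophy above can be sketched in a few lines. This is a hypothetical illustration, not a real consensus protocol: each known peer announces a digest of its local state, and a node checks its own data against the digest held by the majority of its peers. Note that it only works because the full peer list is known in advance, which is exactly the limitation discussed above.

```python
import hashlib
from collections import Counter

def digest(state: str) -> str:
    # Compact fingerprint of a node's local data.
    return hashlib.sha256(state.encode()).hexdigest()

def majority_state(peer_states):
    """Each known peer announces its state; the most common digest wins."""
    counts = Counter(digest(s) for s in peer_states.values())
    winner, _ = counts.most_common(1)[0]
    return winner

peers = {"n1": "ledger-v5", "n2": "ledger-v5", "n3": "ledger-v4"}
agreed = majority_state(peers)
print(agreed == digest("ledger-v5"))  # True: n3 learns its local data is stale
```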
THE NOTION OF DISTRIBUTED CONSENSUS
• Can we achieve consensus even when the network is arbitrarily large, and no participant in the network really knows all the other participants?
• This is what we call an open network scenario, or a permission-less protocol – you do not register your identity while participating in the consensus system
• For that we need a challenge-response based system, where the network poses a challenge, and each node in the network attempts to solve the challenge
• Here, nodes need not reveal their identity to the network that gives them the challenge.
• Traditional message-passing algorithms do not work directly here, because you do not know which nodes to validate your data against
CHALLENGE-RESPONSE TO PERMISSION-LESS CONSENSUS
• Design of a good challenge – ensures that different nodes will win the challenge in different runs.
• This ensures that no single node is able to control the network
• So, in one round one node solves the challenge, and in another round a different node does; the winner announces that the block is valid and asks the network to add it to the blockchain.
• This idea is known as the Bitcoin Proof of Work (PoW) algorithm – it ensures consensus in a permission-less setting based on challenge-response
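The challenge-response idea can be sketched as a minimal proof-of-work loop. Here the "challenge" is to find a nonce whose double-SHA256 hash of (block data + nonce) starts with a given number of zero hex digits. The difficulty is kept tiny so the sketch runs instantly; Bitcoin's real target is vastly harder, but verification is equally cheap in both cases.

```python
import hashlib

def solve_pow(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so the double-SHA256 hash starts with zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = f"{block_data}{nonce}".encode()
        h = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if h.startswith(prefix):
            return nonce
        nonce += 1

nonce = solve_pow("block#42", difficulty=3)

# Anyone can verify the winner's solution with a single hash:
payload = f"block#42{nonce}".encode()
h = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
print(h[:3])  # "000"
```

Finding the nonce is expensive and essentially random, which is why different nodes win in different runs; checking it costs one hash, which is why every node can validate the winning block cheaply.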
THE ECONOMICS BEHIND BLOCKCHAIN CONSENSUS
• Scaling solutions come in two forms: on-chain and off-chain. Both come with pros and
cons, but as of now, there is no agreement as to which is more promising for future
growth.
• On-chain scaling refers to the philosophy of changing something about the
blockchain itself to make it faster. For example, one approach to scaling includes
shrinking the amount of data used in each transaction so that more transactions fit
into a block. By altering how transaction data is handled, this kind of patch to Bitcoin
(SegWit) allowed a notable improvement in overall network capacity.
• Another way to potentially boost the TPS of a network is to increase the rate of
block generation. While this can be helpful up to a point, there are limitations to this
method relating to the time it takes to propagate a new block through the network.
Basically, you don’t want new blocks being created before the previous block was
communicated to all (or virtually all) of the nodes on the network, as it can cause
issues with consensus.
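The two on-chain levers just described (smaller transactions, faster blocks) both act on the same back-of-the-envelope formula: throughput is roughly block size divided by average transaction size divided by block interval. The numbers below are assumed round figures for illustration, not measured values.

```python
def tps(block_size_bytes: float, avg_tx_bytes: float,
        block_interval_s: float) -> float:
    # Transactions per block, spread over the block interval.
    return block_size_bytes / avg_tx_bytes / block_interval_s

# ~1 MB blocks, ~250-byte transactions, 10-minute blocks (Bitcoin-like):
base = tps(1_000_000, 250, 600)
print(round(base, 1))                        # ~6.7 TPS

# Shrinking transactions or speeding up blocks both raise throughput:
print(round(tps(1_000_000, 125, 600), 1))    # halve tx size  -> ~13.3 TPS
print(round(tps(1_000_000, 250, 300), 1))    # halve interval -> ~13.3 TPS
```

The formula also makes the propagation caveat visible: shrinking the block interval raises TPS only as long as blocks still reach virtually all nodes before the next one is produced.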
WHAT ARE SOME WAYS BLOCKCHAINS CAN SCALE?
On-chain scaling
• Then there’s a technique called sharding, in which transactions are broken up into “shards,” and different nodes
only confirm certain shards, effectively performing parallel processing to speed up the system. This can be
applied to proof-of-work or proof-of-stake systems and is going to form a major component of Ethereum 2.0.
This offers the potential to improve the capacity and speed of the network, and developers are hoping that
they will see upward of 100,000 TPS become a reality.
• Sharding increases the chances of a “double-spend” occurring as a result of an attack. The issue here is that it
takes notably fewer resources to take over individual shards than it does to perform a traditional 51% attack.
This can lead to transactions being confirmed that would otherwise be seen as invalid, such as the same Ether
(ETH) being sent to two different addresses.
• Some projects have attempted to improve network speeds by limiting the amount of validating nodes — a very
different philosophy from Ethereum’s. One example is EOS, which has limited its validators to just 21. These 21
validators are then voted on by token holders in an attempt to keep a fair, distributed form of governance —
with mixed results. This has given the network a reported 4,000 TPS.
• Another way to scale a blockchain is to increase the size of individual blocks. For example, the average block on
the Bitcoin Cash network is still under 1 MB. The debate on this approach is as of yet unsettled, and we will
explore the issue more thoroughly below.
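The shard-assignment idea above can be sketched as follows. The scheme is assumed for illustration (assigning each transaction to a shard by hashing its sender address); real systems such as Ethereum 2.0 use more elaborate assignment, but the parallelism argument is the same: nodes validating different shards process disjoint transaction sets.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(sender: str) -> int:
    # Deterministically map a sender address to one of NUM_SHARDS shards.
    h = hashlib.sha256(sender.encode()).digest()
    return h[0] % NUM_SHARDS

txs = [("alice", "bob", 1), ("carol", "dave", 2), ("erin", "frank", 3)]
shards = {i: [] for i in range(NUM_SHARDS)}
for tx in txs:
    shards[shard_of(tx[0])].append(tx)

# Every transaction lands in exactly one shard, so shards can be
# validated in parallel by different groups of nodes:
print(sum(len(v) for v in shards.values()))  # 3
```

The double-spend concern discussed above also falls out of this picture: an attacker only needs to dominate the validators of one shard, not the whole network.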
OFF-CHAIN SCALING
Off-chain scaling
• There are also ways to improve network throughput that don’t directly change anything about the blockchain. These are
often called “second-layer solutions,” as they sit “on top of” the blockchain. One of the most well known of these projects
is the Lightning Network for Bitcoin. Basically, Lightning Network nodes can open up “channels” between each other and
transact back and forth directly, and only when the channel is closed does the Lightning Network transmit the final tally to
be recorded on-chain. These nodes can also be strung together, making a much faster, cheaper payment system that only
interacts with the main network a fraction of the time.
• Ethereum, of course, also has solutions along these lines. For one, there is the Raiden Network, designed to be Ethereum’s
version of the Lightning Network, as well as a more general blockchain product called the Celer Network. These projects
implement not only off-chain transactions but also state changes, which allow for the processing of smart contracts.
Currently, the biggest drawback with these systems is that they are a work in progress, and there are still bugs and other
technical issues that can arise if channels aren’t created or closed correctly.
• A similar idea is something called “sidechains.” These are basically blockchains that are “branched off” of the main
chain, with the ability to move the native asset between them. This means sidechains can be created for specific purposes,
which will keep that transaction activity off of the primary network, freeing up the overall bandwidth for things that need
to be settled on the main chain. This is implemented for Bitcoin through the Liquid sidechain, and Ethereum’s version is
known as Plasma. One downside here is that each sidechain itself needs to be secured by nodes, which can lead to issues
with trust and security if a user is unaware of who is running them behind the scenes.
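The payment-channel bookkeeping behind systems like the Lightning Network can be sketched as follows. This is hypothetical bookkeeping for illustration, not the actual Lightning protocol (which involves funding transactions, commitment transactions, and penalties): two parties fund a channel, exchange any number of free off-chain updates, and only the closing balances are settled on-chain.

```python
class Channel:
    """Toy two-party payment channel (illustrative, not Lightning itself)."""

    def __init__(self, balance_a: int, balance_b: int):
        self.balances = {"A": balance_a, "B": balance_b}
        self.updates = 0                  # off-chain updates cost nothing on-chain

    def pay(self, sender: str, receiver: str, amount: int):
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self):
        """Only this final tally is broadcast to the blockchain."""
        return dict(self.balances)

ch = Channel(balance_a=100, balance_b=100)
ch.pay("A", "B", 30)
ch.pay("B", "A", 10)
ch.pay("A", "B", 5)
print(ch.updates, ch.close())  # 3 off-chain updates, 1 on-chain settlement
```

Three payments produced a single on-chain footprint; with thousands of channel updates, the main chain still records only the open and the close, which is the source of the throughput gain.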
WHAT ARE THE ARGUMENTS FOR AND AGAINST INCREASING BLOCK SIZE?
• Larger blocks not only improve capacity and speed but also push down fees. Detractors are concerned
that larger blocks will lead to greater centralization.
• As block size increases, not only can more transactions be confirmed in each block, but also the average
transaction fee will drop.
• Increasing the size of blocks does have some consequences. Critics argue that it simply buys time without
solving the real issue, and that more sophisticated solutions are necessary. The reason they give for why larger
blocks are such a problem is that node operators need to download each new block as it is propagated, which
with current hardware is no major issue if blocks are 1 MB, 4 MB or even 32 MB in size. However, if a
blockchain is to be adopted globally, then even this is not enough. Before long, blocks would need to be on
the scale of gigabytes, and this could be a roadblock for many. If most average users cannot afford
hardware or internet connections capable of handling this, then, presumably, fewer and fewer would run nodes,
leading to increased centralization. As Bitcoin Core developer Gregory Maxwell has stated:
WHAT ARE THE ARGUMENTS FOR AND AGAINST INCREASING BLOCK SIZE?
• “There’s an inherent tradeoff between scale and decentralization when you talk about transactions on the
network. […] You’d need a lot of bandwidth, on the order of a gigabit connection. It would work. The
problem is that it wouldn’t be very decentralized, because who is going to run a node?”
• Ultimately, the ones who decide on these changes to a network are the miners, who can “signal” that they
support an upgrade to the network’s protocol. Because many miners are grouped into large pools, which
ultimately all signal together, this can potentially be another form of centralization, as these conglomerates
have far more say than lone miners ever could. Fortunately, there is more than one way to approach this
issue, and not all projects want to see open-ended block sizes. Other developers sidestep this problem in
other, clever ways in the hopes of putting scaling to rest once and for all.
HOW HAVE DIFFERENT PROJECTS APPROACHED THE ISSUE?
• No single solution has emerged as the best one, and projects are still actively exploring creative versions of all
these philosophies in an attempt to make scalable networks.
• At the time of writing, Bitcoin hasn’t natively upgraded the nature of its blocks since the implementation of
SegWit. That being said, Lightning Network and sidechain research is still going strong, and many expect
some form of it to be what enables everyday purchasing with Bitcoin to become the norm. As mentioned,
projects such as Bitcoin Cash have embraced the creation of bigger blocks, and BitcoinSV has taken this
further with an upper limit on its blocks of a whopping 2 GB. This has, admittedly, led to an increase in the
cost of maintaining a node as well as more frequent issues with orphaned blocks.
• Not all projects are taking the larger block approach, of course. While networks such as Zilliqa have
joined Ethereum in looking to sharding as their primary means of creating a scalable platform, Ethereum
itself is looking to migrate over to a new proof-of-stake system that is being labeled Casper. On the other
hand, the project Cardano has developed a new approach called Hydra, which sees each user generating
10 “heads” and each head acting as a new channel for throughput on the network.