
High Speed Network Unit 2

The document discusses key metrics for evaluating network performance: bandwidth, throughput, latency, and bandwidth-delay product. It defines each metric and provides examples to illustrate how they are measured and related. Bandwidth refers to the maximum data transfer rate of a network connection, while throughput measures the actual amount of data transferred. Latency refers to the total delay for a message to be sent and received, comprising propagation, transmission, queuing, and processing delays. The bandwidth-delay product represents the maximum number of bits that can fill a network connection at a given bandwidth and latency.


Network Performance Evaluation Models
The performance of a network is a measure of the quality of service it delivers as perceived by the user. There are different ways to measure the performance of a network, depending upon its nature and design. The characteristics that measure the performance of a network are:
 Bandwidth
 Throughput
 Latency (Delay)
 Bandwidth – Delay Product
 Jitter
BANDWIDTH
One of the most essential determinants of a website's performance is the amount
of bandwidth allocated to the network. Bandwidth determines how rapidly the
web server can upload the requested information. While there are many factors
that affect a site's performance, bandwidth is often the limiting one.
Bandwidth is defined as the amount of data that can be transmitted in a fixed
amount of time. The term is used in two different contexts with two different
units of measurement. For digital devices, bandwidth is measured in bits per
second (bps) or bytes per second. For analogue devices, bandwidth is measured
in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what a user perceives as the speed of a
network. People frequently confuse bandwidth with internet speed because
internet service providers (ISPs) tend to advertise a fast "40 Mbps connection"
in their campaigns. True internet speed is the amount of data you actually
receive every second, and that depends heavily on latency as well.
"Bandwidth" means "capacity", while "speed" means "transfer rate".
More bandwidth does not necessarily mean more speed. Suppose we double the
width of a tap pipe, but the water flows at the same rate as before: there is
no improvement in speed. When we talk about WAN links, we usually mean
bandwidth, but when we talk about LANs, we usually mean speed. This is because
WANs are typically constrained by expensive link bandwidth, whereas LANs are
constrained by hardware and interface transfer rates (speed).
Bandwidth in Hertz: the range of frequencies contained in a composite signal,
or the range of frequencies a channel can pass. For example, the bandwidth of
a subscriber telephone line is about 4 kHz.
Bandwidth in Bits per Second: the number of bits per second that a channel, a
link, or a network can transmit. For example, the bandwidth of a Fast Ethernet
network is a maximum of 100 Mbps, meaning the network can carry at most
100 Mbps of data.
Note: There exists an explicit relationship between the bandwidth in hertz and
the bandwidth in bits per second. An increase in bandwidth in hertz means an
increase in bandwidth in bits per second. The relationship depends upon
whether we have baseband transmission or transmission with modulation.
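As a sketch of that relationship, the standard Nyquist formula for a noiseless baseband channel, C = 2 × B × log2(M), shows the bit rate growing with the bandwidth in hertz. The 4 kHz channel and 4 signal levels below are assumed figures for illustration only:

```python
import math

def nyquist_capacity(bandwidth_hz, signal_levels):
    """Maximum bit rate of a noiseless channel (Nyquist): C = 2 * B * log2(M)."""
    return 2 * bandwidth_hz * math.log2(signal_levels)

# Assumed example: a 4 kHz channel carrying 4 distinct signal levels
print(nyquist_capacity(4_000, 4))  # 16000.0 bits per second
```

Doubling the bandwidth in hertz doubles the bit rate, which is the sense in which an increase in hertz means an increase in bits per second.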
THROUGHPUT
Throughput is the number of messages successfully delivered per unit time. It
is governed by the available bandwidth, the signal-to-noise ratio, and
hardware limitations. The maximum throughput of a network is consequently
often higher than the actual throughput achieved in everyday use. The terms
'throughput' and 'bandwidth' are often treated as synonyms, yet they differ:
bandwidth is the potential measurement of a link, whereas throughput is an
actual measurement of how fast we can send data.
Throughput is measured by tabulating the amount of data transferred between
multiple locations during a specific period of time, usually expressed in bits
per second (bps), or in larger units such as bytes per second (Bps), kilobytes
per second (KBps), megabytes per second (MBps) and gigabytes per second (GBps).
Throughput may be affected by numerous factors, such as limitations of the
underlying analogue physical medium, the available processing power of the
system components, and end-user behaviour. Once the overhead of the various
protocols is taken into account, the useful rate of transferred data can be
significantly lower than the maximum achievable throughput.
Consider a highway with the capacity to carry, say, 200 vehicles at a time. At
a random moment, someone observes only, say, 150 vehicles moving through it
due to congestion on the road. Here the capacity is 200 vehicles per unit
time, while the throughput is 150 vehicles at a time.
Example:
Input: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute, where each frame carries an average of 10,000 bits. What will be the throughput of this network?

Output: We can calculate the throughput as:

Throughput = (12,000 x 10,000) / 60 = 2 Mbps

The throughput is nearly equal to one-fifth of the bandwidth in this case.
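The calculation above can be sketched in a few lines of Python (a minimal illustration; the function name is ours):

```python
def throughput_bps(frames_per_minute, bits_per_frame):
    """Actual delivered data rate in bits per second."""
    return frames_per_minute * bits_per_frame / 60

bandwidth_bps = 10_000_000           # 10 Mbps link
tp = throughput_bps(12_000, 10_000)
print(tp)                  # 2000000.0, i.e. 2 Mbps
print(tp / bandwidth_bps)  # 0.2 -> one-fifth of the bandwidth
```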
LATENCY
In a network, during the process of data communication, latency(also known as
delay) is defined as the total time taken for a complete message to arrive at the
destination, starting with the time when the first bit of the message is sent out
from the source and ending with the time when the last bit of the message is
delivered at the destination. The network connections where small delays occur
are called “Low-Latency-Networks” and the network connections which suffer
from long delays are known as “High-Latency-Networks”.
High latency creates bottlenecks in network communication. It stops the data
from taking full advantage of the network pipe and effectively decreases the
usable bandwidth of the network. The effect of latency on a network's
bandwidth can be temporary or persistent, depending on the source of the
delays. Latency is commonly measured with ping and is expressed in
milliseconds (ms).
In simpler terms: latency may be defined as the time required to successfully
send a packet across a network.
 It is measured in many ways, such as round trip or one way.
 It can be affected by any component in the chain used to carry the data,
such as workstations, WAN links, routers, LANs and servers, and, for large
networks, it is ultimately limited by the speed of light.
Latency = Propagation Time + Transmission Time + Queuing Time +
Processing Delay
Propagation Time: It is the time required for a bit to travel from the source to
the destination. Propagation time can be calculated as the ratio between the
link length (distance) and the propagation speed over the communicating
medium. For example, for an electric signal, propagation time is the time taken
for the signal to travel through a wire.
Propagation time = Distance / Propagation speed
Example:
Input: What will be the propagation time when the distance between two points is 12,000 km? Assume the propagation speed to be 2.4 * 10^8 m/s in cable.

Output: We can calculate the propagation time as:

Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 50 ms
Transmission Time: Transmission time is the time required to push all of the
message's bits onto the transmission line. It also covers overheads such as
the training signals usually placed at the front of a packet by the sender,
which help the receiver synchronize its clock. The transmission time of a
message depends on the size of the message and the bandwidth of the channel.
Transmission time = Message size / Bandwidth
Example:
Input: What will be the propagation time and the transmission time for a 2.5-kbyte message when the bandwidth of the network is 1 Gbps? Assume the distance between sender and receiver is 12,000 km and the propagation speed is 2.4 * 10^8 m/s.

Output: We can calculate the propagation and transmission times as:

Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 50 ms
Transmission time = (2560 * 8) / 10^9 = 0.020 ms

Note: Since the message is short and the bandwidth is high, the dominant factor is the propagation time, not the transmission time (which can be ignored).
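The two formulas can be checked with a short sketch (helper names are ours; figures are the ones from the worked example):

```python
def propagation_time_s(distance_m, speed_mps):
    """Time for one bit to travel the length of the link."""
    return distance_m / speed_mps

def transmission_time_s(message_bits, bandwidth_bps):
    """Time to push all of the message's bits onto the link."""
    return message_bits / bandwidth_bps

distance = 12_000 * 1_000   # 12,000 km in metres
speed = 2.4e8               # propagation speed in cable, m/s
msg_bits = 2560 * 8         # 2.5-kbyte message (2560 bytes)
bandwidth = 1e9             # 1 Gbps

print(round(propagation_time_s(distance, speed) * 1000, 2))    # 50.0 ms
print(round(transmission_time_s(msg_bits, bandwidth) * 1000, 5))  # 0.02048 ms
```

As the note says, at this distance and bandwidth the 50 ms propagation time dwarfs the 0.02 ms transmission time.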
Queuing Time: Queuing time is the time a packet has to wait inside a router.
Quite frequently the wire is busy, so we cannot transmit the packet
immediately. Queuing time is not a fixed factor; it changes with the load on
the network. While waiting, the packet sits, ready to go, in a queue. These
delays are predominantly determined by the amount of traffic on the system:
the more the traffic, the more likely a packet is to be stuck in the queue,
sitting in memory, waiting.
Processing Delay: Processing delay is the time the router takes to figure out
where to send the packet. As soon as the router works this out, it queues the
packet for transmission. This cost depends mainly on the complexity of the
protocol: the router must decipher enough of the packet to decide which queue
to put it in. Typically, the lower layers of the stack have simpler protocols.
If a router does not know which physical port leads to the destination, it
sends the packet to all ports, queuing it in many queues at once. At a higher
level, such as IP, the processing may include issuing an ARP request to find
the physical address of the destination before queuing the packet for
transmission; this, too, counts as processing delay.
BANDWIDTH – DELAY PRODUCT
Bandwidth and delay are two performance measurements of a link. However,
what is significant in data communications is the product of the two, the
bandwidth-delay product.
Let us take two hypothetical cases as examples.
Case 1: Assume a link with a bandwidth of 1 bps and a delay of 5 s. The
bandwidth-delay product, 1 x 5 = 5, is the maximum number of bits that can
fill the link: there can be at most 5 bits on the link at any time.

Case 2: Assume a link with a bandwidth of 3 bps and the same 5 s delay. Now
there can be a maximum of 3 x 5 = 15 bits on the line: at each second there
are 3 bits on the line, and the duration of each bit is 0.33 s.
In both examples, the product of bandwidth and delay is the number of bits
that can fill the link. This measurement matters when we have to send data in
bursts and wait for the acknowledgement of each burst before sending the next
one. To use the maximum capability of the link, we have to make the burst size
twice the product of bandwidth and delay, so as to fill the full-duplex
channel in both directions. The sender should therefore send a burst of
(2 * bandwidth * delay) bits, then wait for the receiver's acknowledgement of
part of the burst before sending the next one. The quantity
2 * bandwidth * delay is the number of bits that can be in transit at any
time.
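The two cases above, and the full-duplex burst size, reduce to one multiplication:

```python
def bandwidth_delay_product(bandwidth_bps, delay_s):
    """Number of bits that can be 'in flight' on the link at one time."""
    return bandwidth_bps * delay_s

print(bandwidth_delay_product(1, 5))      # 5  bits (Case 1)
print(bandwidth_delay_product(3, 5))      # 15 bits (Case 2)
# Burst size needed to keep the full-duplex link busy while waiting for ACKs:
print(2 * bandwidth_delay_product(1, 5))  # 10 bits
```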
JITTER
Jitter is another performance issue related to delay. In technical terms,
jitter is "packet delay variation". Simply put, jitter becomes a problem when
different packets of data experience different delays in a network and the
data at the receiving application is time-sensitive, e.g. audio or video.
Jitter is measured in milliseconds (ms). It is defined as a disturbance in the
normal order of sending data packets. For example, if the delay for the first
packet is 10 ms, for the second 35 ms, and for the third 50 ms, then a
real-time destination application using the packets experiences jitter.
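The 10/35/50 ms example can be turned into a simple numeric jitter estimate. This sketch uses the mean absolute difference between successive delays; real protocols such as RTP use a smoothed variant of the same idea:

```python
def jitter_ms(delays_ms):
    """Mean absolute difference between successive packet delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

print(jitter_ms([10, 35, 50]))  # 20.0 -> the stream experiences 20 ms of jitter
```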
Put simply, jitter is any deviation in, or displacement of, the signal pulses
in a high-frequency digital signal. The deviation can affect the amplitude,
the width of the signal pulse, or the phase timing. The major causes of jitter
are electromagnetic interference (EMI) and crosstalk between signals. Jitter
can cause a display screen to flicker, affect the ability of a processor in a
desktop or server to perform as expected, introduce clicks or other undesired
effects into audio signals, and cause loss of data transmitted between network
devices.
Jitter is detrimental: it causes network congestion and packet loss.
 Congestion is like a traffic jam on a highway, in which cars cannot move
forward at a reasonable speed. Likewise, in congestion all the packets arrive
at a junction at the same time and nothing can get through.
 The second negative effect is packet loss. When packets arrive at
unexpected intervals, the receiving system is not able to process the
information, which leads to missing information, also called "packet loss".
This has negative effects on video viewing: if a video becomes pixelated and
skips, the network is experiencing jitter, and the result of the jitter is
packet loss. When you are playing a game online, packet loss can make a player
appear to move around the screen randomly, or, even worse, the game jumps from
one scene to the next, skipping over part of the gameplay.

When jitter occurs, the intervals at which packets are sent are not the same
as the intervals at which they arrive at the receiver: one of the packets
faces an unexpected delay on its way and is received after the expected time.
This is jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another
approach to reduce jitter when multiple paths for traffic are available is to
selectively route traffic along the most stable paths, or to always pick the
path that comes closest to the targeted packet delivery rate.
What is Congestion Management?

Introduction
With the pace of the energy transition increasing, grid operators are
encountering a growing number of power quality issues and capacity
bottlenecks. These bottlenecks are caused by grid congestion. Traditionally
this would have been resolved by reinforcing the physical grid, i.e. building
more transformers and installing higher-capacity cables. However, the energy
transition is currently moving too fast for this traditional approach, so
other solutions that can be deployed more quickly are required. Congestion
management is one of them.
In this blog post, we will dive into what congestion management is and why it
is important. We will also look at how Withthegrid's new analytics feature
enables asset managers, maintenance engineers, and inspection personnel to
reduce workload and outages and extend asset lifetime, by using real-time
sensor and operational data to monitor the condition of the grid.

What is Congestion Management?


Congestion management steers supply or demand in periods when the maximum grid
capacity is reached. This can be due to too much supply (think of a sunny
summer day) or too much demand (think of a winter period with low wind).
Congestion management can take the form of direct control, such as the
curtailment of excess renewable energy, or of a market-based mechanism, where
price signals are used to incentivize grid parties to adjust supply or demand.

Why is Congestion Management Important?


Without active congestion management, the electricity grid will not be able to
cope with increasing renewable energy production and increasing electricity
consumption. This will lead to more outages, unhappy customers, and a barrier
to economic activity.
Since grid operators have neither complete real-time insight into the grid,
especially at lower voltage levels, nor full control over everything connected
to it, some parts of the grid will be declared congested. When a part of the
grid is marked as congested, that does not mean there is no capacity at all;
it means that, based on certain forecasts, there is no capacity at specific
moments in time. Around these moments there may be sufficient capacity.
Congestion management is a technique that can help utilize network capacity more
efficiently while supporting the security of supply to end-users. This way, grid
operators can keep the network within operating limits while increasing the share of
renewables.

How to monitor Congestion Management?


The importance and dynamics of congestion management show that managing grid
infrastructure is becoming more complex by the day. Moving from an "install
and forget" approach to an "upgrade, monitor and manage" approach requires a
different operational model and different tools. The challenge is even bigger
given the general ageing of the electricity grid, much of which was built in
the mid-20th century, leading to more outages, issues, and inspections.
Additionally, the availability of technical personnel to resolve these issues
is decreasing: the technical workforce is ageing and there are more open jobs
than skilled technical labour available.
Monitoring and implementing congestion management starts with having real-time
insight into the performance of the electricity grid. Analytics and the
Internet of Things play a pivotal role. Below is an example of how to
implement this.

Example of congestion management


Let’s use an example to show how congestion management can be applied:

1. We have a region in Utrecht with 3 low-voltage transformers.
2. 1 transformer is predicted to experience congestion in the next 24 hours due to too much solar generation:
   o Estimated 0.5 MWh between 1-3 pm
3. Behind this transformer are households and businesses. 3 businesses have been identified as having some flexibility to increase their consumption:
   o Small office 1: 0 – 0.2 MWh
   o Small factory 2: 0 – 0.5 MWh
   o Office building 3: 0 – 0.3 MWh
4. The grid operator can now do 2 things:
   o Provide a market incentive (i.e. a price) to manage congestion with the help of the businesses that offer flexibility
   o Curtail excess solar energy based on bilateral agreements
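A quick check with the illustrative figures from the example shows that the offered demand-side flexibility more than covers the predicted congestion:

```python
congestion_mwh = 0.5  # predicted excess solar between 1-3 pm (example figure)
flexibility_mwh = {   # maximum extra consumption each business offers (MWh)
    "small office 1": 0.2,
    "small factory 2": 0.5,
    "office building 3": 0.3,
}

available = sum(flexibility_mwh.values())
print(round(available, 2))          # 1.0 MWh of flexibility on offer
print(available >= congestion_mwh)  # True: the excess solar can be absorbed
```

In practice the operator would also weigh the price of each flexibility offer against the cost of curtailment.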

Using Withthegrid Analytics, we can monitor whether transformers are
overloading during the day. We devise a strategy to curtail solar between
1-3 pm when no flexibility is available, and we then monitor whether the
transformer still overloads under this new strategy. With Withthegrid, both
the outcome and the process are tracked in real time. The asset owner stays in
control, achieves its ROI, and maintains its licence to operate.

CONGESTION MANAGEMENT: DEFINITION OF CONGESTION


Whenever the physical or operational constraints in a transmission network become active, the system
is said to be in a state of congestion. The possible limits that may be hit in case of congestion are: line
thermal limits, transformer emergency ratings, bus voltage limits, transient or oscillatory stability,
etc. These limits constrain the amount of electric power that can be transmitted between two locations
through a transmission network. Flows should not be allowed to increase to levels where a contingency
would cause the network to collapse because of voltage instability, etc.

• The peculiar characteristics associated with electrical power prevent its direct comparison with other
marketable commodities.

• First, electrical energy cannot be stored in large quantities. In other words, the demand for electric
power has to be satisfied on a real-time basis.

• Due to other peculiarities, the flexibility to directly route this commodity through a desired path is
very limited. The flow of electric current obeys the laws of physics rather than the wishes of traders or
operators.

• Thus, the system operator has to decide upon a pattern of injections and take-offs such that no
constraint is violated.

How is transfer capability limited?

Congestion, as used in deregulation parlance, generally refers to a transmission line hitting its limit.
The ability of interconnected transmission networks to reliably transfer electric power may be limited
by the physical and electrical characteristics of the systems, including one or more of the following:

• Thermal Limits: Thermal limits establish the maximum amount of electrical current that a
transmission line or electrical facility can conduct over a specified time period before it sustains
permanent damage by overheating.

• Voltage Limits: System voltages and changes in voltages must be maintained within the range of
acceptable minimum and maximum limits. The lower voltage limits determine the maximum amount of
electric power that can be transferred.

• Stability Limits: The transmission network must be capable of surviving disturbances through the
transient and dynamic time periods (from milliseconds to several minutes, respectively). Immediately
following a system disturbance, generators begin to oscillate relative to each other, causing fluctuations
in system frequency, line loadings, and system voltages.
• For the system to be stable, the oscillations must diminish as the electric system attains a new stable
operating point. The line loadings prior to the disturbance should be at such a level that its tripping does
not cause system-wide dynamic instability.

• The limiting condition on some portions of the transmission network can shift among thermal, voltage,
and stability limits as the network operating conditions change over time. For example, for a short line,
the loading limit is dominated by its thermal limit, whereas for a long line the stability limit is the
main concern. Such differing criteria further complicate the determination of transfer capability limits.

Importance of congestion management in the deregulated environment

• If the network's power carrying capacity were infinite and there were ample resources to keep the system
variables within limits, the most efficient generation dispatch would correspond to the least-cost
operation. Kirchhoff's laws, combined with the magnitudes and locations of the generations and loads, the
line impedances and the network topology, determine the flows in each line. In real life, however, the
power carrying capacity of a line is limited by the various limits explained earlier. These power system
security constraints may therefore necessitate a change in the generator schedules away from the most
efficient dispatch. In the traditional vertically integrated utility environment, the generation patterns
are fairly stable.

• From a short-term perspective, the system operator may have to deviate from the efficient dispatch in
order to keep line flows within limits.

• However, the financial implications of such re-dispatch do not surface because the monopolist can
easily socialize these costs amongst the various participants, who in turn are under his direct control.
From a planning perspective, too, a definite approach can be adopted for network augmentation.

• However, in deregulated structures, with generating companies competing in an open transmission
access environment, the generation and flow patterns can change drastically over short time periods with
the market forces. In such situations, it becomes necessary to have a congestion management scheme in
place to ensure that the system stays secure. However, in a competitive environment, the re-dispatch
will have direct financial implications affecting most of the market players, creating a set of winners
and losers. Moreover, the congestion bottlenecks would encourage some strategic players to exploit the
situation.

• Effects of Congestion
• Market Inefficiency: Market efficiency, in the short term, refers to a market outcome that maximizes
the sum of the producer surplus and consumer surplus, which is generally known as social welfare. With
respect to generation, market efficiency will result when the most cost-effective generation resources
are used to serve the load. The difference in social welfare between a perfect market and a real market
is a measure of the efficiency of the real market. The effect of transmission congestion is to create
market inefficiency.
• Market Power: If a generator can successfully increase its profits by strategic bidding or by any
means other than lowering its costs, it is said to have market power. Imagine a two-area system with
cheaper generation in area 1 and relatively costlier generation in area 2. Buyers in both areas would
prefer the generation in area 1, and eventually the tie-lines between the two areas would operate at
full capacity, so that no further power transfer from area 1 to area 2 is possible.

• The sellers in area 2 are then said to possess market power. By exercising market power, these sellers
can charge higher price to buyers if the loads are inelastic. Thus, congestion may lead to market power
which ultimately results in market inefficiency.
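The two-area situation can be sketched numerically. All figures below are made up for illustration; they are not from the notes:

```python
# Hypothetical two-area system: cheap generation in area 1, costly in area 2
cost_area1 = 20.0      # $/MWh, assumed marginal cost in area 1
cost_area2 = 50.0      # $/MWh, assumed marginal cost in area 2
tie_limit_mw = 100.0   # assumed tie-line transfer limit
demand_area2_mw = 300.0

# Buyers prefer area-1 power until the tie-line saturates
imported = min(demand_area2_mw, tie_limit_mw)
local = demand_area2_mw - imported
print(imported, local)  # 100.0 200.0 -> the tie is congested

# With the tie congested, area-2 sellers face no outside competition and,
# if loads are inelastic, can bid above cost (market power)
strategic_bid = 80.0    # $/MWh, above the area-2 cost
print(strategic_bid > cost_area2)  # True
```

The gap between the strategic bid and the true cost is precisely the market inefficiency that congestion enables.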

• In a multi-seller/multi-buyer environment, the operator has to look after additional issues which
crop up due to congestion. For example, in a centralized dispatch structure, the system operator
changes the schedules of generators by raising the generation of some while decreasing that of others.
The operator compensates the parties asked to generate more by paying them for their additional power
production, and gives lost-opportunity payments to parties ordered to step down. The operator thus has
to shoulder the additional workload of commercial settlements arising from network constraints which
would otherwise have been absent.

• One important thing to note is that the market inefficiency created by congestion in a perfectly
competitive market acts as an economic signal for network reinforcement. The market design should be
such that the players take a cue from these signals to reinforce the network, thus mitigating the
market inefficiency.

DESIRED FEATURES OF CONGESTION MANAGEMENT SCHEMES


Any congestion management scheme should try to accommodate the following features:

• Economic Efficiency: Congestion management should minimize its intervention in a competitive
market. In other words, it should achieve system security while forgoing as little social welfare as
possible. The scheme should lead to both short-term and long-term efficiency: short-term efficiency is
associated with the generator dispatch, while long-term efficiency pertains to investments in new
transmission and generation facilities.

• Non-discriminatory: Each market participant should be treated equally. For this, the network operator
should be independent of the market parties and should not derive any kind of benefit from the
occurrence of congestion; otherwise it provides perverse signals for network expansion.

• Be transparent: The implementation should be well defined and transparent for all participants.

• Be robust: The congestion management scheme should be robust against strategic manipulation by the
market entities. This again refers back to the principle of economic efficiency.
