High Speed Network Unit 2
Models
Performance of a network pertains to the measure of service quality of a
network as perceived by the user. There are different ways to measure the
performance of a network, depending upon the nature and design of the
network. The characteristics that measure the performance of a network are:
Bandwidth
Throughput
Latency (Delay)
Bandwidth – Delay Product
Jitter
BANDWIDTH
One of the most essential determinants of a website's performance is the amount
of bandwidth allocated to the network. Bandwidth determines how rapidly the
web server can deliver the requested information. While there are many factors
that affect a site's performance, bandwidth is often the limiting one.
Bandwidth is defined as the amount of data or information that can be
transmitted in a fixed amount of time. The term is used in two different
contexts with two different units of measurement. For digital devices,
bandwidth is measured in bits per second (bps) or bytes per second. For
analogue devices, bandwidth is measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual perceives as the speed of
a network. People frequently mistake bandwidth for internet speed because
internet service providers (ISPs) tend to advertise a fast "40 Mbps connection"
in their campaigns. True internet speed is actually the amount of data you
receive every second, and that has a lot to do with latency too.
"Bandwidth" means "capacity", while "speed" means "transfer rate".
More bandwidth does not mean more speed. Suppose we double the width of a tap
pipe, but the water flows at the same rate as when the pipe was half as wide:
there is no improvement in speed. When we talk about WAN links, we mostly mean
bandwidth, but when we talk about a LAN, we mostly mean speed. This is because
over a WAN we are generally constrained by expensive cable bandwidth, whereas
over a LAN the constraint is the hardware and interface data-transfer rate (or
speed).
Bandwidth in Hertz: It is the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass. For example, the
bandwidth of a subscriber telephone line is about 4 kHz.
Bandwidth in Bits per Second: It refers to the number of bits per second that a
channel, a link, or a network can transmit. For example, the bandwidth of a
Fast Ethernet network is a maximum of 100 Mbps, which means that the network
can send data at up to 100 Mbps.
Note: There exists an explicit relationship between the bandwidth in hertz and
the bandwidth in bits per second. An increase in bandwidth in hertz means an
increase in bandwidth in bits per second. The relationship depends upon
whether we have baseband transmission or transmission with modulation.
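For the baseband case, this relationship can be illustrated with Nyquist's theorem for a noiseless channel, a standard result not stated explicitly above (the function name below is our own):

```python
import math

def nyquist_bit_rate(bandwidth_hz, levels):
    """Maximum bit rate of a noiseless channel (Nyquist theorem):
    bit rate = 2 * bandwidth (Hz) * log2(number of signal levels)."""
    return 2 * bandwidth_hz * math.log2(levels)

# A 4 kHz telephone-line channel carrying a 2-level signal:
print(nyquist_bit_rate(4_000, 2))  # 8000.0 (bits per second)
```

Doubling the bandwidth in hertz, or increasing the number of signal levels, raises the achievable bit rate, which is the relationship the note describes.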
THROUGHPUT
Throughput is the number of messages successfully transmitted per unit time. It
is controlled by available bandwidth, the available signal-to-noise ratio and
hardware limitations. The maximum throughput of a network may be
considerably higher than the actual throughput achieved in everyday
use. The terms 'throughput' and 'bandwidth' are often thought of as
the same, yet they are different. Bandwidth is the potential measurement of a
link, whereas throughput is an actual measurement of how fast we can send
data.
Throughput is measured by tabulating the amount of data transferred between
multiple locations during a specific period of time, usually expressed in bits
per second (bps), or equivalently in bytes per second (Bps), kilobytes per
second (KBps), megabytes per second (MBps) and gigabytes per second (GBps).
Throughput may be affected by numerous factors, such as the hindrance of the
underlying analogue physical medium, the available processing power of the
system components, and end-user behaviour. When numerous protocol
expenses are taken into account, the use rate of the transferred data can be
significantly lower than the maximum achievable throughput.
Let us consider a highway which has a capacity of moving, say, 200 vehicles at
a time. At some moment, however, only, say, 150 vehicles are moving through it
due to congestion on the road. Here the capacity is 200 vehicles per unit time,
while the throughput is 150 vehicles per unit time.
Example: A network with a bandwidth of 10 Mbps can pass only an average of
12,000 frames per minute, where each frame carries an average of 10,000 bits.
What will be the throughput of this network?
Solution: Throughput = (12,000 × 10,000) / 60 = 2,000,000 bps = 2 Mbps. The
throughput here is almost one-fifth of the bandwidth.
Note: When a message is short and the bandwidth is high, the dominant factor in
the total delay is the propagation time and not the transmission time (which
can be ignored).
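The arithmetic of the throughput example can be sketched in a few lines of Python (the function name is ours):

```python
def throughput_bps(frames_per_minute, bits_per_frame):
    """Average throughput = total bits delivered / elapsed time in seconds."""
    return frames_per_minute * bits_per_frame / 60

# 12,000 frames per minute, 10,000 bits per frame:
print(throughput_bps(12_000, 10_000))  # 2000000.0, i.e. 2 Mbps
```

Note that 2 Mbps is well below the link's 10 Mbps bandwidth, illustrating the difference between the potential and the actual measurement.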
LATENCY (DELAY)
Latency, or delay, is how long it takes a message to travel from source to
destination; its main components include queuing time and processing delay.
Queuing Time: Queuing time is how long a packet has to sit in a router before
it can be transmitted. Quite frequently the wire is busy, so we are not able to
transmit a packet immediately. The queuing time is not a fixed factor; it
changes with the load on the network. In such cases the packet sits in a queue,
ready to go, waiting. These delays are predominantly determined by the amount
of traffic on the system: the more the traffic, the more likely a packet is
stuck in the queue, just sitting in memory, waiting.
Processing Delay: Processing delay is the delay based on how long it takes
the router to figure out where to send the packet. As soon as the router finds it
out, it will queue the packet for transmission. These costs are predominantly
based on the complexity of the protocol. The router must decipher enough of
the packet to make sense of which queue to put the packet in. Typically the
lower-level layers of the stack have simpler protocols. If a router does not know
which physical port to send the packet to, it will send it to all the ports, queuing
the packet in many queues at once. In contrast, at a higher level, such as the
IP layer, the processing may include making an ARP request to find out the
physical address of the destination before queuing the packet for transmission.
This, too, may be considered processing delay.
BANDWIDTH – DELAY PRODUCT
Bandwidth and delay are two performance measurements of a link. However,
what is significant in data communications is the product of the two, the
bandwidth-delay product.
Let us take two hypothetical cases as examples.
Case 1: Assume a link has a bandwidth of 1 bps and the delay of the link is
5 s. The bandwidth-delay product is 1 × 5 = 5, which is the maximum number of
bits that can fill the link: there can be at most 5 bits on the link at any
time.
Case 2: Assume a link has a bandwidth of 3 bps and the same 5 s delay. There
can be a maximum of 3 × 5 = 15 bits on the line, because at each second 3 bits
enter the line and the duration of each bit is 0.33 s.
For both examples, the product of bandwidth and delay is the number of bits
that can fill the link. This measure is important if we have to send data in
bursts and wait for the acknowledgement of each burst before sending the next
one. To use the maximum capability of the link, we should make the size of our
burst twice the product of bandwidth and delay; that is, we need to fill up the
full-duplex channel (both directions). The sender should send a burst of data
of 2 × bandwidth × delay bits, then wait for the receiver's acknowledgement of
part of the burst before sending another burst. The quantity
2 × bandwidth × delay is the number of bits that can be in transition at any
time.
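The two cases above can be sketched numerically (function names are ours):

```python
def bandwidth_delay_product(bandwidth_bps, delay_s):
    """Number of bits that can be 'in flight' on the link at once."""
    return bandwidth_bps * delay_s

# Case 1: 1 bps link with 5 s delay -> 5 bits fill the link.
# Case 2: 3 bps link with 5 s delay -> 15 bits fill the link.
bdp = bandwidth_delay_product(3, 5)

# Burst size needed to fill the full-duplex channel is twice the product:
print(bdp, 2 * bdp)  # 15 30
```

The same calculation, with realistic units (Mbps, milliseconds), is how window sizes are sized for long fat networks.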
JITTER
Jitter is another performance issue related to delay. In technical terms,
jitter is "packet delay variation". Simply put, jitter becomes a problem when
different packets of data face different delays in a network and the data at
the receiving application is time-sensitive, e.g. audio or video data. Jitter
is measured in milliseconds (ms). It is defined as an interference in the
normal order of sending data packets. For example, if the delay for the first
packet is 10 ms, for the second 35 ms, and for the third 50 ms, then a
real-time destination application that uses the packets experiences jitter.
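One common way to quantify jitter, assumed here for illustration, is the mean absolute difference between consecutive packet delays:

```python
def jitter_ms(delays_ms):
    """Mean absolute difference between consecutive packet delays (ms)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Delays from the example above: 10 ms, 35 ms, 50 ms
print(jitter_ms([10, 35, 50]))  # 20.0
```

A constant delay, however large, gives zero jitter; it is the variation between packets, not the delay itself, that disturbs real-time applications.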
Simply, jitter is any deviation in, or displacement of, the signal pulses in a high-
frequency digital signal. The deviation can be in connection with the amplitude,
the width of the signal pulse or the phase timing. The major causes of jitter are
electromagnetic interference (EMI) and crosstalk between signals. Jitter can
lead to flickering of a display screen, affect the capability of a processor in
a desktop or server to perform as expected, introduce clicks or other undesired
artefacts in audio signals, and cause loss of data transmitted between network
devices.
Jitter is harmful: it causes network congestion and packet loss.
Congestion is like a traffic jam on a highway. In a traffic jam, cars cannot
move forward at a reasonable speed. Likewise, during congestion all the
packets arrive at a junction at the same time and nothing gets through.
The second negative effect is packet loss. When packets arrive at
unexpected intervals, the receiving system is not able to process the
information, which leads to missing information, also called "packet loss".
This has negative effects on video viewing. If a video becomes pixelated and
is skipping, the network is experiencing jitter. The result of the jitter is packet
loss. When you are playing a game online, the effect of packet loss can be
that a player begins moving around on the screen randomly. Even worse,
the game goes from one scene to the next, skipping over part of the
gameplay.
It can be noticed that the time at which packets are sent is not the same as
the time at which they arrive at the receiver side. One of the packets faces an
unexpected delay on its way and is received after the expected time. This is
jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another approach
to diminish jitter in case of multiple paths for traffic is to selectively route traffic
along the most stable paths or to always pick the path that can come closest to
the targeted packet delivery rate.
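A minimal sketch of such a play-out (jitter) buffer, assuming packets are scheduled at a fixed interval and counted as lost when they arrive after their play-out deadline (all names and numbers here are illustrative):

```python
def play_out(arrivals_ms, buffer_ms, interval_ms):
    """Release buffered packets at a fixed interval; a packet that
    arrives after its scheduled play-out time is counted as lost."""
    played, lost = 0, 0
    for seq, arrival in enumerate(arrivals_ms):
        deadline = buffer_ms + seq * interval_ms  # play-out time of packet seq
        if arrival <= deadline:
            played += 1
        else:
            lost += 1
    return played, lost

# Packets sent every 20 ms but arriving with jitter; 30 ms buffer.
# The third packet arrives at 75 ms, after its 70 ms deadline, and is lost.
print(play_out([5, 28, 75, 80], buffer_ms=30, interval_ms=20))  # (3, 1)
```

A larger buffer absorbs more delay variation but adds latency of its own, which is the trade-off a real jitter buffer has to balance.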
What is Congestion Management?
Introduction
With the pace of the energy transition increasing, grid operators are
encountering a growing number of power quality issues and capacity bottlenecks.
These bottlenecks are caused by grid congestion. Traditionally this would have
been resolved by reinforcing the physical grid, i.e. building more transformers
and installing cables with more capacity. However, the energy transition is
currently taking place too fast for this traditional approach. As a result,
other solutions that can be deployed more quickly are required. Congestion
management is one of them.
In this blog post, we will dive into what congestion management is and why it
is important. We will also look at how Withthegrid's new analytics feature
enables asset managers, maintenance engineers, and inspection personnel to
reduce workload and outages and to extend asset lifetime. This can be done by
using real-time sensor and operational data to monitor the condition of the
grid.
• The peculiar characteristics associated with electrical power prevent its direct comparison with other
marketable commodities.
• First, electrical energy cannot be stored in large quantities. In other words, the demand for electric
power has to be satisfied on a real-time basis.
• Second, the flexibility of directly routing this commodity through a desired path is very limited. The
flow of electric current obeys the laws of physics rather than the wishes of traders or operators.
• Thus, the system operator has to decide upon such a pattern of injections and take-offs that no
constraint is violated.
How is transfer capability limited?
Congestion, as used in deregulation parlance, generally refers to a transmission line hitting its limit.
The ability of interconnected transmission networks to reliably transfer electric power may be limited
by the physical and electrical characteristics of the systems, including any one or more of the following:
• Thermal Limits: Thermal limits establish the maximum amount of electrical current that a
transmission line or electrical facility can conduct over a specified time period before it sustains
permanent damage by overheating.
• Voltage Limits: System voltages and changes in voltages must be maintained within the range of
acceptable minimum and maximum limits. The lower voltage limits determine the maximum amount of
electric power that can be transferred.
• Stability Limits: The transmission network must be capable of surviving disturbances through the
transient and dynamic time periods (from milliseconds to several minutes, respectively). Immediately
following a system disturbance, generators begin to oscillate relative to each other, causing fluctuations
in system frequency, line loadings, and system voltages.
• For the system to be stable, the oscillations must diminish as the electric system attains a new stable
operating point. Line loadings prior to a disturbance should be at such a level that the tripping of a line
does not cause system-wide dynamic instability.
• The limiting condition on some portions of the transmission network can shift among thermal, voltage,
and stability limits as the network operating conditions change over time. For example, for a short line
the loading limit is dominated by the thermal limit, while for a long line the stability limit is the main
concern. Such differing criteria further add to the complexity of determining transfer capability limits.
Importance of congestion management in the deregulated environment
• If the network's power-carrying capacity were infinite and there were ample resources to keep the
system variables within limits, the most efficient generation dispatch would correspond to least-cost
operation. Kirchhoff's laws, combined with the magnitudes and locations of the generations and loads,
the line impedances and the network topology, determine the flow in each line. In real life, however,
the power-carrying capacity of a line is limited by the various limits explained earlier. These power
system security constraints may therefore necessitate a change in the generator schedules away from
the most efficient dispatch. In the traditional vertically integrated utility environment, generation
patterns are fairly stable.
• From a short-term perspective, the system operator may have to deviate from the efficient dispatch in
order to keep line flows within limits.
• However, the financial implications of such re-dispatch do not surface, because the monopolist can
easily socialize these costs amongst the various participants, which, in turn, are under his direct
control. From a planning perspective, too, a definite approach can be adopted for network
augmentation.
Effects of Congestion
• Market Inefficiency: Market efficiency, in the short term, refers to a market outcome that maximizes
the sum of the producer surplus and consumer surplus, which is generally known as social welfare. With
respect to generation, market efficiency will result when the most cost-effective generation resources
are used to serve the load. The difference in social welfare between a perfect market and a real market
is a measure of the efficiency of the real market. The effect of transmission congestion is to create
market inefficiency.
• Market Power: If the generator can successfully increase its profits by strategic bidding or by any
means other than lowering its costs, it is said to have market power. Imagine a two area system with
cheaper generation in area 1 and relatively costlier generation in area 2. Buyers in both the areas
would prefer the generation in area 1 and eventually the tie-lines between the two areas would start
operating at full capacity such that no further power transfer from area 1 to 2 is possible.
• The sellers in area 2 are then said to possess market power. By exercising market power, these sellers
can charge a higher price to buyers if the loads are inelastic. Thus, congestion may lead to market
power, which ultimately results in market inefficiency.
• In a multi-seller / multi-buyer environment, the operator has to look after some additional issues which
crop up due to congestion. For example, in a centralized dispatch structure, the system operator
changes the schedules of generators by raising the generation of some while decreasing that of others.
The operator compensates the parties who were asked to generate more by paying them for their
additional power production, and gives lost-opportunity payments to parties who were ordered to step
down. The operator has to shoulder the additional workload of commercial settlements arising due to
network constraints which would otherwise have been absent.
• One important thing to be noted is that the market inefficiency arising due to congestion in a perfectly
competitive market acts as an economic signal for network reinforcement. The market design should be
such that the players take a cue from these signals and reinforce the network, thus mitigating market
inefficiency.
Desired features of a congestion management scheme
• Economic Efficiency: Congestion management should minimize its intervention into a competitive
market. In other words, it should achieve system security while forgoing as little social welfare as
possible. The scheme should lead to both short-term and long-term efficiency. Short-term efficiency is
associated with generator dispatch, while long-term efficiency pertains to investments in new
transmission and generation facilities.
• Non-discriminative: Each market participant should be treated equally. For this, the network operator
should be independent of market parties and should not derive any kind of benefit from the occurrence
of congestion; otherwise it provides perverse signals for network expansion.
• Be transparent: The implementation should be well defined and transparent for all participants.
• Be robust: The congestion management scheme should be robust against strategic manipulation by
market entities. This again refers back to the principle of economic efficiency.