CN Unit 1
[Unit 1] 6 Hrs
Introduction: Applications of computer networks, Network hardware, Network software:
Protocol Hierarchy, Design Issues, connection-oriented vs. connectionless, Service Primitives,
Reference models: OSI and TCP/IP, Example networks: Internet, Network standardization,
Performance: Bandwidth and Latency, Delay and bandwidth product, High-Speed Networks,
Application Performance Needs.
INTRODUCTION
In a computer network, computers or devices are connected together via communication devices and
transmission media. Examples of transmission media are cables and WiFi.
1. Communication
People can communicate with each other around the world through computer Networks. They can talk
and share information with each other using different network services such as email, social networking,
video conferencing, groupware, wikis, blogs, and SMS services.
2. Data Sharing
Computer Network plays a vital role in data sharing. Different users connected to the computer network
can share data among them. For Example, on the Internet, a large number of users can access the same
database in the network.
3. Software Sharing
In a computer network, application software is usually installed on a centralized computer (server
computer). This software can be shared over the network instead of purchasing a separate copy of the
software for each user.
4. Hardware Sharing
In a computer network, hardware devices such as printers, access points, and storage drives (HDDs,
SSDs) can be shared. For example, many users can share a single printer connected to the network.
An organization can save a lot of money by sharing different devices over a network. Without the facility
of the network, these devices have to be arranged separately for each user which becomes very costly for
an organization.
5. Internet Sharing
In a computer network, many users can access the internet through a single internet connection and can
use different services.
There is no need to store application programs and data files on each individual computer on the
network. In this way, disk space on each computer is saved.
9. Performance Enhancement
A network can be used to improve the performance of different applications by using distributed
computing. In distributed computing, a computation task is divided among various computers on the
network. In this way, the performance of the application increases.
10. Entertainment
Computer Network provides many sources of entertainment to people. For example, we can play
different types of games, see movies, and listen to music. We can also make new friends on the internet.
Basic types
LAN (Local Area Network)
A group of interconnected computers within a small area (room, building, campus). Two or more PCs can
form a LAN to share files, folders, printers, applications and other devices.
Coaxial or CAT 5 cables are normally used for connections.
Due to short distances, errors and noise are minimal.
Data transfer rate is 10 to 100 Mbps.
Example: A computer lab in a school.
MAN (Metropolitan Area Network)
Designed to extend over a large area.
Connects a number of LANs to form a larger network, so that resources can be shared.
Networks can span 5 to 50 km.
Owned by an organization or an individual.
Data transfer rate is low compared to a LAN.
Example: Organization with different branches located in the city.
WAN (Wide Area Network)
Country-wide and worldwide networks.
Contain multiple LANs and MANs.
Distinguished in terms of geographical range.
Use satellites and microwave relays.
Data transfer rate depends upon the ISP and varies with location.
Best example is the internet.
Other types
WLAN (Wireless LAN)
A LAN that uses high frequency radio waves for communication.
Provides short range connectivity with high speed data transmission.
PAN (Personal Area Network)
A network organized by an individual user for personal use.
SAN (Storage Area Network)
Connects servers to data storage devices via fiber-optic cables.
E.g.: used for an organization's daily backups or for maintaining a mirror copy of data.
A transmission medium can be broadly defined as anything that can carry information from a source to a
destination.
Guided Media: Guided media, which are those that provide a physical conduit from one device to another,
include twisted-pair cable, coaxial cable, and fiber-optic cable.
Twisted-Pair Cable: A twisted pair consists of two conductors (normally copper), each with its own plastic
insulation, twisted together. One of the wires is used to carry signals to the receiver, and the other is used
only as a ground reference.
Unshielded Versus Shielded Twisted-Pair Cable: The most common twisted-pair cable used in
communications is referred to as unshielded twisted-pair (UTP). Shielded twisted-pair (STP) cable has a
metal foil or braided mesh covering that encases each pair of insulated conductors. Although the metal
casing improves the quality of the cable by preventing the penetration of noise or crosstalk, it is bulkier
and more expensive.
The most common UTP connector is RJ45 (RJ stands for registered jack).
Applications: Twisted-pair cables are used in telephone lines to provide voice and data channels. Local-
area networks, such as 10Base-T and 100Base-T, also use twisted-pair cables.
Coaxial Cable: Coaxial cable (or coax) carries signals of higher frequency ranges than those in twisted-
pair cable. Coax has a central core conductor of solid or stranded wire (usually copper) enclosed in an
insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid, or a combination of
the two. The outer metallic wrapping serves both as a shield against noise and as the second conductor,
which completes the circuit. This outer conductor is also enclosed in an insulating sheath, and the whole
cable is protected by a plastic cover.
The most common type of connector used today is the Bayonet Neill-Concelman (BNC) connector.
Applications
Coaxial cable was widely used in analog telephone networks and digital telephone networks.
Cable TV networks also use coaxial cables.
Another common application of coaxial cable is in traditional Ethernet LANs.
Fiber-Optic Cable
A fiber-optic cable is made of glass or plastic and transmits signals in the form of light. Light travels in a
straight line as long as it is moving through a single uniform substance.
If a ray of light traveling through one substance suddenly enters another substance (of a different density),
the ray changes direction.
Optical fibers use reflection to guide light through a channel. A glass or plastic core is surrounded by a
cladding of less dense glass or plastic.
Propagation Modes
Multimode is so named because multiple beams from a light source move through the core in different
paths. How these beams move within the cable depends on the structure of the core, as shown in Figure.
Advantages and Disadvantages of Optical Fiber
Advantages: Fiber-optic cable has several advantages over metallic cable (twisted-pair or coaxial):
1. Higher bandwidth.
2. Less signal attenuation. Fiber-optic transmission distance is significantly greater than that of other
guided media. A signal can run for 50 km without requiring regeneration. We need repeaters every 5 km
for coaxial or twisted-pair cable.
3. Immunity to electromagnetic interference. Electromagnetic noise cannot affect fiber-optic cables.
4. Resistance to corrosive materials. Glass is more resistant to corrosive materials than copper.
5. Light weight. Fiber-optic cables are much lighter than copper cables.
6. Greater immunity to tapping. Fiber-optic cables are more immune to tapping than copper cables.
Copper cables create antenna effects that can easily be tapped.
Disadvantages: There are some disadvantages in the use of optical fiber:
1. Installation and maintenance. These require expertise that is not available everywhere.
2. Unidirectional light propagation. Propagation of light is unidirectional. If we need bidirectional
communication, two fibers are needed.
3. Cost. The cable and the interfaces are relatively more expensive than those of other guided media. If the
demand for bandwidth is not high, often the use of optical fiber cannot be justified.
Unguided Media: Wireless
Unguided signals can travel from the source to the destination in several ways: ground propagation, sky
propagation, and line-of-sight propagation, as shown in Figure.
Radio Waves
Electromagnetic waves ranging in frequencies between 3 kHz and 1 GHz are normally called radio
waves. Radio waves are omnidirectional. When an antenna transmits radio waves, they are propagated in
all directions. This means that the sending and receiving antennas do not have to be aligned. A sending
antenna sends waves that can be received by any receiving antenna. The omnidirectional property has a
disadvantage, too. The radio waves transmitted by one antenna are susceptible to interference by another
antenna that may send signals using the same frequency or band.
The omnidirectional characteristics of radio waves make them useful for multicasting, in which there is
one sender but many receivers. AM and FM radio, television, maritime radio, cordless phones, and
paging are examples of multicasting.
Microwaves Electromagnetic waves having frequencies between 1 and 300 GHz are called microwaves.
Microwaves are unidirectional. The sending and receiving antennas need to be aligned. The
unidirectional property has an obvious advantage. A pair of antennas can be aligned without interfering
with another pair of aligned antennas.
Microwaves need unidirectional antennas that send out signals in one direction. Two types of antennas
are used for microwave communications: the parabolic dish and the horn.
Applications: Microwaves are used for unicast communication such as cellular telephones, satellite
networks, and wireless LANs.
Infrared
Infrared waves, with frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to 770 nm), can be
used for short-range communication. Infrared waves, having high frequencies, cannot penetrate walls.
This advantageous characteristic prevents interference between one system and another; a short-range
communication system in one room cannot be affected by another system in the next room. When we use
our infrared remote control, we do not interfere with the use of the remote by our neighbors.
Infrared signals are useless for long-range communication. In addition, we cannot use infrared waves
outside a building because the sun's rays contain infrared waves that can interfere with the
communication. Applications: Infrared signals can be used for short-range communication in a closed
area using line-of-sight propagation.
NETWORK SOFTWARE
The first computer networks were designed with the hardware as the main concern and the software as an
afterthought. This strategy no longer works. Network software is now highly structured. In the following
sections we examine the software structuring technique in some detail. The approach described here
forms the keystone of the entire book and will occur repeatedly later on.
A service is formally specified by a set of primitives (operations) available to user processes to access the service.
These primitives tell the service to perform some action or report on an action taken by a peer entity. If the
protocol stack is
located in the operating system, as it often is, the primitives are normally system calls. These calls cause a trap to
kernel mode, which then turns control of the machine over to the operating system to send the necessary packets.
The set of primitives available depends on the nature of the service being provided. The primitives for connection-
oriented service are different from those of connectionless service. As a minimal example of the service
primitives that might be provided to implement a reliable byte stream, consider the primitives listed in Fig. 1-17.
They will be familiar to fans of the Berkeley socket interface, as the primitives are a simplified version of that
interface.
These primitives might be used for a request-reply interaction in a client-server environment. To illustrate how, we
sketch a simple protocol that implements the service using acknowledged datagrams.
First, the server executes LISTEN to indicate that it is prepared to accept incoming connections. A common way
to implement LISTEN is to make it a blocking system call. After executing the primitive, the server process is
blocked until
a request for connection appears.
Next, the client process executes CONNECT to establish a connection with the server. The CONNECT call needs
to specify who to connect to, so it might have a parameter giving the server’s address. The operating system then
typically sends a packet to the peer asking it to connect, as shown by (1) in Fig. 1-18. The client process is
suspended until there is a response.
When the packet arrives at the server, the operating system sees that the packet is requesting a connection. It
checks to see if there is a listener, and if so it unblocks the listener. The server process can then establish the
connection with
the ACCEPT call. This sends a response (2) back to the client process to accept the connection. The arrival of this
response then releases the client.
At this point the client and server are both running and they have a connection established.
The obvious analogy between this protocol and real life is a customer (client) calling a company’s customer
service manager. At the start of the day, the service manager sits next to his telephone in case it rings. Later, a
client places a call.
When the manager picks up the phone, the connection is established.
The next step is for the server to execute RECEIVE to prepare to accept the first request. Normally, the server
does this immediately upon being released from the LISTEN, before the acknowledgement can get back to the
client. The RECEIVE call blocks the server.
Then the client executes SEND to transmit its request (3) followed by the execution of RECEIVE to get the reply.
The arrival of the request packet at the server machine unblocks the server so it can handle the request. After it has
done the
work, the server uses SEND to return the answer to the client (4). The arrival of this packet unblocks the client,
which can now inspect the answer. If the client has additional requests, it can make them now.
When the client is done, it executes DISCONNECT to terminate the connection (5). Usually, an initial
DISCONNECT is a blocking call, suspending the client and sending a packet to the server saying that the
connection is no longer needed. When the server gets the packet, it also issues a DISCONNECT of its own,
acknowledging the client and releasing the connection (6). When the server’s packet gets back to the client
machine, the client process is released and the connection is broken. In a nutshell, this is how connection-oriented
communication works.
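These primitives map almost one-to-one onto the Berkeley socket interface mentioned above. As a rough sketch of the exchange in Fig. 1-18 (not the text's own code; the loopback address, port, and payloads are illustrative assumptions):

```python
import socket

HOST, PORT = "127.0.0.1", 9000   # illustrative address, not from the text

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((HOST, PORT))
        s.listen(1)                    # LISTEN: prepare to accept a connection
        conn, _ = s.accept()           # ACCEPT: unblock on request (1), send accept (2)
        with conn:
            request = conn.recv(1024)  # RECEIVE: block until the request (3) arrives
            conn.sendall(b"reply")     # SEND: return the answer (4)
        # leaving the 'with' block closes the socket: DISCONNECT (6)

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((HOST, PORT))        # CONNECT: send request (1), block until accepted (2)
        s.sendall(b"request")          # SEND: transmit the request (3)
        reply = s.recv(1024)           # RECEIVE: block until the reply (4) arrives
        # leaving the 'with' block closes the socket: DISCONNECT (5)
```

Running server() in one process and client() in another reproduces the six-step exchange; a real implementation would add error handling and serve requests in a loop.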
Of course, life is not so simple. Many things can go wrong here. The timing can be wrong (e.g., the CONNECT is
done before the LISTEN), packets can get lost, and much more. We will look at these issues in great detail later,
but for the
moment, Fig. 1-18 briefly summarizes how client-server communication might work with acknowledged
datagrams so that we can ignore lost packets.
Given that six packets are required to complete this protocol, one might wonder why a connectionless protocol is
not used instead. The answer is that in a perfect world it could be, in which case only two packets would be
needed: one for the request and one for the reply. However, in the face of large messages in either direction (e.g., a
megabyte file), transmission errors, and lost packets, the situation changes. If the reply consisted of hundreds of
packets, some of which could be lost during transmission, how would the client know if some pieces were
missing? How would the client know whether the last packet actually received was really the last packet sent?
Suppose the client wanted a second file. How could it tell packet 1 from the second file from a lost packet 1 from
the first file that suddenly found its way to the client? In short, in the real world, a simple request-reply protocol
over an unreliable network is often inadequate.
Reference models: OSI and TCP/IP
The OSI model was first introduced in 1984 by the International Organization for
Standardization (ISO).
– It outlines WHAT needs to be done to send data from one computer to another.
– Not HOW it should be done.
– Protocol stacks handle how data is prepared for transmittal (to be transmitted).
● In the OSI model, the specifications needed are contained in 7 different layers that interact with each
other.
What is “THE MODEL?”
– It is also a model that helps develop standards so that all of our hardware and software talks nicely
to each other.
Top to bottom:
–All People Seem To Need Data Processing
Bottom to top:
–Please Do Not Throw Sausage Pizza Away
Physical Layer
Deals with all aspects of physically moving data from one computer to the next
Converts data from the upper layers into 1s and 0s for transmission over media
Defines how data is encoded onto the media to transmit the data
Defined on this layer: Cable standards, wireless standards, and fiber optic standards.
Copper wiring, fiber optic cable, radio frequencies, anything that can be used to transmit data is defined
on the Physical layer of the OSI Model
Device example: Hub
Used to transmit data
Data Link Layer
Is responsible for moving frames from node to node or computer to computer
Can move frames from one adjacent computer to another, cannot move frames across routers
Encapsulation = frame
Requires MAC address or physical address
Protocols defined include Ethernet Protocol and Point-to-Point Protocol (PPP)
Device example: Switch
Two sublayers: Logical Link Control (LLC) and the Media Access Control (MAC)
o Logical Link Control (LLC)
–Data Link layer addressing, flow control, address notification, error control
o Media Access Control (MAC)
–Determines which computer has access to the network media at any given time
–Determines where one frame ends and the next one starts, called frame
synchronization
Network Layer
Responsible for moving packets (data) from one end of the network to the other, called end-to-
end communications
Requires logical addresses such as IP addresses
Device example: Router
–Routing is the ability of various network devices and their related software to move data
packets from source to destination
Transport Layer
Takes data from higher levels of OSI Model and breaks it into segments that can be sent to
lower-level layers for data transmission
Conversely, reassembles data segments into data that higher-level protocols
and applications can use
Also puts segments in correct order (called sequencing ) so they can be reassembled in correct
order at destination
Concerned with the reliability of the transport of sent data
May use a connection-oriented protocol such as TCP to ensure destination received segments
May use a connectionless protocol such as UDP to send segments without assurance of
delivery
Uses port addressing
Session Layer
Responsible for managing the dialog between networked devices
Establishes, manages, and terminates connections
Provides duplex, half-duplex, or simplex communications between devices
Provides procedures for establishing checkpoints, adjournment, termination, and restart or
recovery procedures
Presentation Layer
Concerned with how data is presented to the network
Handles three primary tasks: –Translation , –Compression , –Encryption
Application Layer
Contains all services or protocols needed by application software or operating system to
communicate on the network
Examples
o –Firefox web browser uses HTTP (HyperText Transfer Protocol)
o –E-mail program may use POP3 (Post Office Protocol version 3) to read e-mails and SMTP
(Simple Mail Transfer Protocol) to send e-mails
The interaction between layers in the OSI model
–A protocol suite is a large number of related protocols that work together to allow networked
computers to communicate
Relationship of layers and addresses in TCP/IP
Application Layer
Application layer protocols define the rules when implementing specific network applications
Rely on the underlying layers to provide accurate and efficient data delivery
Typical protocols:
o FTP – File Transfer Protocol
For file transfer
o Telnet – Remote terminal protocol
For remote login on any other computer on the network
o SMTP – Simple Mail Transfer Protocol
For mail transfer
o HTTP – Hypertext Transfer Protocol
For Web browsing
Encompasses the same functions as these OSI Model layers: Application, Presentation, Session.
Transport Layer
TCP is a connection-oriented protocol
o Does not mean it has a physical connection between sender and receiver
o TCP provides the function to allow a connection to exist virtually – also called a virtual circuit
TCP provides the following functions (UDP, being connectionless, provides none of these guarantees):
o Dividing a chunk of data into segments
o Reassembling segments into the original chunk
o Providing further functions such as reordering and data resend
o Offering a reliable byte-stream delivery service
Functions the same as the Transport layer in OSI
Synchronizes source and destination computers to set up the session between the respective
computers
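The contrast between the two transports is visible directly in the socket API. A minimal sketch (the loopback address, port, and payload are illustrative assumptions, not values from the text): UDP skips connection setup entirely and offers none of TCP's reliability functions.

```python
import socket

ADDR = ("127.0.0.1", 9001)   # illustrative address, not from the text

# Receiver: no listen()/accept() handshake -- just bind and read datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)

# Sender: no connect() needed; each sendto() is an independent datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"segment 1", ADDR)

# On a real network this datagram may be lost, duplicated, or reordered;
# UDP itself will not retransmit or resequence it.
data, peer = rx.recvfrom(1024)
```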
Internet Layer
The network layer, also called the internet layer, deals with packets and connects independent
networks to transport the packets across network boundaries. The network layer protocols are
the Internet Protocol (IP) and the Internet Control Message Protocol (ICMP), which is used for error reporting.
Host-to-network layer
The Host-to-network layer is the lowest layer of the TCP/IP reference model. It combines
the link layer and the physical layer of the ISO/OSI model. At this layer, data is transferred
between adjacent network nodes in a WAN or between nodes on the same LAN.
THE INTERNET
The Internet has revolutionized many aspects of our daily lives. It has affected the way we do
business as well as the way we spend our leisure time. Count the ways you've used the Internet
recently. Perhaps you've sent electronic mail (e-mail) to a business associate, paid a utility bill, read a
newspaper from a distant city, or looked up a local movie schedule, all by using the Internet. Or
maybe you researched a medical topic, booked a hotel reservation, chatted with a fellow Trekkie, or
comparison-shopped for a car. The Internet is a communication system that has brought a wealth of
information to our fingertips and organized it for our use.
A Brief History
The Internet has come a long way since the 1960s. The Internet today is not a simple hierarchical
structure. It is made up of many wide- and local-area networks joined by connecting devices and
switching stations. It is difficult to give an accurate representation of the Internet because it is
continually changing: new networks are being added, existing networks are adding addresses, and
networks of defunct companies are being removed. Today most end users who want an Internet
connection use the services of Internet service providers (ISPs). There are international service
providers, national service providers, regional service providers, and local service providers. The
Internet today is run by private companies, not the government. Figure 1.13 shows a conceptual
(not geographic) view of the Internet.
At the top of the hierarchy are the international service providers that connect nations together.
National Internet Service Providers:
The national Internet service providers are backbone networks created and maintained by
specialized companies. There are many national ISPs operating in North America; some of the
most well known are SprintLink, PSINet, UUNet Technology, AGIS, and internet MCI. To
provide connectivity between
the end users, these backbone networks are connected by complex switching stations (normally
run by a third party) called network access points (NAPs). Some national ISP networks are also
connected to one another by private switching stations called peering points. These normally
operate at a high data rate (up to 600 Mbps).
Regional Internet Service Providers:
Regional internet service providers or regional ISPs are smaller ISPs that are connected to one
or more national ISPs. They are at the third level of the hierarchy with a smaller data rate.
Local Internet Service Providers:
Local Internet service providers provide direct service to the end users. The local ISPs can be
connected to regional ISPs or directly to national ISPs. Most end users are connected to the local
ISPs. Note that in this sense, a local ISP can be a company that just provides Internet services, a
corporation with a network that supplies services to its own employees, or a nonprofit
organization, such as a college or a university, that runs its own network. Each of these local
ISPs can be connected to a regional or national service provider.
Performance
Performance of a network pertains to the measure of service quality of a network as perceived by the user.
There are different ways to measure the performance of a network, depending upon the nature and design of
the network. The characteristics that measure the performance of a network are:
Bandwidth
Throughput
Latency (Delay)
Bandwidth – Delay Product
Jitter
BANDWIDTH
One of the most essential factors in a website's performance is the amount of bandwidth allocated to the
network. Bandwidth determines how rapidly the web server is able to upload the requested information.
While there are different factors to consider with respect to a site's performance, bandwidth is often the
limiting factor.
Bandwidth is defined as the amount of data or information that can be transmitted in a fixed amount of
time. The term can be used in two different contexts with two distinct measuring values. In the case of
digital devices, bandwidth is measured in bits per second (bps) or bytes per second. In the case of
analog devices, bandwidth is measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual perceives as the speed of a network. People
frequently confuse bandwidth with internet speed because internet service providers (ISPs) tend to claim
that they have a fast "40Mbps connection" in their advertising campaigns. True internet speed is actually
the amount of data you receive every second, and that has a lot to do with latency too.
“Bandwidth” means “Capacity” and “Speed” means “Transfer rate”.
More bandwidth does not mean more speed. Consider a case where we double the width of the tap
pipe, but the water rate is still the same as it was when the tap pipe was half the width: there will be
no improvement in speed. When we consider WAN links, we mostly mean bandwidth, but when we
consider a LAN, we mostly mean speed. This is because over a WAN we are generally constrained by
expensive cable bandwidth, whereas over a LAN we are constrained by hardware and interface data
transfer rates (or speed).
Bandwidth in Hertz: It is the range of frequencies contained in a composite signal or the range of frequencies
a channel can pass. For example, let us consider the bandwidth of a subscriber telephone line as 4 kHz.
Bandwidth in Bits per Second: It refers to the number of bits per second that a channel, a link, or rather a
network can transmit. For example, we can say the bandwidth of a Fast Ethernet network is a maximum of
100 Mbps, which means that the network can send data at up to 100 Mbps.
Note: There exists an explicit relationship between the bandwidth in hertz and the bandwidth in bits per
second. An increase in bandwidth in hertz means an increase in bandwidth in bits per second. The
relationship depends upon whether we have baseband transmission or transmission with modulation.
THROUGHPUT
Throughput is the number of messages successfully transmitted per unit time. It is controlled by the
available bandwidth, the available signal-to-noise ratio, and hardware limitations. The maximum
throughput of a network may consequently be higher than the actual throughput achieved in everyday use.
The terms 'throughput' and 'bandwidth' are often thought of as the same, yet they are different. Bandwidth
is the potential measurement of a link, whereas throughput is an actual measurement of how fast we can
send data.
Throughput is measured by tabulating the amount of data transferred between multiple locations during a
specific period of time, usually in bits per second (bps); related units are bytes per second (Bps), kilobytes
per second (KBps), megabytes per second (MBps) and gigabytes per second (GBps).
Throughput may be affected by numerous factors, such as the hindrance of the underlying analogue physical
medium, the available processing power of the system components, and end-user behaviour. When the
overheads of the various protocols are taken into account, the useful rate of the transferred data can be
significantly lower than the maximum achievable throughput.
Let us consider: A highway which has a capacity of moving, say, 200 vehicles at a time. But at a random
time, someone notices only, say, 150 vehicles moving through it due to some congestion on the road. As a
result, the capacity is 200 vehicles per unit time while the throughput is 150 vehicles per unit time.
Example: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute,
with each frame carrying an average of 10,000 bits. What will be the throughput for this network?
Solution: Throughput = (12,000 × 10,000) / 60 = 2 Mbps. The throughput here is almost one-fifth of the
bandwidth.
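The same arithmetic as a minimal sketch in Python (the variable names are mine, not from the text):

```python
# Throughput = frames actually delivered per second x bits per frame.
frames_per_minute = 12_000
bits_per_frame = 10_000
bandwidth_bps = 10_000_000            # the 10 Mbps link capacity

throughput_bps = frames_per_minute * bits_per_frame / 60
print(f"throughput = {throughput_bps / 1e6:.0f} Mbps")              # 2 Mbps
print(f"link utilization = {throughput_bps / bandwidth_bps:.0%}")   # 20%
```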
Bandwidth vs. throughput:
– Concerned with: Bandwidth is concerned with the transfer of data by some means; throughput is
concerned with the communication between two entities.
– Definition: Bandwidth refers to the maximum amount of data that can be passed from one point to
another; throughput is considered the actual measurement of the data that is being moved through the
media at any particular time.
– Effect: Bandwidth is not affected by physical obstruction because it is a theoretical unit to some extent;
throughput can easily be affected by changes in interference, traffic in the network, network devices,
transmission errors, and a host of other factors.
LATENCY
In a network, during the process of data communication, latency (also known as delay) is defined as the total
time taken for a complete message to arrive at the destination, starting with the time when the first bit of the
message is sent out from the source and ending with the time when the last bit of the message is delivered at
the destination. The network connections where small delays occur are called “Low-Latency-Networks” and
the network connections which suffer from long delays are known as “High-Latency-Networks”.
High latency leads to the creation of bottlenecks in any network communication. It stops the data from taking
full advantage of the network pipe and conclusively decreases the bandwidth of the communicating network.
The effect of the latency on a network's bandwidth can be temporary or never-ending depending on the
source of the delays. Latency is also known as the ping rate and is measured in milliseconds (ms).
In simpler terms: latency may be defined as the time required to successfully send a packet across a network.
Latency is made up of four components: propagation time, transmission time, queuing time, and
processing delay.
Note: When the message is short and the bandwidth is high, the dominant factor is the
propagation time and not the transmission time (which can be ignored).
Queuing Time: Queuing time is the time the packet has to sit around in a router. Quite
frequently the wire is busy, so we are not able to transmit a packet immediately. The queuing time is
usually not a fixed factor; it changes with the load on the network. In such cases, the
packet sits waiting, ready to go, in a queue. These delays are predominantly characterized by the amount
of traffic on the system. The more the traffic, the more likely a packet is stuck in the queue, just sitting in
the memory, waiting.
Processing Delay: Processing delay is the delay based on how long it takes the router to figure out where
to send the packet. As soon as the router finds out, it will queue the packet for transmission. These costs
are predominantly based on the complexity of the protocol. The router must decipher enough of the packet
to make sense of which queue to put the packet in. Typically the lower-level layers of the stack have
simpler protocols. If a router does not know which physical port to send the packet to, it will send it to all
the ports, queuing the packet in many queues immediately. By contrast, at a higher level, such as in IP
protocols, the processing may include making an ARP request to find out the physical address of the
destination before queuing the packet for transmission. This situation may also be considered a
processing delay.
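Summing the four components gives the total latency. A minimal sketch, where every figure is an illustrative assumption rather than a measured value:

```python
# Latency = propagation time + transmission time + queuing time + processing delay.
distance_m = 12_000e3        # assumed 12,000 km path
propagation_speed = 2e8      # assumed ~2/3 the speed of light in cable (m/s)
message_bits = 8_000_000     # assumed 1 MB message
bandwidth_bps = 10e6         # assumed 10 Mbps link

propagation_time = distance_m / propagation_speed   # 0.06 s
transmission_time = message_bits / bandwidth_bps    # 0.8 s
queuing_time = 0.002                                # assumed router queuing (s)
processing_delay = 0.001                            # assumed header processing (s)

latency = propagation_time + transmission_time + queuing_time + processing_delay
print(f"total latency = {latency:.3f} s")           # 0.863 s
```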
BANDWIDTH-DELAY PRODUCT
Case 2: Assume a link has a bandwidth of 3 bps and a one-way delay of 5 s. From the figure, we can say
that there can be a maximum of 3 × 5 = 15 bits on the line. The reason is that, at each second, there are
3 bits on the line and the duration of each bit is 0.33 s.
For both examples, the product of bandwidth and delay is the number of bits that can fill the link. This
estimate is significant if we have to send data in bursts and wait for the acknowledgement of each burst
before sending the following one. To utilize the maximum ability of the link, we have to make the size of
our burst twice the product of bandwidth and delay; that is, we need to fill up the full-duplex channel
(both directions). The sender ought to send a burst of data of (2 × bandwidth × delay) bits. The sender
then waits for the receiver's acknowledgement for part of the burst before sending another burst. The
amount 2 × bandwidth × delay is the number of bits that can be in transition at any time.
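The Case 2 numbers, worked as a short sketch:

```python
# Bandwidth-delay product: the number of bits that can fill the link.
bandwidth_bps = 3      # Case 2: 3 bps link
delay_s = 5            # one-way delay

bdp_bits = bandwidth_bps * delay_s   # 15 bits in flight in one direction
burst_bits = 2 * bdp_bits            # burst needed to fill the full-duplex channel
print(bdp_bits, burst_bits)          # 15 30
```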
JITTER
Jitter is another performance issue related to delay. In technical terms, jitter is "packet delay variation".
Jitter becomes a problem when different packets of data face different delays in a network and the data at
the receiver application is time-sensitive, i.e. audio or video data. Jitter is measured in milliseconds (ms).
It can be defined as an interference in the normal order of sending data packets.
For example: if the delay for the first packet is 10 ms, for the second is 35 ms, and for the third is 50 ms,
then the real-time destination application that uses the packets experiences jitter.
Simply, jitter is any deviation in, or displacement of, the signal pulses in a high-frequency digital signal.
The deviation can be in connection with the amplitude, the width of the signal pulse or the phase timing.
The major causes of jitter are electromagnetic interference (EMI) and crosstalk between signals. Jitter can
lead to flickering of a display screen, affect the capability of a processor in a desktop or server to perform
as expected, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data
between network devices.
Jitter has negative effects: it causes network congestion and packet loss.
Congestion is like a traffic jam on the highway. In a traffic jam, cars cannot move forward at a
reasonable speed. Like the traffic jam, in congestion all the packets come to a junction at the same
time, and nothing can get through.
The second negative effect is packet loss. When packets arrive at unexpected intervals, the receiving
system is not able to process the information, which leads to missing information also called “packet
loss”. This has negative effects on video viewing. If a video becomes pixelated and is skipping, the
network is experiencing jitter. The result of the jitter is packet loss. When you are playing a game
online, the effect of packet loss can be that a player begins moving around on the screen randomly.
Even worse, the game goes from one scene to the next, skipping over part of the gameplay.
In the above image, it can be noticed that the time it takes for packets to be sent is not the same as the time
at which they arrive at the receiver side. One of the packets faces an unexpected delay on its way and is
received after the expected time. This is jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or switch, or on a computer.
The system at the destination receiving the network packets usually receives them from the buffer and not
from the source system directly. Each packet is fed out of the buffer at a regular rate. Another approach to
diminish jitter in the case of multiple paths for traffic is to selectively route traffic along the most stable
paths, or to always pick the path that can come closest to the targeted packet delivery rate.
High-Speed Networks
The seemingly continual increase in bandwidth causes network designers to start thinking about what happens
in the limit or, stated another way, what is the impact on network design of having infinite bandwidth
available.
Although high-speed networks bring a dramatic change in the bandwidth available to applications, in many
respects their impact on how we think about networking comes in what does not change as bandwidth
increases: the speed of light. To quote Scotty from Star Trek, “Ye cannae change the laws of physics.” In
other words, “high speed” does not mean that latency improves at the same rate as bandwidth; the
transcontinental RTT of a 1-Gbps link is the same 100 ms as it is for a 1-Mbps link.
To appreciate the significance of ever-increasing bandwidth in the face of fixed latency, consider what is
required to transmit a 1-MB file over a 1-Mbps network versus over a 1-Gbps network, both of which have
an RTT of 100 ms. In the case of the 1-Mbps network, it takes 80 round-trip times to transmit the file; during
each RTT, 1.25% of the file is sent. In contrast, the same 1-MB file doesn’t even come close to filling
1 RTT’s worth of the 1-Gbps link, which has a delay × bandwidth product of 12.5 MB.
Figure 19 illustrates the difference between the two networks. In effect, the 1-MB file looks like a stream of
data that needs to be transmitted across a 1-Mbps network, while it looks like a single packet on a 1-Gbps
network. To help drive this point home, consider that a 1-MB file is to a 1-Gbps network what a 1-
KB packet is to a 1-Mbps network.
Figure 19. Relationship between bandwidth and latency. A 1-MB file would fill the 1-Mbps link 80 times but
only fill 1/12th of a 1-Gbps link.
Another way to think about the situation is that more data can be transmitted during each RTT on a high-
speed network, so much so that a single RTT becomes a significant amount of time. Thus, while you
wouldn’t think twice about the difference between a file transfer taking 101 RTTs rather than 100 RTTs (a
relative difference of only 1%), suddenly the difference between 1 RTT and 2 RTTs is significant—a 100%
increase. In other words, latency, rather than throughput, starts to dominate our thinking about network
design.
Perhaps the best way to understand the relationship between throughput and latency is to return to basics.
The effective end-to-end throughput that can be achieved over a network is given by the simple relationship

Throughput = TransferSize / TransferTime

where TransferTime includes not only the elements of one-way latency identified earlier in this section, but
also any additional time spent requesting or setting up the transfer. Generally, we represent this relationship as

TransferTime = RTT + (1 / Bandwidth) × TransferSize

We use RTT in this calculation to account for a request message being sent across the network and the data
being sent back. For example, consider a situation where a user wants to fetch a 1-MB file across a 1-Gbps
network with a round-trip time of 100 ms. The transfer time includes both the transmit time for 1 MB
(1 / 1 Gbps × 1 MB = 8 ms) and the 100-ms RTT, for a total transfer time of 108 ms. This means that the
effective throughput will be

1 MB / 108 ms = 74.1 Mbps

not 1 Gbps. Clearly, transferring a larger amount of data will help improve the effective throughput, where in
the limit an infinitely large transfer size will cause the effective throughput to approach the network
bandwidth. On the other hand, having to endure more than 1 RTT—for example, to retransmit missing
packets—will hurt the effective throughput for any transfer of finite size and will be most noticeable for
small transfers.
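A minimal sketch of this calculation, showing how effective throughput approaches the link bandwidth as the transfer grows (function and variable names are mine, not the text's):

```python
def effective_throughput_bps(transfer_bits, bandwidth_bps, rtt_s):
    """Throughput = TransferSize / (RTT + TransferSize / Bandwidth)."""
    transfer_time = rtt_s + transfer_bits / bandwidth_bps
    return transfer_bits / transfer_time

# The 1-MB file over a 1-Gbps link with a 100-ms RTT from the example above:
print(effective_throughput_bps(8e6, 1e9, 0.100) / 1e6)   # ~74.1 Mbps, not 1000

# A 100-MB transfer over the same link gets much closer to full bandwidth:
print(effective_throughput_bps(8e8, 1e9, 0.100) / 1e6)   # ~888.9 Mbps
```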
Application Requirements
The discussion in this section has taken a network-centric view of performance; that is, we have talked in
terms of what a given link or channel will support. The unstated assumption has been that application
programs have simple needs—they want as much bandwidth as the network can provide. This is certainly
true of the aforementioned digital library program that is retrieving a 250-MB image; the more bandwidth
that is available, the faster the program will be able to return the image to the user.
However, some applications are able to state an upper limit on how much bandwidth they need. Video
applications are a prime example. Suppose one wants to stream a video that is one quarter the size of a
standard TV screen; that is, it has a resolution of 352 by 240 pixels. If each pixel is represented by 24 bits of
information, as would be the case for 24-bit color, then the size of each frame would be (352 × 240 × 24) / 8
= 247.5 KB. If the application needs to support a frame rate of 30 frames per second, then it might request
a throughput rate of roughly 60 Mbps (as the sketch below works out). The ability of the network to
provide more bandwidth is of no interest to such an
application because it has only so much data to transmit in a given period of time.
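The frame-size and rate arithmetic from this example, as a short sketch:

```python
# Uncompressed video: bits per frame and the resulting constant bit rate.
width, height = 352, 240     # quarter of a standard TV screen
bits_per_pixel = 24          # 24-bit color
frames_per_second = 30

frame_bytes = width * height * bits_per_pixel / 8            # 253,440 B = 247.5 KB
rate_bps = width * height * bits_per_pixel * frames_per_second
print(f"frame = {frame_bytes / 1024:.1f} KB")                # 247.5 KB
print(f"required rate = {rate_bps / 1e6:.1f} Mbps")          # ~60.8 Mbps
```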
Unfortunately, the situation is not as simple as this example suggests. Because the difference between any
two adjacent frames in a video stream is often small, it is possible to compress the video by transmitting only
the differences between adjacent frames. Each frame can also be compressed because not all the detail in a
picture is readily perceived by a human eye. The compressed video does not flow at a constant rate, but
varies with time according to factors such as the amount of action and detail in the picture and the
compression algorithm being used. Therefore, it is possible to say what the average bandwidth requirement
will be, but the instantaneous rate may be more or less.
The key issue is the time interval over which the average is computed. Suppose that this example video
application can be compressed down to the point that it needs only 2 Mbps, on average. If it transmits 1
megabit in a 1-second interval and 3 megabits in the following 1-second interval, then over the 2-second
interval it is transmitting at an average rate of 2 Mbps; however, this will be of little consolation to a channel
that was engineered to support no more than 2 megabits in any one second. Clearly, just knowing the average
bandwidth needs of an application will not always suffice.
Generally, however, it is possible to put an upper bound on how large a burst an application like this is likely
to transmit. A burst might be described by some peak rate that is maintained for some period of time.
Alternatively, it could be described as the number of bytes that can be sent at the peak rate before reverting to
the average rate or some lower rate. If this peak rate is higher than the available channel capacity, then the
excess data will have to be buffered somewhere, to be transmitted later. Knowing how big a burst might
be sent allows the network designer to allocate sufficient buffer capacity to hold the burst.
Analogous to the way an application’s bandwidth needs can be something other than “all it can get,” an
application’s delay requirements may be more complex than simply “as little delay as possible.” In the case
of delay, it sometimes doesn’t matter so much whether the one-way latency of the network is 100 ms or
500 ms as how much the latency varies from packet to packet. The variation in latency is called jitter.
Consider the situation in which the source sends a packet once every 33 ms, as would be the case for a video
application transmitting frames 30 times a second. If the packets arrive at the destination spaced out exactly
33 ms apart, then we can deduce that the delay experienced by each packet in the network was exactly the
same. If the spacing between when packets arrive at the destination—sometimes called the inter-packet
gap—is variable, however, then the delay experienced by the sequence of packets must have also been
variable, and the network is said to have introduced jitter into the packet stream, as shown in Figure 20. Such
variation is generally not introduced in a single physical link, but it can happen when packets experience
different queuing delays in a multihop packet-switched network. This queuing delay corresponds to the
component of latency defined earlier in this section, which varies with time.
To understand the relevance of jitter, suppose that the packets being transmitted over the network contain
video frames, and in order to display these frames on the screen the receiver needs to receive a new one
every 33 ms. If a frame arrives early, then it can simply be saved by the receiver until it is time to display it.
Unfortunately, if a frame arrives late, then the receiver will not have the frame it needs in time to update the
screen, and the video quality will suffer; it will not be smooth. Note that it is not necessary to eliminate jitter,
only to know how bad it is. The reason for this is that if the receiver knows the upper and lower bounds on
the latency that a packet can experience, it can delay the time at which it starts playing back the video (i.e.,
displays the first frame) long enough to ensure that in the future it will always have a frame to display when
it needs it. The receiver delays the frame, effectively smoothing out the jitter, by storing it in a buffer.
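A minimal sketch of that playback-point reasoning, with assumed (not measured) latency bounds:

```python
# If packet latency is bounded between lo and hi, delaying playback by
# (hi - lo) ensures a frame has always arrived by the time it is displayed.
import math

lo_ms, hi_ms = 40, 130     # assumed lower/upper bounds on one-way latency
frame_interval_ms = 33     # 30 frames per second

playout_delay_ms = hi_ms - lo_ms                     # jitter the buffer must absorb
frames_buffered = math.ceil(playout_delay_ms / frame_interval_ms)
print(f"start playback {playout_delay_ms} ms late (~{frames_buffered} frames buffered)")
```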