Data communication and computer networks
Unit IV :
Internetworking: Principles of Internetworking – Routing Principles – Internetwork
Protocols (IP) – Shortcomings of IPv4 – IP Next Generation.
TCP Reliable Transport Services: Transport Protocols – The Service TCP Provides to
Applications – End-to-End Service and Datagrams – Transmission Control Protocol – User
Datagram Protocol.
Unit V :
Network Applications: C/S Model – Domain Name System – Telnet – File Transfer and
Remote File Access – E-Mail – WWW.
Network Management: Goal of Network Management – Network Management Standards
– Network Management Model – Infrastructure for Network Management – Simple Network
Management Protocol (SNMP).
Text Book:
Data communications & Computer Networks, Brijendra Singh, Second Edition PHI, 2006
Unit I : Introduction - Computer Networks
Computer networks exist on various scales, from links between two computers in one room
to connecting computers in a building or campus to national and global networks. Various
media are used to carry the communication signals: copper wire, fibre-optic cables, and
wireless (radio) transmission. Similarly, the network connecting an organization's
computers might be owned and managed by the organization itself (typically in small-scale
networks linking machines in a room or building) or capacity can be rented from a firm
providing telecommunications services (typically in wider area networks).
There can be one or more systems acting as a server; the others, acting as clients, send
requests which the server accepts and processes on their behalf. There can also be a hybrid
network, which combines both of the above arrangements in one network architecture.
Applications of Computer Networks
Computer systems and peripherals are connected to form a network. Networks support
numerous applications, for example:
IP phones
Video conferences
Parallel computing
Instant messaging
Types of Networks
Based on their scale, networks can be classified as Local Area Network (LAN), Wide
Area Network (WAN), Metropolitan Area Network (MAN), Personal Area Network
(PAN), Virtual Private Network (VPN), etc.
Computer networks can also be classified according to the hardware and software technology
that is used to interconnect the individual devices in the network, such as Optical fiber,
Ethernet, Wireless LAN, Home PNA, or Power line communication.
Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs,
switches, bridges and/or routers.
Wireless LAN technology is designed to connect devices without wiring. These devices use
radio waves or infrared signals as a transmission medium.
Computer networks may be classified according to the functional relationships which exist
among the elements of the network, e.g., Active Networking, Client-server and Peer-to-peer
(workgroup) architecture.
Network topology
Computer networks may be classified according to the network topology upon which
the network is based, such as Bus network, Star network, Ring network, Mesh network, Star-
bus network, and Tree or Hierarchical topology network.
Network Topology signifies the way in which devices in the network see their logical
relations to one another. The use of the term "logical" here is significant. That is, network
topology is independent of the "physical" layout of the network. Even if networked
computers are physically placed in a linear arrangement, if they are connected via a hub, the
network has a Star topology, rather than a Bus Topology. In this regard the visual and
operational characteristics of a network are distinct; the logical network topology is not
necessarily the same as the physical layout.
Types of networks
Below is a list of the most common types of computer networks in order of scale.
A personal area network (PAN) is a computer network used for communication among
computer devices close to one person. Some examples of devices that are used in a PAN are
printers, fax machines, telephones, PDAs and scanners. The reach of a PAN is typically about
20-30 feet (approximately 6-9 meters), but this is expected to increase with technology
improvements.
Personal area networks may be wired with computer buses such as USB and FireWire. A
wireless personal area network (WPAN) can also be made possible with network
technologies such as IrDA and Bluetooth.
Local Area Network (LAN)
This is a network covering a small geographic area, like a home, office, or building. Current
LANs are most likely to be based on Ethernet technology. For example, a library may have a
wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and
to connect to the internet. On a wired LAN, PCs in the library are typically connected by
category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnected
devices and eventually connect to the Internet. The cables to the servers are typically on Cat
5e enhanced cable, which will support IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist
using a different IEEE protocol, such as 802.11b, 802.11g or 802.11n. In such a library
network, the staff computers can reach the color printer, checkout records, the academic
network and the Internet. All user computers can reach the Internet and the card catalog.
Each workgroup can get to its local printer. Note that the printers are not accessible from
outside their workgroup.
Campus Area Network (CAN)
This is a network that connects two or more LANs but that is limited to a specific and
contiguous geographical area such as a college campus, industrial complex, office building,
or a military base. A CAN may be considered a type of MAN (metropolitan area network),
but is generally limited to a smaller area than a typical MAN. This term is most often used to
discuss the implementation of networks for a contiguous area. This should not be confused
with a Controller Area Network. A LAN connects network devices over a relatively short
distance. A networked office building, school, or home usually contains a single LAN,
though sometimes one building will contain a few small LANs (perhaps one per room), and
occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is
often but not always implemented as a single IP subnet.
Metropolitan Area Network (MAN)
A Metropolitan Area Network is a network that connects two or more Local Area Networks
or Campus Area Networks together but does not extend beyond the boundaries of the
immediate town/city. Routers, switches and hubs are connected to create a Metropolitan Area
Network.
Wide Area Network (WAN)
A WAN is a data communications network that covers a relatively broad geographic area (i.e.
one city to another and one country to another country) and that often uses transmission
facilities provided by common carriers, such as telephone companies. WAN technologies
generally function at the lower three layers of the OSI reference model: the physical layer, the
data link layer, and the network layer.
Global Area Network (GAN)
Global Area Network (GAN) specifications are in development by several groups, and there
is no common definition. In general, however, a GAN is a model for supporting mobile
communications across an arbitrary number of wireless LANs, satellite coverage areas, etc.
The key challenge in mobile communications is "handing off" the user communications from
one local coverage area to the next. In IEEE Project 802, this involves a succession of
terrestrial Wireless local area networks (WLAN).
The layers of the OSI reference model, below the Application Layer at the top, are:
Presentation Layer: This layer defines how data in the native format of the remote host
should be presented in the native format of the local host.
Session Layer: This layer maintains sessions between remote hosts. For example,
once user/password authentication is done, the remote host maintains this session for
a while and does not ask for authentication again in that time span.
Transport Layer: This layer is responsible for end-to-end delivery between hosts.
Network Layer: This layer is responsible for address assignment and uniquely
addressing hosts in a network.
Data Link Layer: This layer is responsible for reading and writing data from and
onto the line. Link errors are detected at this layer.
Physical Layer: This layer defines the hardware, cabling wiring, power output, pulse
rate etc.
The TCP/IP model, by contrast, defines four layers:
Application Layer: This layer defines the protocols which enable the user to interact
with the network, for example FTP and HTTP.
Transport Layer: This layer defines how data should flow between hosts. Major
protocol at this layer is Transmission Control Protocol (TCP). This layer ensures data
delivered between hosts is in-order and is responsible for end-to-end delivery.
Internet Layer: Internet Protocol (IP) works on this layer. This layer facilitates host
addressing and recognition. This layer defines routing.
Link Layer: This layer provides mechanism of sending and receiving actual data.
Unlike its OSI Model counterpart, this layer is independent of underlying network
architecture and hardware.
Analog Transmission
To send digital data over an analog medium, it needs to be converted into an analog
signal. There can be two cases according to data formatting.
Bandpass: Filters are used to filter and pass frequencies of interest. A bandpass is a band
of frequencies which can pass the filter.
When digital data is converted into a bandpass analog signal, it is called digital-to-analog
conversion. When low-pass analog signal is converted into bandpass analog signal, it is
called analog-to-analog conversion.
Digital-to-Analog Conversion
When data from one computer is sent to another via some analog carrier, it is first converted
into analog signals. Analog signals are modified to reflect digital data.
An analog signal is characterized by its amplitude, frequency, and phase. There are three
kinds of digital-to-analog conversions:
Amplitude Shift Keying (ASK)
When the binary data represents digit 1, the carrier amplitude is held; otherwise it is set
to 0. Both frequency and phase remain the same as in the original carrier signal.
Frequency Shift Keying (FSK)
In this conversion technique, the frequency of the analog carrier signal is modified to
reflect the binary data.
This technique uses two frequencies, f1 and f2. One of them, for example f1, is
chosen to represent binary digit 1 and the other one is used to represent binary digit
0. Both amplitude and phase of the carrier wave are kept intact.
Phase Shift Keying (PSK)
In this conversion scheme, the phase of the original carrier signal is altered to reflect
the binary data.
When a new binary symbol is encountered, the phase of the signal is altered.
Amplitude and frequency of the original carrier signal is kept intact.
Quadrature Phase Shift Keying (QPSK)
QPSK alters the phase to reflect two binary digits at once. This is done in two
different phases. The main stream of binary data is divided equally into two sub-
streams. The serial data is converted in to parallel in both sub-streams and then each
stream is converted to digital signal using NRZ technique. Later, both the digital
signals are merged together.
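As an illustration of the three keying techniques described above, the following Python sketch maps a short bit sequence onto carrier waveforms using ASK, FSK and binary PSK. It is a minimal sketch, not taken from the text book; the sample rate, bit duration and carrier frequencies are arbitrary assumptions chosen only for demonstration.

```python
# Minimal sketch of ASK, FSK and BPSK digital-to-analog keying using numpy.
# Parameter values (carrier frequency, sample rate, bit duration) are arbitrary
# illustrative choices, not taken from the text.
import numpy as np

FS = 1000          # samples per second
BIT_TIME = 0.1     # seconds per bit
FC = 50            # carrier frequency in Hz (also f1 for FSK)
F2 = 25            # second FSK frequency (f2)

def modulate(bits, scheme):
    t = np.arange(0, BIT_TIME, 1 / FS)          # time axis for one bit period
    out = []
    for b in bits:
        if scheme == "ASK":                     # amplitude carries the bit
            out.append((1.0 if b else 0.0) * np.sin(2 * np.pi * FC * t))
        elif scheme == "FSK":                   # frequency carries the bit
            f = FC if b else F2
            out.append(np.sin(2 * np.pi * f * t))
        elif scheme == "PSK":                   # phase carries the bit (BPSK)
            phase = 0 if b else np.pi
            out.append(np.sin(2 * np.pi * FC * t + phase))
    return np.concatenate(out)

signal = modulate([1, 0, 1, 1, 0], "FSK")
print(signal.shape)   # 5 bits x 100 samples per bit -> (500,)
```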
Analog-to-Analog Conversion
Analog signals are modified to represent analog data. This conversion is also known as
Analog Modulation. Analog modulation is required when bandpass is used. Analog to
analog conversion can be done in three ways:
Amplitude Modulation
In this modulation, the amplitude of the carrier signal is modified to reflect the
analog data.
Frequency Modulation
In this modulation technique, the frequency of the carrier signal is modified to reflect
the change in the voltage levels of the modulating signal (analog data).
The amplitude and phase of the carrier signal are not altered.
Phase Modulation
In this modulation technique, the phase of the carrier signal is modulated to reflect the
change in the voltage (amplitude) of the modulating signal (analog data), while the amplitude
of the carrier remains intact.
Digital Transmission
Data or information can be stored in two ways, analog and digital. For a computer to use the
data, it must be in discrete digital form. Similar to data, signals can also be in analog and
digital form. To transmit data digitally, it needs to be first converted to digital form.
Digital-to-Digital Conversion
This section explains how to convert digital data into digital signals. It can be done in two
ways, line coding and block coding. For all communications, line coding is necessary
whereas block coding is optional.
Line Coding
The process of converting digital data into a digital signal is called line coding. Digital
data is found in binary format; it is represented (stored) internally as a series of 1s and 0s.
A digital signal is a discrete signal which represents the digital data. There are three
types of line coding schemes available:
Uni-polar Encoding
Unipolar encoding schemes use a single voltage level to represent data. In this case, to
represent binary 1, a high voltage is transmitted, and to represent 0, no voltage is transmitted.
It is also called Unipolar Non-Return-to-Zero, because there is no rest condition, i.e. it either
represents 1 or 0.
Polar Encoding
Polar encoding schemes use multiple voltage levels to represent binary values. Polar
encoding is available in four types:
Non Return to Zero (NRZ)
NRZ uses two different voltage levels to represent binary values. Generally, positive
voltage represents 1 and negative voltage represents 0. It is called NRZ because there is
no rest condition.
NRZ-L changes the voltage level when a different bit is encountered, whereas NRZ-I
changes the voltage when a 1 is encountered.
Return to Zero (RZ)
RZ uses three voltage levels: positive, negative and zero. The signal returns to zero in the
middle of each bit interval; positive voltage in the first half of the bit represents 1 and
negative voltage represents 0.
Manchester
This encoding scheme is a combination of RZ and NRZ-L. Bit time is divided into
two halves. It transits in the middle of the bit and changes phase when a different bit
is encountered.
Differential Manchester
This encoding scheme is a combination of RZ and NRZ-I. It also transitions in the
middle of the bit, but changes phase only when a 1 is encountered.
Bipolar Encoding
Bipolar encoding uses three voltage levels: positive, negative and zero. Zero voltage
represents binary 0, and bit 1 is represented by alternating positive and negative voltages.
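The following Python sketch illustrates three of the line coding schemes above (NRZ-L, NRZ-I and Manchester) by mapping bits to +1/-1 voltage levels. It is only a minimal illustration; sources differ on which Manchester transition direction represents a 1, and the IEEE convention is assumed here.

```python
# Minimal sketch of three line coding schemes from the text: NRZ-L, NRZ-I and
# Manchester. Each bit is mapped to one or two "voltage levels" (+1 / -1).
def nrz_l(bits):
    # NRZ-L: the voltage level itself represents the bit (+1 for 1, -1 for 0).
    return [+1 if b else -1 for b in bits]

def nrz_i(bits):
    # NRZ-I: the voltage is inverted when a 1 is encountered, unchanged for 0.
    level, out = -1, []
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return out

def manchester(bits):
    # Manchester: each bit time is split into two halves with a transition in
    # the middle; here 1 -> low-to-high, 0 -> high-to-low (IEEE convention).
    out = []
    for b in bits:
        out += [-1, +1] if b else [+1, -1]
    return out

bits = [1, 0, 1, 1, 0]
print(nrz_l(bits))       # [1, -1, 1, 1, -1]
print(nrz_i(bits))       # [1, 1, -1, 1, 1]
print(manchester(bits))  # two half-bit levels per bit
```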
Block Coding
To ensure the accuracy of the received data frame, redundant bits are added. For example, in
even parity, one parity bit is added to make the count of 1s in the frame even. This way the
original number of bits is increased. This is called block coding.
Block coding is represented by slash notation, mB/nB, meaning an m-bit block is substituted
with an n-bit block, where n > m. Block coding involves three steps:
Division,
Substitution
Combination.
Analog-to-Digital Conversion
Microphones create analog voice and cameras create analog video, which are treated as
analog data. To transmit this analog data over digital signals, we need analog-to-digital
conversion.
Analog data is a continuous stream of data in the wave form whereas digital data is discrete.
To convert analog wave into digital data, we use Pulse Code Modulation (PCM).
PCM is one of the most commonly used methods to convert analog data into digital form. It
involves three steps:
Sampling
Quantization
Encoding.
Sampling
The analog signal is sampled at every interval T. The most important factor in sampling is
the rate at which the analog signal is sampled. According to the Nyquist theorem, the sampling
rate must be at least twice the highest frequency of the signal.
Quantization
Sampling yields discrete form of continuous analog signal. Every discrete pattern shows the
amplitude of the analog signal at that instance. The quantization is done between the
maximum amplitude value and the minimum amplitude value. Quantization is
approximation of the instantaneous analog value.
Encoding
In encoding, each approximated (quantized) value is converted into a binary code word of a
fixed number of bits; the resulting bit stream is the digital form of the analog data.
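The three PCM steps can be sketched in a few lines of Python. This is a minimal illustration only; the example signal, sampling rate and number of quantization levels are assumed values, not taken from the text.

```python
# Minimal sketch of the three PCM steps (sampling, quantization, encoding)
# applied to a synthetic analog signal. Signal frequency, sampling rate and
# number of levels are illustrative assumptions.
import numpy as np

F_SIGNAL = 3          # frequency of the example "analog" signal (Hz)
FS = 8                # sampling rate; Nyquist requires at least 2 * F_SIGNAL = 6 Hz
LEVELS = 16           # quantization levels -> 4 bits per sample

# Sampling: take an amplitude value every T = 1/FS seconds for one second.
t = np.arange(0, 1, 1 / FS)
samples = np.sin(2 * np.pi * F_SIGNAL * t)        # values in [-1, +1]

# Quantization: approximate each sample to the nearest of LEVELS levels
# spread between the minimum (-1) and maximum (+1) amplitude.
step = 2.0 / (LEVELS - 1)
indices = np.round((samples + 1.0) / step).astype(int)

# Encoding: represent each quantized level as a fixed-width binary code word.
bits_per_sample = int(np.ceil(np.log2(LEVELS)))
codewords = [format(i, f"0{bits_per_sample}b") for i in indices]
print(codewords)      # 8 samples, each encoded as 4 bits
```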
Transmission Modes
The transmission mode decides how data is transmitted between two computers. The binary
data in the form of 1s and 0s can be sent in two different modes: parallel and serial.
Parallel Transmission
The binary bits are organized into groups of fixed length. Both sender and receiver are
connected in parallel with an equal number of data lines. Both computers distinguish
between high-order and low-order data lines. The sender sends all the bits at once on all
lines. Because the number of data lines equals the number of bits in a group (data frame), a
complete group of bits is sent in one go. The advantage of parallel transmission is high
speed; the disadvantage is the cost of wires, which equals the number of bits sent in
parallel.
Serial Transmission
In serial transmission, bits are sent one after another in sequence. Serial transmission
requires only one communication channel.
In a simplex transmission, one device acts as the transmitter and a second device acts as the
receiver. Data flows in one direction only, whereas in a duplex channel, the communication is
bi-directional. Full-duplex transmission uses two separate communication channels so that
two communicating devices can transmit and receive data at the same time. Data can flow in
both directions simultaneously. Half-duplex transmission is a compromise between simplex
and full-duplex transmission. A single channel is shared between the devices wishing to
communicate, and the devices must take turns to transmit data. Data can flow in both
directions, but not simultaneously.
TRANSMISSION MEDIA
The transmission media is nothing but the physical media over which communication takes
place in computer networks.
Magnetic Media
One of the most convenient ways to transfer data from one computer to another, even before
the birth of networking, was to save it on some storage media and transfer it physically from
one station to another. Though it may seem an old-fashioned way in today's world of high
speed internet, when the size of the data is huge, magnetic media come into play.
For example, a bank has to handle and transfer huge amounts of customer data, and it stores
a backup at some geographically distant place for security reasons and to keep it safe from
calamities. If the bank needs to move this huge backup, transferring it through the internet is
not feasible: the WAN links may not support such high speeds, and even if they do, the cost
is too high to afford.
In these cases, data backup is stored onto magnetic tapes or magnetic discs, and then shifted
physically at remote places.
Twisted Pair Cable
A shielded twisted pair (STP) cable comes with its twisted wire pairs covered in metal foil,
which makes it more resistant to noise and crosstalk.
Unshielded twisted pair (UTP) cable has seven categories, each suitable for a specific use. In
computer networks, Cat-5, Cat-5e, and Cat-6 cables are mostly used. UTP cables are
connected by RJ45 connectors.
Coaxial Cable
Coaxial cable has two copper conductors. The core wire lies in the center and is made of a
solid conductor. The core is enclosed in an insulating sheath. The second conductor is
wrapped around over the sheath, and that in turn is encased by an insulating sheath. All of
this is covered by a plastic cover.
Because of its structure, the coax cable is capable of carrying higher frequency signals than
twisted pair cable. The wrapped structure provides it a good shield against noise and
crosstalk. Coaxial cables provide high bandwidth rates of up to 450 Mbps.
There are three categories of coax cables namely, RG-59 (Cable TV), RG-58 (Thin
Ethernet), and RG-11 (Thick Ethernet). RG stands for Radio Government.
Cables are connected using BNC connector and BNC-T. BNC terminator is used to
terminate the wire at the far ends.
Power Lines
Power Line Communication (PLC) is a Layer-1 (Physical Layer) technology which uses power
cables to transmit data signals. In PLC, modulated data is sent over the cables. The receiver
on the other end demodulates and interprets the data.
Because power lines are widely deployed, PLC can make all powered devices controllable and
monitorable. PLC works in half-duplex.
Narrowband PLC provides lower data rates, up to hundreds of kbps, as it works at lower
frequencies (3-5000 kHz). It can be spread over several kilometers.
Broadband PLC provides higher data rates, up to hundreds of Mbps, and works at higher
frequencies (1.8 – 250 MHz). It cannot be extended as far as narrowband PLC.
Fiber Optics
Fiber optics works on the properties of light. When a light ray travelling through the core
strikes the boundary at more than the critical angle, it is reflected back into the core (total
internal reflection); fiber optics uses this property. The core of a fiber optic cable is made of
high quality glass or plastic. Light is emitted at one end, it travels through the core, and at
the other end a light detector detects the light stream and converts it to electrical data.
Fiber optics provides the highest data rates. It comes in two modes: single mode fiber and
multimode fiber. Single mode fiber can carry a single ray of light, whereas multimode fiber is
capable of carrying multiple beams of light.
Fiber optics also comes in unidirectional and bidirectional capabilities. To connect and
access fiber optics, special types of connectors are used. These can be Subscriber Channel
(SC), Straight Tip (ST), or MT-RJ.
Multiplexing
Communication is possible over the air (radio frequency), over a physical medium (cable),
and over light (optical fiber). All of these media are capable of multiplexing.
When multiple senders try to send over a single medium, a device called Multiplexer divides
the physical channel and allocates one to each. On the other end of communication, a De-
multiplexer receives data from a single medium, identifies each, and sends to different
receivers.
Frequency Division Multiplexing (FDM)
When the carrier is frequency, FDM is used. FDM is an analog technology. FDM divides
the spectrum or carrier bandwidth into logical channels and allocates one user to each channel.
Each user can use the channel frequency independently and has exclusive access of it. All
channels are divided in such a way that they do not overlap with each other. Channels are
separated by guard bands. Guard band is a frequency which is not used by either channel.
Time Division Multiplexing (TDM)
TDM is applied primarily on digital signals but can be applied on analog signals as well. In
TDM the shared channel is divided among its users by means of time slots. Each user can
transmit data within the provided time slot only. Digital signals are divided into frames
equivalent to a time slot, i.e. frames of an optimal size which can be transmitted in the given
time slot.
TDM works in synchronized mode. Both ends, i.e. the multiplexer and the de-multiplexer, are
synchronized in time and both switch to the next channel simultaneously.
When channel A transmits its frame at one end, the de-multiplexer provides the medium to
channel A on the other end. As soon as channel A's time slot expires, this side switches
to channel B. On the other end, the de-multiplexer works in a synchronized manner and
provides the medium to channel B. Signals from different channels travel the path in an
interleaved manner.
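The following Python sketch illustrates the synchronous TDM idea: the multiplexer places one frame from each channel into successive time slots, and the de-multiplexer, switching channels in the same order, recovers the streams. The channel contents are made-up illustrative values.

```python
# Minimal sketch of synchronous Time Division Multiplexing: the multiplexer
# interleaves one frame per channel per round, and the de-multiplexer, which
# switches channels in the same order, recovers each stream.
def tdm_multiplex(channels):
    # channels: list of per-channel frame lists, all of equal length
    rounds = len(channels[0])
    line = []
    for slot in range(rounds):                 # one time slot per channel, per round
        for ch in channels:
            line.append(ch[slot])
    return line

def tdm_demultiplex(line, n_channels):
    # The receiver switches channels in lock-step with the sender.
    return [line[i::n_channels] for i in range(n_channels)]

channels = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
line = tdm_multiplex(channels)                 # ['A1','B1','C1','A2','B2','C2']
print(line)
print(tdm_demultiplex(line, 3))                # recovers the three streams
```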
Wavelength Division Multiplexing (WDM)
Light has different wavelengths (colors). In fiber optic mode, multiple optical carrier signals
are multiplexed into an optical fiber by using different wavelengths. This is an analog
multiplexing technique and is done conceptually in the same manner as FDM but uses light
as the signal.
Code Division Multiplexing (CDM)
Multiple data signals can be transmitted over a single frequency by using Code Division
Multiplexing. FDM divides the frequency into smaller channels, but CDM allows its users the
full bandwidth and lets them transmit signals all the time using a unique code. CDM uses
orthogonal codes to spread the signals.
Each station is assigned a unique code, called a chip. Signals travel with these codes
independently, inside the whole bandwidth. The receiver knows in advance the chip code of
the signal it has to receive.
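A minimal Python sketch of the CDM idea is given below, assuming two stations with hypothetical orthogonal four-chip codes (Walsh-style). Each station spreads its bit over its chip, the spread signals add on the shared medium, and a receiver recovers a particular station's bit by correlating with that station's chip.

```python
# Minimal sketch of Code Division Multiplexing using two orthogonal chip
# sequences. Each station spreads its bit (+1 / -1) over its chip; the signals
# add on the shared medium, and a receiver recovers one station's bit by
# correlating the combined signal with that station's chip.
CHIP_A = [+1, +1, -1, -1]     # illustrative orthogonal chip codes
CHIP_B = [+1, -1, +1, -1]     # dot(CHIP_A, CHIP_B) == 0

def spread(bit, chip):
    return [bit * c for c in chip]            # bit is +1 or -1

def despread(combined, chip):
    corr = sum(x * c for x, c in zip(combined, chip))
    return +1 if corr > 0 else -1             # sign of the correlation

# Station A sends 1 (+1), station B sends 0 (-1); both at the same time.
combined = [a + b for a, b in zip(spread(+1, CHIP_A), spread(-1, CHIP_B))]
print(despread(combined, CHIP_A))   # +1 -> station A sent 1
print(despread(combined, CHIP_B))   # -1 -> station B sent 0
```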
ERROR DETECTION AND CORRECTION
There are many reasons, such as noise and cross-talk, which may cause data to get corrupted
during transmission. The upper layers work on some generalized view of the network
architecture and are not aware of actual hardware data processing. Hence, the upper layers
expect error-free transmission between the systems. Most applications would not
function as expected if they received erroneous data. Applications such as voice and video
may not be that affected and may still function well even with some errors.
The data-link layer uses some error control mechanism to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are controlled,
it is essential to know what types of errors may occur.
Types of Errors
Errors may be single-bit errors, where only one bit of the data unit is changed, or burst
errors, where two or more bits of the data unit have changed.
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy
Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that
the bits received at the other end are the same as they were sent. If the counter-check at the
receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make the number of 1s either even, in the
case of even parity, or odd, in the case of odd parity.
The sender, while creating a frame, counts the number of 1s in it. For example, if even parity
is used and the number of 1s is even, then one bit with value 0 is added; this way the number
of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.
The receiver simply counts the number of 1s in a frame. If the count of 1s is even and even
parity is used, the frame is considered to be not-corrupted and is accepted. If the count of 1s
is odd and odd parity is used, the frame is still not corrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect the
error.
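The even-parity mechanism described above can be sketched in Python as follows; the frame contents are illustrative, and the last two lines show why a two-bit error can go undetected.

```python
# Minimal sketch of even-parity generation and checking as described above.
# The frame is represented as a list of bits.
def add_even_parity(bits):
    # Append one bit so that the total count of 1s becomes even.
    parity = sum(bits) % 2
    return bits + [parity]

def check_even_parity(frame):
    # The receiver simply counts the 1s; an even count means "not corrupted".
    return sum(frame) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 0, 1])   # four 1s -> parity bit 0
print(check_even_parity(frame))                  # True

frame[2] ^= 1                                    # single-bit error is detected
print(check_even_parity(frame))                  # False

frame[3] ^= 1                                    # a second flipped bit hides the error
print(check_even_parity(frame))                  # True (undetected)
```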
Cyclic Redundancy Check (CRC)
In CRC, the sender treats the data bits as a binary number, divides them (using modulo-2
division) by a predetermined divisor, and appends the remainder to the data to form the
codeword that is transmitted.
At the other end, the receiver performs the division operation on the codeword using the same
CRC divisor. If the remainder contains all zeros, the data bits are accepted; otherwise it is
considered that some data corruption occurred in transit.
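The following Python sketch illustrates CRC generation and checking using modulo-2 division. The divisor polynomial used here (1011) is only an illustrative choice; real links use standardized polynomials such as CRC-32.

```python
# Minimal sketch of CRC generation and checking using binary (modulo-2)
# division. The divisor "1011" (x^3 + x + 1) is an illustrative choice only.
def mod2_div(dividend, divisor):
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i] == 1:                       # XOR the divisor in when the leading bit is 1
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]          # remainder has (divisor length - 1) bits

def crc_send(data, divisor):
    padded = data + [0] * (len(divisor) - 1)  # append zeros, divide, append remainder
    return data + mod2_div(padded, divisor)

def crc_check(codeword, divisor):
    # Receiver divides the whole codeword; an all-zero remainder means "accept".
    return all(bit == 0 for bit in mod2_div(list(codeword), divisor))

divisor = [1, 0, 1, 1]
codeword = crc_send([1, 0, 1, 1, 0, 1], divisor)
print(codeword, crc_check(codeword, divisor))   # remainder of zeros -> True
codeword[1] ^= 1
print(crc_check(codeword, divisor))             # corrupted in transit -> False
```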
Error Correction
Errors can be corrected in two ways: backward error correction, in which the receiver, on
detecting an error, requests the sender to retransmit the data unit; and forward error
correction, in which the receiver uses an error-correcting code (such as a Hamming code) to
locate and correct the errors itself.
Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it
is required that the sender and receiver work at the same speed; that is, the sender sends
at a speed at which the receiver can process and accept the data. What if the speed
(hardware/software) of the sender or receiver differs? If the sender is sending too fast, the
receiver may be overloaded (swamped) and data may be lost.
Two types of mechanisms can be deployed to control the flow:
Stop and Wait
This flow control mechanism forces the sender, after transmitting a data frame, to stop
and wait until the acknowledgement for that frame is received.
Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of
data-frames after which an acknowledgement should be sent. Because the stop-and-wait
mechanism wastes resources, this protocol tries to make use of the underlying resources as
much as possible (a sketch of the idea follows).
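A minimal sketch of the sliding-window idea (sender side only) is shown below. The channel is simulated by a callback returning a cumulative acknowledgement, and the window size and frame names are assumptions made for illustration; a real protocol would also handle timeouts and retransmission.

```python
# Minimal sketch of the sliding-window idea: the sender may have up to WINDOW
# unacknowledged frames outstanding instead of stopping after every frame.
WINDOW = 4   # number of frames both sides agreed on before an ACK is expected

def send_all(frames, deliver):
    """deliver(seq, frame) returns the highest sequence number acknowledged so far."""
    base = 0        # oldest unacknowledged frame (left edge of the window)
    next_seq = 0    # next frame to send (right edge grows up to base + WINDOW)
    while base < len(frames):
        # Keep sending while the window is not full.
        while next_seq < len(frames) and next_seq < base + WINDOW:
            ack = deliver(next_seq, frames[next_seq])
            next_seq += 1
        # Slide the window: everything up to the cumulative ACK is done.
        base = ack + 1
    return "all frames acknowledged"

frames = [f"frame-{i}" for i in range(10)]
# A perfect channel that acknowledges every frame it is given:
print(send_all(frames, deliver=lambda seq, frame: seq))
```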
Error Control
When a data-frame is transmitted, there is a probability that it may be lost in
transit or received corrupted. In both cases, the receiver does not receive the correct
data-frame and the sender does not know anything about the loss. In such cases, both sender
and receiver are equipped with protocols which help them detect transit errors such as the
loss of a data-frame. Hence, either the sender retransmits the data-frame or the receiver
requests that the previous data-frame be resent.
Requirements for error control mechanism:
Error detection - The sender and receiver, either or both, must be able to ascertain that
some error occurred in transit.
Positive ACK - When the receiver receives a correct frame, it should acknowledge
it.
Negative ACK - When the receiver receives a damaged frame or a duplicate frame,
it sends a NACK back to the sender and the sender must retransmit the correct frame.
Retransmission: The sender maintains a clock and sets a timeout period. If an
acknowledgement of a previously transmitted data-frame does not arrive before the
timeout, the sender retransmits the frame, assuming that the frame or its
acknowledgement was lost in transit.
There are three types of techniques available which the data-link layer may deploy to control
errors by Automatic Repeat Requests (ARQ):
Stop-and-wait ARQ
Go-Back-N ARQ
Selective Repeat ARQ
The following transitions may occur in Stop-and-Wait ARQ: the sender transmits a frame
and waits for its acknowledgement; on receiving a positive acknowledgement it sends the
next frame; if the acknowledgement does not arrive before the timeout (or a negative
acknowledgement arrives), the sender retransmits the same frame; the receiver acknowledges
each correctly received frame and discards damaged or duplicate frames.
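The transitions above can be sketched as a simple Stop-and-Wait ARQ sender in Python. The lossy channel, loss probability and retry limit are simulated assumptions for illustration only.

```python
# Minimal sketch of Stop-and-Wait ARQ: send one frame, wait for its ACK, and
# retransmit after a timeout if no ACK arrives. The channel is simulated.
import random

TIMEOUT_TRIES = 5          # stand-in for "retransmit on every timer expiry"

def unreliable_channel(frame):
    """Simulated link: loses the frame or its ACK 30% of the time."""
    return None if random.random() < 0.3 else ("ACK", frame["seq"])

def stop_and_wait_send(data_items):
    seq = 0
    for data in data_items:
        frame = {"seq": seq, "data": data}
        for attempt in range(TIMEOUT_TRIES):
            reply = unreliable_channel(frame)          # send and wait
            if reply == ("ACK", seq):                  # positive ACK: move on
                break
            # timeout / NACK: loop again and retransmit the same frame
        else:
            raise RuntimeError(f"frame {seq} never acknowledged")
        seq ^= 1                                       # 1-bit sequence number alternates
    return "transfer complete"

print(stop_and_wait_send(["hello", "world", "!"]))
```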
PROTOCOLS
A protocol is a set of rules which governs how data is sent from one point to another.
In data communications, there are widely accepted protocols for sending data. Both the
sender and receiver must use the same protocol when communicating.
Asynchronous Transmission
In asynchronous transmission, each character is framed by a start bit and a stop bit and is
sent independently of the others. The start and stop bits were originally introduced for the old
electromechanical tele-typewriters. These used motors driving cams that actuated solenoids
that sampled the signal at specific time intervals. The motors took a while to get up to speed;
prefixing the first data bit with a start bit gave the motors time to get up to speed, and the
cams generated a reference point for the start of the first data bit.
This method of transmission is suitable for slow speeds less than about 32000 bits per second.
In addition, notice that the signal that is sent does not contain any information that can be
used to validate if it was received without modification. This means that this method does not
contain error detection information, and is susceptible to errors.
In addition, for every character that is sent, an additional two bits are also sent. Consider
sending a text document which contains 1000 characters. Each character is eight bits, so
the total number of bits sent is 10,000 (8 bits per character plus a start and stop bit for each
character). These 10,000 bits are equivalent to 1,250 characters, meaning that an additional 250
equivalent characters are sent due to the start and stop bits. This represents a large overhead
in sending data, clearly making this method an inefficient means of sending large amounts of
data.
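The overhead arithmetic above can be checked with a few lines of Python:

```python
# Quick check of the overhead arithmetic for asynchronous transmission:
# one start bit and one stop bit are added to every 8-bit character.
chars = 1000
data_bits = chars * 8                      # 8000 bits of useful data
framing_bits = chars * 2                   # 2000 start/stop bits
total_bits = data_bits + framing_bits      # 10000 bits on the line
print(total_bits)                          # 10000
print(total_bits / 8)                      # 1250 "character equivalents"
print(framing_bits / 8)                    # 250 characters of pure overhead
print(framing_bits / total_bits)           # 0.2 -> 20% of what is sent is framing
```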
Synchronous Transmission
One of the problems associated with asynchronous transmission is the high overhead
associated with transmitting data. For instance, for every character of 8 bits transmitted, at
least an additional overhead of 2 bits is required. For large amounts of data, this quickly adds
up. For example, to transmit 1000 characters requires 10,000 bits, an extra 2,000 bits for
the start and stop bits. This is equivalent to an overhead of 250 characters. Another problem is
the complete lack of any form of error detection. This means the sender has no method of
knowing whether the receiver is correctly recognizing the data being transmitted.
Synchronous transmission addresses these problems by sending data in blocks of characters
rather than one character at a time. A start type sequence, called a header, prefixes each block
of characters, and a stop type sequence, called a tail, suffixes each block of characters. The
tail is expanded to include
a check code, inserted by the transmitter, and used by the receiver to determine if the data
block of characters was received without errors. In this way, synchronous transmission
overcomes the two main deficiencies of the asynchronous method, that of inefficiency and
lack of error detection.
There are variations of synchronous transmission, which are split into two groups,
namely character orientated and bit orientated. In character orientated, information is encoded
as characters. In bit orientated, information is encoded using bits or combinations of bits, and
is thus more complex than the character orientated version. Binary synchronous is an
example of character orientated, and High Level Data Link Control (HDLC) is an example of
bit orientated.
The header field is used to convey address information (sender and receiver), packet type and
control data. The data field contains the user's data (if the data cannot fit in a single packet,
multiple packets are used and numbered). Generally, it has a fixed size. The tail field contains
checksum information which the receiver uses to check whether the packet was corrupted
during transmission.
High Level Data Link Control (HDLC) Protocol
The HDLC protocol is a general purpose protocol which operates at the data link layer
of the OSI reference model. The protocol uses the services of a physical layer, and provides
either a best effort or reliable communications path between the transmitter and receiver (i.e.
with acknowledged data transfer). The type of service provided depends upon the HDLC
mode which is used.
Each piece of data is encapsulated in an HDLC frame by adding a trailer and a header.
The header contains an HDLC address and an HDLC control field. The trailer is found at the
end of the frame, and contains a Cyclic Redundancy Check (CRC) which detects any errors
which may occur during transmission. The frames are separated by HDLC flag sequences
which are transmitted between each frame and whenever there is no data to be transmitted.
HDLC Frame Structure showing flags, header (address and control), data and trailer (CRC-
16).
Unit II - LOCAL AREA NETWORKS
A local area network (LAN) covers a limited geographical area (typically a single
building or campus), and is usually wholly owned and maintained by a single organisation.
LANs are widely used to connect personal computers and workstations in homes, company
offices and factories in order to share resources and exchange information. They are
distinguished from other kinds of network by their size, their transmission technology, and
their topology. Because of their (relatively) small size, management of the network is
comparatively straightforward, and LANs generally enjoy high data rates, small propagation
delays, and low error rates.
NETWORK TOPOLOGY
If the hosts are connected point-to-point logically, the path between them may have multiple
intermediate devices. But the end hosts are unaware of the underlying network and see each
other as if they were connected directly.
Bus Topology
In the case of Bus topology, all devices share a single communication line or cable. Bus
topology may have problems when multiple hosts send data at the same time. Therefore, Bus
topology either uses CSMA/CD technology or recognizes one host as Bus Master to resolve
the issue. It is one of the simplest forms of networking, where a failure of one device does not
affect the other devices. But failure of the shared communication line can make all other
devices stop functioning.
Both ends of the shared channel have line terminator. The data is sent in only one direction
and as soon as it reaches the extreme end, the terminator removes the data from the line.
Star Topology
All hosts in Star topology are connected to a central device, known as hub device, using a
point-to-point connection. That is, there exists a point to point connection between hosts and
hub. The hub device can be any of the following:
As the shared line does in Bus topology, the hub acts as a single point of failure: if the hub
fails, connectivity of all hosts to all other hosts fails. Every communication between hosts
takes place only through the hub. Star topology is not expensive, as connecting one more
host requires only one cable, and configuration is simple.
Ring Topology
In ring topology, each host machine connects to exactly two other machines, creating a
circular network structure. When one host tries to communicate or send message to a host
which is not adjacent to it, the data travels through all intermediate hosts. To connect one
more host in the existing structure, the administrator may need only one more extra cable.
Failure of any host results in failure of the whole ring. Thus, every connection in the ring is a
point of failure. There are methods which employ one more backup ring.
Mesh Topology
In this type of topology, a host is connected to one or multiple hosts. This topology may have
hosts in point-to-point connection with every other host, or it may have hosts which are in
point-to-point connection with a few hosts only.
Hosts in Mesh topology also work as relays for other hosts which do not have direct point-to-
point links. Mesh topology comes in two types:
Full Mesh: All hosts have a point-to-point connection to every other host in the
network. Thus, a full mesh of n hosts requires n(n-1)/2 connections. It provides the
most reliable network structure among all network topologies.
Partial Mesh: Not all hosts have a point-to-point connection to every other host.
Hosts connect to each other in some arbitrary fashion. This topology is used where we
need to provide reliability only to some of the hosts.
Tree Topology
Also known as Hierarchical Topology, this is the most common form of network topology in
use presently. This topology imitates an extended Star topology and inherits properties of the
Bus topology.
All neighboring hosts have a point-to-point connection between them. Similar to the Bus
topology, if the root goes down, then the entire network suffers, even though it is not a
single point of failure. Every connection serves as a point of failure, the failing of which
divides the network into unreachable segments.
Daisy Chain
This topology connects all the hosts in a linear fashion. Similar to Ring topology, all hosts
are connected to only two hosts, except the end hosts. This means that if the end hosts in a
daisy chain are connected, it becomes a Ring topology.
Each link in a daisy chain topology represents a single point of failure. Every link failure
splits the network into two segments. Every intermediate host works as a relay for its
immediate hosts.
Hybrid Topology
A network structure whose design contains more than one topology is said to be hybrid
topology. Hybrid topology inherits merits and demerits of all the incorporating topologies.
A hybrid topology may, for example, combine attributes of Star, Ring, Bus, and Daisy-chain
topologies. Most WANs are connected by means of Dual-Ring topology, and the networks
connected to them are mostly Star topology networks. The Internet is the best example of a
large hybrid topology.
Local Area Networks - Basic Hardware Components
All networks are made up of basic hardware building blocks to interconnect network nodes,
such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition,
some method of connecting these building blocks is required, usually in the form of galvanic
cable (most commonly Category 5 cable). Less common are microwave links (as in IEEE
802.11) or optical cable ("optical fiber").
A network card, network adapter or NIC (network interface card) is a piece of computer
hardware designed to allow computers to communicate over a computer network. It
provides physical access to a networking medium and often provides a low-level addressing
system through the use of MAC addresses. It allows users to connect to each other either by
using cables or wirelessly.
Repeaters
A repeater is an electronic device that receives a signal and retransmits it at a higher power
level, or to the other side of an obstruction, so that the signal can cover longer distances
without degradation. In most twisted pair Ethernet configurations, repeaters are required for
cable runs longer than 100 meters.
Hubs
A hub contains multiple ports. When a packet arrives at one port, it is copied to all the ports
of the hub for transmission. When the packets are copied, the destination address in the frame
does not change to a broadcast address. It does this in a rudimentary way: It simply copies the
data to all of the Nodes connected to the hub.
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the
OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn
which MAC addresses are reachable through specific ports. Once the bridge associates a port
and an address, it will send traffic for that address only to that port. Bridges do send
broadcasts to all ports except the one on which the broadcast was received.
Bridges learn the association of ports and addresses by examining the source addresses of
the frames they see on the various ports. Once a frame arrives through a port, its source
address is stored and the bridge assumes that the MAC address is associated with that port.
The first time that a previously unknown destination address is seen, the bridge will forward
the frame to all ports other than the one on which the frame arrived.
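The learning-and-forwarding behaviour described above can be sketched as a small Python class. The port numbers and MAC addresses are made-up example values.

```python
# Minimal sketch of how a bridge/switch learns which MAC address lives behind
# which port and then filters traffic accordingly.
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source address is reachable through the arriving port.
        self.mac_table[src_mac] = in_port
        # Forward: to the known port, or flood to all other ports if unknown.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3, 4])
print(bridge.receive(1, "aa:aa", "bb:bb"))   # bb:bb unknown -> flood to 2, 3, 4
print(bridge.receive(2, "bb:bb", "aa:aa"))   # aa:aa was learned on port 1 -> [1]
print(bridge.receive(3, "cc:cc", "bb:bb"))   # bb:bb now known on port 2 -> [2]
```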
Switches
A switch is a device that performs switching. Specifically, it forwards and filters OSI layer 2
datagrams (chunk of data communication) between ports (connected cables) based on the
MAC addresses in the packets. This is distinct from a hub in that it only forwards the
datagrams to the ports involved in the communications rather than all ports connected.
Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3)
which is necessary for communicating between network segments or within a large or
complex LAN. Some switches are capable of routing based on IP addresses but are still called
switches as a marketing term. A switch normally has numerous ports, with the intention being
that most or the entire network is connected directly to the switch, or another switch that is in
turn connected to a switch.
Switch is a marketing term that encompasses routers and bridges, as well as devices that may
distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may
operate at one or more OSI model layers, including physical, data link, network, or transport
(i.e., end-to-end). A device that operates simultaneously at more than one of these layers is
called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to
understand networking. Many experienced network designers and operators recommend
starting with the logic of devices dealing with only one protocol level, not all of which are
covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting
particular implementations, but multilayer switching is simply not a real-world design
concept.
Routers
Routers are networking devices that forward data packets between networks using
headers and forwarding tables to determine the best path to forward the packets. Routers
work at the network layer of the TCP/IP model or layer 3 of the OSI model. Routers also
provide interconnectivity between like and unlike media (RFC 1812). This is accomplished
by examining the header of a data packet and making a decision on the next hop to which it
should be sent (RFC 1812). They use preconfigured static routes, the status of their hardware
interfaces, and routing protocols to select the best route between any two subnets. A router is
connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP's
network. Some DSL and cable modems, for home (and even office) use, have been integrated
with routers to allow multiple home/office computers to access the Internet through the same
connection. Many of these new devices also consist of wireless access points (WAPs) or
wireless routers to allow for IEEE 802.11g/b wireless enabled devices to connect to the
network without the need for cabled connections.
Ethernet, IEEE 802.3, is one of the most widely used standards for computer networking and
general data communications.
The Ethernet standard has been used for many years, being steadily updated to meet the
requirements of growing technology. Data communication speeds have steadily risen and
Ethernet, IEEE 802.3, has increased its speeds accordingly. To many, Ethernet is
familiar because Ethernet connections are used for wired connectivity for computers, but it
also provides the backbone for many other data communications systems, large and small.
The original Ethernet Version 1 specification formed the
basis for the first IEEE 802.3 standard that was approved in 1983, and finally published as an
official standard in 1985.
Since the first Ethernet standards were written and approved, many updates have been
introduced to keep the Ethernet standard in line with the latest technologies that have
become available.
The Ethernet IEEE 802.3 LAN can be considered to consist of two main elements:
Interconnecting media: The media through which the signals propagate is of great
importance within the Ethernet network system. It governs the majority of the properties
that determine the speed at which the data may be transmitted. There are a number of
options that may be used:
Coaxial cable: This was one of the first types of interconnecting media to be used
for Ethernet. Typically the characteristic impedance was around 110 ohms and
therefore the cables normally used for radio frequency applications were not
applicable.
Twisted pair cable: Two types of twisted pair may be used: Unshielded Twisted
Pair (UTP) or Shielded Twisted Pair (STP). Generally the shielded types are better
as they limit stray pickup more and therefore data errors are reduced.
Fibre optic cable: Fibre optic cable is being used increasingly as it provides very
high immunity to pickup and radiation as well as allowing very high data rates to be
communicated.
Network nodes: The network nodes are the points to and from which the
communication takes place. The network nodes also fall into two categories:
Data Terminal Equipment - DTE: These devices are either the source or destination
of the data being sent. Devices such as PCs, file servers, print servers and the like fall
into this category.
Data Communications Equipment - DCE: Devices that fall into this category receive
and forward the data frames across the network, and they may often be referred to as
'Intermediate Network Devices' or Intermediate Nodes. They include items such as
repeaters, routers, switches or even modems and other communications interface
units.
There are several network topologies that can be used for Ethernet communications. The
actual form used will depend upon the requirements.
Point to point: This is the simplest configuration as only two network units are used. It
may be a DTE to DTE, DTE to DCE, or even a DCE to DCE. In this simple structure the
cable is known as the network link. Links of this nature are used to transport data from
one place to another and where it is convenient to use Ethernet as the transport
mechanism.
Coaxial bus: This type of Ethernet network is rarely used these days. The systems used
a coaxial cable where the network units were located along the length of the cable. The
segment lengths were limited to a maximum of 500 metres, and it was possible to place
up to 1024 DTEs along its length. Although this form of network topology is not installed
these days, a very few legacy systems might just still be in use.
Star network: This type of Ethernet network has been the dominant topology since the
early 1990s. It consists of a central network unit, which may be what is termed a multi-
port repeater or hub, or a network switch. All the connections to other nodes radiate out
from this and are point to point links.
Ethernet Technologies
The Ethernet data frame can be split into several elements:
PRE This is the Preamble and it is seven bytes long and it consists of a series of alternating
ones and zeros. This warns the receivers that a data frame is coming and it allows them to
synchronise to it.
SOF This is the Start Of Frame delimiter. This is only one byte long and comprises a
pattern of alternating ones and zeros ending with two bits set to logical "one". This indicates
that the next bit in the frame will be the destination address.
DA This is the Destination Address and it is six bytes in length. This identifies the receiver
that should receive the data. The left-most bit in the left-most byte of the destination address
immediately follows the SOF.
SA This is the Source Address and again it is six bytes in length. As the name implies it
identifies the source address.
Length / Type This two byte field indicates the payload data length. It may also provide
the frame ID if the frame is assembled using an alternative format.
Data This section has a variable length according to the amount of data in the payload. It
may be anywhere between 46 and 1500 bytes. If the length of data is below 46 bytes, then
dummy data is transmitted to pad it out to reach the minimum length.
FCS This is the Frame Check Sequence which is four bytes long. This contains a 32 bit
cyclic redundancy check (CRC) that is used for error checking.
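The frame fields described above (excluding the preamble and SOF, which are added by the hardware) can be assembled with a short Python sketch. The addresses and payload are made-up values, and zlib.crc32 is used only to illustrate a 32-bit frame check sequence; the bit ordering of the real Ethernet FCS differs from this simplification.

```python
# Minimal sketch that assembles the Ethernet frame fields described above:
# destination address, source address, length/type, payload padded to 46
# bytes, and a 32-bit frame check sequence.
import struct, zlib

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    if len(payload) < 46:                            # pad short payloads with dummy data
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("!I", zlib.crc32(header + payload))   # simplified 4-byte FCS
    return header + payload + fcs

frame = build_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),           # broadcast destination
    src_mac=bytes.fromhex("020000000001"),           # locally administered example address
    ethertype=0x0800,                                # IPv4 in the Length/Type field
    payload=b"hello",
)
print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum Ethernet frame size
```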
Although the 1000Base-T version of Gigabit Ethernet is probably the most widely used, the
802.3ab specification also details versions that can operate over other media:
1000Base-CX This was intended for connections over short distances up to 25 metres per
segment and using a balanced shielded twisted pair copper cable. However it was
succeeded by 1000Base-T.
1000Base-LX This is a fiber optic version that uses a long wavelength
1000Base-SX This is a fiber optic version of the standard that operates over multi-mode
fiber using a 850 nanometer, near infrared (NIR) light wavelength
1000Base-T Also known as IEEE 802.3ab, this is a standard for Gigabit Ethernet over
copper wiring, but requires Category 5 (Cat 5) cable as a minimum.
The specification for Gigabit Ethernet provides for a number of requirements to be met.
These can be summarized as the points below:
Provide for half and full duplex operation at speeds of 1000 Mbps.
Use the CSMA/CD access method with support for one repeater per collision domain.
Like 10Base-T and 100Base-T, the predecessors of Gigabit Ethernet, the system is a physical
(PHY) and media access control (MAC) layer technology, specifying the Layer 2 data link
layer of the OSI protocol model. It complements upper-layer protocols TCP and IP, which
specify the Layer 4 transport and Layer 3 network portions and enable communications
between applications.
Token Bus is described in the IEEE 802.4 specification, and is a Local Area Network (LAN)
in which the stations on the bus or tree form a logical ring. Each station is assigned a place in
an ordered sequence, with the last station in the sequence being followed by the first, as
shown below. Each station knows the address of the station to its "left" and "right" in the
sequence.
A Token Bus network
This type of network, like a Token Ring network, employs a small data frame only a few
bytes in size, known as a token, to grant individual stations exclusive access to the network
transmission medium. Token-passing networks are deterministic in the way that they control
access to the network, with each node playing an active role in the process. When a station
acquires control of the token, it is allowed to transmit one or more data frames, depending on
the time limit imposed by the network. When the station has finished using the token to
transmit data, or the time limit has expired, it relinquishes control of the token, which is then
available to the next station in the logical sequence. When the ring is initialized, the station
with the highest number in the sequence has control of the token.
Token Bus networks were conceived to meet the needs of automated industrial
manufacturing systems and owe much to a proposal by General Motors for a networking
system to be used in their own manufacturing plants - Manufacturing Automation
Protocol (MAP). Ethernet was not considered suitable for factory automation systems
because of the contention-based nature of its medium access control protocol, which meant
that the length of time a station might have to wait to send a frame was unpredictable.
Ethernet also lacked a priority system, so there was no way to ensure that more important
data would not be held up by less urgent traffic.
A token-passing system in which each station takes turns to transmit a frame was
considered a better option, because if there are n stations, and each station takes T seconds to
send a frame, no station has to wait longer than nT seconds to acquire the token. The ring
topology of existing token-passing systems, however, was not such an attractive idea, since a
break in the ring would cause a general network failure. A ring topology was also considered
to be incompatible with the linear topology of assembly-line or process control systems.
Token Bus was a hybrid system that provided the robustness and linearity of a bus or tree
topology, whilst retaining the known worst-case performance of a token-passing medium
access control method.
The transmission medium most often used for broadband Token Bus networks is 75 Ohm
coaxial cable (the same type of cable used for cable TV), although alternative cabling
configurations are available. Both single and dual cable systems may be used, with or without
head-ends. Transmission speeds vary, with data rates of 1, 5 and 10 Mbps being common.
The analogue modulation schemes that can be used include phase-continuous frequency shift
keying, phase-coherent frequency shift keying, and multilevel duobinary amplitude-modulated
phase shift keying. When the logical ring is initialised, stations are inserted into it in order of
station address, starting with the highest. The token itself is passed from higher to lower
addresses. Once a station acquires the token, it has a fixed time period during which it may
transmit frames, and the number of frames which can be transmitted by each station during
this time period will depend on the length of each frame. If a station has no data to send, it
simply passes the token on to the next station in the logical ring.
The Token Bus standard defines four classes of priority for traffic - 0, 2, 4, and 6 -
with 6 representing the highest priority and 0 the lowest. Each station maintains four internal
queues that correspond to the four priority levels. As a frame is passed down to the MAC
sublayer from a higher-layer protocol, its priority level is determined, and it is assigned to the
appropriate queue. When a station acquires the token, frames are transmitted from each of the
four queues in strict order of priority. Each queue is allocated a specific time slot, during
which frames from that queue may be transmitted. If there are no frames waiting in a
particular queue, the token immediately becomes available to the next queue. If the token
reaches level 0 and there are no frames waiting, it is immediately passed to the next station in
the logical ring. The whole process is controlled by timers that are used to allocate time slots
to each priority level. If any queue is empty, its time slot may be allocated for use by the
remaining queues.
The priority scheme guarantees level 6 data a known fraction of the network bandwidth, and
can therefore be used to implement a real-time control system. As an example, if a network
running at 10 Mbps and having fifty stations has been configured so that level 6 traffic is
allocated one-third of the bandwidth, each station has a guaranteed bandwidth of 67 kbps for
level 6 traffic. The available high priority bandwidth could thus be used to synchronise robots
on an assembly line, or to carry one digital voice channel per station, with some bandwidth
left over for control information.
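The worked example above can be checked with a few lines of Python:

```python
# Quick check of the worked example: a 10 Mbps network, 50 stations, and
# one-third of the bandwidth reserved for priority-6 traffic.
network_bps = 10_000_000
stations = 50
level6_share = 1 / 3

per_station_bps = network_bps * level6_share / stations
print(round(per_station_bps / 1000))   # ~67 kbps of guaranteed level-6 bandwidth per station
```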
The Token Bus frame format contains the following fields. The Preamble field is used to
synchronise the receiver's clock. The Start Delimiter and End Delimiter fields are used to mark
the start and end of the frame, and contain an analogue encoding of symbols other than 0s and
1s that cannot occur accidentally within the frame data. For this reason, a length field is not
required.
The Frame Control field identifies the frame as either a data frame or a control frame. For
data frames, it includes the priority level of the frame, and may also include an indicator
requiring the destination station to acknowledge correct or incorrect receipt of the frame. For
control frames, the field specifies the frame type.
The Destination and Source address fields contain either a 2-byte or a 6-byte hardware
address for the destination and source stations respectively (a given network must use either
2-byte or 6-byte addresses consistently, not a mixture of the two). If 2-byte addresses are
used, the Data field can be up to 8,182 bytes. If 6-byte addresses are used, it is limited to
8,174 bytes. The Checksum is used to detect transmission errors.
Token Ring was developed by IBM in the 1970s and is described in the IEEE 802.5
specification. It is no longer widely used in LANs. Token passing is the method of medium
access, with only one token allowed to exist on the network at any one time. Network devices
must acquire the token to transmit data, and may only transmit a single frame before releasing
the token to the next station on the ring. When a station has data to transmit, it acquires the
token at the earliest opportunity, marks it as busy, and attaches the data and control
information to the token to create a data frame, which is then transmitted to the next station
on the ring. The frame will be relayed around the ring until it reaches the destination station,
which reads the data, marks the frame as having been read, and sends it on around the ring.
When the sender receives the acknowledged data frame, it generates a new token, marks it as
being available for use, and sends it to the next station. In this way, each of the other stations
on the ring will get an opportunity to transmit data.
Token Ring networks provide a priority system that allows administrators to designate
specific stations as having a higher priority than others, allowing those stations to use the
network more frequently by setting the priority level of the token so that only stations with
the same priority or higher can use the token (or reserve the token for future use). Stations
that raise a token's priority must reinstate the priority level previously in force once they have
used the token. In a Token Ring network, one station is arbitrarily selected to be the active
monitor. The active monitor acts as a source of timing information for other stations, and
performs various maintenance functions, such as generating a new token as and when
required, or preventing rogue data frames from endlessly circulating around the ring. All of
the stations on the ring have a role to play in managing the network, however. Any station
that detects a serious problem will generate a beacon frame that alerts other stations to the
fault condition, prompting them to carry out diagnostic activities and attempt to re-configure
the network.
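The single-token behaviour described above can be illustrated with a very simplified sketch. The station names, the frame representation and the round-robin loop below are invented purely for illustration and do not correspond to any real Token Ring implementation.

# Minimal sketch of single-token ring operation (illustrative only).
from collections import deque

stations = ["A", "B", "C", "D"]                        # stations around the ring (hypothetical)
pending = {"A": ["data for C"], "C": ["data for B"]}   # frames waiting at each station

ring = deque(stations)
for _ in range(2 * len(stations)):            # let the token circulate a couple of times
    holder = ring[0]
    if pending.get(holder):                   # station with data seizes the free token
        payload = pending[holder].pop(0)
        print(f"{holder} transmits frame: {payload!r}")
        # the frame circulates the ring, is read by the destination, and returns to the
        # sender, which then releases a new free token to the next station
    ring.rotate(-1)                           # pass the token to the next station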
Frame format
Two basic frame types are used - tokens, and data/command frames. The token is
three bytes long and consists of a start delimiter, an access control byte, and an end delimiter.
The format of the token is shown below.
A data/command frame has the same fields as the token, plus several additional fields. The
format of the data/command frame is shown below.
Access control byte - contains the priority field, the reservation field, the token
bit and a monitor bit.
Frame control byte - indicates whether the frame contains data or control
information. In a control frame, this byte specifies the type of control
information carried.
Destination and source addresses - two six-byte fields that identify the
destination and source station MAC addresses.
Data - the maximum length is limited by the ring token holding time, which
defines the maximum time a station can hold the token
Frame check sequence (FCS) - filled by the source station with a calculated
value dependent on the frame contents. The destination station recalculates the
value to determine whether the frame was damaged in transit. If so, the frame
is discarded.
End delimiter - signals the end of the token or frame, and contains bits that
may be used to indicate a damaged frame, and to identify the last frame in a
logical sequence.
Frame status - a one-byte field that terminates a frame, and includes the one-
bit address-recognized and frame-copied fields. These one-bit fields, if set,
provide confirmation that the frame has been delivered to the source address
and the data read. Both fields are duplicated within the frame status byte.
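As an illustration of the access control byte described in this list, the small sketch below unpacks its sub-fields, assuming the usual IEEE 802.5 bit layout of three priority bits, a token bit, a monitor bit and three reservation bits.

# Decode an IEEE 802.5 access control byte (bit layout assumed as PPP T M RRR).
def decode_access_control(byte):
    return {
        "priority":    (byte >> 5) & 0b111,
        "token_bit":   (byte >> 4) & 0b1,    # 0 = free token, 1 = start of a data/command frame
        "monitor_bit": (byte >> 3) & 0b1,    # set by the active monitor as the frame passes
        "reservation": byte & 0b111,
    }

print(decode_access_control(0b0101_1010))
# {'priority': 2, 'token_bit': 1, 'monitor_bit': 1, 'reservation': 2}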
If the network is quiet and none of the stations has any data to transmit, the token simply
circulates around the ring continuously. When a station has data to transmit, it waits until it
receives the token, marks it as "busy" by setting the token bit, adds the data and/or control
information to create a data or command frame, and transmits the frame to the next station.
Each station that receives the frame will re-transmit the frame to the next station until it
reaches the destination station. This station reads the data, sets the address
recognized and frame copied bits in the frame status field, and transmits the frame to the next
node. When the frame arrives back at its point of origin, the originating station generates a
new token, which it transmits to the next station, even if it has further data to send. In this
way, each station on the network has an equal number of opportunities to transmit data. Because
only one token is allowed to exist on the network, only one station can transmit at any one
time, and collisions cannot occur. Although the IEEE 802.5 specification reflects IBM's
Token Ring technology, the specifications differ slightly. IBM specifies a star topology, with
all end stations star-wired to a multi-station access unit (MSAU), whereas IEEE 802.5 does
not specify a topology (although virtually all IEEE 802.5 implementations were based on a
star). In addition, IEEE 802.5 does not specify a media type, while IBM originally specified
shielded twisted pair cable. The table below summarises the IBM and IEEE 802.5
specifications.
Fault Management
One station (it can be any station on the network) is selected to be the active monitor. The
active monitor acts as a central source of timing information for the other stations on the
network, and performs various maintenance functions, including making sure that there is
always a token available on the network. The active monitor also sets the monitor bit on any
data or command frame it encounters on the ring so that, in the event that a sending device
fails after transmitting a frame, the frame can be prevented from circling the ring endlessly
and thereby denying access to the network for other stations. If the active monitor receives a
frame with the monitor bit already set, it removes the frame from the ring and generates a
new token.
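The monitor bit mechanism can be sketched as follows; the frame representation and function are hypothetical and intended only to show the logic of purging an orphaned frame.

# Sketch of the active monitor's use of the monitor bit (illustrative only).
def active_monitor_check(frame):
    """Return the frame to forward, or None if it must be purged from the ring."""
    if frame["monitor_bit"]:
        # The frame has already passed the active monitor once, so its sender has
        # evidently failed to remove it: purge it and generate a new token instead.
        print("Orphan frame detected - removing it and issuing a new token")
        return None
    frame["monitor_bit"] = 1      # mark the frame on its first pass
    return frame

frame = {"monitor_bit": 0, "data": "hello"}
frame = active_monitor_check(frame)   # first pass: monitor bit is set
active_monitor_check(frame)           # second pass: frame is removed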
The use of a multi station access unit (or wiring center) in a star topology contributes to
network reliability, since these devices can be configured to check for problems and remove
faulty stations from the ring if necessary. A Token Ring algorithm called beaconing can be
used to detect certain types of network fault. When a station detects a serious problem on the
network (a cable break, for example), it transmits a beacon frame which initiates an auto-
reconfiguration process. Stations that receive a beacon frame perform diagnostic procedures
and attempt to reconfigure the network around the failed areas. Much of this reconfiguration
process can be handled internally by the MSAU. The MSAU contains relays that switch a
computer into the ring when it is turned on, or out of the ring when the computer is powered
off. A MSAU has a number of ports to which network devices can be connected, a ring-
out port allowing the unit to be connected to another MSAU, and a ring-in port that can
accept an incoming connection from another MSAU. A number of MSAUs can thus be
connected together in daisy-chain fashion to create a larger network. The ring-out port of the
last MSAU in the chain must be connected back to the ring-in port of the first MSAU.
Fibre Distributed Data Interface (FDDI) was developed by ANSI in the mid-1980s and
specifies a 100-Mbps token-passing dual-ring LAN using fibre-optic cable, which is
frequently used as high-speed backbone technology because of its high bandwidth and the
distances it can span (up to 100 kilometres). Due to the advent of fast Ethernet and Gigabit
Ethernet, the complexity of station management in FDDI networks, and its high cost, FDDI
has never gained a foothold in the LAN market. The dual-ring system consists of a primary
and a secondary ring, in which traffic on each ring flows in opposite directions. In normal
operation, the primary ring is used for data transmission and the secondary ring remains idle.
Up to 1000 devices can be connected to an FDDI network, with up to two kilometres between
stations when using multi-mode fibre, and even longer distances using single-mode fibre. There
are various ways in which FDDI devices can be connected to the network. A single
attachment station (SAS) is attached to the primary ring, usually via a concentrator.
Concentrators are devices which are similar in many ways to hubs on an Ethernet network,
and are usually dual attachment concentrators, attached to both rings. A dual attachment
station (DAS) is attached directly to both the primary and secondary rings.
Frame format
FDDI frames are similar to Token Ring frames, and can be up to 4,500 bytes in length. The
FDDI frame fields are summarized below.
The FDDI frame format
Preamble - a unique sequence that prepares each station for an upcoming frame
Frame control - indicates the size of the address fields and whether the frame
contains asynchronous or synchronous data, together with other control
information
Source Address - contains a 6-byte address that identifies the sending station
End delimiter - a bit pattern that indicates the end of the frame
Frame status - allows the source station to determine whether an error occurred
and whether the frame was recognized and copied by a receiving station
Distributed Queue Dual Bus (DQDB) is a data-link layer communication protocol for
Metropolitan Area Networks (MANs), specified in the IEEE 802.6 standard. DQDB is designed
for data as well as voice and video transmission and is based on cell-switching technology
(similar to ATM). DQDB, which permits multiple
systems to interconnect using two unidirectional logical buses, is an open standard that is
designed for compatibility with carrier transmission standards such as SMDS.
AppleTalk
Introduced more than a decade ago as Apple's first contribution to the field of
networked computing, AppleTalk was designed using the same "computing for the masses"
philosophy that had been so completely successful (at least initially) for their Macintosh line
of computer systems. It was easy to implement, featured relatively simple administrative
requirements, and in general caused fewer headaches for network administrators than did the
other network protocols popular at the time. Fortunately, the designers at Apple chose to
conform to the OSI open-standards model, which has made it much easier to administer,
troubleshoot, and use networks running AppleTalk as their primary protocol.
Phase 2 went above and beyond the achievements of Phase 1 by including support for
a number of important technological advances, most notably Token Ring networks. Apple
achieved the advances for Phase 2 through the implementation of the TokenTalk Link Access
Protocol (TLAP), essentially as described by the IEEE's 802.5 frame standard. TLAP faced
mixed emotions in the networking community, for while it could support a low-end speed of
4Mbps or an Ethernet-crushing maximum of 16Mbps, the infrastructure needed to support
Token Ring networks was administratively and financially more demanding than either
LocalTalk or traditional Ethernet networks. Additionally, Apple used the opportunity to
introduce augmented variations on the standard ELAP and TLAP frame standards, based on a
number of specifications included in the IEEE's 802.2 Logical Link Control (LLC) header,
adding increased reliability and efficiency to AppleTalk's bag of tricks.
VINES
The Virtual Networking System (VINES), courtesy of Banyan, is one of a number of other
networking environments that have competed for market-share alongside AppleTalk. Unlike
many of the other systems that perform similar functions, VINES diverges in a number of
regards, with the most interesting being that VINES is based on the UNIX operating system.
At the time of its introduction, VINES was a fairly revolutionary entrant into the networking
arena. It was multi-tasking, robust, flexible, and quite powerful--just about everything you'd
ever want in a computer system. However, while the VINES server is dependent on UNIX,
clients are available for a variety of the more popular desktop operating systems, supporting
Macintosh, DOS, OS/2, and others.
Of course, Banyan didn't want to be left behind in the proprietary protocol field. Just as
Apple decided to supplement the IEEE-sanctioned protocols with LocalTalk, Banyan decided
to go hog-wild and include a significant number of additional (and proprietary) protocols as
part of their networking environment. Fortunately for the rest of us, however,
Banyan did stick quite closely to the seven layers of the OSI model, so despite the proprietary
nature of many of their protocols, it is still quite easy to understand the function and
importance of each piece of the protocol stack.
Based partly on its UNIX lineage and partly on the ambition of its designers, VINES can
tackle just about any networking job that you'd care to throw at it. Thanks to a particularly
flexible physical layer supplemented by a robust data link layer, VINES is capable of
supporting an incredible variety of networking hardware. You'll find that VINES will support
both LAN and WAN connections, including everything from High-Level Data Link Control
(HDLC) and Link Access Procedure-Balanced (LAPB) to the more familiar implementations
along the IEEE/802.x standard, including Ethernet, Token Ring networks, and so on.
Of course, the lower layers are useless if not complemented by equally powerful higher
layers. VINES' upper layers are quite the dichotomy, insofar as they can be readily separated
into two distinct categories, proprietary and open. Banyan chose to implement a number of
protocols that mimicked--sometimes superbly, sometimes not--the publicly available
protocols that are part of TCP/IP. VINES' network layer includes specific protocols for
address resolution (VARP), a proprietary flavor of IP (VIP), and of TCP (VICP), among
others. Of course, also supported by VINES' network layer are all the protocols that you've
come to know and love--TCP/IP, ICMP, ARP, and so on. This proprietary/open mix works its
way up through the entire OSI model, featuring a combination of VINES-only
implementations alongside the more common DOS, Macintosh, and OS/2 protocols.
Token Ring/SNA
Token Ring and SNA networking have survived and prospered by the good graces of Big
Blue. Throughout their development in the 1970s and 1980s, IBM was one of the staunch
supporters for the IEEE's 802.5 Token Ring network specification, and this support blended
with IBM's Systems Network Architecture (SNA) scheme to achieve a remarkable synergy in
the networked computing arena.
Token Ring networks are designed in such a way as to create a continuous loop for data
transmission. This is most often not in the form of a physical loop, but rather a closed
electrical circuit that travels in and out of every device that is attached to the network at a
given time. Unlike traditional Ethernet networks that rely on the collision avoidance routines
of CSMA/CD, the designers of Token Ring decided to avoid the possibility of collisions
entirely by implementing a game of network `hot potato.' It works like this: Station one
receives the token, giving it the opportunity to send data across the network. Assuming that
station one has data, it will encode the data onto the token, add destination and other
information, and send the token (with the new data) ahead to the destination. Once the
destination machine receives the token, it will strip off the data and return the token to the
originating station, who will strip any remaining information away from the token and release
it back into the loop, providing subsequent stations the opportunity to transmit their
information.
The attraction (for some, anyway) of Token Ring networks comes from its unique routing
methods, which diverge rather significantly from other architectures, especially
Ethernet/IEEE 802.3. Token Ring networks use the Source Routing Protocol (IEEE 802.1)
that allows the originating station to determine the optimum route to its designated recipient.
This is facilitated by one of two separate methodologies, specifically the All Routes
Explorer (ARE) and the Spanning Tree Explorer (STE), which rely on the broadcast
transmission of multiple TEST and XID frames. ARE, which is the preferred method utilized
by SNA, broadcasts its frame, along with a destination address, to all of the rings of the
network, accumulating routing information along the way. Once the frame reaches the
destination address, it is returned to the sending machine--complete with all the collected
routing information--allowing the host to select a path to the destination. STE works in a
similar fashion. STE will send a single TEST or XID frame to each ring, where the
destination host will respond with a data set including all available routes known between
source and destination. The originating station will then select a route and retransmit the
route to the destination, allowing both sides of the connection to be aware of the intended
route.
The SNA architecture is quite similar to the seven layers of the OSI reference model. The
minor differences in the layer names and layer functionality are due to the fact that SNA
predates the ISO's open-standards initiative, its release in 1974 pre-dating OSI by nearly a
decade. However, despite this divergence, there remains a nearly one-to-one
relationship between the OSI and SNA layers, though some of the functions delegated to
specific OSI layers are somewhat shifted in the SNA stack.
NetWare
NetWare is probably the most popular network operating system (NOS) that is currently
available anywhere in the world. It controls the lion's share of the networking market with an
impressive (even mind-staggering) number of installations, despite the recent increase in
popularity of Microsoft's Windows NT product line. This strength has come, in part, from a
great marketing and promotional scheme. More to the point, the popularity of the NetWare
NOS has been earned as a result of the strong and versatile suite of protocols that serve as the
foundation for Novell's networking flagship.
Over the years, Novell has extensively rebuilt NetWare in response to the changing needs of
the networking community. In its earlier 286 incarnation, NetWare used a variety of
protocols to enable network communication. These included the NetWare Core Protocol
(NCP) as the heart of the NetWare product, which controlled file services; the Sequenced
Packet Exchange (SPX) protocol that facilitated application-level communication; the
Routing Information Protocol (RIP), Internetwork Packet Exchange (IPX) protocol, and the
Service Advertising Protocol (SAP) for the routing of data. As time progressed and Novell
found the need to support additional functionality and services, new protocols were integrated
into the system in order to keep up with the increasing pace of internetworking.
NetWare 3 built upon the earlier version based on the 286 architecture, adding increased
capacity for workstations, increasing the customization options for the NetWare file services,
and expanding upon the overall number of communications methods that were previously
supported by Novell's products. One of the strengths of the NetWare 3.x products centers
around the Open Data-Link Interface (ODI) which substitutes for the physical and data link
layers as developed by OSI. The ODI facilitates multi-protocol communication between LAN
and WAN hardware and other adapters, allowing one or more protocols to be bound to the
same host adapter. The ODI has greatly contributed to the success of NetWare, due primarily
to its ability to integrate a wide variety of open and proprietary network and transport layer
protocols into the NetWare environment, including: TCP/IP; AppleTalk; IPX/SPX; as well as
others. NetWare 4 goes just a bit further, though it supplements some mildly enhanced
protocols with additional core functionality, robustness, and overall performance.
TCP/IP
Although NetWare is probably the most widely implemented proprietary network operating
system, there can be no doubt as to the king of the protocol hill: TCP/IP. Not only is TCP/IP
probably the best-known of all networking protocols, it is also the most widely implemented,
for the simple reason that it is a truly open standard. The benefits of TCP/IP are many:
Everyone has access to the protocols themselves; documentation is easily obtainable; it is
eminently flexible; and is supported by many legacy products as well as almost every new
product developed in recent years. Additionally, because it is the heart and soul of the
Internet, the future of TCP/IP is anything but dim.
The TCP/IP stack (also known as the DoD protocols) follows the OSI seven-layer model in
function, though not strictly in form. Where OSI divides the stack into seven distinct layers,
TCP/IP subdivides into only four, though the functionality of these four layers extends across
the full range of the OSI model. The TCP/IP network access layer serves double-duty, filling
in for both the physical and data link layers of OSI. Moving up, the Internet layer is the
TCP/IP equivalent of the OSI network layer, with OSI's transport layer replaced by the
TCP/IP host-to-host layer. Finally, the higher-level function of the OSI model are combined
into the TCP/IP process/application layer.
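The layer correspondence described in this paragraph can be summarised programmatically; the sketch below simply records the approximate mapping stated above.

# Approximate mapping between the four TCP/IP layers and the seven OSI layers,
# as described in the text.
tcpip_to_osi = {
    "network access":      ["physical", "data link"],
    "internet":            ["network"],
    "host-to-host":        ["transport"],
    "process/application": ["session", "presentation", "application"],
}

for tcpip_layer, osi_layers in tcpip_to_osi.items():
    print(f"{tcpip_layer:22} -> {', '.join(osi_layers)}")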
The strengths of TCP/IP are many and varied. As we've discussed, it is extraordinarily
flexible, supporting a variety of network interfaces, including: Ethernet; ARCNET; FDDI;
and broadband networking options. A number of dynamic routing protocols are an integral
part of TCP/IP as well, using the Xerox-designed Routing Information Protocol (RIP) and
(later) the Open Shortest Path First (OSPF), which would incorporate load-balancing and
other optimization functions into the process of data routing.
Moving to the heart of the matter, the two Host-to-Host level protocols that keep the TCP/IP
process moving along are the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP). TCP is the protocol used for transmissions that require highly
reliable connections, including electronic mail services (SMTP), file transfer (FTP), and so
on. TCP works in a streaming environment, breaking each data stream into segments of up to
65,535 octets. These segments are then tracked and transmitted, allowing the stack to provide the
data-integrity services such as flow and error control. The downside to TCP is that there is a
tangible tradeoff for the increased reliability of the connection: overhead. All TCP
transmissions require a header whose length must be a minimum of 20 octets, which--
depending on what you're sending across the network--may or may not be a reasonable
amount of extra work for your machines.
UDP, on the other hand, is a perfect choice for certain applications that do not require
extreme degrees of reliability. UDP sacrifices flow control, error detection and handling, as
well as other functions to achieve a maximum header size of 8 octets, which allows for
significantly improved network performance for certain functions.
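From an application programmer's point of view, the difference between the two transports shows up in the socket type used. The sketch below is illustrative only: it assumes network access, and the host names, addresses and ports are placeholders rather than real services.

# Minimal sketch contrasting TCP and UDP from an application's point of view.
import socket

# TCP: connection-oriented, reliable byte stream (20-byte minimum header per segment)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp_sock:
    tcp_sock.connect(("example.com", 80))           # three-way handshake happens here
    tcp_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = tcp_sock.recv(4096)

# UDP: connectionless datagrams, no flow or error control (8-byte header)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp_sock:
    udp_sock.sendto(b"hello", ("192.0.2.1", 9999))  # fire-and-forget; no handshake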
As we move into the higher TCP/IP layers, we find that the process/application layer of the
TCP/IP stack controls a number of protocols essential to the proper functioning of TCP/IP-
based networks. These protocols include: the electronic messaging Simple Mail Transfer
Protocol (SMTP); file transfer (FTP); the foundation of the World Wide Web, Hypertext
Transfer Protocol (HTTP); as well as network management via the Simple Network Management
Protocol (SNMP). These protocols enable the feature-by-feature functionality that allows
users and administrators to interact with the lower layers and obtain the information that they
need.
The subnet consists of transmission lines and switching elements. The switching elements are
specialised computers used to interconnect two or more networks. When a data packet arrives
on an incoming transmission line, the switching element must select an outgoing transmission
line on which to forward it. If the required output line is not immediately available, the packet
will be stored until the line becomes free. These switching elements are also known
as intermediate systems, and more commonly as routers. If two routers are not directly
connected but need to communicate, they must do so indirectly via other routers. The
diagram below illustrates the relationship between the subnet and the individual LANs.
Introduction
Dial-up connections
Integrated Services Digital Network (ISDN)
Asymmetric digital subscriber line (ADSL)
Cable
Satellite
Internet connection via a LAN
Broadband Internet access technologies include those that utilise the subscriber loop of the
telephone system, and those that employ alternative transmission media such as cable or
satellite microwave. A broadband Internet connection can be loosely defined as one that
affords a higher data rate than a standard digital voice channel (i.e. greater than 64 kilobits
per second). Using this definition, ISDN would not normally be considered a broadband
technology, although it is significantly faster than dial-up, and has the added advantage of
being able to provide simultaneous voice and data communication. The most widely
deployed broadband technologies in the United Kingdom are ADSL, which utilises existing
analogue telephone lines, and cable, which utilises the broadband cable TV network
infrastructure developed from the mid 1990s onwards. Satellite-based Internet services are
also available, although current offerings are significantly more expensive than either ADSL
or cable services, and provide data rates that are significantly lower.
Dial-up connections
Dial-up Internet access is typically used by home users and small businesses who do not
require high-speed Internet access, or who cannot currently receive broadband services.
To connect a computer to the Internet using a standard twisted-pair telephone line, a device
known as a modem (MOdulator-DEModulator) is required. The modem provides an interface
between the digital computer system and the analogue telephone system, and is sometimes
referred to as an analogue modem to distinguish it from other types of modem. Binary data
from the computer is sent to the modem, which uses it to modulate an analogue carrier wave
with a signalling frequency (or baud rate) of 2400 hertz. Incoming signals must
be demodulated to extract the binary data encoded within them so that it can be interpreted by
the computer. The modem is connected to a telephone cable that plugs into a standard
telephone wall socket. There are two basic types of modem - external and internal.
An internal modem for a personal computer fits into an expansion slot on the
computer's motherboard in such a way that an RJ11 telephone socket is accessible at the back
of the computer (see below). A cable with the appropriate type of connectors at each end is
used to connect the modem to a telephone socket. The connector that plugs into the internal
modem is known as an RJ11 connector. The connector on the other end of the cable is a
standard BT431a telephone jack.
Internal modems are relatively inexpensive and are usually designed to fit into a
standard PCI expansion slot on the computer motherboard. Most PCI modems are software
modems, because they rely on the computer's CPU to do some of the processing that was
previously carried out by components built into the older hardware modems, which were
quite large, and designed to fit into the older ISA expansion slots. As a result, PCI modems
are usually much smaller.
Most modern telephone exchanges are fully digital, and the core telephone network
carries both digitised voice data and all other forms of binary data over high-bandwidth fibre
optic cables. Analogue voice signals arriving at a local telephone exchange via a
caller's subscriber line (or local loop) must be digitised before they can be sent through the
digital exchanges and trunk lines that make up the core telephone network. They must later
be converted back into an analogue signal in order to be sent along another analogue local
loop to the remote subscriber's telephone. ISDN essentially provides a digital subscriber loop,
replacing the bandwidth-restricted analogue circuit with a digital circuit using existing
twisted-pair telephone lines.
ISDN comes in two main flavors - Basic Rate Interface (BRI) and Primary Rate
Interface (PRI). PRI is used widely by businesses to provide multiple digital telephone lines
(up to 30 channels down a single 2.048 Mbps (E1) coaxial cable or optical fibre). BRI is
delivered over a single wire pair from a digital switch at the local exchange to the network
termination (NT) at the customer's premises, and consists of two B (bearer) channels of
64kbps each for data, and one D channel of 16kbps which is used for setting up and clearing
down calls, or communicating with the telephone network. The BRI service can be used to
provide two telephone lines, but the main reason we are interested in it is that it is commonly
used to provide a single digital telephone line and a 64kbps digital Internet connection
(although the two B channels can be bonded together to provide a single 128 kbps data
channel). The diagram below shows a BRI connection.
A BRI connection
At the subscriber's premises, the NT1 device provides the interface between the 2-wire
incoming telephone line and the 4-wire S/T interface, which is essentially a bus topology
terminated with a 100 ohm resistor. Up to eight ISDN Terminal Equipment 1 (TE1) devices can
be connected to the S/T interface, although only two digital channels are available at any one
time. If an analogue Terminal Equipment 2 (TE2) device (such as an analogue telephone) is
required to be connected, a Terminal Adapter (TA) must be used between the S/T interface
and the TE2 device.
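The aggregate capacity of a Basic Rate Interface follows directly from the channel structure described above (two 64 kbps B channels plus one 16 kbps D channel); the sketch below just performs that arithmetic.

# Aggregate capacity of an ISDN Basic Rate Interface (2B + D), using the channel rates above.
b_channels = 2
b_rate_kbps = 64       # each bearer (B) channel
d_rate_kbps = 16       # signalling (D) channel

data_only = b_channels * b_rate_kbps            # both B channels bonded for data
total = data_only + d_rate_kbps                 # including the D channel
print(f"Bonded data rate: {data_only} kbps, total BRI capacity: {total} kbps")
# Bonded data rate: 128 kbps, total BRI capacity: 144 kbps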
The connection time for an ISDN line is fast (approximately half a second or so) by
comparison with the connection time required for an analogue dial-up connection. In
addition, each B channel has a guaranteed bandwidth of 64kbps, and telephone calls can be
made without disrupting Internet access (and vice versa). A computer is a non-ISDN device,
and as such requires a terminal adapter (an ISDN modem or router) in order to connect to the
ISDN network.
A typical ISDN setup
Basic rate ISDN services have been widely used in Europe to provide digital telephony and
Internet access. High installation costs and rental charges have discouraged their use for
Internet access in the UK, and cheap broadband cable and DSL services have rendered ISDN
largely obsolete as a viable alternative for Internet access.
Digital subscriber line broadband services in the UK take the form of ADSL (asymmetric
DSL) and operate over standard twisted pair telephone lines. The use of analogue telephone
lines imposes limitations on how far from the exchange subscribers can be and still receive
ADSL broadband service. ADSL is still not available everywhere, but installation and service
costs have fallen as connection speeds have increased, making ADSL a viable alternative to
cable for high-speed Internet access. ADSL provides an always-on connection for a standard
monthly charge. Furthermore, the channels used for Internet data use a completely different
set of frequencies from those used for voice communication, so phone calls and Internet traffic
can share the same telephone line simultaneously without interfering with one another.
An ADSL connection
The asymmetric nature of ADSL means that the downstream bandwidth (i.e. from the
exchange to the subscriber's premises) is far greater than the upstream bandwidth (from the
subscriber's premises to the exchange). This is acceptable to most non-business subscribers,
since they tend to download far more data than they upload. It also takes advantage of the fact
that crosstalk at the exchange is far greater than at the subscriber's premises, simply because
of the large number of twisted pair telephone lines entering the exchange in close proximity
to one another.
The lower end of the frequency spectrum (26 - 138 kHz) is therefore used to transmit
data in the upstream direction, since signals at these frequencies will degrade far less due to
attenuation and therefore not be so susceptible to crosstalk at the exchange. The higher
frequency band (138 kHz - 1.1 MHz) is used to transmit the downstream data, and has a
sufficiently high signal-to-noise ratio (SNR) as it leaves the exchange to ensure that the
signal can be detected by the subscriber's equipment. The much broader range of frequencies
available for downstream transmission allows a much higher data rate in this direction.
The ADSL frequency spectrum
The maximum data rates currently being quoted for standard ADSL are 8Mbps download
speeds and 1Mbps upload speeds. The upload data rate can be increased by increasing the
bandwidth of the upstream channel, at the expense of the downstream channel. ADSL2
specifications use frequencies up to 2.2 MHz, effectively doubling the bandwidth and
providing theoretical maximum downstream data rates of up to 24 Mbps. It should be noted,
however, that these higher frequencies are far more susceptible to attenuation, which means
that the higher data rates could only be achieved if the subscriber was within two kilometres
or so of an ADSL-enabled exchange.
In fact, distance from the exchange is the main factor that determines whether or not a
subscriber can obtain an ADSL connection. Because attenuation and external interference
can be worse on some frequencies than on others, the upstream and downstream channels are
sub-divided into a number of 4.3125 kHz channels. When the ADSL modem is turned on, it
tests each sub-channel to determine which of the available channels has an acceptable signal-
to-noise ratio. Channels with a high error rate will not be used, resulting in a reduction in
throughput but significantly reducing the incidence of data transmission errors (the actual
throughput achieved is dependent on the number of sub-channels that are usable).
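The sub-channel test described above can be sketched very simply. The SNR measurements, the acceptance threshold and the per-channel data rate used below are invented for illustration; a real ADSL modem performs a far more sophisticated bit-loading calculation.

# Simplified sketch of the sub-channel test: each 4.3125 kHz sub-channel is kept
# only if its signal-to-noise ratio is acceptable (figures below are hypothetical).
SUBCHANNEL_WIDTH_KHZ = 4.3125
SNR_THRESHOLD_DB = 20                # assumed acceptance threshold
RATE_PER_USABLE_CHANNEL_KBPS = 32    # assumed contribution of one usable sub-channel

measured_snr_db = [35, 31, 28, 22, 19, 14, 25, 30]   # hypothetical measurements

usable = [snr for snr in measured_snr_db if snr >= SNR_THRESHOLD_DB]
throughput = len(usable) * RATE_PER_USABLE_CHANNEL_KBPS
print(f"{len(usable)} of {len(measured_snr_db)} sub-channels usable, "
      f"approximate throughput {throughput} kbps")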
One of the other benefits of ADSL, as opposed to either dial-up or ISDN connections,
is that it is an always-on service, eliminating the need for call set-up. The subscriber typically
has an ADSL modem or an ADSL router provided by the ISP, which connects the user's PC via a
USB port or via an Ethernet network interface card (NIC) using a Category 5 patch cable.
The ISP usually provides one or more ADSL splitters (or microfilters). A splitter allows a
single telephone outlet to be connected to both a standard analogue telephone and an ADSL
modem or router. The splitter filters out the ADSL frequencies from the telephone connection
using a low-pass filter, and filters out the telephone frequencies from the ADSL modem
connection using a high-pass filter. A basic analogue telephone service will thus always be
available, even if the Internet connection fails.
At the telephone exchange the line generally terminates at a digital subscriber line access
multiplexer (DSLAM). ADSL data is typically routed over the telephone company's data
network, and from there to an Internet backbone network. British Telecom's ADSL service
sends data via its ATM data network to its main Internet backbone, the Colossus IP network.
Frame Relay
Frame relay is a fast, connection-oriented, packet-switching technology (based on the
older X.25 packet switching technology) originally intended for use in ISDN networks, but
now widely used in a variety of local and wide area networks. It can be used, for example, to
interconnect local area networks. Unlike X.25, frame relay is purely a data-link layer
protocol, and does not provide error handling or flow control. When an error is detected in a
frame, it is dropped. Upper layer protocols are responsible for detecting and retransmitting
dropped frames. The devices attached to a frame relay network are either DTEs (data
terminal equipment), typically located on customer premises (for example, computer
terminals, routers, and bridges), or DCEs (data circuit-terminating equipment), which are
carrier-owned internetworking devices that provide network switching services (usually
packet switches). The diagram below shows the relationship between these devices.
Relationship between DTEs and DCEs in a frame relay WAN
Flags - a bit pattern that delimits the beginning and end of the frame
(01111110)
o Extended address (EA) - if set to 1, the current byte is the last in the DLCI
(current implementations use a two-byte DLCI)
o Congestion control - a 3-bit field that contains the FECN, BECN and DE bits
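For illustration, the sketch below decodes a two-byte frame relay address field into the DLCI and the congestion-control bits listed above, assuming the standard two-byte layout (6 DLCI bits, C/R and EA in the first byte; 4 DLCI bits, FECN, BECN, DE and EA in the second). The example bytes are arbitrary.

# Decode a two-byte frame relay address field (standard 2-byte DLCI layout assumed).
def decode_fr_address(b1, b2):
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),   # 6 high bits from byte 1, 4 low bits from byte 2
        "cr":   (b1 >> 1) & 1,                  # command/response bit
        "fecn": (b2 >> 3) & 1,                  # forward explicit congestion notification
        "becn": (b2 >> 2) & 1,                  # backward explicit congestion notification
        "de":   (b2 >> 1) & 1,                  # discard eligibility
        "ea":   (b1 & 1, b2 & 1),               # extended address bits (0 then 1 for a 2-byte DLCI)
    }

print(decode_fr_address(0x18, 0x61))   # arbitrary example bytes, DLCI = 102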
The frame relay local management Interface (LMI) extends the basic frame relay
specification to provide a number of features for managing complex internetworks.
The global addressing extension makes it possible for DLCI values to have global
significance, making them unique within the frame relay WAN, allowing individual network
interfaces (and the end user devices attached to them) to be identified using standard address-
resolution and discovery techniques. LMI virtual circuit status messages provide
communication and synchronisation between DTE and DCE devices, and are used to report
on the status of permanent virtual circuits. The LMI multicasting extension saves bandwidth
by allowing routing updates and address-resolution messages to be sent to specific multicast
groups of routers only.
LMI DLCI - identifies the frame as an LMI frame instead of a basic frame
relay frame
Call reference - always contains zeros (this field is not currently used)
An ATM network is made up of ATM switches and ATM end systems (e.g. workstations,
switches and routers). An ATM switch is responsible for cell transit through an ATM
network. It accepts an incoming cell from an ATM end system (or another ATM switch),
reads and updates the cell header information, and switches the cell to the appropriate output
interface. An ATM end system contains an ATM network interface card. ATM switches
support two types of interface - User-to-Network Interface (UNI) and Network-to-Network
Interface (NNI). These interfaces can be further categorised as either public or private. A
private UNI connects an ATM end system to a private ATM switch, while a public UNI
connects an ATM end system or a private switch to a public ATM switch. A private NNI
connects two private ATM switches, while a public NNI connects two public ATM switches.
Each cell consists of a 5-byte header and a 48-byte payload. The basic format is shown
below.
An ATM header can have one of two formats - User-Network Interface (UNI) or Network-
Node Interface (NNI). UNI is used for communication between end systems and switches.
NNI is used for communication between switches. The header formats are shown below.
ATM UNI and NNI cell formats
The following descriptions summarise the header fields (note that the NNI header does not
include the Generic Flow Control field, but has an additional four bits in the Virtual Path
Identifier field, allowing for a much larger number of virtual paths to be used):
GFC - the Generic Flow Control field provides local functions such as identifying
multiple stations that share a single ATM interface (it is typically not used, and is
set to a default value of 0).
VPI - the Virtual Path Identifier is used together with the Virtual Channel
Identifier (VCI) to identify the virtual circuit along which the cell will be directed
as it passes through an ATM network on the way to its destination.
VCI - the Virtual Channel Identifier is used together with the Virtual Path
Identifier (VPI) to identify the virtual circuit along which the cell will be directed
as it passes through an ATM network on the way to its destination (values of 0 to
31 are reserved).
PT - the first bit of the Payload Type field indicates whether the cell contains user
data (0) or control data (1). The second bit indicates whether there is congestion (0
= no congestion, 1 = congestion), and the third bit indicates whether or not the cell
is the last in a series of cells representing a single AAL5 frame (1 = last cell for
the frame).
CLP - the Cell Loss Priority bit field indicates whether the cell should be
discarded if it encounters extreme congestion as it moves through the network. If
set to 1, the cell should be discarded before cells that have the bit set to 0.
HEC - the Header Error Control field contains a checksum calculated on the first
4 bytes of the header. It can be used to correct a single bit error in these bytes,
preserving the cell rather than discarding it, or to detect multi-bit header errors (in
which case the cell is dropped).
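The field boundaries described above can be illustrated by unpacking a 5-byte UNI header. This is only a sketch of the bit layout; the example bytes are arbitrary.

# Parse a 5-byte ATM UNI cell header into its fields (layout as described above).
def parse_atm_uni_header(hdr: bytes):
    assert len(hdr) == 5
    b1, b2, b3, b4, b5 = hdr
    return {
        "gfc": b1 >> 4,                                     # Generic Flow Control (4 bits)
        "vpi": ((b1 & 0x0F) << 4) | (b2 >> 4),              # Virtual Path Identifier (8 bits)
        "vci": ((b2 & 0x0F) << 12) | (b3 << 4) | (b4 >> 4), # Virtual Channel Identifier (16 bits)
        "pt":  (b4 >> 1) & 0b111,                           # Payload Type (3 bits)
        "clp": b4 & 1,                                      # Cell Loss Priority (1 bit)
        "hec": b5,                                          # Header Error Control (8 bits)
    }

print(parse_atm_uni_header(bytes([0x00, 0x12, 0x34, 0x52, 0x35])))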
ATM addressing
The purpose of allocating an address is to identify an ATM device in the ATM network while
a connection is being established. ITU-T standardized addresses for public ATM networks
according to its E.164 recommendation, which defines the international public
telecommunication numbering plan used in the public switched telephone network (PSTN)
and some other data networks. The ATM Forum (now part of the IP/MPLS Forum) defined a
separate 20-byte address structure, based on that of the Network Service Access Point (NSAP)
developed by OSI, for private ATM network addressing. They have also specified an NSAP
encoding for E.164 addresses, which is used to encode those addresses within private
networks. Because the address allocated to each ATM system is separate from the address
used by higher layer protocols (such as IP), an ATM Address Resolution Protocol (ATM
ARP) is required to map between ATM addresses and their higher layer counterparts. Private
networks can also base their own NSAP format addressing scheme on the E.164 address of
the public UNI to which they are connected. The address prefix from the E.164 number is
used, with local nodes being uniquely identified using the lower-order bits.
NSAP-format ATM addresses consist of an authority and format identifier (AFI), an initial
domain identifier (IDI), and a domain-specific part (DSP). The AFI identifies the type and
format of the IDI, which in turn, identifies the address allocation and administrative authority.
The DSP contains actual routing information. The first thirteen bytes uniquely identify the
switch to which the ATM end system is attached, and are used by ATM switches for routing
purposes. The next six bytes uniquely identify a specific ATM end system attached to the
switch. The last byte identifies the target process within the destination end system. In the
NSAP-format E.164 format, the IDI is an E.164 number (i.e. an ISDN telephone number). In
the DCC format, the IDI is a data country code (DCC), which identifies a particular country.
In the ICD format, the IDI is an international code designator (ICD) allocated by the ISO
6523 registration authority (the British Standards Institute). ICD codes identify specific
international organizations. The ATM Forum recommends that organizations use either the
DCC or ICD formats to create their own internal numbering scheme.
ATM Switching
The sequence of cells in a virtual connection is preserved within each ATM switch to
simplify the process of reconstructing packets or frames at their destination. Cells are routed
through the ATM network using the information contained within the VPI/VCI. At each
switch, the VCI and VPI together identify the virtual connection to which a cell belongs, and
are used together with the incoming port number to index a routing table within the switch to
determine the correct outgoing port and the new VPI/VCI values (Incoming VPI/VCI values
must be translated to outgoing VPI/VCI values at every switch through which the cell
passes). If a number of cells require the same output port during the same time slot, any cells
that cannot be sent immediately will be placed in a queue and sent when the output port
becomes available. If too many cells are queued for a particular link, some cells may be lost.
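The VPI/VCI translation step can be sketched as a simple table lookup; the table entries, port numbers and identifiers below are invented for illustration, and real switches add queuing and many other functions.

# Sketch of VPI/VCI translation at an ATM switch: (incoming port, VPI, VCI) is looked
# up in a routing table to find the outgoing port and the new VPI/VCI values.
routing_table = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)  -- invented entries
    (1, 0, 100): (3, 2, 45),
    (2, 1, 37):  (4, 0, 120),
}

def switch_cell(in_port, vpi, vci):
    out_port, new_vpi, new_vci = routing_table[(in_port, vpi, vci)]
    # The cell header is rewritten with the new VPI/VCI before the cell is
    # queued on the outgoing port.
    return out_port, new_vpi, new_vci

print(switch_cell(1, 0, 100))   # -> (3, 2, 45)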
Traffic management
An application wishing to send data across an ATM network should advise the network of the
type of data to be sent, together with any Quality of Service (QoS) requirements. The ATM
Forum has defined five service categories in order to match traffic characteristics and
QoS requirements to network behaviour, which are described below.
Constant Bit Rate (CBR) - used for traffic requiring a consistent and predictable
bit rate for the lifetime of the connection. Typical applications include video
conferencing and telephony.
Real-Time Variable Bit Rate (rt-VBR) - used for variable rate data that must be
delivered in a timely fashion. Examples might include traffic that could be
considered bursty, such as variable rate compressed video streams.
Non-Real-Time Variable Bit Rate (nrt-VBR) - is used for variable bit rate traffic
that is not time-critical, but may have some minimum requirement with regard to
bandwidth or latency (for example, Frame Relay internetworking traffic).
Available Bit Rate (ABR) - this service is similar to nrt-VBR, but is intended
primarily for traffic that is not time sensitive and requires no guarantee of service,
and that can moderate its data rate in response to flow-control data (for example,
TCP/IP traffic). ABR employs Resource Management cells to provide the
necessary feedback to the traffic source in response to variations in the resources
available within the ATM network.
Unspecified Bit Rate (UBR) - this service is similar to ABR in that it is intended
primarily for traffic that is not time sensitive and requires no guarantee of service,
but no flow-control mechanism is provided. This service is suitable for
applications that are tolerant of delay and cell-loss, such as file-transfer and e-
mail.
Every ATM connection implements a set of parameters that describe the traffic
characteristics of the source. They are listed below.
Peak Cell Rate (PCR) - the maximum rate at which cells may be transmitted on
the connection.
Cell Delay Variation Tolerance (CDVT) - indicates how much jitter is allowed on
the connection.
Maximum Burst Size (MBS) - the maximum number of cells that can be
transmitted contiguously on the connection.
Minimum Cell Rate (MCR) - the minimum rate at which cells should be
transmitted on the connection.
The appropriate traffic characteristics for a given connection are determined when the
connection is set up, and between them define the traffic contract for the connection. The
contract is intended to ensure that the requested Quality of Service is provided. Traffic
shaping is implemented through the use of queuing and other measures in order to constrain
data bursts, limit peak data rates, and eliminate jitter. In order to ensure that the traffic on
each connection behaves and is managed according to its agreed contract, the contract may be
policed by ATM switches. The switches can compare actual traffic flows against the contract.
If a connection is exceeding its agreed throughput, the switch may reset the cell-loss
priority (CLP) bit for offending cells, making them eligible to be discarded by any switch
handling them during a period of network congestion. The Quality of Service parameters that
the network strives to meet include:
Cell Transfer Delay - the total amount of time that elapses from the time the first
bit is transmitted by the source end system and the time the last bit is received by
the destination end system.
Cell Delay Variation (CDV) - the difference between the maximum and minimum
cell transfer delay experienced by a connection. Peak-to-peak and instantaneous
CDV are used.
Cell Loss Ratio (CLR) - the percentage of cells lost in the network due to errors or
congestion.
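The policing behaviour described above, in which non-conforming cells are tagged by setting the CLP bit, can be sketched with a plain token-bucket model. This is only an illustration; real ATM switches use the Generic Cell Rate Algorithm, and the rates and bucket size below are invented.

# Simplified policing sketch: cells arriving faster than the agreed rate are tagged
# with CLP = 1 so they can be discarded first under congestion (not the real GCRA).
class Policer:
    def __init__(self, rate_cells_per_s, bucket_size):
        self.rate = rate_cells_per_s
        self.bucket = self.capacity = bucket_size
        self.last = 0.0

    def police(self, cell, now):
        # Refill the bucket according to the time elapsed, up to its capacity.
        self.bucket = min(self.capacity, self.bucket + (now - self.last) * self.rate)
        self.last = now
        if self.bucket >= 1:
            self.bucket -= 1          # conforming cell
        else:
            cell["clp"] = 1           # non-conforming: mark as discard-eligible
        return cell

policer = Policer(rate_cells_per_s=2, bucket_size=2)
for t in (0.0, 0.1, 0.2, 0.3, 2.0):
    print(t, policer.police({"clp": 0}, now=t))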
The ATM reference model describes the architecture of ATM and the functionality it
supports. The model corresponds primarily to the first two layers (i.e. the physical and data
link layers) of the OSI reference model. As well as the vertically-defined layers described
below, the ATM model includes a number of planes that span all layers.
Physical layer - maps to the physical layer of the OSI reference model and
manages the medium-dependent transmission.
ATM layer - together with the ATM adaptation layer, this layer maps
approximately to the data link layer of the OSI reference model. The ATM layer is
responsible for connection establishment, cell multiplexing, and cell relay.
ATM adaptation layer (AAL) - together with the ATM layer, this layer maps
approximately to the data link layer of the OSI reference model. The AAL adapts
(segments) user data from higher layer protocols into 48-byte cell payloads.
Higher layers - these layers accept user data, form it into packets, and pass the
packets down to the AAL.
The ATM physical layer converts cells into a bit-stream, transmits and receives cells on the
physical medium, tracks ATM cell boundaries, and packages cells using the framing
appropriate for the physical medium. The layer is divided into the physical medium-
dependent (PMD) sub-layer, and the transmission convergence (TC) sub-layer. The PMD
sub-layer is responsible for the synchronisation and timing of bit streams, and specifies the
correct transmission medium and connection interfaces for the physical network (e.g. SDH,
SONET etc.) The TC sub-layer is responsible for maintaining and tracking cell boundaries,
header error control (HEC) sequence generation (at the transmitter) and verification (at the
receiver), cell-rate decoupling, (synchronising the ATM cell rate with the payload capacity of
the physical transmission system) and transmission frame adaptation (packaging ATM cells
in the correct frame type for the physical layer implementation).
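The HEC generation performed by the TC sub-layer can be sketched as a CRC-8 over the first four header bytes; the sketch below assumes the generator polynomial x^8 + x^2 + x + 1 and the 0x55 adjustment specified for ATM, and the example header bytes are arbitrary.

# Sketch of HEC generation: CRC-8 (polynomial x^8 + x^2 + x + 1) over the first
# four header bytes, with the result XORed with 0x55 as specified for ATM.
def atm_hec(header4: bytes) -> int:
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

header = bytes([0x00, 0x12, 0x34, 0x52])   # first four bytes of a cell header (arbitrary)
print(f"HEC = 0x{atm_hec(header):02X}")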
The ATM layer is responsible for connection setup, the multiplexing or de-multiplexing of
cells on different virtual connections, the translation between incoming and outgoing VPI and
VCI values during switching operations, ensuring quality of service is monitored and
maintained, cell header creation (or removal) as data is received from (or passed to) the
adaptation layer, and flow-control.
This layer provides the interface between the ATM layer and the higher-layer protocols, and
consists of two sub-layers. The Segmentation and Re-assembly (SAR) sub-layer is
responsible for the segmentation of higher layer data into 48-byte payloads for outbound
cells, and the reassembly of higher layer data from incoming cell payloads.
The Convergence sub-layer performs functions such as message identification and time
recovery. Four AAL types are recommended by the ITU-T, each of which is suited to a
particular service category and connection mode. The AAL types are listed below.
AAL5 - the most commonly used adaptation layer for data (e.g. IP traffic), AAL5
provides both connection-oriented and connectionless services, and supports data
sources with different bit rate requirements
(i.e. available, variable and unspecified bit rate data). There is a trade off between
lower protocol overhead and simplified processing on the one hand, and reduced
bandwidth and error-recovery capability on the other.
ATM was developed to provide a data network for any type of application, regardless of its
bandwidth requirements. Due to its complexity, and the advances made in competing
technologies, it has not lived up to the original expectations of its creators, and has not gained
widespread acceptance as a LAN technology. It has, however, been widely implemented in
public and corporate networks. A number of telecommunications service providers have
implemented ATM in the core of their wide area networks, and many DSL networks, which
still run at relatively slow data rates, utilise ATM as a multiplexing service. In high-speed
interconnects, ATM is often still used as a means of integrating PDH, SDH and packet-
switched traffic within a common transport infrastructure.
The need to reduce queuing delays by reducing packet and frame sizes has to some extent
(though not completely) gone away, due to the fact that both network data rates and switching
speeds have increased dramatically since ATM was first conceived. Real-time voice and
video applications are now being successfully carried over IP networks, and network
performance benchmarks have routinely been surpassed, greatly reducing the incentive to
deploy ATM. At the same time, interest in using ATM for carrying live video and audio has
gained momentum recently, and its use in this type of environment has prompted the
development of new standards for professional uncompressed audio and video data over
ATM.
ATM is struggling to adapt, however, to the speed and traffic-shaping requirements of
convergent network technologies, particularly with respect to the complexity associated with
Segmentation and Reassembly (SAR), which is the source of a significant performance
bottleneck. It is quite possible that gigabit Ethernet will supplant ATM as the technology of
choice in new WAN implementations. At the same time, Multi-protocol Label
Switching (MPLS) appears to be a possible contender to replace ATM as the unifying data-
link layer protocol of the future, providing data-carrying services for both virtual-circuit
based and packet switching network clients. The evolution of MPLS has benefited from
lessons learned from ATM, and both the strengths and weaknesses of ATM have been kept in
mind throughout its development.
802.11b uses DSSS as its modulation scheme, while 802.11a uses OFDM and operates in the
5 GHz spectrum. Medium access in 802.11 networks uses CSMA/CA: if the medium is sensed
busy, a station waits until it becomes idle and then transmits, and collisions can still occur
if more than one station transmits at the same time.
While communicating, a host needs the Layer-2 (MAC) address of the destination machine,
which must belong to the same broadcast domain or network. A MAC address is physically
burnt into the Network Interface Card (NIC) of a machine and normally never changes,
although it does change if the NIC is replaced because of a fault. An IP address, on the other
hand, is assigned by configuration and can change. For Layer-2 communication to take place,
a mapping between the IP address and the MAC address is therefore required.
To know the MAC address of remote host on a broadcast domain, a computer wishing to
initiate communication sends out an ARP broadcast message asking, “Who has this IP
address?” Because it is a broadcast, all hosts on the network segment (broadcast domain)
receive this packet and process it. ARP packet contains the IP address of destination host,
the sending host wishes to talk to. When a host receives an ARP packet destined to it, it
replies back with its own MAC address.
Once the host gets destination MAC address, it can communicate with remote host using
Layer-2 link protocol. This MAC to IP mapping is saved into ARP cache of both sending
and receiving hosts. Next time, if they require to communicate, they can directly refer to
their respective ARP cache.
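The resolve-and-cache behaviour described above can be sketched as follows. The broadcast itself is only simulated here: the helper function, the IP address and the MAC address are hypothetical.

# Sketch of ARP resolution and caching (illustrative only).
arp_cache = {}   # IP address -> MAC address

def send_arp_broadcast(ip):
    # Placeholder: in reality an ARP request frame is broadcast on the segment
    # and the host owning `ip` replies with its MAC address.
    known_hosts = {"192.168.1.10": "00:1a:2b:3c:4d:5e"}   # hypothetical reply
    return known_hosts.get(ip)

def resolve(ip):
    if ip in arp_cache:                 # answer directly from the ARP cache
        return arp_cache[ip]
    mac = send_arp_broadcast(ip)        # otherwise ask "who has this IP address?"
    if mac is not None:
        arp_cache[ip] = mac             # cache the mapping for next time
    return mac

print(resolve("192.168.1.10"))   # triggers a broadcast, then caches the result
print(resolve("192.168.1.10"))   # second lookup is served from the cache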
Reverse ARP (RARP) is the complementary mechanism, used when a host knows a MAC address
but needs to discover the corresponding IP address in order to communicate.
ICMP is a network diagnostic and error-reporting protocol. ICMP belongs to the IP protocol
suite and uses IP as its carrier protocol. After an ICMP packet is constructed, it is
encapsulated in an IP packet. Because IP itself is a best-effort, non-reliable protocol, so is ICMP.
Any feedback about the network is sent back to the originating host. If an error occurs in
the network, it is reported by means of ICMP. ICMP contains dozens of diagnostic and
error-reporting messages.
ICMP-echo and ICMP-echo-reply are the most commonly used ICMP messages to check
the reachability of end-to-end hosts. When a host receives an ICMP-echo request, it is bound
to send back an ICMP-echo-reply. If there is any problem in the transit network, ICMP
will report that problem.
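To illustrate the echo mechanism, the sketch below builds an ICMP echo request (type 8, code 0) with the standard Internet checksum. Actually sending it would require a raw socket and elevated privileges, so only the packet construction is shown; the identifier, sequence number and payload are arbitrary.

# Build an ICMP echo request with the standard Internet checksum (illustrative only).
import struct

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)   # checksum = 0 first
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=1, sequence=1, payload=b"ping")
print(packet.hex())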
IPv4 is a 32-bit addressing scheme used as the TCP/IP host addressing mechanism. IP
addressing enables every host on the TCP/IP network to be uniquely identifiable.
IPv4 provides a hierarchical addressing scheme which enables it to divide the network into
sub-networks, each with a well-defined number of hosts. IP addresses are divided into several
classes:
Class A - it uses first octet for network addresses and last three octets for host
addressing
Class B - it uses first two octets for network addresses and last two for host
addressing
Class C - it uses first three octets for network addresses and last one for host
addressing
Class D - used for multicast; it provides a flat addressing scheme in contrast to the
hierarchical structure of the three classes above.
Class E - reserved for experimental use.
IPv4 also has well-defined address spaces to be used as private addresses (not routable on
the internet) and public addresses (provided by ISPs and routable on the internet).
IP is not a reliable protocol; it provides only a best-effort delivery mechanism.
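The classful scheme described above can be illustrated by classifying an address from its first octet and checking whether it falls in a private range; the example addresses below are arbitrary.

# Classify an IPv4 address by its first octet and report whether it is private or public.
import ipaddress

def ipv4_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet < 128:   return "A"
    if first_octet < 192:   return "B"
    if first_octet < 224:   return "C"
    if first_octet < 240:   return "D (multicast)"
    return "E (experimental)"

for addr in ("10.1.2.3", "172.20.0.5", "192.168.1.1", "8.8.8.8"):
    print(addr, "class", ipv4_class(addr),
          "private" if ipaddress.ip_address(addr).is_private else "public")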
Internet Protocol Version 6 (IPv6)
Exhaustion of IPv4 addresses gave birth to the next-generation Internet Protocol, version 6.
IPv6 addresses its nodes with a 128-bit wide address, providing plenty of address space for
the future, for the entire planet and beyond.
IPv6 has introduced Anycast addressing but has removed the concept of broadcasting. IPv6
enables devices to self-acquire an IPv6 address and communicate within that subnet. This
auto-configuration removes the dependency on Dynamic Host Configuration Protocol
(DHCP) servers. This way, even if the DHCP server on that subnet is down, the hosts can
communicate with each other.
IPv6 provides the new feature of IPv6 mobility. Machines equipped with Mobile IPv6 can
roam around without needing to change their IP addresses.
IPv6 is still in a transition phase and is expected to replace IPv4 completely in the coming
years. At present, there are few networks running on IPv6. There are some transition
mechanisms available that allow IPv6-enabled networks to communicate and roam across
different IPv4 networks easily. These are:
Dual stack implementation
Tunneling
NAT-PT
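To illustrate the 128-bit address format described above (an assumed sketch, not from the
text), Python's standard ipaddress module can display both the compressed and the fully
expanded notation; the address shown uses the 2001:db8::/32 documentation prefix.

import ipaddress

addr = ipaddress.IPv6Address('2001:db8::1')   # documentation-prefix example address
print(addr.compressed)                        # 2001:db8::1
print(addr.exploded)                          # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.max_prefixlen)                     # 128 -> the address is 128 bits wide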
Transmission Control Protocol
The Transmission Control Protocol (TCP) is one of the most important protocols of the
Internet Protocol suite. It is the most widely used protocol for data transmission in
communication networks such as the internet.
Features
TCP is a reliable protocol. That is, the receiver always sends either a positive or a negative
acknowledgement about each data packet to the sender, so that the sender always has a
clear indication of whether the data packet reached the destination or needs to be
resent.
TCP ensures that the data reaches the intended destination in the same order it was sent.
TCP is connection oriented. TCP requires that a connection between two remote points
be established before sending actual data.
TCP provides error-checking and recovery mechanisms.
TCP provides full-duplex service, i.e. it can perform the roles of both receiver and sender.
Header
The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.
Source Port (16-bits) - It identifies the source port of the application process on the
sending device.
Destination Port (16-bits) - It identifies the destination port of the application process on
the receiving device.
Sequence Number (32-bits) - The sequence number of the data bytes in a segment within a
session.
Acknowledgement Number (32-bits) - When the ACK flag is set, this number contains
the next sequence number of the data byte expected and works as an acknowledgement
of the previous data received.
Data Offset (4-bits) - This field implies both the size of the TCP header (in 32-bit
words) and the offset of the data in the current packet within the whole TCP segment.
Reserved (3-bits) - Reserved for future use; all bits are set to zero by default.
Flags (1-bit each):
o NS - The Nonce Sum bit is used by the Explicit Congestion Notification signaling
process.
o CWR - When a host receives a packet with the ECE bit set, it sets Congestion
Window Reduced to acknowledge that the ECE was received.
o ECE - It has two meanings:
If the SYN bit is clear (0), ECE means that the IP packet has its CE
(congestion experience) bit set.
If the SYN bit is set to 1, ECE means that the device is ECT capable.
o URG - It indicates that the Urgent Pointer field has significant data and should be
processed.
o ACK - It indicates that the Acknowledgement Number field has significance. If ACK
is cleared to 0, the packet does not contain any acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH the data (as soon
as it comes) to the receiving application without buffering it.
o RST - The Reset flag is used to refuse an incoming connection, reject a segment, or
restart a connection.
o SYN - This flag is used to set up a connection between two hosts.
o FIN - This flag is used to release a connection, and no more data is exchanged
thereafter. Because packets with SYN and FIN flags have sequence numbers,
they are processed in the correct order.
Window Size - This field is used for flow control between the two stations and
indicates the amount of buffer (in bytes) the receiver has allocated for a segment, i.e.
how much data the receiver is expecting.
Checksum - This field contains the checksum of Header, Data and Pseudo Headers.
Urgent Pointer - It points to the urgent data byte if URG flag is set to 1.
Options - It facilitates additional options which are not covered by the regular
header. The Options field is always described in 32-bit words. If this field contains less
than 32 bits of data, padding is used to fill the remaining bits up to the 32-bit boundary.
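As an assumed sketch (not from the text), the fixed 20-byte part of the TCP header laid out
above can be unpacked with Python's struct module; the sample values at the end are
arbitrary.

import struct

def parse_tcp_header(segment):
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack('!HHIIHHHH', segment[:20])
    return {
        'src_port': src_port,
        'dst_port': dst_port,
        'seq': seq,
        'ack': ack,
        'header_len': (offset_flags >> 12) * 4,   # Data Offset is in 32-bit words
        'flags': offset_flags & 0x01FF,           # NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
        'window': window,
        'checksum': checksum,
        'urgent': urgent,
    }

# Arbitrary sample: data offset 5 (20-byte header), ACK and PSH flags set.
sample = struct.pack('!HHIIHHHH', 4500, 80, 1000, 2000, (5 << 12) | 0x018, 64240, 0, 0)
print(parse_tcp_header(sample))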
Addressing
TCP communication between two remote hosts is done by means of port numbers (TSAPs).
Port numbers can range from 0 to 65535 and are divided as:
Well-known / system ports: 0 – 1023
Registered ports: 1024 – 49151
Dynamic / private ports: 49152 – 65535
Connection Management
TCP communication works in Server/Client model. The client initiates the connection and
the server either accepts or rejects it. Three-way handshaking is used for connection
management.
Establishment
The client initiates the connection and sends a segment with a Sequence number. The server
acknowledges it with its own Sequence number and an ACK of the client's segment, which is
one more than the client's Sequence number. The client, after receiving the ACK of its
segment, sends an acknowledgement of the server's response.
Release
Either the server or the client can send a TCP segment with the FIN flag set to 1. When the
receiving end responds by ACKnowledging the FIN, that direction of the TCP communication
is closed and the connection is released.
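A minimal sketch (assumed, not from the text) of this client/server connection management
using Python sockets: the operating system performs the SYN, SYN+ACK, ACK handshake
inside connect() and accept(), and the FIN/ACK release when the sockets are closed. The
loopback address and port 9000 are arbitrary example values.

import socket
import threading
import time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(('127.0.0.1', 9000))        # arbitrary example port
        srv.listen(1)
        conn, _ = srv.accept()               # accept() completes the three-way handshake
        with conn:
            print('server received:', conn.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)                              # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(('127.0.0.1', 9000))         # SYN -> SYN+ACK -> ACK
    cli.sendall(b'hello')                    # data transfer on the established connection
# leaving the with-block closes the socket: FIN / ACK in each direction
t.join()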
Bandwidth Management
TCP uses the concept of window size to accommodate the need for bandwidth management.
The window size tells the sender at the remote end the number of data byte segments the
receiver at this end can receive. TCP uses a slow-start phase, beginning with a window size
of 1 and increasing the window size exponentially after each successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and the next
segment sent will be 4 data bytes long. When the acknowledgement of the 4-byte data
segment is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in the transit network or a NACK is
received, the window size is reduced to half and the slow-start phase begins again.
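The doubling-and-halving behaviour just described can be mimicked with a toy sketch
(assumed, not real TCP code):

def next_window(window, ack_received):
    if ack_received:
        return window * 2            # slow start: double after a successful ACK
    return max(1, window // 2)       # missed ACK or NACK: halve and grow again

w = 1
for ack in (True, True, True, False, True):
    w = next_window(w, ack)
    print(w)                         # prints 2, 4, 8, 4, 8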
TCP uses port numbers to know which application process it needs to hand the data
segment over to. Along with that, it uses sequence numbers to synchronize itself with the
remote host. All data segments are sent and received with sequence numbers. The sender
knows which data segment was last received by the receiver when it gets an ACK. The
receiver knows about the last segment sent by the sender by referring to the sequence number
of the most recently received packet.
If the sequence number of a recently received segment does not match the sequence
number the receiver was expecting, the segment is discarded and a NACK is sent back. If two
segments arrive with the same sequence number, the TCP timestamp value is compared to
make a decision.
Multiplexing
The technique of combining two or more data streams in one session is called multiplexing.
When a TCP client initiates a connection with a server, it always refers to a well-defined
port number which indicates the application process. The client itself uses a randomly
generated port number from the private port number pool.
This enables the client system to maintain multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is too long.
Congestion Control
When a large amount of data is fed to a system which is not capable of handling it,
congestion occurs. TCP controls congestion by means of a window mechanism: TCP sets a
window size telling the other end how much data to send. TCP may use three algorithms for
congestion control:
Additive Increase, Multiplicative Decrease
Slow Start
Timeout React
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
This timer is used to check the integrity and validity of a connection.
When keep-alive time expires, the host sends a probe to check if the connection still
exists.
Retransmission timer:
This timer maintains a stateful session of the data sent.
If the acknowledgement of sent data is not received within the retransmission time,
the data segment is sent again.
Persist timer:
A TCP session can be paused by either host by sending a Window Size of 0.
To resume the session, a host needs to send a Window Size with some larger value.
If this segment never reaches the other end, both ends may wait for each other
indefinitely.
When the Persist timer expires, the host re-sends its window size to let the other end
know.
Timed-Wait:
After releasing a connection, either of the hosts waits for a Timed-Wait time to
terminate the connection completely.
This is in order to make sure that the other end has received the acknowledgement of
its connection termination request.
Crash Recovery
TCP is very reliable protocol. It provides sequence number to each of byte sent in segment.
It provides the feedback mechanism i.e. when a host receives a packet, it is bound to ACK
that packet having the next sequence number expected (if it is not the last segment).
When a TCP Server crashes mid-way communication and re-starts its process it sends
TPDU broadcast to all its hosts. The hosts can then send the last data segment which was
never unacknowledged and carry onwards.
User Datagram Protocol (UDP)
In UDP, the receiver does not generate an acknowledgement of a received packet and, in turn,
the sender does not wait for any acknowledgement of a sent packet. This shortcoming makes
the protocol unreliable as well as easier on processing.
Requirement of UDP
A question may arise, why do we need an unreliable protocol to transport the data? We
deploy UDP where the acknowledgement packets share significant amount of bandwidth
along with the actual data. For example, in case of video streaming, thousands of packets are
forwarded towards its users. Acknowledging all the packets is troublesome and may contain
huge amount of bandwidth wastage. The best delivery mechanism of underlying IP protocol
ensures best efforts to deliver its packets, but even if some packets in video streaming get
lost, the impact is not calamitous and can be ignored easily. Loss of few packets in video
and voice traffic sometimes goes unnoticed.
Features
UDP is used when acknowledgement of data does not hold any significance.
UDP is a good protocol for data flowing in one direction.
UDP is simple and suitable for query-based communications.
UDP is not connection oriented.
UDP does not provide a congestion control mechanism.
UDP does not guarantee ordered delivery of data.
UDP is stateless.
UDP is a suitable protocol for streaming applications such as VoIP and multimedia
streaming.
UDP Header
The UDP header is as simple as its function.
The UDP header contains four main parameters:
Source Port - This 16-bit field is used to identify the source port of the
packet.
Destination Port - This 16-bit field is used to identify the application-level service
on the destination machine.
Length - The Length field specifies the entire length of the UDP packet (including the
header). It is a 16-bit field and its minimum value is 8 bytes, i.e. the size of the UDP
header itself.
Checksum - This field stores the checksum value generated by the sender before
sending. In IPv4 this field is optional, so when the checksum field does not carry any
value, all its bits are set to zero.
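A short sketch (assumed, not from the text) of a UDP exchange with Python sockets: no
connection is established and no acknowledgement is returned, matching the behaviour
described above. The loopback address and port 9999 are arbitrary example values.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(('127.0.0.1', 9999))                 # arbitrary example port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'query', ('127.0.0.1', 9999))       # fire and forget: no handshake, no ACK

data, addr = receiver.recvfrom(1024)               # on the wire: 8-byte UDP header + 5-byte payload
print(data, 'from', addr)
sender.close()
receiver.close()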
UDP applications
Here are a few applications where UDP is used to transmit data:
Domain Name System (DNS)
Simple Network Management Protocol (SNMP)
Trivial File Transfer Protocol (TFTP)
Kerberos
Unit – V : Client Server Model
Two remote application processes can communicate mainly in two different fashions:
Peer-to-peer: Both remote processes are executing at same level and they exchange
data using some shared resource.
Client-Server: One remote process acts as a Client and requests some resource from
another application process acting as Server.
In the client-server model, any process can act as a server or a client. It is not the type of
machine, the size of the machine, or its computing power which makes it a server; it is the
ability of serving requests that makes a machine a server.
A system can act as a server and a client simultaneously, i.e. one process acts as a
server and another acts as a client. It may also happen that both the client and the server
processes reside on the same machine.
Communication
The Domain Name System (DNS) works on the Client-Server model. It uses the UDP
protocol for transport-layer communication. DNS uses a hierarchical, domain-based naming
scheme. A DNS server is configured with Fully Qualified Domain Names (FQDNs) and email
addresses mapped to their respective Internet Protocol addresses.
A DNS server is queried with an FQDN and it responds with the IP address mapped to it.
DNS uses UDP port 53.
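As an assumed sketch (not from the text), a minimal DNS query can be built by hand and sent
to a resolver over UDP port 53; 8.8.8.8 is used here only as an example public resolver, and
parsing of the answer is omitted.

import random
import socket
import struct

def dns_query(name, server='8.8.8.8'):
    tid = random.randint(0, 0xFFFF)                          # transaction ID
    header = struct.pack('!HHHHHH', tid, 0x0100, 1, 0, 0, 0) # recursion desired, 1 question
    question = b''.join(bytes([len(p)]) + p.encode() for p in name.split('.')) + b'\x00'
    question += struct.pack('!HH', 1, 1)                     # QTYPE=A, QCLASS=IN
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        s.sendto(header + question, (server, 53))            # DNS runs over UDP port 53
        return s.recvfrom(512)[0]

print(len(dns_query('www.example.com')), 'bytes in the DNS response')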
A DNS server provides name resolution using the name daemon, which is often called
named (pronounced name-dee). The DNS server stores different types of resource records
used to resolve names. These records contain the name, address, and type of record. Some of
these record types are:
• A - an end-device (host) address record, mapping a name to an IPv4 address
• CNAME - the canonical name (or Fully Qualified Domain Name) for an alias; used
when multiple services share a single network address but each service has its own entry in
DNS
• MX - mail exchange record; maps a domain name to a list of mail exchange servers
for that domain
When a client makes a query, the server's "named" process first looks at its own records to
see if it can resolve the name. If it is unable to resolve the name using its stored records, it
contacts other servers in order to resolve the name. The request may be passed along to a
number of servers, which can take extra time and consume bandwidth. Once a match is
found and returned to the original requesting server, the server temporarily stores the
numbered address that matches the name in cache. If that same name is requested again, the
first server can return the address by using the value stored in its name cache. Caching
reduces both the DNS query data network traffic and the workloads of servers higher up the
hierarchy. The DNS Client service on Windows PCs optimizes the performance of DNS
name resolution by storing previously resolved names in memory, as well. The ipconfig
/displaydns command displays all of the cached DNS entries on a Windows XP or 2000
computer system.
The Domain Name System uses a hierarchical system to create a name database to provide
name resolution. The hierarchy looks like an inverted tree with the root at the top and
branches below. At the top of the hierarchy, the root servers maintain records about how to
reach the top-level domain servers, which in turn have records that point to the secondary
level domain servers and so on. The different top-level domains represent either the type of
organization or the country of origin. Examples of top-level domains are:
• .au - Australia
• .co - Colombia
• .jp - Japan
Simple Mail Transfer Protocol (SMTP)
The Simple Mail Transfer Protocol (SMTP) is used to transfer electronic mail from one user
to another. This task is done by means of the email client software (User Agent) the user is
using. User Agents help the user to type and format the email and store it until the internet is
available. When an email is submitted for sending, the sending process is handled by the
Message Transfer Agent, which normally comes built into the email client software.
The Message Transfer Agent uses SMTP to forward the email to another Message Transfer
Agent (server side). While SMTP is used by the end user only to send emails, servers
normally use SMTP to both send and receive emails. SMTP uses TCP port numbers 25 and
587.
Client software uses Internet Message Access Protocol (IMAP) or POP protocols to receive
emails.
File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) is the most widely used protocol for file transfer over the
network. FTP uses TCP/IP for communication and works on TCP port 21. FTP works on the
Client/Server model, where a client requests a file from the server and the server sends the
requested resource back to the client.
FTP uses out-of-band control, i.e. control information is exchanged over TCP port 21 while
the actual data is sent over a separate data connection (TCP port 20 in active mode).
The client requests a file from the server. When the server receives a request for a file, it
opens a TCP data connection for the client and transfers the file. After the transfer is
complete, the server closes the connection. For a second file, the client requests again and the
server opens a new TCP connection.
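A brief sketch (assumed, not from the text) using Python's standard ftplib client;
ftp.example.com is only a placeholder for a server that permits anonymous login.

from ftplib import FTP

with FTP('ftp.example.com') as ftp:      # control connection on TCP port 21 (placeholder host)
    ftp.login()                          # anonymous login
    ftp.retrlines('LIST')                # the listing arrives over a separate data connection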
Post Office Protocol version 3 (POP3)
The Post Office Protocol version 3 (POP3) is a simple mail-retrieval protocol used by User
Agents (client email software) to retrieve mail from a mail server.
When a client needs to retrieve mail from the server, it opens a connection with the server on
TCP port 110. The user can then access the mails and download them to the local computer.
POP3 works in two modes. The most common mode, the delete mode, deletes the emails
from the remote server after they are downloaded to the local machine. The second mode,
the keep mode, does not delete the email from the mail server and gives the user the option
to access the mails later on the mail server.
Hyper Text Transfer Protocol (HTTP)
The Hyper Text Transfer Protocol (HTTP) is the foundation of the World Wide Web.
Hypertext is a well-organized documentation system which uses hyperlinks to link the pages
in text documents. HTTP works on the client-server model. When a user wants to access any
HTTP page on the internet, the client machine at the user end initiates a TCP connection to
the server on port 80. When the server accepts the client request, the client is authorized to
access web pages.
To access web pages, a client normally uses a web browser, which is responsible for
initiating, maintaining, and closing TCP connections. HTTP is a stateless protocol, which
means the server maintains no information about earlier requests by clients.
HTTP versions
HTTP 1.0 uses non persistent HTTP. At most one object can be sent over a single
TCP connection.
HTTP 1.1 uses persistent HTTP. In this version, multiple objects can be sent over a
single TCP connection.
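A minimal sketch (assumed, not from the text) of an HTTP/1.1 GET request sent over a plain
TCP connection to port 80; example.com is used only as a demonstration host, and the
Connection: close header asks the server not to keep the connection persistent.

import socket

host = 'example.com'                                     # demonstration host only
request = ('GET / HTTP/1.1\r\n'
           'Host: %s\r\n'
           'Connection: close\r\n\r\n' % host)

with socket.create_connection((host, 80)) as s:          # TCP connection to port 80
    s.sendall(request.encode())
    response = b''
    while True:
        chunk = s.recv(4096)
        if not chunk:                                    # server closed the connection
            break
        response += chunk

print(response.split(b'\r\n', 1)[0].decode())            # status line, e.g. HTTP/1.1 200 OK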
Telnet Protocol
TELNET is a protocol that provides “a general, bi-directional, eight-bit byte oriented
communications facility”.
telnet is a program that supports the TELNET protocol over TCP.
Many application protocols are built upon the TELNET protocol.
The TELNET Protocol
TELNET runs over a single TCP connection, carrying both data and control information
over the same connection, and uses a Network Virtual Terminal with negotiated options.
Network Virtual Terminal
The NVT is an intermediate representation of a generic terminal. It provides a standard
language for the communication of terminal control functions.
All NVTs support a minimal set of capabilities.
Some terminals have more capabilities than the minimal set.
The 2 endpoints negotiate a set of mutually acceptable options (character set, echo
mode, etc).
The protocol for requesting optional features is well defined and includes rules for
eliminating possible negotiation “loops”.
The set of options is not part of the TELNET protocol, so that new terminal features
can be incorporated without changing the TELNET protocol.
Option examples
Line mode vs. character mode
echo modes
character set (EBCDIC vs. ASCII)
Control Functions
TELNET includes support for a series of control functions commonly supported by
servers.
This provides a uniform mechanism for communication of (the supported) control
functions.
Interrupt Process (IP)
o suspend/abort process.
Abort Output (AO)
o process can complete, but send no more output to user‟s terminal.
Are You There (AYT)
o check to see if system is still running.
More Control Functions
Erase Character (EC)
o delete last character sent
o typically used to edit keyboard input.
Erase Line (EL)
o delete all input in current line.
Command Structure
All TELNET commands and data flow through the same TCP connection.
Commands start with a special character called the Interpret as Command escape
character (IAC).
The IAC code is 255.
If a 255 is sent as data - it must be followed by another 255.
Looking for Commands
Each receiver must look at each byte that arrives and look for IAC.
If IAC is found and the next byte is IAC - a single byte is presented to the
application/terminal.
If IAC is followed by any other code - the TELNET layer interprets this as a
command.
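A tiny sketch (assumed, not from the text) of the IAC escaping rule just described: a data byte
of 255 must be doubled so that it is not mistaken for a command.

IAC = 255                                 # Interpret As Command escape byte

def escape_telnet_data(data):
    # A literal 255 in the data stream is sent as two consecutive 255 bytes.
    return data.replace(bytes([IAC]), bytes([IAC, IAC]))

print(escape_telnet_data(b'\x41\xff\x42').hex())   # 41ffff42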
Command Codes
IP 244
AO 245
AYT 246
EC 247
EL 248
World Wide Web (WWW)
• 1993 – Marc Andreessen develops the Mosaic browser at the National Center for
Supercomputing Applications (NCSA)
– Freely distributed
– Exponential growth
– E-commerce
• WWW Components
– Structural components and semantic components
• WWW Structure
– Clients use a browser application to send URIs via HTTP to servers, requesting a Web
page
– Web pages are constructed using HTML (or another markup language) and consist of text,
graphics and sounds plus embedded files
– The entire system runs over standard networking protocols (TCP/IP, DNS, …)
– Example: http://www.foo.com/index.html (URI syntax is defined in RFC 2396)
• HTTP Basics
– HTTP is stateless and is specified in RFC 2068
– A response may redirect the client with a header such as
Location: http://www.wisc.edu/cs/index.html
• HTTP Headers
– Both requests and responses can contain a variable number of header fields
– A message consists of a request (or response) line, header fields and an optional body
• HTTP Connections
– Browsers typically open several parallel TCP connections (the Netscape default was 4)
– Persistent connections and pipelining allow multiple requests and responses to be carried
over a single TCP connection
HTML Basics
• Hyper-Text Markup Language
• Documents use elements to “mark up” or identify sections of text for different
purposes or display characteristics
• Markup elements are not seen by the user when the page is displayed