M.C.A. (Sem - III) Paper - III Data Communication and Networking
UNIT 1
Fundamentals in communication
1
FUNDAMENTALS OF DATA
COMMUNICATION
Unit Structure
1.1 Introduction to data communication
1.2 Characteristics of a data communication system
1.3 Data & Signal
1.3.1 Analog & Digital data
1.3.2 Analog & Digital Signal
1.3.3 Periodic & Non-Periodic Signal
1.3.4 Composite Signal
1.4 Signal Encoding
1.5 Synchronization
1.6 Digital Data to Digital Signal
1.6.1 Coding methods
1.6.2 Line Encoding
1.6.3 Block Coding
Further Reading & References
Data refers to the raw facts that are collected while information
refers to processed data that enables us to take decisions.
1.2 CHARACTERISTICS OF A DATA COMMUNICATION SYSTEM
b. Digital Signal
Signals which repeat themselves after a fixed time period are called
Periodic Signals.
1.5. SYNCHRONIZATION
Figure : Synchronization
A. Unipolar: All signal levels are on one side of the time axis, either above or below it.
B. Polar
Polar RZ
The Return-to-Zero (RZ) scheme uses three voltage values: +, 0, and -.
Each symbol has a transition in the middle, either from high to zero or from low to zero.
D. Multilevel
Example (8B6T): here m = 8, n = 6, T = 3.
Four-Dimensional Five-Level Pulse Amplitude Modulation (4D-PAM5)
E. Multitransition: MLT-3
o Signal rate is the same as NRZ-I.
o Uses three levels (+V, 0, and -V) and three transition rules to move between the levels:
If the next bit is 0, there is no transition (the remaining rules appear in the sketch below).
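Only the first transition rule survives in the text above, so the following is a minimal sketch that assumes the standard MLT-3 rules: a 0 bit causes no transition, while a 1 bit moves the output to the next level in the repeating cycle 0, +V, 0, -V.

#include <stdio.h>

/* Minimal MLT-3 encoder sketch.  Rule assumed here (standard MLT-3):
 * on a 1 bit the level advances to the next element of the repeating
 * cycle 0, +V, 0, -V; on a 0 bit the level does not change. */
int main(void)
{
    const int cycle[4] = { 0, +1, 0, -1 };   /* 0, +V, 0, -V */
    const char *bits = "1101100";            /* example bit stream */
    int state = 0;                           /* index into cycle[], start at 0 V */

    for (const char *p = bits; *p; p++) {
        if (*p == '1')                       /* 1: move to the next level */
            state = (state + 1) % 4;
        /* 0: no transition, level stays the same */
        printf("bit %c -> level %+d\n", *p, cycle[state]);
    }
    return 0;
}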
m : message bits
Figure : Block Coding
Example: 4B/5B encoding
Here a 4-bit data group is converted into a 5-bit code group.
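A minimal sketch of 4B/5B as a table lookup. The 16 code groups below are the standard 4B/5B data symbols (as used by FDDI and 100BASE-TX); the point of the example is only the mechanism of replacing each 4-bit group by a 5-bit group that avoids long runs of zeros.

#include <stdio.h>

/* 4B/5B block coding sketch: each 4-bit data nibble is replaced by a
 * 5-bit code group chosen so that long runs of zeros cannot occur.
 * The 16 code groups below are the standard 4B/5B data symbols. */
static const char *code4b5b[16] = {
    "11110", "01001", "10100", "10101",   /* 0000..0011 */
    "01010", "01011", "01110", "01111",   /* 0100..0111 */
    "10010", "10011", "10110", "10111",   /* 1000..1011 */
    "11010", "11011", "11100", "11101"    /* 1100..1111 */
};

int main(void)
{
    unsigned char data = 0xA5;            /* two nibbles: 1010 and 0101 */
    printf("nibble 0x%X -> %s\n", data >> 4,  code4b5b[data >> 4]);
    printf("nibble 0x%X -> %s\n", data & 0xF, code4b5b[data & 0xF]);
    return 0;
}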
Further Reading & References
Data Communication & Networking Behrouz Forouzan.
2
MODULATION
Unit Structure
2.1. Introduction
2.2. Modulation (Analog data to digital signal conversion)
2.2.1. PAM
2.2.2. PCM
2.2.3. PWM
2.3. Modulation (Analog data to analog signal conversion)
2.3.1. AM
2.3.2. FM
2.3.3. PM
2.4. Digital Modulation (Digital Data to Analog Signal conversion)
2.4.1. ASK
2.4.2. FSK
2.4.3. PSK
2.4.4. QAM
Further Reading & References
2.1. INTRODUCTION
The previous chapter described encoding techniques to convert digital data into a digital signal; now we look at the modulation techniques, starting with the conversion of analog data into digital signals.

2.2. MODULATION (ANALOG DATA TO DIGITAL SIGNALS)
2.3. MODULATION (ANALOG DATA TO ANALOG SIGNAL CONVERSION)

Types of Modulation:
Signal modulation can be divided into two broad categories: analog modulation and digital modulation.
If the phase of the wave does not change, then the signal
state stays the same (low or high).
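As an illustration of phase shift keying, the sketch below generates samples of a binary PSK signal in which a 0 bit is sent with carrier phase 0 and a 1 bit with the phase shifted by 180 degrees. The carrier frequency, amplitude and number of samples per bit are arbitrary choices for the illustration, not values taken from the text.

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Illustrative binary PSK sketch: a 1 bit is sent with the carrier
 * phase shifted by 180 degrees, a 0 bit with phase 0. */
int main(void)
{
    const char *bits = "1011";
    const double fc = 2.0;            /* carrier cycles per bit interval (assumed) */
    const int samples_per_bit = 8;    /* assumed sampling for the printout */

    for (int i = 0; bits[i]; i++) {
        double phase = (bits[i] == '1') ? M_PI : 0.0;   /* 180 or 0 degrees */
        for (int n = 0; n < samples_per_bit; n++) {
            double t = (double)n / samples_per_bit;     /* time within the bit */
            double s = cos(2.0 * M_PI * fc * t + phase);
            printf("bit %c sample %d: %+.3f\n", bits[i], n, s);
        }
    }
    return 0;
}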
3
MULTIPLEXING
Unit Structure
3.1. Multiplexing
3.1.1. FDM
3.1.2. TDM
3.1.3. WDM
3.2. Modes of communication
3.2.1. Simplex
3.2.2. Half Duplex
3.2.3. Full Duplex
3.3. Switching Techniques
3.3.1. Circuit switching
3.3.2. Packet switching
3.3.2.1. Datagram Approach
3.3.2.2. Virtual Circuit Approach
Further Reading & References
3.1. MULTIPLEXING
Fig. Multiplexing
One of the devices only sends the data and the other one
only receives the data.
When one device is sending, the other can only receive, and vice versa (as shown in the figure above).
Example: a walkie-talkie.
3.3. SWITCHING
In the above figure the switches are labeled I, II, III, IV and the end nodes are labeled A, B, C, D, E, F, G. Switch I is connected to end nodes A, B, and C; switch III is connected to end nodes D and E; switch II is only used for routing.
A circuit-switched connection involves three phases:
connection setup,
data transfer, and
connection teardown.

Connection Setup
Datagram Switching
Using Datagram transmission, each packet is treated as
a separate entity and contains a header with the full
information about the intended recipient.
Routing Table
Destination Address
Teardown Phase
UNIT 2
Introductions
4
INTRODUCTION TO COMPUTER
NETWORKS
Unit Structure
4.1 Uses of computer network
4.1.1 Introduction
4.1.2 Purpose
4.1.3 Network Classification
4.1.4 Basic Hardware Components
4.2 LANs, WANs, MANs
4.3 Wireless Networks
4.3.1 Classification of Wireless networks
4.3.2 Uses of Wireless Networks
4.3.3 Problems with Wireless Networks
4.4 Internetwork
4.4.1 VPN
4.4.2 Overlay Network
4.1.2 Purpose:
Computer networks can be used for several purposes:
Facilitating communications. Using a network, people can
communicate efficiently and easily via email, instant messaging,
chat rooms, telephone, video telephone calls, and video
conferencing.
Sharing hardware. In a networked environment, each
computer on a network may access and use hardware
resources on the network, such as printing a document on a
shared network printer.
Sharing files, data, and information. In a network
environment, authorized users may access data and information
stored on other computers on the network. The capability of
providing access to data and information on shared storage
devices is an important feature of many networks.
Sharing software. Users connected to a network may run
application programs on remote computers.
waves or infrared signals as a transmission medium. ITU-T G.hn technology uses existing home wiring (coaxial cable,
phone lines and power lines) to create a high-speed (up to 1
Gigabit/s) local area network.
a) Wired technologies
Twisted pair wire is the most widely used medium for
telecommunication. Twisted-pair wires are ordinary
telephone wires which consist of two insulated copper wires
twisted into pairs and are used for both voice and data
transmission. The use of two wires twisted together helps to
reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 100 million bits per second.
Coaxial cable is widely used for cable television systems,
office buildings, and other worksites for local area networks.
The cables consist of copper or aluminum wire wrapped with
insulating layer typically of a flexible material with a high
dielectric constant, all of which are surrounded by a
conductive layer. The layers of insulation help minimize
interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.
Optical fiber cable consists of one or more filaments of
glass fiber wrapped in protective layers. It transmits light
which can travel over extended distances. Fiber-optic cables
are not affected by electromagnetic radiation. Transmission
speed may reach trillions of bits per second. The
transmission speed of fiber optics is hundreds of times faster
than for coaxial cables and thousands of times faster than a
twisted-pair wire.
b) Wireless technologies
Terrestrial microwave: Terrestrial microwaves use Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line-of-sight. Relay stations along the path are spaced approximately 30 miles apart.
Microwave antennas are usually placed on top of buildings,
towers, hills, and mountain peaks.
Communications satellites The satellites use microwave
radio as their telecommunications medium which are not
deflected by the Earth's atmosphere. The satellites are
stationed in space, typically 22,000 miles (for
geosynchronous satellites) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice,
data, and TV signals.
Cellular and PCS systems use several radio communications technologies. The systems are divided into different geographic areas. Each area has a low-power
transmitter or radio relay antenna device to relay calls from
one area to the next area.
Wireless LANs: A wireless local area network uses a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread-spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11.
Infrared communication can transmit signals between devices over short distances of not more than 10 meters, peer to peer (face to face), without any obstruction in the line of transmission.
2. Scale:
Networks are often classified as
local area network (LAN),
wide area network (WAN),
metropolitan area network (MAN),
personal area network (PAN),
virtual private network (VPN),
campus area network (CAN),
storage area network (SAN), and others, depending on their scale, scope and purpose (e.g., controller area network (CAN) usage). Trust level and access rights often differ between these types of networks.
with that port. The first time that a previously unknown destination
address is seen, the bridge will forward the frame to all ports other
than the one on which the frame arrived.
Bridges come in three basic types:
Local bridges: Directly connect local area networks (LANs)
Remote bridges: Can be used to create a wide area network
(WAN) link between LANs. Remote bridges, where the
connecting link is slower than the end networks, largely have
been replaced with routers.
Wireless bridges: Can be used to join LANs or connect
remote stations to LANs.
Switches:
A network switch is a device that forwards and filters OSI
layer 2 datagrams (chunk of data communication) between ports
(connected cables) based on the MAC addresses in the packets. A
switch is distinct from a hub in that it only forwards the frames to
the ports involved in the communication rather than all ports
connected. A switch breaks the collision domain but represents
itself as a broadcast domain. Switches make forwarding decisions
of frames on the basis of MAC addresses. A switch normally has
numerous ports, facilitating a star topology for devices, and
cascading additional switches. Some switches are capable of
routing based on Layer 3 addressing or additional logical levels;
these are called multi-layer switches. The term switch is used
loosely in marketing to encompass devices including routers and
bridges, as well as devices that may distribute traffic on load or by
application content (e.g., a Web URL identifier).
Routers:
A router is an internetworking device that forwards packets between networks by processing information found in the datagram or packet (Internet Protocol information
from Layer 3 of the OSI Model). In many situations, this information
is processed in conjunction with the routing table (also known as
forwarding table). Routers use routing tables to determine what
interface to forward packets (this can include the "null" also known
as the "black hole" interface because data can go into it, however,
no further processing is done for said data).
I. Local area network
A local area network (LAN) is a network that connects
computers and devices in a limited geographical area such as
home, school, computer laboratory, office building, or closely
positioned group of buildings. Each computer or device on the
network is a node. Current wired LANs are most likely to be
based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing
home wires (coaxial cables, phone lines and power lines).
All interconnected devices must understand the network
layer (layer 3), because they are handling multiple subnets (the
different colors). Those inside the library, which have only
10/100 Mbit/s Ethernet connections to the user device and a
Gigabit Ethernet connection to the central router, could be
called "layer 3 switches" because they only have Ethernet
interfaces and must understand IP. It would be more correct to
call them access routers, where the router at the top is a
distribution router that connects to the Internet and academic
networks' customer access routers.
Connection options (Option / Description / Advantages / Disadvantages / Bandwidth range / Sample protocols used):

Option: Leased line
  Description: Point-to-point connection between two computers or Local Area Networks (LANs).
  Advantages: Most secure
  Disadvantages: Expensive
  Sample protocols used: PPP, HDLC, SDLC, HNAS

Option: Circuit switching
  Description: A dedicated circuit path is created between end points. Best example is dial-up connections.
  Advantages: Less expensive
  Disadvantages: Call setup
  Bandwidth range: 28 kbit/s - 144 kbit/s
  Sample protocols used: PPP, ISDN

Option: Packet switching
  Description: Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Variable-length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC).
  Disadvantages: Shared media across link
  Sample protocols used: X.25, Frame Relay

Option: Cell relay
  Description: Similar to packet switching, but uses fixed-length cells instead of variable-length packets. Data is divided into fixed-length cells and then transported across virtual circuits.
  Advantages: Best for simultaneous use of voice and data
  Disadvantages: Overhead can be considerable
  Sample protocols used: ATM
Wireless LAN
A wireless local area network (WLAN) links two or more devices
using a wireless distribution method (typically spread-spectrum
or OFDM radio), and usually providing a connection through an
access point to the wider internet. This gives users the mobility
to move around within a local coverage area and still be
connected to the network.
Wi-Fi: Wi-Fi is increasingly used as a synonym for 802.11 WLANs, although it is technically a certification of interoperability between 802.11 devices.
Fixed Wireless Data:
This implements point-to-point links between computers or networks at two locations, often using dedicated microwave or laser beams over line-of-sight paths. It
is often used in cities to connect networks in two or more
buildings without physically wiring the buildings together.
Wireless MAN
Wireless Metropolitan area networks are a type of wireless
network that connects several Wireless LANs. WiMAX is the term
used to refer to wireless MANs.
Wireless WAN
Wireless wide area networks are wireless networks that typically
cover large outdoor areas. These networks can be used to
connect branch offices of business or as a public internet
access system. They are usually deployed on the 2.4 GHz
band. A typical system contains base station gateways, access
points and wireless bridging relays. Other configurations are
mesh systems where each access point acts as a relay also.
When combined with renewable energy systems such as photovoltaic solar panels or wind systems they can be stand alone
systems.
Mobile devices networks
With the development of smart phones, cellular telephone
networks routinely carry data in addition to telephone
conversations:
Global System for Mobile Communications (GSM): The GSM
network is divided into three major systems: the switching
system, the base station system, and the operation and support
system. The cell phone connects to the base system station
which then connects to the operation and support station; it then
connects to the switching station where the call is transferred to
where it needs to go. GSM is the most common standard and is
used for a majority of cell phones.
Personal Communications Service (PCS): PCS is a radio
band that can be used by mobile phones in North America and
South Asia. Sprint happened to be the first service to set up a
PCS.
D-AMPS: Digital Advanced Mobile Phone Service, an upgraded
version of AMPS, is being phased out due to advancement in
technology. The newer GSM networks are replacing the older
system.
4.4 INTERNETWORK
Interconnection of networks
Internetworking started as a way to connect disparate types
of networking technology, but it became widespread through the
developing need to connect two or more networks via some sort
of wide area network. The original term for an internetwork
was catenet.
The definition of an internetwork today includes the
connection of other types of computer networks such as personal
area networks.
The network elements used to connect individual networks in
the ARPANET, the predecessor of the Internet, were originally
called gateways, but the term has been deprecated in this context,
because of possible confusion with functionally different devices.
Today the interconnecting gateways are called Internet routers.
Another type of interconnection of networks often occurs
within enterprises at the Link Layer of the networking model, i.e. at
the hardware-centric layer below the level of the TCP/IP logical
interfaces. Such interconnection is accomplished with network
bridges and network switches. This is sometimes incorrectly termed
internetworking, but the resulting system is simply a larger,
single sub-network, and no internetworking protocol, such
as Internet Protocol, is required to traverse these devices.
However, a single computer network may be converted into an
internetwork by dividing the network into segments and logically
dividing the segment traffic with routers.
The Internet Protocol is designed to provide an unreliable
(not guaranteed) packet service across the network. The
architecture avoids intermediate network elements maintaining any
state of the network. Instead, this function is assigned to the
endpoints of each communication session. To transfer data reliably,
applications must utilize an appropriate Transport Layer protocol,
such as Transmission Control Protocol (TCP), which provides
a reliable stream. Some applications use a simpler, connection-less transport protocol such as the User Datagram Protocol (UDP).
5
NETWORK MODEL
Unit Structure
5.1 Introduction
5.2 The OSI Reference model
5.3 The TCP/IP Reference model
5.4 A comparison of the OSI and TCP/IP Reference models
5.1 INTRODUCTION
Networking models
Two architectural models are commonly used to describe the
protocols and methods used in internetworking.
The Open System Interconnection (OSI) reference model
was developed under the auspices of the International Organization
for Standardization (ISO) and provides a rigorous description for
layering protocol functions from the underlying hardware to the
software interface concepts in user applications. Internetworking is
implemented in the Network Layer (Layer 3) of the model.
The Internet Protocol Suite, also called the TCP/IP model of
the Internet was not designed to conform to the OSI model and
does not refer to it in any of the normative specifications in
Requests for Comment (RFCs) and Internet standards. Despite a similar appearance as a
layered model, it uses a much less rigorous, loosely defined
architecture that concerns itself only with the aspects of logical
networking. It does not discuss hardware-specific low-level
interfaces, and assumes availability of a Link Layer interface to the
local network link to which the host is connected. Internetworking is
facilitated by the protocols of its Internet Layer.
Connection:
TCP is a connection-oriented protocol. It establishes a virtual
path between the source and destination. All the segments
belonging to a message are then sent over this virtual path. Using a
single virtual pathway for the entire message facilitates the
acknowledgment process as well as retransmission of damaged or
lost frames. In TCP, connection-oriented transmission requires two
procedures:
(1) Connection Establishment and
(2) Connection Termination.
Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two
machines are connected, they are able to send segments to each
other simultaneously. This implies that each party must initialize
communication and get approval from the other party before any
data transfer.
Four steps are needed to establish the connection, as discussed
before.
However, the second and third steps can be combined to create
a three-step connection, called a three-way handshake, as
shown in Figure.
segment 1. The server must also define the client window size.
Second, the segment is used as the initialization segment for
the server. It contains the initialization sequence number used
to number the bytes sent from the server to the client.
(3) The client sends the third segment. This is just an ACK
segment. It acknowledges the receipt of the second segment,
using the ACK flag and acknowledgment number field. Note
that the acknowledgment number is the server initialization
sequence number plus 1 because no user data have been sent
in segment 2. The client must also define the server window
size. Data can be sent with the third packet.
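In practice the three-way handshake is carried out by the operating system's TCP implementation. The minimal client sketch below uses Berkeley sockets; calling connect() causes the kernel to send the SYN, wait for the server's SYN+ACK and return after the final ACK. The address 192.0.2.10 and port 8080 are placeholder values chosen for illustration.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Minimal client sketch: the three-way handshake described above is
 * performed inside connect(). */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(8080);               /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);  /* placeholder address */

    /* connect() sends the SYN, waits for the server's SYN+ACK and
     * replies with the final ACK before returning success. */
    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }
    printf("connection established (handshake complete)\n");
    close(fd);                                  /* begins the FIN exchange */
    return 0;
}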
Connection Termination
Any of the two parties involved in exchanging data (client or server)
can close the connection. When connection in one direction is
terminated, the other party can continue sending data in the other
direction. Therefore, four steps are needed to close the connections
in both directions, as shown in Figure.
(4) The client TCP sends the fourth segment, an ACK segment, to
confirm the receipt of the FIN segment from the TCP server.
Note that the acknowledgment number is 1 plus the sequence
number received in the FIN segment from the server.
Connection Resetting:
TCP may request the resetting of a connection. Resetting
here means that the current connection is destroyed. This happens
in one of three cases:
(1) The TCP on one side has requested a connection to a
nonexistent port. The TCP on the other side may send a
segment with its RST bit set to annul the request.
(2) One TCP may want to abort the connection due to an abnormal
situation. It can send an RST segment to close the connection.
(3) The TCP on one side may discover that the TCP on the other
side has been idle for a long time. It may send an RST
segment to destroy the connection
When is a TCP connection open, and when is it half-open?
A three-step process is shown in Figure above. After the server
receives the initial SYN packet, the connection is in a half-opened
state. The server replies with its own sequence number, and
awaits an acknowledgment, the third and final packet of a TCP
open.
Attackers have gamed this half-open state. SYN attacks flood
the server with the first packet only, hoping to swamp the host with
half-open connections that will never be completed. In addition, the
first part of this three-step process can be used to detect active
TCP services without alerting the application programs, which
usually aren't informed of incoming connections until the three-packet handshake is complete.
The sequence numbers have another function. Because the
initial sequence number for new connections changes constantly, it
is possible for TCP to detect stale packets from previous
incarnations of the same circuit (i.e., from previous uses of the
same 4-tuple).
There is also a modest security benefit: A connection
cannot be fully established until both sides have
acknowledged the other's initial sequence number.
Node-to-Node, Host-to-Host and Process-to-Process deliveries:
Connection-oriented vs. Connectionless:

Definition
  Connection-oriented: A characteristic of a network system that requires a pair of computers to establish a connection before sending data. Example: telephone line.
  Connectionless: A characteristic of a network system that allows a computer to send data to any other computer at any time without any prerequisite of destination connection. Example: postal system.

PDU movement
  Connection-oriented: Sequential.

Modus Operandi
  Connection-oriented: Three-way handshake.

Decision on path
  Connectionless: At every node.

Sequence

Example
  Connection-oriented: TCP.
  Connectionless: UDP.

Reliability
  Connection-oriented: Reliable.
  Connectionless: Unreliable.
Data unit    Layer             Function

Host layers:
  Data       7. Application
             6. Presentation
             5. Session        Inter-host communication
  Segments   4. Transport      End-to-end connections, reliability, flow control

Media layers:
  Packet     3. Network
  Frame      2. Data Link      Physical addressing
  Bit        1. Physical       Media, signal and binary transmission
makes these belong to the Network Layer, not the protocol that
carries them.
Layer 4: Transport Layer
The Transport Layer provides transparent transfer of data
between end users, providing reliable data transfer services to the
upper layers. The Transport Layer controls the reliability of a given
link through flow control, segmentation/desegmentation, and error
control. Some protocols are state and connection oriented. This
means that the Transport Layer can keep track of the segments
and retransmit those that fail. The Transport Layer also provides acknowledgement of successful data transmission and sends the next data if no errors occurred.
Although not developed under the OSI Reference Model and
not strictly conforming to the OSI definition of the Transport Layer,
typical examples of Layer 4 are the Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP).
Of the actual OSI protocols, there are five classes of
connection-mode transport protocols ranging from class 0 (which is
also known as TP0 and provides the least features) to class 4 (TP4,
designed for less reliable networks, similar to the Internet). Class 0
contains no error recovery, and was designed for use on network
layers that provide error-free connections. Class 4 is closest to
TCP, although TCP contains functions, such as the graceful close,
which OSI assigns to the Session Layer. Also, all OSI TP
connection-mode protocol classes provide expedited data and
preservation of record boundaries, both of which TCP is incapable of providing.
Detailed characteristics of TP0-4 classes are shown in the following
table:
Feature Name                                              TP0   TP1   TP2   TP3   TP4
Connection-oriented network                               Yes   Yes   Yes   Yes   Yes
Connectionless network                                    No    No    No    No    Yes
Concatenation and separation                              No    Yes   Yes   Yes   Yes
Segmentation and reassembly                               Yes   Yes   Yes   Yes   Yes
Error recovery                                            No    Yes   No    Yes   Yes
Reinitiation of connection                                No    Yes   No    Yes   No
Multiplexing and demultiplexing over a single
  virtual circuit                                         No    No    Yes   Yes   Yes
Explicit flow control                                     No    No    Yes   Yes   Yes
Retransmission on timeout                                 No    No    No    No    Yes
Reliable transport service                                No    Yes   No    Yes   Yes
more robust set of services, the Open Shortest Path First (OSPF) protocol has
been proposed in a separate set of Requests For Comment (RFC).
Link Services
At the link-level, the Internet protocols need only provide
delivery of completed packets. While reliable service can be
implemented, it is not necessary, and in some cases could actually
impede the performance of TCP/IP based systems with additional
retransmissions. Where reliability is required, TCP based services
are generally used. Standards for exchange of Internet traffic have
been developed for a large number of physical links
including Ethernet, serial lines, Token Ring LANs, ATM, ISDN and
others. The Point To Point protocol is often used to support the
exchange of IP packets between interconnected nodes. In general
the link protocol services include mechanisms to perform the
following:
O.S.I. Model          T.C.P./I.P. Model
Application Layer     Application Layer
Presentation Layer
Session Layer
Transport Layer       Transport Layer
Network Layer         Internet Layer
Data Link Layer       Host to Network
Physical Layer
Comparison of the O.S.I. Model and T.C.P./I.P.:

Expand the acronym
  O.S.I.: Open System Interconnect
  T.C.P./I.P.: Transmission Control Protocol / Internet Protocol

No. of layers
  O.S.I.: 7
  T.C.P./I.P.: 4

Protocols

Orientation

Services

Suitability

Physical layer
  O.S.I.: Data Link & Physical are separate layers
  T.C.P./I.P.: Does not even mention these layers

Top layers
  O.S.I.: Application, Presentation and Session are separate layers
  T.C.P./I.P.: Does not have separate Session and Presentation layers; they are part of the Application Layer
UNIT 3
Physical Layer
6
TRANSMISSION MEDIUM
Unit Structure
6.1. Transmission Media
6.1.1. Guided Media
6.1.1.1. Twisted Pair
6.1.1.2. Coaxial Cable
6.1.1.3. Fiber Optics
6.1.2. Unguided Media
6.2. Wireless transmission
6.3. Electromagnetic Spectrum
6.4. Radio Transmission
6.5. Microwave transmission
6.6. Infra red waves
6.6.1. Millimeter waves
6.6.2. Applications of millimeter waves
6.7. Light wave transmission
the size and spacing of the conductors and the type of dielectric
used between them. Balanced pair, or twin lines, has a Zo which
depends on the ratio of the wire spacing to wire diameter and the
foregoing remarks still apply. For practical lines, Zo at high
frequencies is very nearly, but not exactly, a pure resistance.
Because the impedance of a cable is a function of the spacing of the conductors, separating the conductors significantly changes the cable impedance at that point.
When many twisted pairs are put together to form a multi-pair cable, the individual conductors are twisted into pairs with varying twist rates to minimize crosstalk. Specified color combinations provide pair identification.
The most commonly used twisted pair cable impedance is
100 ohms. It is widely used for data communications and
telecommunications applications in structured cabling systems. In
most twisted pair cable applications the cable impedance is
between 100 ohms and 150 ohms. When a cable has a long
distance between the conductors, higher impedances are possible.
Typical wire conductor sizes for cables used in telecommunications are 26, 24, 22 or 19 AWG.
Here is some common impedance related to twisted pair lines:
120 ohms: 120 ohm shielded cable is generally used for RS-485 communications in industrial networking. There are
many industrial "control and data" cables which have
impedance of around 120 ohms. Also some telecom cables
(both shielded and unshielded) have impedance of 120 ohms,
and there are digital telecom systems matched to this
impedance also (for example some E1 systems).
Step index fiber has a large core, so the light rays tend to bounce around, reflecting off the cladding inside the core. This causes some rays to take a longer or shorter path through the core. Some take the direct path with hardly any reflections while others bounce back and forth, taking a longer path. The result is that the light rays arrive at the receiver at different times, and the received pulse becomes longer (spread out) compared with the original signal. LED light sources are used. Typical core: 62.5 microns.
Single Mode:
When the fiber core is so small that only a light ray at a 0° incident angle can stably pass through the length of the fiber without much loss, this kind of fiber is called single mode fiber. The basic requirement for single mode fiber is that the core be small enough to restrict transmission to a single mode. This lowest-order mode
can propagate in all fibers with smaller cores (as long as light can
physically enter the fiber).
The most common type of single mode fiber has a core
diameter of 8 to 10 µm and is designed for use in the near infrared
(the most common are 1310nm and 1550nm). Please note that the
mode structure depends on the wavelength of the light used, so
that this fiber actually supports a small number of additional modes
at visible wavelengths. Multi mode fiber, by comparison, is
manufactured with core diameters as small as 50 µm and as large
as hundreds of microns.
The following picture shows the fiber structure of a single
mode fiber.
Region of the spectrum              Main interactions with matter

Radio
Microwave through far infrared      Plasma oscillation, molecular rotation
Near infrared
Visible
Ultraviolet
X-rays
Gamma rays                          Energetic ejection of core electrons in heavy elements; Compton scattering (for all atomic numbers); excitation of atomic nuclei, including dissociation of nuclei
considerations
Advantages:
No cables needed
Multiple channels available
Wide bandwidth
Disadvantages:
Line-of-sight will be disrupted if any obstacle, such as a new building, comes between the transmitter and the receiver.
Band             Starts at    Ends at     Wavelength (start)   Wavelength (end)
VHF              30 MHz       300 MHz     10 meters            1 meter
UHF              300 MHz      3 GHz       1 meter              10 cm
SHF              3 GHz        30 GHz      10 cm                1 cm
EHF              30 GHz       300 GHz     1 cm                 1 mm
Sub-millimeter   300 GHz      Infinity    1 mm                 zero
UNIT 4
Data Link Layer
7
DATA LINK LAYER
Unit Structure
7.1. Design Issues
7.1.1. Reliable Delivery:
7.1.2. Best Effort:
7.1.3. Acknowledged Delivery:
7.1.4. Framing
7.1.5. Length Count:
7.1.6. Bit Stuffing:
7.1.7. Character stuffing:
7.1.8. Encoding Violations:
7.1.9.Error Control
7.2. Error correction and detection
7.2.1. General strategy:
7.2.2. Error-correcting codes :
7.2.3. Hamming Code (1 bit error correction)
7.2.4. Error-detecting codes
7.2.5. Modulo 2 arithmetic
7.2.6. Error detection with CRC
3. Receiver recomputes the checksum and compares it with the received value. If they differ, an error has occurred and the frame is discarded (see the sketch after this list).
4. Perhaps return a positive or negative acknowledgment to the sender. A positive acknowledgment indicates the frame was received without errors, while a negative acknowledgment indicates the opposite.
5. Flow control. Prevent a fast sender from overwhelming a
slower receiver. For example, a supercomputer can easily
generate data faster than a PC can consume it.
6. In general, provide service to the network layer. The network
layer wants to be able to send packets to its neighbors
without worrying about the details of getting it there in one
piece.
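The sketch below illustrates the control flow of item 3: the receiver recomputes the checksum and discards the frame on a mismatch. A simple byte-sum stands in for the real checksum purely for brevity; a data link layer would normally use a CRC (see section 7.2.6).

#include <stdint.h>
#include <stdio.h>

/* Control-flow sketch: the receiver recomputes the checksum and
 * discards the frame on a mismatch.  The byte-sum checksum is a
 * stand-in chosen only to keep the example short. */
static uint8_t byte_sum(const uint8_t *data, size_t len)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

int main(void)
{
    uint8_t frame[] = { 'h', 'i', 0 };                 /* payload + checksum byte */
    size_t payload_len = sizeof frame - 1;
    frame[payload_len] = byte_sum(frame, payload_len); /* sender appends checksum */

    /* receiver side */
    if (byte_sum(frame, payload_len) == frame[payload_len])
        printf("frame accepted, send positive acknowledgment\n");
    else
        printf("error detected, frame discarded\n");
    return 0;
}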
The data link layer uses open operations for allocating buffer
space, control blocks, agreeing on the maximum message
size, etc.
Any data section (length m) is valid (we allow any 0,1 bit
string).
If errors are still getting through, reduce the block size, so there will be fewer errors per block.
(Table: Hamming code check-bit coverage. Each check bit occupies a power-of-two position (1, 2, 4, 8, 16, ...) and covers every bit position whose binary expansion contains that power of two.)
How it works
21 (as a sum of powers of 2) = 1 + 4 + 16.
Therefore bit 21 is checked by check bits 1, 4 and 16.
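The rule above can be checked mechanically: a position is covered by exactly those check bits (at positions 1, 2, 4, 8, 16, ...) that appear in its binary expansion. A small sketch:

#include <stdio.h>

/* Which check bits cover a given bit position?  A Hamming-code check
 * bit sits at each power-of-two position, and a position is covered by
 * exactly those check bits whose values appear in its binary expansion.
 * For position 21 = 16 + 4 + 1 this prints check bits 1, 4 and 16,
 * matching the example above. */
int main(void)
{
    int position = 21;
    printf("bit %d is checked by check bits:", position);
    for (int check = 1; check <= position; check <<= 1)
        if (position & check)
            printf(" %d", check);
    printf("\n");
    return 0;
}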
Arrange the data in a block of width n and height k. A last row of parity bits is appended (one parity bit for col 1, col 2, ..., col n). Transmit the n(k+1)-bit block row by row.
1. Any burst of length up to n in the data bits will leave at most
1 error in each col.
Can be detected.
2. If burst is in the parity bits row, they will be wrong and we
will detect there has been an error.
Note: Don't know where error was (this is detection not
correction). Will ask for re-transmit (don't know that data bits
were actually ok).
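A minimal sketch of the scheme just described, assuming a small block of width n = 8 and height k = 3 with even column parity. A burst of length up to n corrupts at most one bit per column, so at least one column parity check fails and the error is detected (though not located).

#include <stdio.h>

#define N 8   /* block width  (bits per row) */
#define K 3   /* block height (data rows)    */

/* Append a row of even column parity bits to a K x N data block. */
static void add_parity_row(int block[K + 1][N])
{
    for (int col = 0; col < N; col++) {
        int p = 0;
        for (int row = 0; row < K; row++)
            p ^= block[row][col];
        block[K][col] = p;                 /* even parity for this column */
    }
}

/* Recompute each column parity at the receiver; 0 means an error. */
static int check(int block[K + 1][N])
{
    for (int col = 0; col < N; col++) {
        int p = 0;
        for (int row = 0; row <= K; row++)
            p ^= block[row][col];
        if (p != 0)
            return 0;                      /* parity violated: error detected */
    }
    return 1;
}

int main(void)
{
    int block[K + 1][N] = {
        { 1, 0, 1, 1, 0, 0, 1, 0 },
        { 0, 1, 1, 0, 1, 0, 0, 1 },
        { 1, 1, 0, 0, 0, 1, 1, 0 },
    };
    add_parity_row(block);

    block[1][3] ^= 1;                      /* simulate a transmission error */
    printf("block %s\n", check(block) ? "accepted" : "rejected (error detected)");
    return 0;
}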
(Cyclic Redundancy Check)
0+0=0
0+1=1
1+0=1
1+1=0
Multiplication:
0*0=0
0*1=0
1*0=0
1*1=1
Subtraction: if 1+1=0, then 0-1=1, hence:
0-0=0
0-1=1
1-0=1
1-1=0
Long division is as normal, except the subtraction is modulo 2.
Example (no carry or borrow):
  011
+ 110   (or minus)
-----
  101
Consider the polynomials:
  (x + 1) + (x^2 + x) = x^2 + 2x + 1 = x^2 + 1
We're saying the polynomial arithmetic is modulo 2 as well, so that 2x^k = 0 for all k.
7.2.6. Error detection with CRC
Consider a message 110010 represented by the polynomial M(x) = x^5 + x^4 + x.
Consider a generating polynomial G(x) = x^3 + x^2 + 1 (1101).
This is used to generate a 3 bit CRC = C(x) to be appended to
M(x).
Note this G(x) is prime.
Steps:
1. Multiply M(x) by x^3 (the highest power in G(x)), i.e. add 3 zeros.
110010000
2. Divide the result by G(x). The remainder = C(x).
1101 long division into 110010000 (with subtraction mod 2)
= 100100 remainder 100
Special case: This won't work if bit string = all zeros. We don't
allow such an M(x).
But M(x) bit string = 1 will work, for example. Can divide 1101
into 1000.
If: x div y gives remainder c
that means: x = n y + c
Hence (x-c) = n y
(x-c) div y gives remainder 0
Here (x-c) = (x+c)
Hence (x+c) div y gives remainder 0
110010000 + 100 should give remainder 0.
3. Transmit 110010000 + 100
To be precise, transmit: T(x) = x^3·M(x) + C(x)
= 110010100
4. Receiver end: Receive T(x). Divide by G(x), should have
remainder 0.
Note: if G(x) has order n (highest power x^n), then G(x) will cover (n+1) bits and the remainder will cover n bits, i.e. n bits are added to the message.
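The division above can be carried out mechanically with XOR as the modulo-2 subtraction. The sketch below follows the steps of the worked example and prints the remainder 100 and the transmitted frame 110010100.

#include <stdio.h>

/* CRC computation by modulo-2 long division, following the steps above.
 * The message 110010 is M(x) = x^5 + x^4 + x and the generator 1101 is
 * G(x) = x^3 + x^2 + 1, so three zero bits are appended and the 3-bit
 * remainder becomes the CRC. */

static void print_bits(unsigned value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar((value >> i) & 1u ? '1' : '0');
}

static unsigned crc_remainder(unsigned message, int msg_bits,
                              unsigned generator, int gen_bits)
{
    int crc_bits = gen_bits - 1;
    unsigned reg = message << crc_bits;      /* step 1: multiply M(x) by x^3 */

    /* step 2: long division, where subtraction modulo 2 is XOR */
    for (int bit = msg_bits - 1; bit >= 0; bit--)
        if (reg & (1u << (bit + crc_bits)))
            reg ^= generator << bit;
    return reg;                              /* the remainder C(x) */
}

int main(void)
{
    unsigned message = 0x32;                 /* 110010 */
    unsigned generator = 0x0D;               /* 1101 = x^3 + x^2 + 1 */
    unsigned crc = crc_remainder(message, 6, generator, 4);

    printf("remainder C(x) = ");
    print_bits(crc, 3);                      /* prints 100 */
    printf("\ntransmitted T(x) = ");
    print_bits((message << 3) | crc, 9);     /* prints 110010100 */
    putchar('\n');
    return 0;
}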
8
DATA LINK LAYER PROTOCOLS
Unit Structure
8.1. Elementary data link protocols
8.1.1. An unrestricted simplex protocol
8.1.2. A simplex stop-and-wait protocol
8.1.3. A simplex protocol for a noisy channel
The physical layer, data link layer, and network layer are
independent processes.
seq_nr ack;      /* acknowledgement number */
packet info;     /* the network layer packet */
} frame;
while (true) {
    from_network_layer(&buffer);   /* get something to send */
    s.info = buffer;               /* copy it into s for transmission */
    to_physical_layer(&s);         /* send it on its way */
}
}

void receiver1(void)
{
frame r;
event_type event;                  /* filled in by wait, but not used here */
while (true) {
    wait_for_event(&event);        /* only possibility is frame_arrival */
    from_physical_layer(&r);       /* go get the inbound frame */
    to_network_layer(&r.info);     /* pass the data to the network layer */
}
}
8.1.2. A simplex stop-and-wait protocol
Assumptions:
Data transmission in one direction only.
Channel is error free.
How to prevent the sender from flooding the receiver with data ?
A general solution:
}
}

void receiver2(void)
{
frame r, s;                        /* buffers for frames */
event_type event;                  /* filled in by wait, but not used here */
while (true) {
    wait_for_event(&event);
event_type event;
next_frame_to_send = 0;            /* initialize outbound sequence numbers */
wait_for_event(&event);
if(event == frame_arrival) {
from_network_layer(&buffer);/* get the next one to send */
inc(next_frame_to_send); /* invert next_frame_to_send */
}
}
}
void receiver3(void) {
seq_nr frame_expected;
frame r, s;
event_type event;
frame_expected = 0;
while(true) {
wait_for_event(&event);
/* frame_arrival, cksum_err */
9
SLIDING WINDOW PROTOCOLS
Unit Structure
9.1. SLIDING WINDOW PROTOCOLS
9.1.1 A One-Bit Sliding Window Protocol
9.1.2 A Protocol Using Go Back N
9.1.3. A Protocol Using Selective Repeat
9.2. HDLC
9.2.1. HDLC STATIONS AND CONFIGURATIONS
9.2.2. Stations
9.2.3. HDLC Operational Modes
9.2.4. HDLC Non-Operational Modes
9.2.5. HDLC Frame Structure
9.2.6. HDLC COMMANDS AND RESPONSES
(a) Initially. (b) After the first frame has been sent. (c) After the first frame has been received. (d) After the first acknowledgement has been received.
Because the window size is 1, the data link layer only accepts frames in order, but for larger windows this is not so. The network layer, in contrast, is always fed data in the proper order, regardless of the data link layer's window size.
The figure shows an example with a maximum window size
of 1. Initially, no frames are outstanding, so the lower and upper
edges of the sender's window are equal, but as time goes on, the
situation progresses as shown.
9.1.1 A One-Bit Sliding Window Protocol
Before tackling the general case, let us first examine a
sliding window protocol with a maximum window size of 1. Such a
protocol uses stop-and-wait since the sender transmits a frame and
waits for its acknowledgement before sending the next one.
Like the others, it starts out by defining some variables. Next_frame_to_send tells which frame the sender is trying to send. Similarly, frame_expected tells which frame the receiver is expecting. In both cases, 0 and 1 are the only possibilities.
Under normal circumstances, one of the two data link layers goes first and transmits the first frame. In other words, only one of the data link layer programs should contain the to_physical_layer and start_timer procedure calls outside the main loop.
typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"

void protocol4(void)
{
  seq_nr next_frame_to_send;            /* 0 or 1 only */
  seq_nr frame_expected;                /* 0 or 1 only */
  frame r, s;                           /* scratch variables */
  packet buffer;                        /* current packet being sent */
  event_type event;

  next_frame_to_send = 0;               /* next frame on the outbound stream */
  frame_expected = 0;                   /* frame expected next */
  from_network_layer(&buffer);          /* fetch a packet from the network layer */
  s.info = buffer;                      /* prepare to send the initial frame */
  s.seq = next_frame_to_send;           /* insert sequence number into frame */
  s.ack = 1 - frame_expected;           /* piggybacked ack */
  to_physical_layer(&s);                /* transmit the frame */
  start_timer(s.seq);                   /* start the timer running */

  while (true) {
    wait_for_event(&event);             /* frame_arrival, cksum_err, or timeout */
    if (event == frame_arrival) {       /* a frame has arrived undamaged */
      from_physical_layer(&r);          /* go get it */
      if (r.seq == frame_expected) {    /* handle inbound frame stream */
        to_network_layer(&r.info);      /* pass packet to network layer */
        inc(frame_expected);            /* invert seq number expected next */
      }
      if (r.ack == next_frame_to_send) { /* handle outbound frame stream */
        stop_timer(r.ack);              /* turn the timer off */
        from_network_layer(&buffer);    /* fetch a new packet from the network layer */
        inc(next_frame_to_send);        /* invert sender's sequence number */
      }
    }
    s.info = buffer;                    /* construct outbound frame */
    s.seq = next_frame_to_send;         /* insert sequence number into it */
    s.ack = 1 - frame_expected;         /* seq number of last received frame */
    to_physical_layer(&s);              /* transmit a frame */
    start_timer(s.seq);                 /* start the timer running */
  }
}
The starting machine fetches the first packet from its network
layer, builds a frame from it, and sends it. When this (or any) frame
arrives, the receiving data link layer checks to see if it is a
duplicate, just as in protocol 3. If the frame is the one expected, it is
passed to the network layer and the receiver's window is slid up.
The acknowledgement field contains the number of the last
frame received without error. If this number agrees with the
sequence number of the frame the sender is trying to send, the
sender knows it is done with the frame stored in buffer and can
fetch the next packet from its network layer. If the sequence
number disagrees, it must continue trying to send the same frame.
Whenever a frame is received, a frame is also sent back.
Now let us examine protocol 4 to see how resilient it is to
pathological scenarios. Assume that computer A is trying to send its
frame 0 to computer B and that B is trying to send its frame 0 to A.
Suppose that A sends a frame to B, but A's timeout interval is a little too short. Consequently, A may time out repeatedly, sending a series of identical frames, all with seq = 0 and ack = 1.
When the first valid frame arrives at computer B, it will be accepted and frame_expected will be set to 1. All the subsequent frames will be rejected because B is now expecting frames with sequence number 1, not 0. Furthermore, since all the duplicates have ack = 1 and B is still waiting for an acknowledgement of 0, B will not fetch a new packet from its network layer.
the first frame. At t = 20 msec the frame has been completely sent.
Not until t = 270 msec has the frame fully arrived at the receiver,
and not until t = 520 msec has the acknowledgement arrived back
at the sender, under the best of circumstances (no waiting in the
receiver and a short acknowledgement frame).
This means that the sender was blocked during 500/520 or
96 percent of the time. In other words, only 4 percent of the
available bandwidth was used. Clearly, the combination of a long
transit time, high bandwidth, and short frame length is disastrous in
terms of efficiency.
The problem described above can be viewed as a
consequence of the rule requiring a sender to wait for an
acknowledgement before sending another frame. If we relax that
restriction, much better efficiency can be achieved. Basically, the
solution lies in allowing the sender to transmit up to w frames
before blocking, instead of just 1. With an appropriate choice of w
the sender will be able to continuously transmit frames for a time
equal to the round-trip transit time without filling up the window. In
the example above, w should be at least 26. The sender begins
sending frame 0 as before. By the time it has finished sending 26
frames, at t = 520, the acknowledgement for frame 0 will have just
arrived. Thereafter, acknowledgements arrive every 20 msec, so
the sender always gets permission to continue just when it needs it.
At all times, 25 or 26 unacknowledged frames are outstanding. Put
in other terms, the sender's maximum window size is 26.
The need for a large window on the sending side occurs
whenever the product of bandwidth round-trip-delay is large. If the
bandwidth is high, even for a moderate delay, the sender will
exhaust its window quickly unless it has a large window. If the delay
is high (e.g., on a geostationary satellite channel), the sender will
exhaust its window even for a moderate bandwidth. The product of
these two factors basically tells what the capacity of the pipe is, and
the sender needs the ability to fill it without stopping in order to
operate at peak efficiency.
This technique is known as pipelining. If the channel
capacity is b bits/sec, the frame size l bits, and the round-trip
propagation time R sec, the time required to transmit a single frame
is l /b sec. After the last bit of a data frame has been sent, there is a
delay of R /2 before that bit arrives at the receiver and another
delay of at least R /2 for the acknowledgement to come back, for a
total delay of R. In stop-and-wait the line is busy for l /b and idle for
R, giving line utilization = l /(l + bR) If l < bR, the efficiency will be
less than 50 percent. Since there is always a nonzero delay for the
acknowledgement to propagate back, pipelining can, in principle,
be used to keep the line busy during this interval, but if the interval
is small, the additional complexity is not worth the trouble.
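The figures quoted above can be checked numerically. The values below (a 50 kbit/s channel, 1,000-bit frames and a 500 msec round trip) are inferred from the 20/270/520 msec timeline; they are assumptions consistent with the text rather than stated parameters.

#include <stdio.h>

/* Numeric check of the stop-and-wait vs. pipelining discussion above,
 * assuming a 50 kbit/s channel, 1,000-bit frames (20 msec to transmit)
 * and a 500 msec round-trip time. */
int main(void)
{
    double b = 50000.0;    /* channel capacity, bits/sec (assumed) */
    double l = 1000.0;     /* frame length, bits (assumed)         */
    double R = 0.5;        /* round-trip time, sec (assumed)       */

    double utilization = l / (l + b * R);   /* stop-and-wait: l / (l + bR) */
    double w = (l + b * R) / l;             /* frames needed to fill the pipe */

    printf("stop-and-wait utilization = %.1f%%\n", utilization * 100.0); /* about 4%% */
    printf("window needed to keep the line busy: about %.0f frames\n", w); /* 26 */
    return 0;
}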
timer for frame 2 expires. Then it backs up to frame 2 and starts all
over with it, sending 2, 3, 4, etc. all over again.
The other general strategy for handling errors when frames
are pipelined is called selective repeat. When it is used, a bad
frame that is received is discarded, but good frames received after
it are buffered. When the sender times out, only the oldest
unacknowledged frame is retransmitted. If that frame arrives
correctly, the receiver can deliver to the network layer, in sequence,
all the frames it has buffered. Selective repeat is often combined
with having the receiver send a negative acknowledgement (NAK)
when it detects an error, for example, when it receives a checksum
error or a frame out of sequence. NAKs stimulate retransmission
before the corresponding timer expires and thus improve
performance.
In Fig. (b), frames 0 and 1 are again correctly received and
acknowledged and frame 2 is lost. When frame 3 arrives at the
receiver, the data link layer there notices that it has missed a
frame, so it sends back a NAK for 2 but buffers 3. When frames 4
and 5 arrive, they, too, are buffered by the data link layer instead of
being passed to the network layer. Eventually, the NAK 2 gets back
to the sender, which immediately resends frame 2. When that
arrives, the data link layer now has 2, 3, 4, and 5 and can pass all of them to the network layer in the correct order. Protocol 5 (go back n) allows multiple outstanding frames.
The sender may transmit up to MAX_SEQ frames without waiting for an ack. In addition, unlike in the previous protocols, the network layer is not assumed to have a new packet all the time. Instead, the network layer causes a network_layer_ready event when there is a packet to send.
#define MAX_SEQ 7                       /* should be 2^n - 1 */
typedef enum {frame_arrival, cksum_err, timeout, network_layer_ready} event_type;
#include "protocol.h"

static boolean between(seq_nr a, seq_nr b, seq_nr c)
{
  /* Return true if a <= b < c circularly; false otherwise. */
  if (((a <= b) && (b < c)) || ((c < a) && (a <= b)) || ((b < c) && (c < a)))
    return(true);
  else
    return(false);
}
static void send_data(seq_nr frame_nr, seq_nr frame_expected, packet buffer[ ])
{
  /* Construct and send a data frame. */
  frame s;                                            /* scratch variable */
  s.info = buffer[frame_nr];                          /* insert packet into frame */
  s.seq = frame_nr;                                   /* insert sequence number into frame */
  s.ack = (frame_expected + MAX_SEQ) % (MAX_SEQ + 1); /* piggyback ack */
  to_physical_layer(&s);                              /* transmit the frame */
  start_timer(frame_nr);                              /* start the timer running */
}
void protocol5(void)
{
  seq_nr next_frame_to_send;            /* MAX_SEQ > 1; used for outbound stream */
  seq_nr ack_expected;                  /* oldest frame as yet unacknowledged */
  enable_network_layer();               /* allow network_layer_ready events */
it must be kept within the data link layer and not passed to the
network layer until all the lower-numbered frames have already
been delivered to the network layer in the correct order. A protocol
using this algorithm is given in the above mentioned figure.
Non sequential receive introduces certain problems not
present in protocols in which frames are only accepted in order. We
can illustrate the trouble most easily with an example. Suppose that
we have a 3-bit sequence number, so that the sender is permitted
to transmit up to seven frames before being required to wait for an
acknowledgement. Initially, the sender's and receiver's windows
are as shown in the following figure (a).
typedef enum {frame_arrival, cksum_err, timeout, network_layer_ready, ack_timeout} event_type;
#include "protocol.h"

boolean no_nak = true;                  /* no nak has been sent yet */
seq_nr oldest_frame = MAX_SEQ + 1;      /* initial value is only for the simulator */

static boolean between(seq_nr a, seq_nr b, seq_nr c)
{
  /* Same as between in protocol5, but shorter and more obscure. */
  return ((a <= b) && (b < c)) || ((c < a) && (a <= b)) || ((b < c) && (c < a));
}

static void send_frame(frame_kind fk, seq_nr frame_nr, seq_nr frame_expected, packet buffer[ ])
{
  /* Construct and send a data, ack, or nak frame. */
  frame s;                                            /* scratch variable */
  s.kind = fk;                                        /* kind == data, ack, or nak */
  if (fk == data) s.info = buffer[frame_nr % NR_BUFS];
  s.seq = frame_nr;                                   /* only meaningful for data frames */
  s.ack = (frame_expected + MAX_SEQ) % (MAX_SEQ + 1);
  if (fk == nak) no_nak = false;                      /* one nak per frame, please */
  to_physical_layer(&s);                              /* transmit the frame */
  if (fk == data) start_timer(frame_nr % NR_BUFS);
  stop_ack_timer();                                   /* no need for separate ack frame */
}
void protocol6(void)
{
  seq_nr ack_expected;                  /* lower edge of sender's window */
  seq_nr next_frame_to_send;            /* upper edge of sender's window + 1 */
  seq_nr frame_expected;                /* lower edge of receiver's window */
  seq_nr too_far;                       /* upper edge of receiver's window + 1 */
  int i;                                /* index into buffer pool */
  frame r;                              /* scratch variable */
  packet out_buf[NR_BUFS];              /* buffers for the outbound stream */
  packet in_buf[NR_BUFS];               /* buffers for the inbound stream */
  boolean arrived[NR_BUFS];             /* inbound bit map */
  seq_nr nbuffered;                     /* how many output buffers currently used */
  event_type event;

  enable_network_layer();               /* initialize */
  ack_expected = 0;                     /* next ack expected on the inbound stream */
  next_frame_to_send = 0;               /* number of next outgoing frame */
  frame_expected = 0;
  too_far = NR_BUFS;
  nbuffered = 0;                        /* initially no packets are buffered */
  for (i = 0; i < NR_BUFS; i++) arrived[i] = false;

  while (true) {
    wait_for_event(&event);             /* five possibilities: see event_type above */
    switch (event) {
      case network_layer_ready:         /* accept, save, and transmit a new frame */
        nbuffered = nbuffered + 1;      /* expand the window */
        from_network_layer(&out_buf[next_frame_to_send % NR_BUFS]); /* fetch new packet */
        send_frame(data, next_frame_to_send, frame_expected, out_buf); /* transmit the frame */
          send_frame(nak, 0, frame_expected, out_buf);
        else
          start_ack_timer();
        if (between(frame_expected, r.seq, too_far) && (arrived[r.seq % NR_BUFS] == false)) {
other than the expected one arrived (potential lost frame). To avoid making multiple requests for retransmission of the same lost frame, the receiver should keep track of whether a NAK has already been sent for a given frame. The variable no_nak in protocol 6 is true if no NAK has been sent yet for frame_expected. If the NAK gets mangled or lost, no real harm is done, since the sender will eventually time out and retransmit the missing frame anyway. If the wrong frame arrives after a NAK has been sent and lost, no_nak will be true and the auxiliary timer will be started. When it expires, an ACK will be sent to resynchronize the sender to the receiver's current status.
In some situations, the time required for a frame to
propagate to the destination, be processed there, and have the
acknowledgement come back is (nearly) constant. In these
situations, the sender can adjust its timer to be just slightly larger
than the normal time interval expected between sending a frame
and receiving its acknowledgement. However, if this time is highly
variable, the sender is faced with the choice of either setting the interval to a small value (and risking unnecessary retransmissions),
or setting it to a large value (and going idle for a long period after
an error).
Both choices waste bandwidth. If the reverse traffic is
sporadic, the time before acknowledgement will be irregular, being
shorter when there is reverse traffic and longer when there is not.
Variable processing time within the receiver can also be a problem
here. In general, whenever the standard deviation of the
acknowledgement interval is small compared to the interval itself,
the timer can be set tight and NAKs are not useful. Otherwise the
timer must be set loose, to avoid unnecessary retransmissions,
but NAKs can appreciably speed up retransmission of lost or
damaged frames.
Closely related to the matter of time outs and NAKs is the
question of determining which frame caused a timeout. In protocol
5, it is always ack_expected, because it is always the oldest. In
protocol 6, there is no trivial way to determine who timed out.
Suppose that frames 0 through 4 have been transmitted, meaning
that the list of outstanding frames is 01234, in order from oldest to
youngest. Now imagine that 0 times out, 5 (a new frame) is
transmitted, 1 times out, 2 times out, and 6 (another new frame) is
transmitted. At this point the list of outstanding frames is 3405126,
from oldest to youngest. If all inbound traffic (i.e.,
acknowledgement- bearing frames) is lost for a while, the seven
outstanding frames will time out in that order.
9.2. HDLC
High-Level Data Link Control, also known as HDLC, is a bit-oriented, switched and non-switched protocol. It is a data link control protocol, and falls within layer 2, the Data Link Layer, of the Open Systems Interconnection (OSI) model as shown in the following
figure.
Figure 1
HDLC is a protocol developed by the International
Organization for Standardization (ISO). It falls under the ISO
standards ISO 3309 and ISO 4335. It has found itself being used
throughout the world. It has been so widely implemented because it
supports both half duplex and full duplex communication lines, point
to point(peer to peer) and multi-point networks, and switched or
non-switched channels. The procedures outlined in HDLC are
designed to permit synchronous, code-transparent data
transmission. Other benefits of HDLC are that the control
information is always in the same position, and specific bit patterns
used for control differ dramatically from those in representing data,
which reduces the chance of errors.
It has also led to many subsets. Two subsets widely in use
are Synchronous Data Link Control(SDLC) and Link Access
Procedure-Balanced (LAP-B).
9.2.2. Stations:
HDLC also defines three types of configurations for the three
types of stations:
Unbalanced Configuration
Balanced Configuration
Symmetrical Configuration
UNBALANCED CONFIGURATION
The unbalanced configuration in an HDLC link consists of a
primary station and one or more secondary stations. The
configuration is unbalanced because one station controls the other stations. In an unbalanced configuration, any of the following can
be used:
SYMMETRICAL CONFIGURATION
This third type of configuration is not widely in use today. It
consists of two independent point to point, unbalanced station
configurations. In this configuration, each station has a primary
and secondary status. Each station is logically considered as two
stations.
9.2.3. HDLC Operational Modes
HDLC offers three different modes of operation. These three
modes of operations are:
Normal Response Mode(NRM)
Asynchronous Response Mode(ARM)
Asynchronous Balanced Mode(ABM)
Normal Response Mode
This is the mode in which the primary station initiates
transfers to the secondary station. The secondary station can only
transmit a response when, and only when, it is instructed to do so
by the primary station. In other words, the secondary station must
receive explicit permission from the primary station to transfer a
response. After receiving permission from the primary station, the
secondary station initiates its transmission. This transmission from the secondary station to the primary station may be much more than just an acknowledgment of a frame. It may in fact be more than one information frame. Once the last frame is transmitted by the secondary station, it must once again wait for explicit permission from the primary station before transferring anything. Normal
Response Mode is only used within an unbalanced configuration.
Asynchronous Response Mode
In this mode, the primary station doesn't initiate transfers to
the secondary station. In fact, the secondary station does not have
to wait to receive explicit permission from the primary station to
transfer any frames. The frames may be more than just
acknowledgment frames. They may contain data, or control
information regarding the status of the secondary station. This
mode can reduce overhead on the link, as no frames need to be
Field Name                          Size (in bits)
Flag Field (F)                      8 bits
Address Field (A)                   8 bits
Control Field (C)                   8 or 16 bits
Information Field (I)               Variable; not used in some frames
Frame Check Sequence (FCS)          16 or 32 bits
Closing Flag Field (F)              8 bits
not concerned with any specific bit code inside the data stream. It is
only concerned with keeping flags unique.
THE ADDRESS FIELD
The address field (A) identifies the primary or secondary
station's involvement in the frame transmission or reception. Each
station on the link has a unique address. In an unbalanced
configuration, the A field in both commands and responses refers to
the secondary station. In a balanced configuration, the command
frame contains the destination station address and the response
frame has the sending station's address.
THE CONTROL FIELD
HDLC uses the control field(C) to determine how to control
the communications process. This field contains the commands,
responses and sequence numbers used to maintain the data flow accountability of the link, defines the functions of the frame and initiates the logic to control the movement of traffic between sending and receiving stations. There are three control field formats:
Information Transfer Format
The frame is used to transmit end-user data between two devices.
Supervisory Format
The control field performs control functions such as
acknowledgment of frames, requests for re-transmission, and
requests for temporary suspension of frames being transmitted. Its
use depends on the operational mode being used.
Unnumbered Format
This control field format is also used for control purposes. It
is used to perform link initialization, link disconnection and other link
control functions.
THE INFORMATION FIELD
The information field contains the actual data the sender is transmitting to the receiver.
THE FRAME CHECK SEQUENCE FIELD
This field contains a 16 bit, or 32 bit cyclic redundancy
check. It is used for error detection.
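To make the error-detection idea concrete, here is a minimal Python sketch of a bit-by-bit CRC-16 computation of the kind used for an FCS. The polynomial, initial value and the sample frame bytes are illustrative; real HDLC implementations also reflect the bits and complement the result, which is omitted here for brevity.

def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bit-by-bit cyclic redundancy check over the frame contents."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                      # top bit set: shift and divide by the polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Illustrative frame contents: address field, control field, information field.
frame = bytes([0x03, 0x00]) + b"hello"
print(hex(crc16(frame)))                          # the sender would append this value as the FCS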
9.2.6. HDLC COMMANDS AND RESPONSES
The set of commands and responses in HDLC is
summarized in table 1.
INFORMATION TRANSFER FORMAT COMMAND AND
RESPONSE
The function of the information command and response is to transfer sequentially numbered frames, each containing an information field, across the data link.
SUPERVISORY FORMAT COMMANDS AND RESPONSES
Supervisory(S) commands and responses are used to perform
numbered supervisory functions such as acknowledgment, polling,
temporary suspension of information transfer, or error recovery.
Frames with the S format control field cannot contain an information
field. A primary station may use the S format command frame with
the P bit set to 1 to request a response from a secondary station
regarding its status. Supervisory Format commands and responses
are as follows:
Table 1: HDLC commands and responses

Information Transfer Format Commands:   I - Information
Information Transfer Format Responses:  I - Information

Supervisory Format Commands:    RR - Receive ready; REJ - Reject
Supervisory Format Responses:   RR - Receive ready; REJ - Reject

Unnumbered Format Commands:     SABME - Set Asynchronous Balanced Mode Extended; SIM - Set Initialization Mode; UP - Unnumbered Poll; UI - Unnumbered Information; XID - Exchange Identification; RSET - Reset; TEST - Test
Unnumbered Format Responses:    UA - Unnumbered Acknowledgment; RD - Request Disconnect; FRMR - Frame Reject; UI - Unnumbered Information; XID - Exchange Identification; TEST - Test
HDLC SUBSETS
Many other data link protocols have been derived from HDLC. However, some of them reach beyond the scope of HDLC. Two popular subsets of HDLC are Synchronous Data Link Control (SDLC) and Link Access Protocol, Balanced (LAP-B). SDLC was developed by IBM and is used in a variety of terminal-to-computer applications. It is also part of IBM's SNA communication architecture. LAP-B was developed by the ITU-T. It is derived mainly from the asynchronous balanced mode (ABM) of HDLC. It is commonly used for attaching devices to packet-switched networks.
10
THE MEDIUM ACCESS SUB LAYER
Under this unit, we are going to discuss the importance of the medium access sub layer and the protocols that support its functionalities. The following protocols are going to be discussed:
Unit Structure
10.1 Multiple Access Protocols
10.1.1 ALOHA (Pure, slotted, reservation)
10.2 Carrier Sense Multiple Access Protocols (CSMA)
Collision free Protocols
IEEE Standard 802.3, 802.4, 802.5, 802.6
10.3 High speed LANs FDDI
10.4 Satellite Networks Polling, ALOHA, FDMA, TDMA, CDMA
10.5 Categories of satellites GEO, MEO, LEO
10.1.1 ALOHA:
Aloha, also called the Aloha method, refers to a simple
communications scheme in which each source (transmitter) in a
network sends data whenever there is a frame to send. If the frame
successfully reaches the destination (receiver), the next frame is
sent. If the frame fails to be received at the destination, it is sent
again. This protocol was originally developed at the University of
Hawaii for use with satellite communication systems in the Pacific.
In a wireless broadcast system or a half-duplex two-way link,
Aloha works perfectly. But as networks become more complex, for
example in an Ethernet system involving multiple sources and
destinations in which data travels many paths at once, trouble
occurs because data frames collide (conflict). The heavier the
communications volume, the worse the collision problems
become. The result is degradation of system efficiency, because
when two frames collide, the data contained in both frames is lost.
To minimize the number of collisions, thereby optimizing
network efficiency and increasing the number of subscribers that
can use a given network, a scheme called slotted Aloha was
developed. This system employs signals called beacons that are
sent at precise intervals and tell each source when the channel is
clear to send a frame. Further improvement can be realized by a more sophisticated protocol called Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
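For reference, the classical throughput results for ALOHA (not derived in this text) say that at an offered load of G frames per frame time, pure ALOHA carries S = G*e^(-2G) and slotted ALOHA carries S = G*e^(-G). A small Python sketch to evaluate them:

import math

def throughput_pure_aloha(G):
    # Pure ALOHA: a frame survives only if no other frame starts within two frame times.
    return G * math.exp(-2 * G)

def throughput_slotted_aloha(G):
    # Slotted ALOHA: contention is limited to the frame's own slot.
    return G * math.exp(-G)

# Peak utilisation: about 18.4% for pure ALOHA (G = 0.5) and 36.8% for slotted ALOHA (G = 1).
print(round(throughput_pure_aloha(0.5), 3), round(throughput_slotted_aloha(1.0), 3))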
History about ALOHA:
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem. Many researchers have extended their work since then. Although Abramson's work, called the Aloha System, used ground-based radio broadcasting, the basic idea is applicable to any system in which uncoordinated users are competing for the use of a single shared channel.
The two versions of ALOHA are:
PURE ALOHA
SLOTTED ALOHA
Pure Aloha:
and sends a control packet over the control channel (c1, ..., cm) at the first time unit of the cycle. The control packet consists of the transmitter address, the receiver address and the wavelength, dk, of the data channel. Immediately after transmission of the control packet, the data packet is transmitted over
Types of CSMA
1-persistent CSMA
When the sender (station) is ready to transmit data, it checks
if the physical medium is busy. If so, it senses the medium
continually until it becomes idle, and then it transmits a piece of
data (a frame). In case of a collision, the sender waits for
a random period of time and attempts to transmit again.
p-persistent CSMA
When the sender is ready to send data, it checks continually
if the medium is busy. If the medium becomes idle, the sender
transmits a frame with a probability p. If the station chooses not to
transmit (the probability of this event is 1-p), the sender waits until
the next available time slot and transmits again with the same
probability p. This process repeats until the frame is sent or some other sender starts transmitting. In the latter case the sender monitors the channel and, when it becomes idle again, transmits with probability p, and so on.
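The decision loop just described can be sketched as follows; channel_idle, transmit and wait_slot are hypothetical hooks standing in for real medium-access primitives, and the value of p is illustrative.

import random

def p_persistent_send(channel_idle, transmit, wait_slot, p=0.1):
    # Keep sensing until the medium is idle, then transmit with probability p,
    # otherwise defer to the next slot and repeat.
    while True:
        while not channel_idle():
            wait_slot()
        if random.random() < p:
            transmit()
            return
        wait_slot()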
o-persistent CSMA
Each station is assigned a transmission order by a
supervisor station. When medium goes idle, stations wait for their
time slot in accordance with their assigned transmission order. The
station assigned to transmit first transmits immediately. The station
assigned to transmit second waits one time slot (but by that time
the first station has already started transmitting). Stations monitor
the medium for transmissions from other stations and update their
assigned order with each detected transmission (i.e. they move one
position closer to the front of the queue).
Protocol modifications/variations:
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a modification of CSMA. CSMA/CD is used to improve CSMA performance by terminating transmission as soon as a collision is detected, and reducing the probability of a second collision on retry.
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a modification of CSMA. Collision avoidance is used to improve the performance of CSMA by attempting to be less "greedy" on the channel. If the channel is sensed busy before transmission then the transmission is deferred for a "random" interval. This reduces the probability of collisions on the channel.
FDDI Specifications
FDDI specifies the physical and media-access portions of the OSI
reference model. FDDI is not actually a single specification, but it is
a collection of four separate specifications each with a specific
function. Combined, these specifications have the capability to
provide high-speed connectivity between upper-layer protocols
such as TCP/IP and IPX, and media such as fiber-optic cabling.
FDDI's four specifications are the Media Access Control (MAC),
Physical Layer Protocol (PHY), Physical-Medium Dependent
(PMD), and Station Management (SMT). The MAC specification
defines how the medium is accessed, including frame format, token
handling, addressing, algorithms for calculating cyclic redundancy
check (CRC) value, and error-recovery mechanisms. The PHY
specification defines data encoding/decoding procedures, clocking
requirements, and framing, among other functions. The PMD
specification defines the characteristics of the transmission
medium, including fiber-optic links, power levels, bit-error rates,
optical components, and connectors. The SMT specification defines
FDDI station configuration, ring configuration, and ring control
features, including station insertion and removal, initialization, fault
isolation and recovery, scheduling, and statistics collection.
FDDI is similar to IEEE 802.3 Ethernet and IEEE 802.5 Token Ring
in its relationship with the OSI model. Its primary purpose is to
Dual Homing
Critical devices, such as routers or mainframe hosts, can
use a fault-tolerant technique called dual homing to provide
additional redundancy and to help guarantee operation. In dual-
40: Void Frame.
41, 4F: Station Management (SMT) Frame.
C2, C3: MAC Frame.
50, 51: LLC Frame.
60: Implementor Frame.
70: Reserved Frame.
Please note that the list here contains only the most common values that can be formed for 48-bit addressed synchronous data frames.
Destination Address
The Destination Address field contains 12 symbols that identify the station that is receiving this particular frame. When FDDI is first set up, each station is given a unique address that distinguishes it from the others. When a frame passes by a station, the station compares its own address against the DA field of the frame. If it is a match, the station copies the frame into its buffer area, where it waits to be processed. There is no restriction on the number of stations that a frame can reach at a time.
If the first bit of the DA field is set to '1', then the address is called a group or global address. If the first bit is '0', then the address is called an individual address. As the name suggests, a frame with a global address setting can be sent to multiple stations on the network. If the frame is intended for everyone on the network, the address bits are set to all 1's; therefore, a global address contains all 'F' symbols.
There are also two different ways of administering these addresses: one is called local and the other universal. The second bit of the address field determines whether the address is locally or universally administered. If the second bit is '1', it is a locally administered address; if the second bit is '0', it is a universally administered address. Locally administered addresses are assigned by the network administrator, while universally administered addresses are pre-assigned using the manufacturer's OUI.
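A small sketch of the two address bits just described, assuming (as is conventional for FDDI/Token Ring bit ordering) that the group/individual bit and the local/universal bit occupy the two most significant bit positions of the first address octet; the sample addresses are illustrative.

def classify_fddi_address(da: bytes):
    group_bit = (da[0] >> 7) & 1            # 1 = group/global address, 0 = individual address
    local_bit = (da[0] >> 6) & 1            # 1 = locally administered, 0 = universally administered
    scope = "group/global" if group_bit else "individual"
    admin = "locally administered" if local_bit else "universally administered"
    return scope, admin

print(classify_fddi_address(bytes.fromhex("FFFFFFFFFFFF")))   # broadcast: all 'F' symbols
print(classify_fddi_address(bytes.fromhex("001B63000001")))   # an individual, universally administered address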
Source Address
A Source Address identifies the station that created the frame. This field is used to remove frames from the ring. Each time a frame is sent, it travels around the ring, visiting each station, and eventually (hopefully) comes back to the station that originally sent it. If the address of a station matches the SA field in the frame, the station will strip the frame off the ring. Each station is responsible for removing its own frames from the ring.
Information Field
The INFO field is the heart and soul of the frame. Every component of the frame is designed around this field: who it is sent to, where it is coming from, how it is received, and so on. The type of information in the INFO field can be found by looking at the FC field of the frame. For example, '50' (hex) denotes an LLC frame, so the INFO field will have an LLC header followed by other upper layer headers, for example SNAP, ARP, IP, TCP, SNMP, etc. '41' (hex) or '4F' (hex) denotes an SMT (Station Management) frame; therefore, an SMT header will appear in the INFO field.
Frame Check Sequence
The Frame Check Sequence field is used to check or verify the traversing frame for any bit errors. FCS information is generated by the station that sends the frame, using the bits in the FC, DA, SA, INFO, and FCS fields. To verify whether there are any bit errors in the frame, FDDI uses an 8-symbol (32-bit) CRC (Cyclic Redundancy Check) to protect the transmission of a frame on the ring.
End Delimiter
As the name suggests, the end delimiter denotes the end of the frame. The ending delimiter consists of a 'T' symbol. This 'T' symbol indicates that the frame is complete or ended. Any data sequence that does not end with this 'T' symbol is not considered to be a frame.
Frame Status
Frame Status (FS) contains 3 indicators that describe the condition of the frame. Each indicator can have two values: Set ('S') or Reset ('R'). An indicator could possibly be corrupted, in which case it is neither 'S' nor 'R'. All indicators are initially set to 'R'. The three types of indicators are as follows:
Error (E): This indicator is set if a station determines an error for that frame; it might be a CRC failure or some other cause. If a frame has its E indicator set, the frame is discarded by the first station that encounters it.
Acknowledge (A): Sometimes this indicator is called 'address recognized'. It is set whenever a frame is properly received, meaning the frame has reached its destination address.
Copy (C): This indicator is set whenever a station is able to copy the received frame into its buffer section. Thus, the Copy and Acknowledge indicators are usually set at the same time. Sometimes, however, a station receives too many frames and cannot copy all of them; in that case it repeats the frame with indicator 'A' set and indicator 'C' left at reset.
Broadcast a packet only at the beginning of this discrete time interval, for example every 20 milliseconds.
multiplexing divides the capacity of the low-level communication
channel into several higher-level logical channels, one for each
message signal or data stream to be transferred. A reverse
process, known as demultiplexing, can extract the original
channels on the receiver side.
A device that performs the multiplexing is called
a multiplexer (MUX), and a device that performs the reverse
process is called a demultiplexer (DEMUX).
Inverse multiplexing (IMUX) has the opposite aim to multiplexing, namely to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream at the receiving end.
customer would have to pay for the satellite bandwidth even when
they were not using it.
Where multiple-access is concerned, SCPC is essentially FDMA. Some applications use SCPC instead of TDMA, because they require guaranteed, unrestricted bandwidth. As satellite TDMA technology improves, however, the applications for SCPC are becoming more limited.
Advantages:
simple and reliable technology
low-cost equipment
any bandwidth (up to a full transponder)
usually 64 kbit/s to 50 Mbit/s
easy to add additional receive sites (earth stations)
Disadvantages
Satellite Networks-TDMA
When you require efficient satellite bandwidth utilization for
multiple sites with infrequent or low usage requirements, TDMA
technologies provide low cost, highly effective solutions perfect
for Satellite Internet applications.
OVERVIEW :
No other technology is quite as suited to the bursty nature of a
good deal of IP traffic as TDMA. Many sites can share bandwidth
and timeslots allocated dynamically to enhance bandwidth
utilization. TDMA networks are highly flexible, extendable and
bandwidth efficient. Ideal for applications that are not constantly
resource intensive, then we have to create TDMA VSAT satellite
networks that share network capacity on-demand
KEY FEATURES
This network is available in point-to-multipoint and fully meshed
configurations.
TDMA networks offer the following benefits:
Low cost Internet access
Dynamic, demand-assigned bandwidth allocation
Cost-effective, flexible and expandable network architecture
Reliability and quality of service guaranteed by SLA
One stop solution for network design, implementation and
support
Centralized management.
KEY APPLICATIONS
You should consider a TDMA VSAT satellite network for the following:
Low cost, multi-site connectivity
Satellite Internet
VoIP
Content distribution
Satellite Networks-CDMA
The CDMA signal provides excellent data and voice
capacity through the satellite phone network of 48 satellites. The
CDMA signal is the foundation for 3G communication services
worldwide. CDMA converts the speech signal into digital format and then transmits it from the satellite phone up to the satellite systems and down to the ground station. Every call over the satellite network has its own unique code which distinguishes it from the other calls sharing the airwaves at the same time. The CDMA signal is free of interference, cross talk and static. CDMA was introduced in 1995 and soon became the fastest growing wireless technology.
Key features of satellite phone CDMA:
Unique forward and reverse links
Direct sequence spread spectrum
Seamless soft handoff
Universal frequency reuse
Multi-path propagation for diversity
Variable rate transmission
Introduction to Global Satellite Systems
Several different types of global satellite communications
systems are in various stages of development. Each system either
planned or existing, has a unique configuration optimized to
support a unique business plan based on the services offered and
the markets targeted.
In the last few years more than 60 global systems have been
proposed to meet the growing demand for international
communications services. More are being planned and these are in
addition to a large number of new regional systems.
Some of the global systems intend to provide global phone
service, filling in where ground-based wireless systems leave off or
providing seamless connectivity between different systems.
Others intend to provide global data connectivity, either for low-cost short message applications such as equipment monitoring, or for high-speed Internet access anywhere in the world.
The global phone systems will target two very different markets.
The first is the international business user, who wants the ability to
use a single mobile wireless phone anywhere in the world. This is
impossible today on terrestrial systems because mobile phone
standards are different from region to region. The second market is
unserved and underserved communities where mobile and even
basic telecommunications services are unavailable.
are. This orbit causes the satellite to move around the Earth faster when it is traveling close to the Earth and slower the farther away it gets. In addition, the satellite's beam covers more of the Earth from farther away, as shown in the illustration.
The orbits are designed to maximize the amount of time each satellite spends in view of populated areas. Therefore, unlike most LEOs, HEO systems do not offer continuous coverage over outlying geographic regions, especially near the south pole.
Several of the proposed global communications satellite systems actually are hybrids of the four varieties reviewed above. For example, all of the proposed HEO communications systems are hybrids, most often including a GEO or MEO satellite orbital plane around the equator to ensure maximum coverage in the densely populated zone between 40 degrees North latitude and 40 degrees South latitude. Examples of HEO systems include Ellipso and the proposed Pentriad.
Summary of HEO Pros and Cons
PRO: The HEO orbital design maximizes the satellites' time
spent over populated areas, thus requiring fewer satellites than
LEOs and providing superior line-of-sight in comparison to most
LEOs or GEOs.
11
THE NETWORK LAYER
Unit Structure
11.1. Introduction
11.2. Network Layer Design issues
11.3. Routing Algorithms
11.3.1. Adaptive & Non Adaptive
11.3.2. The Optimality Principle & Sink Tree
11.4. Shortest Path routing
11.5. Flooding
11.6. Distance vector routing
11.7. Link state routing
11.8. Broadcast routing
11.9. Multicast routing
11.10. Internetworking
Both approaches have their own advantages and disadvantages.
Sink tree
From the optimality principle, we can see that the set of
optimal routes from all sources to a given destination form a
tree rooted at the destination.
Such a tree is called a sink tree and is shown in the diagram
below, where the distance metric is the number of hops.
Since a sink tree does not contain any loops, each packet will be delivered within a finite and bounded number of hops.
Figure : A Subnet
11.5. FLOODING
3. Selective Flooding:
Applications of Flooding
It is also known as :
1. Distributed Bellman-Ford routing algorithm
2. Ford-Fulkerson algorithm
The router creates new copies of the packet one for each
output line and each packet contains only those destinations
which could be reached on those output lines. (i.e. the
number of destination addresses is filtered out)
After a few hops each packet will contain only one address
like a normal packet.
3. Spanning Tree
This approach uses the Sink tree for the router that initiates
the broadcast.
Spanning tree is a subset of the subnet that contains all the
routers but contains no loops.
Using this approach the router can broadcast the incoming
packet on all the spanning tree lines except the one it
arrived on.
This approach makes excellent use of bandwidth and
generates the minimum number of packets necessary to get
the job done.
It is mandatory for the routers to know the spanning tree.
4. Reverse Path Forwarding
This algorithm tries to approximate the behavior of spanning
tree approach when the routers have no information about
the spanning tree.
Mechanism :
When a broadcast packet is received by a router, the router checks whether the packet arrived on the line that it normally uses to send packets towards the source. If so, the packet has most probably followed the best available path and is forwarded on all other lines.
If the broadcast packet arrives on a line other than the one used to send data to the source, the packet is discarded as a probable duplicate.
Advantages of this approach
1. It is easy to implement and is efficient.
2. The routers do not need to know the spanning tree.
3. There is no overhead involved in maintaining the
destination list.
4. No mechanism is required to stop the process (damping
like hop counter) as is required in flooding.
11.9. MULTICAST ROUTING
11.10. INTERNETWORKING
Criteria on which networks may differ: Service Offered, Protocol, Addressing, Multicasting, Packet Size, Quality of Service, Flow Control, Congestion Control, Security.
A packet sent from one network to another may have to face the
issues listed above before it reaches the destination.
Connectionless Internetworking
Tunneling
Fragmentation
12
IP PROTOCOL
Unit Structure
12.1 Introduction Network layer in the internet
12.2. IP Protocol IPv4
12.2.1. IP addresses
12.2.2. Address Space
12.2.3. Notation
12.2.4. Classful Addressing
12.2.5. Netid & Hostid
12.2.6. Subnetting
12.2.7. CIDR
12.2.8. NAT
12.3. Protocols
12.3.1. ICMP
12.3.2. IGMP
12.3.3. ARP
12.3.4. RARP, BOOTP, DHCP
12.4. IPv6
Field                   Description
Version
IHL
Type of service         Used to distinguish between different classes of service
Total length
Identification
DF
MF
Fragment offset
Time to live
Header checksum
Source address
Destination address
Options
12.2.1. IP addresses
12.2.3. Notations
12.2.6. Subnetting
If 6 bits from the host ID are taken for the subnet, then the available bits are:
14 bits for network + 6 bits for subnet + 10 bits for host
With 10 bits for the host, the number of possible host addresses is 2^10 = 1024, of which 1022 are usable (the all-0s and all-1s host addresses are not available).
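A quick way to check the arithmetic above (a class B address has 16 host bits before subnetting); the function below is a simple sketch.

def subnet_capacity(subnet_bits, original_host_bits=16):
    host_bits = original_host_bits - subnet_bits
    subnets = 2 ** subnet_bits
    hosts_per_subnet = 2 ** host_bits - 2     # all-0s and all-1s host addresses are reserved
    return subnets, hosts_per_subnet

print(subnet_capacity(6))                     # (64, 1022): 6 subnet bits, 10 host bits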
12.2.7. CIDR
A class B address is far too large for most organizations, and a class C network, with 256 addresses, is too small. This leads to granting class B addresses to organizations that do not require all the addresses in the address space, wasting most of it, and results in depletion of the address space.
A solution is CIDR (Classless InterDomain Routing). The basic idea behind CIDR is to allocate the remaining IP addresses in variable-sized blocks, without regard to the classes.
Ex.: If a site needs, say, 2000 addresses, it is given a block of 2048 addresses on a 2048-address boundary.
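The same example can be checked with Python's standard ipaddress module; 194.24.0.0 is only an illustrative base address for the /21 block.

import ipaddress

block = ipaddress.ip_network("194.24.0.0/21")        # a 2048-address CIDR block
print(block.num_addresses)                           # 2048
print(block.network_address, block.broadcast_address)
print(ipaddress.ip_address("194.24.5.1") in block)   # True: routers match by prefix, not by class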
Operation:
Within the Organization, every computer has a unique
address of the form 10.x.y.z. However, when a packet leaves the organization, it passes through a NAT box that converts the internal IP source address, 10.x.y.z, to the organization's true IP address, 198.60.42.12 for example.
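A minimal sketch of that translation step; the table-driven port mapping shown here goes slightly beyond the paragraph above but is how NAT boxes typically keep multiple internal hosts apart. All addresses and ports are illustrative.

PUBLIC_ADDR = "198.60.42.12"
nat_table = {}                 # public port -> (private address, private port)
next_port = 5000

def translate_outgoing(src_addr, src_port):
    # Rewrite the private source address to the organization's true address.
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (src_addr, src_port)
    return PUBLIC_ADDR, public_port

def translate_incoming(dst_port):
    # Replies arriving at the public address are mapped back to the internal host.
    return nat_table[dst_port]

print(translate_outgoing("10.0.0.7", 43210))   # ('198.60.42.12', 5000)
print(translate_incoming(5000))                # ('10.0.0.7', 43210)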
12.3. PROTOCOLS: ICMP, IGMP, ARP, RARP, BOOTP, DHCP
Apart from IP, the network layer in the Internet has the following protocols: ICMP, IGMP, ARP, RARP, BOOTP, and DHCP.
12.3.1. The Internet Control Message Protocol (ICMP)
Message type              Description
DESTINATION UNREACHABLE   Used when a router cannot locate the destination or when a packet with the DF bit set cannot be delivered to a network that only allows a smaller packet size.
TIME EXCEEDED
PARAMETER PROBLEM
SOURCE QUENCH
REDIRECT
ECHO
ECHO REPLY
TIMESTAMP REQUEST         Same as ECHO, but with a timestamp
TIMESTAMP REPLY
IGMP helps the multicast router create and update this list of
groups in the network.
IGMP Messages
It has three types of messages:
1. The query
There are two types of query messages: general and special
a. General Query: used to learn which groups have members
on an attached network.
b. Special Query: It is a group specific query used to learn if a
particular group has any members on an attached network.
2.
12.4. IPV6
Field                   Description
Version
Traffic class
Flow label
Payload length
Next header
Hop limit
Source address and Destination address
13
TRANSPORT LAYER
Unit Structure
13.1. Introduction
13.2. Elements of Transport protocols
13.2.1. Addressing
13.2.2. Establishing a connection
13.2.3. Releasing a connection
13.2.4. Timer-based connection management
13.2.5. Flow control and buffering
13.2.6. Multiplexing
13.2.7. Crash recovery
13.3. The TCP service model
13.3.1. TCP segment header
13.3.1.1. TCP Header Format
13.4. UDP
13.4.1. Format
13.4.2. Fields
13.5. Congestion control
13.5.1. Congestion prevention
13.1. INTRODUCTION:
The first three layers of the OSI Reference Model (the physical layer, data link layer and network layer) are very important layers for understanding how networks function. The physical layer moves bits over wires; the data link layer moves frames on a network; the network layer moves datagrams on an internetwork. Taken as a whole, they are the parts of a protocol stack that are responsible for the actual nuts and bolts of getting data from one place to another.
Immediately above these we have the fourth layer of the OSI
Reference Model: the transport layer, called the host-to-host
transport layer in the TCP/IP model. This layer is interesting in that
Buffering and flow control are needed in both layers, but the
presence of a large and dynamically varying number of
connections in the transport layer may require a different
approach than the data link layer approach (e.g., sliding
window buffer management).
13.2.1. Addressing
How to specify the transport users?
The method normally used is to define transport addresses to
which processes can listen for connection requests:
(a). The new server then does the requested work, while the process server goes back to listening for new requests, as shown in Fig. (b).
Then the user releases the connection with the name server
and establishes a new one with the desired service.
TSAP = <NSAP><PORT>
Flat address space: using a name server to map TSAP into
NSAP.
13.2.2. Establishing a connection
The problem with establishing a connection occurs when the
subnet can lose, store, and duplicate packets.
Consider the following scenario:
The clock based method only solves the delayed duplicate problem
for data TPDUs (after the connection has been established).
The white army is larger than either of the blue armies alone,
but together they are larger than the white army.
The question is, does a protocol exist that allows the blue armies to win?
The answer is that NO such protocol exists.
Just substitute ``disconnect'' for ``attack''. If neither side is
prepared to disconnect until it is convinced that the other side is
prepared to disconnect too, the disconnection will never happen.
In practice, one is usually prepared to take more risks when
releasing connections than attacking white armies, so the situation
is not entirely hopeless.
Three-way handshake combined with timers
In theory, this protocol can fail if the initial DR and
retransmissions are all lost. The sender will give up and delete the
connection, while the other side knows nothing at all about the
attempts to disconnect and is still fully active. This situation results
in a half-open connection.
The first TPDU contains a 1-bit flag DRF (Data Run Flag).
When a TPDU with DRF flag set arrives, the receiver creates
a connection record and starts a timer for it.
222
223
For transmitting a minimum of two TPDUs (for query-response applications), it is as efficient as a connectionless protocol.
The receiver then separately piggybacks both acknowledgements and buffer allocations onto reverse traffic.
+--------+--------+--------+--------+
|           Source Address          |
+--------+--------+--------+--------+
|         Destination Address       |
+--------+--------+--------+--------+
|  zero  |  PTCL  |    TCP Length   |
+--------+--------+--------+--------+
The TCP Length is the TCP header length plus the data length
in octets (this is not an explicitly transmitted quantity, but is
computed), and it does not count the 12 octets of the pseudo
header.
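A sketch of how the pseudo header can be prepended and a one's-complement checksum computed over it (RFC 1071 style); the addresses and the all-zero segment are illustrative.

import socket
import struct

def internet_checksum(data: bytes) -> int:
    # One's-complement sum of 16-bit words, folded and inverted.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def pseudo_header(src: str, dst: str, tcp_length: int, protocol: int = 6) -> bytes:
    # Source address, destination address, zero, PTCL, TCP length (not transmitted on the wire).
    return socket.inet_aton(src) + socket.inet_aton(dst) + struct.pack("!BBH", 0, protocol, tcp_length)

segment = b"\x00" * 20                                # an illustrative, all-zero TCP header
print(hex(internet_checksum(pseudo_header("192.0.2.1", "192.0.2.2", len(segment)) + segment)))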
Urgent Pointer: 16 bits
This field communicates the current value of the urgent pointer
as a positive offset from the sequence number in this segment.
The urgent pointer points to the sequence number of the octet
following the urgent data. This field is only interpreted in segments with the URG control bit set.
Options: variable
Options may occupy space at the end of the TCP header and are
a multiple of 8 bits in length. All options are included in the
checksum. An option may begin on any octet boundary. There are
two cases for the format of an option:
Case 1: A single octet of option-kind.
Case 2: An octet of option-kind, an octet of option-length, and the
actual option-data octets.
The option-length counts the two octets of option-kind and
option-length as well as the option-data octets. Note that the list of
options may be shorter than the data offset field might imply. The
content of the header beyond the End-of-Option option must be
header padding (i.e., zero).
A TCP must implement all options. Currently defined options
include (kind indicated in octal):
Kind  Length  Meaning
----  ------  -------
0     -       End of option list.
1     -       No-Operation.
2     4       Maximum Segment Size.
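A short sketch of walking the options area using the two cases above (single-octet options versus kind/length/data options); the sample byte string carries a Maximum Segment Size option followed by No-Operation and End-of-option-list.

def parse_tcp_options(options: bytes):
    parsed, i = [], 0
    while i < len(options):
        kind = options[i]
        if kind == 0:                      # End of option list: a single octet, then padding only
            parsed.append(("EOL", b""))
            break
        if kind == 1:                      # No-Operation: a single octet of option-kind
            parsed.append(("NOP", b""))
            i += 1
            continue
        length = options[i + 1]            # Case 2: option-kind, option-length, option-data
        parsed.append((kind, options[i + 2:i + length]))
        i += length
    return parsed

print(parse_tcp_options(bytes([2, 4, 0x05, 0xB4, 1, 0])))   # MSS = 1460, then NOP, then EOL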
13.4. UDP:
The User Datagram Protocol (UDP) is defined to make available a datagram mode of packet-switched computer communication.
 0      7 8     15 16    23 24    31
+--------+--------+--------+--------+
|          Source address           |
+--------+--------+--------+--------+
|        Destination address        |
+--------+--------+--------+--------+
|  zero  |protocol|    UDP length   |
+--------+--------+--------+--------+
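UDP's datagram mode is easy to see with the standard socket interface: no connection is set up, and each sendto() stands on its own. The loopback address and port below are illustrative.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))                       # wait for datagrams on an illustrative port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))    # one self-contained datagram, no handshake

data, addr = receiver.recvfrom(2048)
print(data, "from", addr)
sender.close()
receiver.close()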
Solutions:
1. The first solution is to realize that two potential problems exist (network capacity and receiver capacity) which need to be dealt with separately. To handle them, each sender maintains two windows:
a) The window the receiver has granted
b) The congestion window.
Each reflects the number of bytes the sender may transmit. The number of bytes that may be sent is the minimum of the two windows. Thus, the effective window is the minimum of what the sender and the receiver think the capacity is.
2) When a connection is established, the sender initializes the congestion window to the size of the maximum segment in use on the connection. It then sends one maximum segment. If this segment is acknowledged before the timer goes off, it adds one segment's worth of bytes to the congestion window to make it two maximum size segments and sends two segments. As each of these segments is acknowledged, the congestion window is increased by one maximum segment size. When the congestion window is n segments, if all n are acknowledged on time, the congestion window is increased by the byte count corresponding to n segments.
3) The congestion window keeps growing exponentially until either a timeout occurs or the receiver's window is reached.
4) The idea behind this is that if bursts of size 1024, 2048 and 4096 bytes work fine but a burst of 8192 bytes gives a timeout, the congestion window should be set to 4096 to avoid congestion. As long as the congestion window remains at 4096, no bursts longer than that will be sent. This algorithm is called slow start.
5) Another way of implementing the congestion control algorithm is by using a third parameter, the threshold, initially 64 KB, in addition to the receiver and congestion windows. When a timeout occurs, the threshold is set to half of the current congestion window, and the congestion window is reset to one maximum segment. Slow start is then used to determine what the network can handle, except that exponential growth stops when the threshold is hit.
6) From that point on, successful transmissions grow the congestion window linearly (by about one maximum segment per round trip) instead of doubling it, as illustrated in the sketch below.
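A toy simulation of points 2-6, tracking the congestion window round by round; the segment size, window limits and the round in which the timeout occurs are all illustrative.

def simulate_slow_start(mss=1024, receiver_window=65535, rounds=10, loss_round=5):
    cwnd, threshold = mss, 65536
    for r in range(1, rounds + 1):
        effective = min(cwnd, receiver_window)            # sender uses the smaller of the two windows
        print(f"round {r}: cwnd={cwnd} effective={effective} threshold={threshold}")
        if r == loss_round:                               # timeout: halve the threshold, restart slow start
            threshold = max(cwnd // 2, mss)
            cwnd = mss
            continue
        if cwnd < threshold:
            cwnd *= 2                                     # exponential growth below the threshold
        else:
            cwnd += mss                                   # linear growth once the threshold is reached
        cwnd = min(cwnd, receiver_window)

simulate_slow_start()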
14
THE APPLICATION LAYER
In this Unit, we are going to discuss in detail about the following
services.
Unit Structure
14.1 WWW,
14.2 HTTP,
14.3 DNS,
14.4 SNMP,
14.5 FTP,
14.6 Remote logging/Telnet and
14.7 E-mail.
14.2 HTTP
Introduction:
HTTP defines how messages are formatted and transmitted,
and what actions Web servers and browsers should take in
response to various commands. For example, when you enter
a URL in your browser, this actually sends an HTTP command to
the Web server directing it to fetch and transmit the requested Web
page. The other main standard that controls how the World Wide
Web works is HTML, which covers how Web pages are formatted
and displayed.
HTTP is called a stateless protocol because each command
is executed independently, without any knowledge of the
commands that came before it. This is the main reason that it is
difficult to implement Web sites that react intelligently to user input.
This shortcoming of HTTP is being addressed in a number of new
technologies, including ActiveX, Java, JavaScript and cookies.
S-HTTP is an extension to the HTTP protocol to support
sending data securely over the World Wide Web. S-HTTP was
developed by Enterprise Integration Technologies (EIT), which was
acquired
by
Verifone,
Inc.
in
1995.
Not
all Web
browsers and servers support S-HTTP.
Another technology for transmitting secure communications
over the World Wide Web -- Secure Sockets Layer (SSL) -- is more
prevalent.
However, SSL and S-HTTP have very different designs and
goals so it is possible to use the two protocols together. Whereas
SSL is designed to establish a secure connection between two
computers, S-HTTP is designed to send individual messages
securely. Both protocols have been submitted to the Internet
Engineering Task Force (IETF) for approval as a standard.
The Hypertext Transfer Protocol (HTTP) is a networking
protocol for distributed, collaborative, hypermedia information
systems. HTTP is the foundation of data communication for
the World Wide Web. The standards development of HTTP has
been coordinated by the Internet Engineering Task Force (IETF)
and the World Wide Web Consortium (W3C), culminating in the
publication of a series of Requests for Comments (RFCs), most
notably RFC 2616 (June 1999), which defines HTTP/1.1, the
version of HTTP in common use.
Technical overview:
HTTP functions as a request-response protocol in the client-server computing model. In HTTP, a web browser, for example,
acts as a client, while an application running on a
computer hosting a web site functions as a server. The client
submits an HTTP request message to the server. The server, which
stores content, or provides resources, such as HTML files, or
performs other functions on behalf of the client, returns a response
message to the client. A response contains completion status
information about the request and may contain any content
requested by the client in its message body.
Request message:
The request message consists of the following:
Request line, such as GET /images/logo.png HTTP/1.1, which requests a resource called /images/logo.png from the server
Headers, such as Accept-Language: en
An empty line
TRACE- Echoes back the received request, so that a client
can see what (if any) changes or additions have been made by
intermediate servers.
Safe methods:
Some methods (for example, HEAD, GET, OPTIONS and
TRACE) are defined as safe, which means they are intended only
for information retrieval and should not change the state of the
server. In other words, they should not have side effects, beyond
relatively harmless effects such as logging, caching and the serving
of banner advertisements or incrementing a web counter. Making
arbitrary GET requests without regard to the context of the
application's state should therefore be considered safe.
Status codes:
In HTTP/1.0 and since, the first line of the HTTP response is
called the status line and includes a numeric status code (such as
"404") and a textual reason phrase (such as "Not Found"). The way
the user agent handles the response primarily depends on the code
and secondarily on the response headers. Custom status codes can be used, since if the user agent encounters a code it does not recognize, it can use the first digit of the code to determine the general class of the response. Also, the standard reason
phrases are only recommendations and can be replaced with "local
equivalents" at the web developer's discretion. If the status code
indicated a problem, the user agent might display the reason
phrase to the user to provide further information about the nature of
the problem. The standard also allows the user agent to attempt to
interpret the reason phrase, though this might be unwise since the
standard explicitly specifies that status codes are machine-readable
and reason phrases are human-readable.
Persistent connections:
In HTTP/0.9 and 1.0, the connection is closed after a single
request/response pair. In HTTP/1.1 a keep-alive-mechanism was
Example session:
Below is a sample conversation between an HTTP client and
an HTTP server running on www.example.com, port 80.
Client request
GET /index.html HTTP/1.1
Host: www.example.com
A client request (consisting in this case of the request line
and only one header) is followed by a blank line, so that the
request ends with a double newline, each in the form of
a carriage return followed by a line feed. The "Host" header
distinguishes between various DNS names sharing a
single IP address, allowing name-based virtual hosting. While
optional in HTTP/1.0, it is mandatory in HTTP/1.1.
Server response:
HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Etag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Content-Length: 438
Connection: close
Content-Type: text/html; charset=UTF-8
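The same kind of exchange can be reproduced with Python's standard http.client module (the response headers will of course differ from the sample above):

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/index.html")                 # the request line and Host header are sent for us
response = conn.getresponse()
print(response.status, response.reason)            # e.g. 200 OK
for name, value in response.getheaders():
    print(f"{name}: {value}")
body = response.read()                             # the requested page
conn.close()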
characters in length, with the total domain name (all the labels) limited to 255 bytes in overall length; assuming a screen line length of 80 characters, this is only about 3 screen lines.
The IP protocol mandates the use of IP addresses. Any user may use such an address to connect to any service on the network; however, it is an impossible task for a user to remember the addresses of all the servers on the network. Users are more likely to remember names than they are to remember numbers. For those familiar with database environments, the domain name server is simply a database (consisting of information such as names and IP addresses, and much more) to which any station on the network can make queries using the domain name resolver.
The domain name system is not necessarily complex, but it
is involved. It is based on a hierarchical structure. The domain
name is simply that: a name assigned to a domain. For example,
isi.edu, cisco.com, and 3Com.com represent the domain name at
those companies or educational institutions.
DNS Components:
DNS does much more than the name-to-address translation.
It also allows for the following components.
Domain Name Space and resource records
Name servers
Resolvers
The Domain Name Space and resource records:
This is the database of grouped names and addresses that
are strictly formatted using a tree-structured name space and data
associated with the names. The domain system consists of
separate sets of local information called zones. The database is
divided up into sections called zones, which are distributed among
the name servers.
While name servers can have several optional functions and
sources of data, the essential task of a name server is to answer
queries using data in its zones. Conceptually, each node and leaf of
the domain name space tree names a set of information, and query
operations are attempts to extract specific types of information from
a particular set. A query names the domain name of interest and
describes the type of resource information that is desired.
Name servers:
These are workstations that contain a database of
information about hosts in zones. This information can be about
well-known services, mail exchanger, or host information. A name
server may cache structure or set information about any part of the
Going down the tree, we can pick out a domain name, such
as research.Naugle.com. This would signify the Research
department (which is a sub-domain of domain Naugle.com) at
Naugle Enterprises, which is defined as a commercial entity of the
Internet. Naugle.com can be a node in the domain acting as a
name server, or there may be different name servers for
Naugle.com.
A user at workstation 148.1.1.2 types in the TELNET
command and the domain name of host1.research.Naugle.com.
This workstation must have the domain name resolver installed on
it. This program would send out the translation request to a domain
name server to resolve the hostname-to-IP address. If the
hostname is found, the domain name server would return the IP
address to the workstation. If the name is not found, the server may
search for the name elsewhere and return the information to the
requesting workstation, or return the address of a name server that
the workstation (if able) can query to get more information.
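From a program's point of view the resolver is just a library call: the standard library asks the configured name server and returns the IP address. host1.research.Naugle.com is the illustrative name used in the text and will not actually resolve.

import socket

try:
    print(socket.gethostbyname("host1.research.Naugle.com"))   # resolver query to the name server
except socket.gaierror as err:
    print("name not found:", err)

print(socket.gethostbyname("www.example.com"))                  # a real, resolvable name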
A domain contains all hosts whose domain names are within a certain domain. A domain name is a sequence of labels separated by dots. A domain is a sub-domain of another domain if it is contained within that domain. This relationship can be tested by seeing if the sub-domain's name ends with the containing domain's name. For example, research.Naugle.com is a sub-domain of Naugle.com, and Naugle.com is a sub-domain of .com, which in turn is a sub-domain of the root.
There are special servers on the Internet that provide
guidance to all name servers.
These are known as root name servers and, as of this
writing, there are nine of them. They do not contain all information
about every host on the Internet, but they do provide direction as to
where domains are located (the IP address of the name server for
the uppermost domain a server is requesting). The root name
server is the starting point to find any domain on the Internet. If
access to the root servers ceased, transmission over the Internet
would eventually come to a halt.
SNMP Manager:
An SNMP manager is a software application that queries the
agents for information, or sets information on the client agent. The
returned information is then stored in a database to be manipulated
by other application software that is not defined by SNMP. The
information gathered can be used to display graphs of how many
bytes of information are transmitted out a port, how many errors
have occurred, and so forth. SNMP simply sets or gathers
information in a node. Therefore, the server will be comprised of
two things:
Management applications:
Applications that can receive and process the information
gathered by the SNMP manager. These applications are the ones
that have some type of user interface to allow the network manager
to manipulate the SNMP protocol; for example, set the SNMP node
that it would like to talk to, send that node information, get
information from that node, and so on.
Databases:
The information that is stored in the database is from the
configuration, performance, and audit data of the agents. There are
multiple databases on the server: the MIB database, the Network
Element database, and the Management Application databases
(Topology database, History Log, Monitor Logs). All of this runs on
top of SNMP. It is not necessary for SNMP to operate, but it does
allow for the human factor to fit in.
Agent:
Agents are simple elements that have access to a network element's (router, switch, PC, etc.) MIB. Agents are the interface
from the network management server to the client MIB. They
perform server-requested functions. When a server requests
information from a client, it will build its SNMP request and send it,
unicast, to the client. The agent receives this request, processes it,
retrieves or sets the information in the MIB of the client, and
generates some type of response to the server. Usually, agents
only transmit information when requested by a server.
However, there is one instance in which an agent will transmit unrequested information.
It is known as a trap. There are certain things on a network
station that may force it to immediately notify the server. Some of
these traps are defined by the RFC. Things such as cold/warm start
and authentication failure are traps that are sent to the server. Most
agent applications today permit the use of user-defined traps. This
means that the network administrator of an SNMP compliant device
can configure the router to send traps to the server when certain
conditions are met. For example, a router may send a trap to the
server when its memory buffers constantly overflow or when too
many ICMP redirects have been sent.
Another type of agent is known as the proxy agent. This allows one station to become an agent for another network station that does not support SNMP. Basically, a proxy agent acts as a translator between servers and non-SNMP-capable clients (for security, limited resources, etc.).
Management Information Base (MIB):
The MIB is a collection of objects that contain specific
information that together form a group. You can think of a MIB as a
database that contains certain information that was either preset
(during configuration of the node) or was gathered by the agent and
placed into the MIB. Simply stated, the MIB is a database that
contains information about the client that is currently placed on it.
The structure of management information is defined in RFC 1155
and defines the format of the MIB objects. This includes the
following:
Collection of objects that contain specific information that
together form a group.
Structure of Management Information is specified in RFC 1155.
The objects contain the following:
Syntax
Access
Status
Description
Index
DefVal
Value notation
Syntax (required): It is the abstract notation for the object type.
This defines the data type that models the object.
Access (required): It defines the minimal level of support required for the object type. It must be one of read-only, read-write, write-only, or not-accessible.
Status (required): It denotes the status of the MIB entry. It can be
mandatory, optional, obsolete, or deprecated (removed).
Description (Optional): It is a text description of the object type.
Index: Present only if the object type corresponds to a row in a table.
DefVal (Optional): It defines a default value that may be assigned
to the object when a new instance is created by an agent.
Value Notation: The name of the object, which is an Object Identifier.
14.7. E-MAIL
Today it is well known as Electronic Mail, or email. RFCs 821, 822 and 974 define this protocol. Email still cannot transport packages and other physical items, but it is very fast and guarantees delivery. Three protocols are used for today's email:
SMTP - operates over TCP
POP - operates over TCP
DNS - operates over UDP
SMTP allows for the sending/receiving of email.
POP allows us to intermittently retrieve email.
DNS makes it simple.
RFC 822 defines the structure for the message, and RFC
821 specifies the protocol that is used to exchange the mail
15
THE APPLICATION LAYER AND ITS
SECURITY
In this Unit, we are going to discuss in detail about the following
options available to secure the application layer.
Unit Structure
15.1 Cryptography
15.2 Symmetric key and asymmetric key cryptography
15.3 DES algorithm
15.4 RSA algorithm
15.5 Security services
15.6 Message and entity
15.1. CRYPTOGRAPHY
History of Cryptography:
Business transactions.
Principles of Cryptography:
(1) Symmetric (secret key):
An identical key is used for encryption and decryption
Strength of algorithm is determined by the size of the key,
longer the key more difficult it is to crack.
Key length is expressed in bits.
Typical key sizes vary between 48 bits and 448 bits.
The set of possible keys for a cipher is called the key space.
For a 40-bit key there are 2^40 possible keys.
For a 128-bit key there are 2^128 possible keys.
Each additional bit added to the key length doubles the security.
To crack the key, the hacker has to use brute force (try all the possible keys till a key that works is found).
A supercomputer can crack a 56-bit key in 24 hours.
It will take 2^72 times longer to crack a 128-bit key (longer than the age of the universe).
Primitive Ciphers:
Caesar Cipher is a method in which each letter in the plaintext is shifted n places. Mono-alphabetic Cipher is a method in which any letter can be substituted for any other letter.
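A small sketch of the Caesar cipher as described (shift of n places; decryption is the opposite shift):

def caesar_encrypt(plaintext, n):
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + n) % 26 + base))   # shift the letter n places
        else:
            out.append(ch)                                       # leave spaces and punctuation alone
    return "".join(out)

def caesar_decrypt(ciphertext, n):
    return caesar_encrypt(ciphertext, -n)

cipher = caesar_encrypt("attack at dawn", 3)
print(cipher)                        # dwwdfn dw gdzq
print(caesar_decrypt(cipher, 3))     # attack at dawn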
Advantages of primitive ciphers:
Relatively simple and significantly faster than the rest.
Used in an environment where single authority manages the
keys.
Used in environments where secure secret key distribution can
take place
Disadvantages:
Cryptography is often used for long messages.
Figure: Broad structure of DES - Initial Permutation (IP), division into LPT and RPT, 16 rounds, Final Permutation (FP), cipher text.
(d) Check the value and take the binary equivalent of the number.
(e) The result is a 4-bit binary number.
(f) For example, if the 6-bit number is 100101, then the first and last bits are 11, whose decimal equivalent is 3 (the row number). The remaining bits are 0010, whose decimal equivalent is 2 (the column number). If it is the first block of input, then check the value in the 3rd row, 2nd column of the S-box 1 substitution table. It is given as 1 in the table, and the binary equivalent of 1 is 0001.
(g) The 6-bit input 100101 is thus reduced to 0001 after S-box substitution.
Step 4 in the round is P-Box Permutation:
In this step, the output of the S-boxes, that is 32 bits, is permuted using a P-box. This mechanism involves a simple permutation, that is, replacement of each bit with another bit as specified in the P-box table, without any expansion or compression. This is called P-box permutation. The P-box is shown below.
16 7 20 21 29 12 28 17 1 15 23 26 5 18 31 10
2 8 24 14 32 27 3 9 19 13 30 6 22 11 4 25
For example, a 16 in the first block indicates that the bit at
position 16 moves to bit at position 1 in the output.
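A sketch of applying the straight permutation shown in the table above to a 32-bit value represented as a string of '0'/'1' characters (positions are 1-based, as in the text):

P_BOX = [16, 7, 20, 21, 29, 12, 28, 17, 1, 15, 23, 26, 5, 18, 31, 10,
         2, 8, 24, 14, 32, 27, 3, 9, 19, 13, 30, 6, 22, 11, 4, 25]

def pbox_permute(bits32):
    # Output bit i takes its value from input bit P_BOX[i]; no expansion or compression.
    assert len(bits32) == 32
    return "".join(bits32[pos - 1] for pos in P_BOX)

sample = "1" + "0" * 31                           # only input bit 1 is set
print(pbox_permute(sample).index("1") + 1)        # it ends up at output position 9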
Step 5 is XOR and Swap:
The untouched LPT, which is of 32 bits, is XORed with the
resultant RPT that is with the output produced by P-Box
permutation. The result of this XOR operation becomes the new
right half. The old right half becomes the new left half in the process
of swapping. This is shown below.
Original 64 bit Plain Text Block
Similarly, on the other side, if Y wants to send a message to X, the reverse method is performed.
(5) Y encrypts the message using X's public key and sends this to X.
(6) On receiving the message, X can further decrypt it using his own private key.
The basis of this working lies in the use of a very large number with only two prime factors. If one key of the pair is used for the encryption process, only the other key of the pair can be used for decryption.
The best example of an asymmetric key cryptography algorithm
is the famous RSA algorithm (developed by Rivest, Shamir and
Adleman at MIT in 1978, based on the framework setup by Diffie &
Hellman earlier).
In public-key cryptography, there are two keys: a private key and a
public key. The receiver keeps the private key. The public key is
announced to the public.
Imagine Alice, as shown in Figure wants to send a message to
Bob. Alice uses the public key to encrypt the message. When Bob
receives the message, the private key is used to decrypt the
message.
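A toy version of the RSA key setup and the encrypt/decrypt exchange described above, using two small primes (real keys use primes hundreds of digits long); the numbers are purely illustrative and the modular inverse uses pow(e, -1, phi), available in Python 3.8+.

p, q = 61, 53
n = p * q                       # 3233, the public modulus
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, chosen coprime with phi
d = pow(e, -1, phi)             # private exponent (2753), the modular inverse of e

def encrypt(m):
    return pow(m, e, n)         # done by the sender with the receiver's public key (e, n)

def decrypt(c):
    return pow(c, d, n)         # done by the receiver with the private key (d, n)

message = 65
cipher = encrypt(message)
print(cipher, decrypt(cipher))  # decrypt(encrypt(m)) == m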
15.5. PRINCIPLES OF SECURITY / SECURITY SERVICES:
Security rests on three basic requirements: Confidentiality, Authenticity and Integrity. These three are the basic levels of issues, which can lead to further loopholes.
Confidentiality:
Basically, threats can arise in the area of the network or in the area of the application. Application-level threats are mainly created by insiders who have full access to the computer system. These can be detected fairly easily and avoided by using suitable mechanisms; even so, application-level threats can still create serious problems for the system.
Network-level attacks are dangerous because of the major business transactions that happen through the Internet. Confidentiality means maintaining the actual secrecy of the message: a message that is traveling through the network should not be opened by any third party who is not involved in the transaction.
Nowadays the majority of bank transactions happen through the network, so the confidentiality issue creates a lot of problems related to the network, software and data.
The confidentiality-related issues could belong to any one of the following categories:
(1) IP Spoofing
(2) Packet sniffing
(3) Alteration of message
(4) Modification of message
(5) Man-in-middle
(6) Brute force attack
(7) Password cracking
By using any of the above-mentioned methods, a user can get into another's message and create problems related to the secrecy of the message. The intention of the user may be simply to view the message without making many changes to the hacked message.
Figure: Loss of Confidentiality - a secret message is read by a third party.
Authenticity:
This can be defined as an identity for a user to assure that the message is coming from the right person. This is another important issue, along with confidentiality, which may lead to further security threats. This can be assured by any of the following factors:
(1) Something you have (like tokens, credit card, passport etc).
(2) Something you know (like PIN numbers, account number
etc).
(3) Something you are (like fingerprints, signatures etc).
Generally, within a computer system, passwords are the simplest authentication mechanism, helping the system to authenticate a particular person. People can also use one-time passwords and key technology to assure authenticity during a message transaction.
Various issues related to authenticity include:
(1) Stealing password
(2) Fake login screen
(3) Information leakage Etc.
Figure: Absence of Authenticity - an attacker falsely claims "I am User A".
Fabrication is possible in the absence of proper authentication
mechanisms.
Availability:
This can be defined as keeping the right information or resources available to the right person at the right time. The issue can arise either with the data or with the hardware resources. An attack on availability stops a person from accessing various resources, for example by flooding the network.
There is a complexity with the availability of resources and data, because it can be identified as an issue only when the following conditions exist in the system.
(1) The resources are completely available up to the user's expectation.
(2) The content is present in a usable format.
(3) The access rights are used in a proper way.
The actual problem related to availability can be identified only when the above-mentioned conditions are assured before the problem is identified. This is a very serious issue, which can totally stop the process and leave the user idle, unable to proceed with any further work.
The main problems related to availability are DOS and DDOS attacks. These work by flooding the network path with a continuous stream of packets, creating heavy traffic in the network. This stops the right people from accessing the right information at the right time.
Figure : Attack on Availability
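One common defence against the flooding behaviour described above is rate limiting. The token-bucket sketch below is purely illustrative and is not mentioned in the syllabus text; the class name and parameters are our own assumptions.

# A minimal token-bucket rate limiter sketch: a common defence against
# request flooding. Names and parameters here are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True        # serve the request
        return False           # drop or delay the request

bucket = TokenBucket(rate=5, capacity=10)   # roughly 5 requests/second sustained
for i in range(12):
    print(i, bucket.allow())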
Access Control:
Access control deals with hardware resources, software and data. It helps the operating system grant access to a particular resource or piece of data only to an authorized person, and in this way it is interrelated with authenticity and availability.
Authenticated users are allotted rights such as Read, Write, Read/Write, Owner, etc. The operating system maintains these rights in a tabular format or as a linked list. The user's authenticity is verified first, and then the authenticated user's rights are checked against a table such as the one below.
USERS/FILES   USER1   USER2   USER3
FILE1         RW      O       -
FILE2         RW      OR
FILE3         W       W       OW
(R = Read, W = Write, O = Owner)
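A sketch of how such an access-rights table can be consulted is shown below. The table contents mirror the example above; the function name and the rule that an owner holds all rights are illustrative assumptions.

# A minimal access-control-matrix sketch mirroring the table above.
# "R" = Read, "W" = Write, "O" = Owner (assumed here to hold all rights).
ACCESS_MATRIX = {
    "FILE1": {"USER1": "RW", "USER2": "O",  "USER3": ""},
    "FILE2": {"USER1": "RW", "USER2": "OR", "USER3": ""},
    "FILE3": {"USER1": "W",  "USER2": "W",  "USER3": "OW"},
}

def is_allowed(user: str, filename: str, right: str) -> bool:
    rights = ACCESS_MATRIX.get(filename, {}).get(user, "")
    # The owner ("O") is treated as having every right on the file.
    return right in rights or "O" in rights

print(is_allowed("USER1", "FILE1", "R"))   # True  (USER1 has RW on FILE1)
print(is_allowed("USER3", "FILE1", "W"))   # False (USER3 has no rights on FILE1)
print(is_allowed("USER2", "FILE1", "W"))   # True  (USER2 owns FILE1)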
Figure : Loss of Integrity (a message "Transfer $100 to D" is altered along its actual route to read "Transfer $1000 to C")
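Alteration of a message in transit, as in the figure above, can be detected by sending a cryptographic digest along with the message; the receiver recomputes the digest and compares. The sketch below uses SHA-256 from the Python standard library and is purely illustrative.

# A minimal integrity-check sketch: the receiver recomputes the digest and
# compares it with the digest that accompanied the message.
import hashlib

def digest(message: bytes) -> str:
    return hashlib.sha256(message).hexdigest()

original = b"Transfer $100 to D"
sent_digest = digest(original)

tampered = b"Transfer $1000 to C"          # altered in transit
print(digest(tampered) == sent_digest)     # False -> integrity lost
print(digest(original) == sent_digest)     # True  -> message unchanged

In practice the digest itself must also be protected, for example with a keyed hash or a digital signature, as listed in the techniques table below.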
Wherever problems exist, corresponding solutions are also available. It is the responsibility of the user to categorize the problem and identify a suitable solution. Generally, the solutions can be described in terms of:
(a) Security Issues
(b) Security Objectives
(c) Security Techniques
Security Issue    | Security Objective                    | Security Technique
Confidentiality   | Privacy of message                    | Encryption
Authentication    | Origin verification                   | Digital signatures, Challenge-response, Passwords, and Biometric devices
Non-repudiation   | Proof of origin, receipt and contents | Bi-directional hashing, Digital signatures, Transaction certificates, Time stamps and Confirmation services
Access controls   | Limiting entry to authorized users    | Firewalls, Passwords and Biometric devices
Message Headers
HTTP header fields, which include general-header
(section 4.5), request-header (section 5.3), response-header
(section 6.2), and entity-header (section 7.1) fields, follow the same
generic format as that given in Section 3.1 of RFC 822 [9]. Each
header field consists of a name followed by a colon (":") and the
field value. Field names are case-insensitive. The field value MAY
be preceded by any amount of LWS, though a single SP is
preferred. Header fields can be extended over multiple lines by
preceding each extra line with at least one SP or HT. Applications
ought to follow "common form", where one is known or indicated,
when generating HTTP constructs, since there might exist some
implementations that fail to accept anything beyond the common
forms.
message-header = field-name ":" [ field-value ]
field-name = token
field-value = *( field-content | LWS )
field-content = <the OCTETs making up the field-value
and consisting of either *TEXT or combinations
of token, separators, and quoted-string>
The field-content does not include any leading or trailing
LWS: linear white space occurring before the first non-whitespace
character of the field-value or after the last non-whitespace
character of the field-value. Such leading or trailing LWS MAY be
removed without changing the semantics of the field value. Any
LWS that occurs between field-content MAY be replaced with a
single SP before interpreting the field value or forwarding the
message downstream.
The order in which header fields with differing field names
are received is not significant. However, it is "good practice" to
send general-header fields first, followed by request-header or
response-header fields, and ending with the entity-header fields.
Multiple message-header fields with the same field-name
MAY be present in a message if and only if the entire field-value for
that header field is defined as a comma-separated list [i.e.,
#(values)]. It MUST be possible to combine the multiple header
fields into one "field-name: field-value" pair, without changing the
semantics of the message, by appending each subsequent field-value to the first, each separated by a comma. The order in which
header fields with the same field-name are received is therefore
significant to the interpretation of the combined field value, and thus
a proxy MUST NOT change the order of these field values when a
message is forwarded.
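The folding and combining rules above can be illustrated with a small parser. The sketch below (plain Python; the function name and sample headers are our own) unfolds continuation lines that begin with SP or HT and merges repeated field names into one comma-separated value.

# A minimal HTTP header parsing sketch illustrating the rules above:
# continuation lines starting with SP/HT are unfolded, field names are
# case-insensitive, and repeated fields are combined with commas.
def parse_headers(raw_lines):
    unfolded = []
    for line in raw_lines:
        if line.startswith((" ", "\t")) and unfolded:
            unfolded[-1] += " " + line.strip()     # fold continuation line
        else:
            unfolded.append(line.rstrip("\r\n"))

    headers = {}
    for line in unfolded:
        name, _, value = line.partition(":")
        name = name.strip().lower()                # field names are case-insensitive
        value = value.strip()                      # strip leading/trailing LWS
        if name in headers:
            headers[name] += ", " + value          # combine repeated fields in order
        else:
            headers[name] = value
    return headers

raw = [
    "Cache-Control: no-cache",
    "Accept: text/html,",
    "\tapplication/xml",            # continuation line (starts with HT)
    "Accept: application/json",     # repeated field name
]
print(parse_headers(raw))
# {'cache-control': 'no-cache', 'accept': 'text/html, application/xml, application/json'}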
Message Body
The message-body (if any) of an HTTP message is used to
carry the entity-body associated with the request or response. The
message-body differs from the entity-body only when a transfer-coding has been applied, as indicated by the Transfer-Encoding
header field (section 14.41).
message-body = entity-body
| <entity-body encoded as per Transfer-Encoding>
Transfer-Encoding MUST be used to indicate any transfer-codings applied by an application to ensure safe and proper
transfer of the message. Transfer-Encoding is a property of the
message, not of the entity, and thus MAY be added or removed by
any application along the request/response chain.
The rules for when a message-body is allowed in a
message differ for requests and responses.
The presence of a message-body in a request is signaled by
the inclusion of a Content-Length or Transfer-Encoding header field
in the request's message-headers. A message-body MUST NOT be
included in a request if the specification of the request method does
not allow sending an entity-body in requests. A server SHOULD
read and forward a message-body on any request; if the request
method does not include defined semantics for an entity-body, then
the message-body SHOULD be ignored when handling the request.
For response messages, whether or not a message-body is
included with a message is dependent on both the request method
and the response status code. All responses to the HEAD request
method MUST NOT include a message-body, even though the
presence of entity- header fields might lead one to believe they do.
All 1xx (informational), 204 (no content), and 304 (not modified)
responses MUST NOT include a message-body. All other
responses do include a message-body, although it MAY be of zero
length.
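The response-body rules above can be summarised in a small helper. The sketch below is an illustrative reading of those rules, not library code.

# A minimal sketch of the response message-body rules described above.
def response_has_body(request_method: str, status_code: int) -> bool:
    if request_method.upper() == "HEAD":
        return False                       # responses to HEAD never carry a body
    if 100 <= status_code < 200:
        return False                       # 1xx informational responses
    if status_code in (204, 304):
        return False                       # No Content / Not Modified
    return True                            # all other responses may carry a body
                                           # (possibly of zero length)

print(response_has_body("GET", 200))   # True
print(response_has_body("HEAD", 200))  # False
print(response_has_body("GET", 304))   # False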
Message Length
The transfer-length of a message is the length of the
message-body as it appears in the message; that is, after any
transfer-codings have been applied. When a message-body is
included with a message, the transfer-length of that body is
determined by one of the following (in order of precedence):
1. Any response message which "MUST NOT" include a message-body (such as the 1xx, 204, and 304 responses and any response
to a HEAD request) is always terminated by the first empty line after the header fields, regardless of the entity-header fields present in the message.