Computer Network Notes
Network architecture – layers – Physical links – Channel access on links – Hybrid multiple access
techniques - Issues in the data link layer - Framing – Error correction and detection – Link-level Flow
Control.
Network Architecture
A network architecture guides the design and implementation of networks. Two of the most widely
referenced architectures are the OSI architecture and the Internet architecture.
Layering
In OSI Architecture, there are 7 layers which can be combined into basic four layers as shown in the below
figure.
Layering Characteristics
Each layer relies on services from the layer below and exports services to the layer above
Hides implementation: layers can change without disturbing other layers
Abstraction
In the above figure, two layers of abstraction are sandwiched between the application program and the
underlying hardware. The services provided at the higher layers are implemented in terms of the services
provided by the lower layers.
The layer immediately above the hardware might provide host-to-host connectivity, and the next layer up
builds on the available host-to-host communication to provide a process-to-process channel that supports
application-to-application communication. As the figure shows, the process-to-process channel is a
combination of two abstract channels: a request/reply channel for sending or receiving request or reply
messages, and a message stream channel for sending or receiving the actual message data.
Features of Layering
• First, it decomposes the problem of building a network into more manageable components,
rather than implementing a monolithic piece of software
• Second, it provides a more modular design.
Protocol
Protocol graph
The suite of protocols that makes up a network system can be represented as a protocol graph. The
following figure illustrates a protocol graph for the hypothetical layered system described above.
Example of a protocol graph
In this example, suppose that the file access program on host 1 wants to send a message to its peer on
host 2 using the communication service offered by protocol RRP. In this case, the file application asks RRP
to send the message on its behalf. To communicate with its peer, RRP then invokes the services of HHP,
which in turn transmits the message to its peer on the other machine. Once the message has arrived at
protocol HHP on host 2, HHP passes the message up to RRP, which in turn delivers the message to the file
application.
Encapsulation
When one of the application programs sends a message to its peer, it passes the message to protocol
RRP, and RRP attaches a header to it. The header is a small data structure containing control information,
usually placed at the front of the message. We say that the application's data is encapsulated in the new
message created by protocol RRP. This process of encapsulation is then repeated at each level of the
protocol graph; for example, HHP encapsulates RRP's message by attaching a header of its own.
Now assume that HHP sends the message to its peer over some network, then when the message arrives
at the destination host, it is processed in the opposite order: HHP first strips its header off the front of the
message, interprets it and passes the body of the message up to RRP, which removes its own header and
passes the body of the message up to the application program. The message passed up from RRP to the
application on host 2 is exactly the same message as the application passed down to RRP on host 1. This
whole process is illustrated in the following figure.
Example of how high-level messages are encapsulated inside of low-level messages
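The encapsulation and decapsulation steps above can be sketched in a few lines of Python; the `RRP|`/`HHP|` header layout and the helper names are invented purely for illustration, not taken from any real protocol implementation.

```python
# Sketch of encapsulation/decapsulation with hypothetical RRP and HHP headers.
def rrp_send(data: bytes) -> bytes:
    return b"RRP|" + data          # RRP prepends its header to the application data

def hhp_send(rrp_msg: bytes) -> bytes:
    return b"HHP|" + rrp_msg       # HHP encapsulates RRP's entire message

def hhp_receive(frame: bytes) -> bytes:
    assert frame.startswith(b"HHP|")
    return frame[4:]               # strip HHP header, pass body up to RRP

def rrp_receive(msg: bytes) -> bytes:
    assert msg.startswith(b"RRP|")
    return msg[4:]                 # strip RRP header, pass body up to the application

original = b"hello"
on_wire = hhp_send(rrp_send(original))
delivered = rrp_receive(hhp_receive(on_wire))
assert delivered == original       # same message the application passed down on host 1
```

Note how the receive path strips headers in the opposite order from the send path, exactly as the figure describes.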
OSI Architecture
The ISO Open Systems Interconnection (OSI) architecture is illustrated in below figure which defines a
partitioning of network functionality into seven layers, where one or more protocols implement the
functionality assigned to a given layer.
• Starting at the bottom and working up, the physical layer handles the transmission of raw bits
over a communications link.
• The data link layer then collects a stream of bits into a larger aggregate called a frame.
• The network layer handles routing among nodes within a packet-switched network. At this layer,
the unit of data exchanged among nodes is typically called a packet rather than a frame.
• The lower three layers are implemented on all network nodes, including switches within the
network and hosts connected along the exterior of the network.
• The transport layer then implements what we have up to this point been calling a process-to-
process channel. Here, the unit of data exchanged is commonly called a message rather than a
packet or a frame.
• The session layer performs the synchronization.
• The presentation layer is concerned with the format of data exchanged between peers
• The application layer is where application programs run; these programs interact with the user and
accept messages from the user.
Internet Architecture
The Internet architecture is also sometimes called the TCP/IP architecture.
• It is a four-layer model
• At the lowest level are a wide variety of network protocols, denoted NET1, NET2, and so on. In
practice, these protocols are implemented by a combination of hardware (e.g., a network
adaptor) and software (e.g., a network device driver).
• The second layer consists of a single protocol—the Internet Protocol (IP). This is the protocol that
supports the interconnection of multiple networking technologies into a single, logical
internetwork.
• The third layer contains two main protocols—the Transmission Control Protocol (TCP) and the
User Datagram Protocol (UDP). TCP provides a reliable byte-stream channel, and UDP provides
an unreliable datagram delivery channel
• Running above the transport layer are a range of application protocols, such as FTP, TFTP (Trivial
File Transfer Protocol), Telnet (remote login), and SMTP (Simple Mail Transfer Protocol, or
electronic mail), that enable the interoperation of popular applications.
• The Internet architecture does not imply strict layering. That is, the application is free to bypass
the defined transport layers and to directly use IP or one of the underlying networks
• Looking closely at the internet protocol graph, it has an hourglass shape—wide at the top, narrow
in the middle, and wide at the bottom. That is, IP serves as the focal point for the architecture—it
defines a common method for exchanging packets among a wide collection of networks.
• It has the ability to adapt rapidly to new user demands and changing technologies.
OSI architecture:
• Seven layers: Physical layer, Data link layer, Network layer, Transport layer, Session layer,
Presentation layer, Application layer
• Each layer defines a family of functions, and the functions are interdependent
Internet architecture:
• Four layers: Network layer, IP layer, Transport layer, Application layer
• Each layer defines a number of protocols, and they are not dependent on one another
Links
Types of Links
The communication between the nodes is either based on a point-to-point model or a multicast model.
In the point-to-point model, a message follows a specific route across the network in order to get from
one node to another. In the multicast model, on the other hand, all nodes share the same communication
medium and, as a result, a message transmitted by any node can be received by all other nodes. A part of
the message (an address) indicates for which node the message is intended. All nodes look at this address
and ignore the message if it does not match their own address.
Connection Types
Connections between devices may be classified into three categories:
1. Simplex. This is a unidirectional connection, i.e., data can only travel in one direction. Simplex
connections are useful in situations where a device only receives or only sends data (e.g., a printer).
2. Half-duplex. This is a bidirectional connection, with the restriction that data can travel in one direction
at a time.
3. Full-duplex. This is a bidirectional connection in which data can travel in both directions at once. A
full-duplex connection is equivalent to two simplex connections in opposite directions.
There are five problems that must be addressed before the nodes can successfully exchange packets.
1. Encoding
2. Framing
3. Error detection
4. Reliable delivery
5. Media access control
Encoding:
NRZ
Non-return to zero (NRZ) transmits 1s as a high voltage and 0s as a low voltage
Problem: long runs of consecutive 1s or 0s
A long low signal (a run of 0s) may be interpreted as no signal
A long high signal (a run of 1s) leads to baseline wander
The receiver is unable to recover the clock
NRZI
Non-return to zero inverted (NRZI) makes a transition from the current signal level if the input bit is 1,
and no transition if the input bit is 0.
Manchester encoding: a transition occurs in the middle of each bit time; 0 becomes a low-to-high
transition and 1 a high-to-low transition.
Differential Manchester: a transition at the beginning of the interval transmits 0; no transition at the
beginning of the interval transmits 1. The transition in the middle is always present and provides the clock.
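The three line encodings above can be sketched as bit-to-signal-level mappings; the function names and the "L"/"H" level representation are made up for illustration.

```python
# A minimal sketch of NRZ, NRZI, and Manchester encoding over "L"/"H" signal levels.
def nrz(bits):
    # NRZ: 1 -> high, 0 -> low
    return ["H" if b == "1" else "L" for b in bits]

def nrzi(bits, start="L"):
    # NRZI: make a transition on a 1, stay at the current level on a 0
    level, out = start, []
    for b in bits:
        if b == "1":
            level = "H" if level == "L" else "L"
        out.append(level)
    return out

def manchester(bits):
    # Manchester: each bit becomes two half-bit levels;
    # 0 -> low-to-high transition, 1 -> high-to-low transition
    return [("L", "H") if b == "0" else ("H", "L") for b in bits]

assert nrz("0110") == ["L", "H", "H", "L"]
assert nrzi("0110") == ["L", "H", "L", "L"]
assert manchester("01") == [("L", "H"), ("H", "L")]
```

The NRZI output shows why it solves runs of 1s but not runs of 0s: every 1 forces a transition, while 0s leave the level unchanged.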
4B/5B
4B/5B encoding breaks up long runs of 0s: every 4 bits of data are encoded in a 5-bit code whose
codewords have no more than one leading 0 and no more than two trailing 0s. The resulting 5-bit codes
are then transmitted using NRZI.
Framing
When node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame from the node’s
memory. This results in a sequence of bits being sent over the link. The adaptor on node B then collects
together the sequence of bits arriving on the link and deposits the corresponding frame in B’s memory.
Recognizing exactly what set of bits constitutes a frame, that is, determining where the frame begins and
ends, is the central challenge faced by the adaptor.
Approaches
There are several ways to address the framing problem. Some of them are:
1. Byte Oriented: Special character to delineate frames, replace character in data stream
a. Sentinel approach
b. Byte counting approach
2. Bit Oriented: use a technique known as bit stuffing
3. Clock Based: fixed length frames, high reliability required
1. Byte-Oriented Protocols
A byte-oriented approach is exemplified by the BISYNC (Binary Synchronous Communication) protocol
developed by IBM. The BISYNC protocol illustrates the sentinel approach to framing; its frame format is
depicted in the following figure
Sentinel Approach
– PPP protocol uses 0x7e=01111110 as the flag byte to delimit a frame
– When a 0x7e is seen in the payload, it must be escaped to keep it from being seen as an
end of frame
The beginning of a frame is denoted by sending a special SYN (synchronization) character. The data
portion of the frame is then contained between special sentinel characters: STX (start of text) and ETX
(end of text). The SOH (start of header) field serves much the same purpose as the STX field. The frame
contains additional header fields that are used for, among other things, the link-level reliable delivery
algorithm
The problem with the sentinel approach is that the ETX character might appear in the data portion of the
frame. BISYNC overcomes this problem by "escaping" the ETX character: it is preceded by a DLE (data-
link-escape) character whenever it appears in the body of a frame.
The DLE character is also escaped (by preceding it with an extra DLE) in the frame body. This approach is
often called character stuffing because extra characters are inserted in the data portion of the frame.
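The DLE escaping described above can be sketched as follows; the helper names are made up, and the unstuffing side naively assumes the frame boundaries have already been located (a real receiver would scan for an unescaped ETX).

```python
# Sketch of BISYNC-style character stuffing: escape ETX and DLE in the body with DLE.
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def stuff(body: bytes) -> bytes:
    out = bytearray()
    for byte in body:
        if bytes([byte]) in (DLE, ETX):
            out += DLE                    # insert an escape before the special character
        out.append(byte)
    return STX + bytes(out) + ETX         # wrap the stuffed body in sentinels

def unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]                    # naive: assumes the frame boundaries are known
    out, i = bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == DLE:
            i += 1                        # skip the escape, keep the following byte
        out.append(body[i])
        i += 1
    return bytes(out)

data = b"A" + ETX + b"B"                  # body that contains the ETX sentinel
assert unstuff(stuff(data)) == data
```

Because DLE itself is escaped, a body containing arbitrary bytes always round-trips correctly.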
In the PPP frame format, the special start-of-text character, denoted as the Flag field, is 01111110. The
Address and Control fields usually contain default values and so are uninteresting. The Protocol field is
used for demultiplexing. The frame payload size can be negotiated, but it is 1500 bytes by default.
The Checksum field is either 2 (by default) or 4 bytes long and is used for error detection.
In the byte-counting approach, the COUNT field specifies how many bytes are contained in the frame's
body. One danger with this approach is that a transmission error could corrupt the COUNT field, in which
case the end of the frame would not be correctly detected.
2. Bit Oriented approach
The High-Level Data Link Control (HDLC) protocol developed by IBM is an example of a bit-oriented
protocol.
HDLC: High-Level Data Link Control
Delineate frame with a special bit-sequence: 01111110
Its frame format is given in below figure.
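The bit stuffing used with the 01111110 flag can be sketched as follows, operating on bit strings for clarity; after five consecutive 1s in the body the sender inserts a 0, so the flag pattern can never appear inside a frame. The helper names are made up.

```python
# Sketch of HDLC-style bit stuffing and unstuffing over bit strings.
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit: breaks any run of five 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0 that must follow five 1s
            run = 0
        i += 1
    return "".join(out)

assert bit_stuff("0111111") == "01111101"
assert bit_unstuff("01111101") == "0111111"
```

The receiver removes any 0 that follows five consecutive 1s, restoring the original body exactly.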
3. Clock-Based Framing
This approach to framing is used by the Synchronous Optical Network (SONET) standard.
Error Detection and Correction
Data can be corrupted during transmission. For reliable communication, errors must be detected and
corrected.
Types of Error
1. Single-bit error: only one bit in the given data unit is changed.
2. Burst error: two or more bits in the data unit are changed.
The basic idea behind any error detection scheme is to add redundant information to a frame that can be
used to determine if errors have been introduced. In other words, error detection uses the concept of
redundancy, which means adding extra bits for detecting errors at the destination as shown in below
figure.
We say that the extra bits we send are redundant because they add no new information to the message.
Instead, they are derived directly from the original message using some well-defined algorithm. Both the
sender and the receiver know exactly what that algorithm is. The sender applies the algorithm to the
message to generate the redundant bits. It then transmits both the message and those few extra bits.
When the receiver applies the same algorithm to the received message, it should (in the absence of
errors) come up with the same result as the sender. It compares the result with the one sent to it by the
sender. If they match, it can conclude (with high likelihood) that no errors were introduced in the message
during transmission. If they do not match, it can be sure that either the message or the redundant bits
were corrupted, and it must take appropriate action, that is, discarding the message, or correcting it if
that is possible.
Parity Check
1. Simple-parity check
2. Two dimensional parity check
Simple-parity check
In this parity check, a parity bit is added to every data unit so that the total number of 1s is even (or odd
for odd-parity). The following figure illustrates this concept.
Suppose the sender wants to send the word world. In ASCII the five characters are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent
11101110 11011110 11100100 11011000 11001001
Now suppose the word world is received by the receiver without being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers (6, 6, 4, 4, 4). The data
are accepted.
Now suppose the word world is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd numbers (7, 6, 5, 4, 4).
The receiver knows that the data are corrupted, discards them, and asks for retransmission.
Performance
Simple parity check can detect all single-bit errors. It can detect burst errors only if the total number of
errors in each data unit is odd.
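The even-parity example above can be reproduced in a few lines; `with_even_parity` is a made-up helper name, and the character codes are standard 7-bit ASCII.

```python
# Sketch reproducing the even-parity example for the word "world".
def with_even_parity(ch: str) -> str:
    bits = format(ord(ch), "07b")        # 7-bit ASCII code of the character
    parity = str(bits.count("1") % 2)    # parity bit is 1 iff the count of 1s is odd
    return bits + parity                 # total number of 1s becomes even

sent = [with_even_parity(c) for c in "world"]
assert sent == ["11101110", "11011110", "11100100", "11011000", "11001001"]

# Receiver check: every unit must contain an even number of 1s.
assert all(unit.count("1") % 2 == 0 for unit in sent)
```

Flipping any single bit in a unit makes its count of 1s odd, which is exactly what the receiver detects.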
In two-dimensional parity check, a block of bits is divided into rows and a redundant row of bits is added
to the whole block.
CRC
Parity checks are based on addition; CRC is based on binary division
A sequence of redundant bits (the CRC or CRC remainder) is appended to the end of the data unit
These bits are later used in calculations to detect whether or not an error has occurred.
CRC Steps
• On sender’s end, data unit is divided by a predetermined divisor; remainder is the CRC
• When appended to the data unit, it should be exactly divisible by a second predetermined
binary number
• At receiver’s end, data stream is divided by same number
• If no remainder, data unit is assumed to be error-free
Deriving the CRC
• A string of 0s is appended to the data unit; n is one less than number of bits in
predetermined divisor
• New data unit is divided by the divisor using binary division; remainder is CRC
• CRC of n bits replaces appended 0s at end of data unit
A divisor of n + 1 bits gives an n-bit CRC remainder. For example, the divisor 11001 corresponds to the
polynomial x^4 + x^3 + 1.
A polynomial is used to represent the CRC generator because it is a compact, cost-effective way to
perform the calculations quickly.
A good divisor polynomial should not be divisible by x and should be divisible by (x + 1). The first
condition guarantees that all burst errors of a length equal to the degree of the polynomial are
detected. The second condition guarantees that all burst errors affecting an odd number of bits are
detected.
CRC Performance
CRC can detect all single-bit errors
CRC can detect all double-bit errors, provided the divisor contains at least three 1s
CRC can detect any odd number of errors, provided the divisor is divisible by (x + 1)
CRC can detect all burst errors of length less than or equal to the degree of the polynomial
CRC detects most larger burst errors with a high probability. For example, CRC-12 detects
99.97% of burst errors of length 12 or more.
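The CRC steps above can be sketched with modulo-2 (XOR) long division, using the divisor 11001 mentioned earlier; `crc_remainder` is a made-up helper and the 7-bit data value is an arbitrary example.

```python
# Sketch of CRC computation by modulo-2 long division over bit strings.
def crc_remainder(data: str, divisor: str) -> str:
    n = len(divisor) - 1                # remainder length: one less than the divisor
    buf = list(data + "0" * n)          # sender appends n zeros before dividing
    for i in range(len(data)):
        if buf[i] == "1":               # XOR the divisor in whenever the leading bit is 1
            for j, d in enumerate(divisor):
                buf[i + j] = str(int(buf[i + j]) ^ int(d))
    return "".join(buf[-n:])            # the last n bits are the remainder (the CRC)

data = "1010101"
crc = crc_remainder(data, "11001")      # 4-bit CRC for the divisor 11001
codeword = data + crc                   # CRC replaces the appended zeros

# Receiver side: dividing the received codeword leaves a zero remainder if error-free.
assert crc_remainder(codeword, "11001") == "0000"
```

Any transmission error that changes the codeword to a bit pattern not divisible by the divisor leaves a nonzero remainder, which is how the receiver detects it.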
Checksum
When the algorithm used to create the code is based on addition, the code is called a checksum.
For example, suppose the data consists of the two sections 10101001 and 00111001. Their sum is
11100010, and its complement, 00011101, is the checksum sent along with the data.
When the receiver adds the three sections, it will get all 1s, which, after complementing, is all 0s and
shows that there is no error.
10101001
00111001
00011101
-----------
Sum 11111111
Complement 00000000 means that the pattern is OK.
Performance
Detects all errors involving odd number of bits, most errors involving even number of bits
If one or more bits of a segment are damaged and the corresponding bits of opposite value in a
second segment are also damaged, the sums of these columns will not change and the receiver
will not detect a problem.
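The checksum arithmetic above can be reproduced with 8-bit one's complement addition; `ones_add` is a made-up helper that wraps any carry back into the sum.

```python
# Sketch of the 8-bit one's complement checksum example.
def ones_add(a: int, b: int, width: int = 8) -> int:
    s = a + b
    mask = (1 << width) - 1
    return (s & mask) + (s >> width)    # wrap the carry-out back into the low bits

seg1, seg2 = 0b10101001, 0b00111001
total = ones_add(seg1, seg2)            # 11100010
checksum = total ^ 0xFF                 # one's complement of the sum
assert format(checksum, "08b") == "00011101"

# Receiver adds all three sections; complement of all-1s is all-0s -> no error.
recv = ones_add(ones_add(seg1, seg2), checksum)
assert format(recv ^ 0xFF, "08b") == "00000000"
```

This is the same idea used (with 16-bit words) by the Internet checksum.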
Hamming Code
In the Hamming code, r redundancy bits are added to m data bits such that 2^r >= m + r + 1. It is
important to know the locations of the r bits within the m + r bits: the r bits are placed in positions
1, 2, 4, 8, ... (powers of 2). Suppose m = 7; then r must be 4 bits and the total number of bits becomes
11 (m + r), in which the r bits are placed in locations 1, 2, 4, and 8 as shown below.
In a 7-bit data unit, the combinations of locations used to calculate the r values are as follows:
r1: positions 1, 3, 5, 7, 9, 11
r2: positions 2, 3, 6, 7, 10, 11
r4: positions 4, 5, 6, 7
r8: positions 8, 9, 10, 11
Calculating r values
1. We place each r bit in its appropriate location in the m + r length data unit.
2. We calculate the even parities for the various bit combinations.
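The placement and parity calculations can be sketched as follows for m = 7; `hamming_encode` and the example input are hypothetical, but the power-of-2 placement and even-parity rule follow the scheme above.

```python
# Sketch: place 4 parity bits at positions 1, 2, 4, 8 of an 11-bit codeword and
# compute each parity over the positions whose binary index includes that power of 2.
def hamming_encode(data: str) -> str:              # data: 7 bits, in position order
    n = 11
    code = [0] * (n + 1)                           # 1-indexed positions
    it = iter(data)
    for pos in range(1, n + 1):
        if pos not in (1, 2, 4, 8):                # data fills the non-power-of-2 slots
            code[pos] = int(next(it))
    for r in (1, 2, 4, 8):
        covered = [p for p in range(1, n + 1) if p & r and p != r]
        code[r] = sum(code[p] for p in covered) % 2   # even parity over covered bits
    return "".join(map(str, code[1:]))

codeword = hamming_encode("1011001")
assert len(codeword) == 11
```

The `p & r` test reproduces exactly the location combinations listed for r1, r2, r4, and r8.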
Reliable Transmission
Even when error-correcting codes are used some errors will be too severe to be corrected. As a result,
some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must
somehow recover from these discarded (lost) frames. This is usually accomplished using a combination of
two fundamental mechanisms—
1. acknowledgments
2. timeouts
• An acknowledgment (ACK for short) is a small control frame that a protocol sends back to its
peer saying that it has received an earlier frame. By control frame we mean a header without any
data.
• If the sender does not receive an acknowledgment after a reasonable amount of time, then it
retransmits the original frame. This action of waiting a reasonable amount of time is called a
timeout.
Propagation delay is defined as the delay between transmission and receipt of packets between hosts.
Propagation delay can be used to estimate timeout period
The general strategy of using acknowledgments and timeouts to implement reliable delivery is sometimes
called automatic repeat request (ARQ). There are four different ARQ algorithms:
1. Stop-and-Wait ARQ
2. Sliding Window ARQ
3. Go back N ARQ
4. Selective Repeat ARQ
1. Stop-and-Wait
• The sender does not send the next frame until it is sure the receiver has received the previous one
• The data frame/ACK frame sequence enables reliability; frames are numbered alternately 0 and 1
• Sequence numbers help avoid problem of duplicate frames
• If the sender does not receive an acknowledgment after a reasonable amount of time, then it
retransmits the original frame
• The sender also starts retransmission when the timeout occurs.
a) Normal operation b) The Original frame is lost c) The ACK is lost d) Timeout occurs
Disadvantage
• The link capacity cannot be utilized effectively, since only one data frame or ACK frame can be
sent at a time
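The stop-and-wait behavior above (alternating 0/1 sequence numbers, timeout retransmission, duplicate suppression) can be sketched as a small simulation; frame and ACK loss are simulated with seeded random draws, and all names here are invented for illustration.

```python
import random

# Sketch: stop-and-wait over a lossy channel; one outstanding frame at a time.
def stop_and_wait(frames, loss=0.3, seed=1):
    random.seed(seed)
    delivered, expect = [], 0
    for i, payload in enumerate(frames):
        seq = i % 2                       # sequence numbers alternate 0, 1, 0, ...
        while True:
            if random.random() < loss:    # data frame lost: timeout, retransmit
                continue
            if seq == expect:             # new frame: receiver accepts it
                delivered.append(payload)
                expect = 1 - expect
            # else: duplicate (retransmission after a lost ACK) is ignored
            if random.random() < loss:    # ACK lost: sender times out, resends
                continue
            break                         # ACK received: move on to the next frame
    return delivered

assert stop_and_wait(["a", "b", "c"]) == ["a", "b", "c"]
```

Despite losses, the alternating sequence number lets the receiver discard duplicates, so the delivered stream always matches the sent stream.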
2. Sliding Window
The sender maintains three variables: the send window size (SWS), which gives the upper bound on the
number of outstanding (unacknowledged) frames; LAR, the sequence number of the last acknowledgment
received; and LFS, the sequence number of the last frame sent. The sender maintains the invariant
LFS − LAR ≤ SWS.
When an acknowledgment arrives, the sender moves LAR to the right, thereby allowing the
sender to transmit another frame. The sender associates a timer with each frame it transmits,
and it retransmits the frame should the timer expire before an ACK is received.
The receiver maintains the following three variables:
a. The receive window size, denoted RWS, gives the upper bound on the number of out-of-
order frames that the receiver is willing to accept;
b. LAF denotes the sequence number of the largest acceptable frame;
c. LFR denotes the sequence number of the last frame received.
The receiver also maintains the following invariant:
LAF − LFR ≤ RWS
If LFR < SeqNum ≤ LAF, then the frame with sequence number SeqNum is accepted.
If SeqNum ≤ LFR or SeqNum > LAF, then the frame with sequence number SeqNum is discarded.
This situation is illustrated in below figure
Operation
The sender window holds the frames that have been transmitted but remain unacknowledged. This
window can vary in size, from empty to the entire range. The receiver window size is fixed. A receiver
window size of 1 means that frames must be received in transmission order; larger window sizes allow
the receiver to accept that many frames out of order.
1. When a frame with sequence number SeqNum arrives, the receiver takes the following action.
a. If SeqNum ≤ LFR or SeqNum > LAF, then the frame is outside the receiver’s window and
it is discarded.
b. If LFR < SeqNum ≤ LAF, then the frame is within the receiver’s window and it is
accepted. Now the receiver needs to decide whether or not to send an ACK. The
acknowledgement can be cumulative.
c. It then sets LFR = Sequence Number to Acknowledge and adjusts LAF = LFR + RWS.
The following figure illustrates the operation of sliding window.
The receiver can set RWS to whatever it wants. Two common settings are:
1. RWS = 1, implies that the receiver will not buffer any frames that arrive out of order
2. RWS = SWS implies that the receiver can buffer any of the frames the sender transmits.
It makes no sense to set RWS > SWS, since it is impossible for more than SWS frames to arrive out of order.
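The receiver's acceptance test follows directly from the rules above; `accept` and the window values here are illustrative, with LAF derived as LFR + RWS.

```python
# Sketch of the sliding window receiver's acceptance test:
# a frame is accepted only if LFR < SeqNum <= LAF, where LAF = LFR + RWS.
RWS = 4

def accept(seq_num: int, lfr: int) -> bool:
    laf = lfr + RWS                     # largest acceptable frame
    return lfr < seq_num <= laf

lfr = 5                                 # last frame received in order
assert accept(6, lfr)                   # within the window
assert accept(9, lfr)                   # edge of the window (equals LAF)
assert not accept(5, lfr)               # duplicate: already received
assert not accept(10, lfr)              # beyond the window: discarded
```

Everything at or below LFR is a duplicate, and everything above LAF would overflow the receiver's buffers, so both are discarded.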
Advantages
1. Reliable transmission: The algorithm can be used to reliably deliver messages across an
unreliable network
2. Frame order: The sliding window algorithm can preserve the order in which frames
are transmitted, since each frame carries a sequence number.
3. Flow control: The receiver not only acknowledges frames it has received, but also informs the
sender of how many frames it has room to receive
4. The link capacity can be utilized effectively since multiple frames can be transmitted at a time
Selective Repeat ARQ: upon encountering a faulty frame, the receiver requests the retransmission of
that specific frame. Since additional frames may have followed the faulty frame, the receiver needs to be
able to temporarily store these frames until it has received a corrected version of the faulty frame, so that
frame order is maintained.
Go Back N ARQ
A simpler method, Go-Back-N, has the sender retransmit the faulty frame as well as all succeeding
frames (i.e., all frames transmitted after the faulty frame).
Advantages
The advantage of Selective Reject over Go-Back-N is that it leads to better throughput, because
only the erroneous frames are retransmitted.
Go-Back-N has the advantage of being simpler to implement and requiring less memory.
Unit II
Medium access – CSMA – Ethernet – Token ring – FDDI - Wireless LAN – Bridges and Switches
Ethernet (802.3)
Repeater: Multiple Ethernet segments can be joined together by repeaters. A repeater is a device that
forwards digital signals, much like an amplifier forwards analog signals. Any signal placed on the Ethernet
by a host is broadcast over the entire network
Ethernet standards
10Base2: can be constructed from a thinner cable known as thin-net; segments are limited to 200 m
10Base5: can be constructed from a thick cable known as thick-net; segments are limited to 500 m
10BaseT: can be constructed from twisted pair cable ("T" stands for twisted pair); segments are limited
to under 100 m in length
The "10" in 10Base2 means that the network operates at 10 Mbps, "Base" refers to the fact that the cable
is used in a baseband system, and the "2" means that a given segment can be no longer than 200 m.
Carrier Sense: This protocol is applicable to a bus topology. Before a station can transmit, it listens to the
channel to see if any other station is already transmitting. If the station finds the channel idle, it attempts
to transmit; otherwise, it waits for the channel to become idle. If two or more stations find the channel
idle and simultaneously attempt to transmit, their transmissions collide. This is called a collision. When a
collision occurs, each station should suspend transmission and re-attempt after a random period of time.
Use of a random wait period reduces the chance of the collision recurring. The following flow chart depicts
this technique.
If line is idle…
–send immediately
–upper bound message size of 1500 bytes
–minimum frame is 64 bytes (header + 46 bytes of data)
If line is busy…
–wait until idle and transmit immediately
If collision…
–send jam signal, and then stop transmitting frame
–delay for exponential Back off time and try again
Collision Detection
Suppose host A begins transmitting a frame at time t. It takes one link latency d for the frame to reach
host B; thus, the first bit of the frame arrives at B at time t + d, as shown in figure b.
Suppose an instant before host A's frame arrives, B begins to transmit its own frame, which collides
with A's frame as shown in figure c.
This collision is detected by host B. Host B then sends a jamming signal, and the resulting transmission
is known as a 'runt frame': the combination of a 32-bit jamming sequence and the 64-bit preamble. B's
runt frame arrives at A at time t + 2d; that is, A sees the collision at time t + 2d.
Once the adaptor has detected the collision, it stops transmitting, waits a certain amount of time, and
tries again. Each time a transmission attempt fails, the adaptor doubles the amount of time it waits
before trying again. This strategy of doubling the delay interval between each transmission attempt is
known as 'exponential backoff'.
The adapter delays are,
–1st time: 0 or 51.2us
–2nd time: 0, 51.2, 102.4, or 153.6us
–3rd time: k x 51.2us for randomly selected k = 0..7
–nth time: k x 51.2us, for randomly selected k = 0..2^n - 1
–give up after several tries (usually 16)
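The backoff computation above can be sketched as follows; `backoff_slots` is a made-up helper returning the random slot count k (the delay is k x 51.2us), and the cap of the exponent at 10 is an assumption matching common Ethernet practice.

```python
import random

# Sketch of Ethernet exponential backoff: after the nth collision, wait
# k slots of 51.2 us for a random k in 0 .. 2^n - 1, giving up after 16 tries.
SLOT_US = 51.2

def backoff_slots(attempt: int) -> int:
    if attempt > 16:
        raise RuntimeError("giving up after 16 attempts")
    # exponent capped at 10 here (assumed), so k never exceeds 1023
    return random.randint(0, 2 ** min(attempt, 10) - 1)

assert backoff_slots(1) in (0, 1)          # 1st time: 0 or 51.2 us
assert 0 <= backoff_slots(2) <= 3          # 2nd time: 0 .. 3 slots
delay_us = backoff_slots(3) * SLOT_US      # actual delay in microseconds
```

Doubling the range of k on every collision quickly spreads the stations' retry times apart, making a repeat collision increasingly unlikely.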
Frame format
Advantages
Token Ring (802.5)
Physical Properties
• Electromechanical relays are used for protection against failures: a single node failure should not
cause the entire ring to fail.
– One approach: change one bit in token which transforms it into a “start-of-frame
sequence” and appends frame for transmission.
– Second approach: station claims token by removing it from the ring.
• Frame circles the ring and is removed by the transmitting station.
• Each station interrogates passing frame, if destined for station, it copies the frame into local
buffer.
• After station has completed transmission of the frame, the sending station should release the
token on the ring.
How long can a node hold onto the token? This is dictated by the token holding time or THT.
• On a lightly loaded network, a node may be allowed to hold the token as long as it wants, giving
very high utilization.
• In 802.5, THT is specified to be 10 milliseconds.
Timers
Token Release
(Figure: token and frame circulating on the ring, parts (a) and (b))
802.5 frame format (field sizes in bits):
Start delimiter (8), Access control (8), Frame control (8), Dest addr (48), Src addr (48),
Body (variable), Checksum (32), End delimiter (8), Frame status (8)
FDDI
In case of failure of a node or a fiber link, the ring is restored by wrapping the primary ring to the
secondary ring as shown in Figure b. If a station on the dual ring fails or is powered down, or if the cable
is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single ring. When the
ring is wrapped, the dual-ring topology becomes a single-ring topology. Data continues to be transmitted
on the FDDI ring without performance impact during the wrap condition. Network operation continues for
the remaining stations on the ring. When two or more failures occur, the FDDI ring segments into two or
more independent rings that are incapable of communicating with each other.
• Because it is expensive for nodes to connect to two cables, FDDI also allows nodes to attach
using a single cable
– such nodes are called single attachment stations (SAS), while nodes attached to both rings
are dual attachment stations (DAS)
as shown in below figure.
(Figure: a station with its upstream and downstream neighbors, attached through a concentrator (DAS))
Token Maintenance
• Every node monitors ring for valid token. If operations are correct, a node must observe a
token or a data frame every so often.
Claim frames
• The greatest idle time equals the ring latency plus the frame transmission time;
if nothing is seen during this time, a node suspects something is wrong on the ring and
sends a "claim frame".
• Then, node bid (propose) for the TTRT using the claim frame.
The bidding process
• A node can send a claim frame without having the token when it suspects a failure
• If the claim frame comes back, the node knows that its TTRT bid was the lowest, and it is
now responsible for inserting the token on the ring
• When a node receives a claim frame, it checks to see if the TTRT bid is lower than its own
• If yes, it resets its local definition of the TTRT and simply forwards the claim frame
• Else, it removes the claim frame, enters the bidding process, and puts its own claim
frame on the ring
• When there are ties, the highest address wins.
FDDI Analysis
Wireless LANs (802.11)
802.11 was designed to run over three different physical media: two based on spread spectrum radio and
one based on diffused infrared.
The idea behind spread spectrum is to spread the signal over a wider frequency band than normal, so as
to minimize the impact of interference from other devices
Frequency hopping is a spread spectrum technique that involves transmitting the signal over a
random sequence of frequencies; that is, first transmitting at one frequency, then a second, then
a third, and so on.
The receiver uses the same algorithm as the sender, initialized with the same seed, and hence
is able to hop in synchrony with the transmitter to correctly receive the frame.
The DSSS encoder spreads the data across a broad range of frequencies using a mathematical
key.
The receiver uses the same key to decode the data.
It sends redundant copies of the encoded data to ensure reception.
Infrared (IR)
The Infrared utilizes infrared light to transmit binary data using a specific modulation technique. The
infrared uses a 16-pulse position modulation (PPM).
At first glance, one might expect a wireless protocol to follow exactly the same algorithm as the
Ethernet: wait until the link becomes idle before transmitting and back off should a collision occur.
Consider the situation depicted in figure, where each of four nodes is able to send and receive signals
that reach just the nodes to its immediate left and right. For example, B can exchange frames with
A and C but it cannot reach D, while C can reach B and D but not A.
• a carrier sensing scheme is used.
• a node wishing to transmit data has to first listen to the channel for a predetermined amount of
time to determine whether or not another node is transmitting on the channel within the wireless
range. If the channel is sensed "idle," then the node is permitted to begin the transmission
process. If the channel is sensed as "busy," the node defers its transmission for a random
period of time. This is the essence of both CSMA/CA and CSMA/CD. In CSMA/CA, once the
channel is clear, a station sends a signal telling all other stations not to transmit, and then sends
its packet.
Assume that node A has data to transfer to node B. Node A initiates the process by sending a Request to
Send frame (RTS) to node B. The destination node (node B) replies with a Clear to Send frame (CTS).
After receiving CTS, node A sends data. After successful reception, node B replies with an
acknowledgement frame (ACK). If node A has to send more than one data fragment, it has to wait a
random time after each successful data transfer and compete with adjacent nodes for the medium using
the RTS/CTS mechanism.
To sum up, a successful data transfer (A to B) consists of the following sequence of frames: RTS (A to B),
CTS (B to A), DATA (A to B), ACK (B to A).
Suppose both A and C want to communicate with B, and so they each send it a frame. A and C are
unaware of each other since their signals do not carry that far. These two frames collide with each other
at B, but A and C are not aware of this collision. A and C are said to be hidden nodes with respect to each
other.
A related problem, called the exposed node problem, occurs under the following circumstances.
• B talks to A
• C wants to talk to D
• C senses the channel and finds it busy
• So, C stays quiet
B is sending to A in figure. Node C is aware of this communication because it hears B’s transmission. It
would be a mistake for C to conclude that it cannot transmit to anyone just because it can hear B’s
transmission.
Reliability
When node B receives a data packet from node A, node B sends an Acknowledgement (ACK) frame back to
node A; if the ACK is not received, node A retransmits the frame.
Frame Format
The frame contains the source and destination node addresses, each of which is 48 bits long; up to 2312
bytes of data; and a 32-bit CRC. The Control field contains three subfields of interest (not shown),
including a 6-bit Type field that indicates whether the frame carries data or is an RTS or CTS frame. Addr1
identifies the target node, Addr2 identifies the source node, and Addr3 identifies the intermediate destination.
Switch
Advantages
–it covers large geographic area (tolerate latency)
–it supports large numbers of hosts (scalable bandwidth)
Types
I. Datagram switching
II. Virtual Circuit switching
III. Source Routing switching
I. Datagram Switching
No connection setup phase
Each packet forwarded independently
Sometimes called connectionless model
Packets may follow different paths to reach their destination
Receiving station may need to reorder
Switches decide the route based on the destination address in the packet
Analogy: postal system
Each switch maintains a forwarding table
III. Source Routing Switching
A third approach to switching that uses neither virtual circuits nor conventional datagrams is known as
source routing. The name derives from the fact that all the information about network topology that is
required to switch a packet across the network is provided by the source host. Assign a number to each
output of each switch and to place that number in the header of the packet. The switching function is then
very simple: For each packet that arrives on an input, the switch would read the port number in the
header and transmit the packet on that output. There will be more than one switch in the path between
the sending and the receiving host. In such a case the header for the packet needs to contain enough
information to allow every switch in the path to determine which output the packet needs to be placed on.
In this example, the packet needs to traverse three switches to get from host A to host B. At switch 1, it
needs to exit on port 1, at the next switch it needs to exit at port 0, and at the third switch it needs to exit
at port 3. Thus, the original header when the packet leaves host A contains the list of ports (3, 0, 1),
where we assume that each switch reads the rightmost element of the list. To make sure that the next
switch gets the appropriate information, each switch rotates the list after it has read its own entry. Thus,
the packet header as it leaves switch 1 en route to switch 2 is now (1, 3, 0); switch 2 performs another
rotation and sends out a packet with (0, 1, 3) in the header.
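The rotating-header behaviour described above can be sketched in a few lines of Python; the port list (3, 0, 1) is taken from the example in the text.

```python
# Sketch of the rotating source-route header from the example above.
# The header leaves host A as (3, 0, 1); each switch reads the rightmost
# entry, forwards the packet on that port, and rotates the list.

def switch_forward(header):
    """Read the outgoing port and return (port, rotated header)."""
    port = header[-1]                   # rightmost element = this switch's port
    rotated = (port,) + header[:-1]     # rotate so the next switch's entry
    return port, rotated                # is now rightmost

header = (3, 0, 1)                      # as emitted by host A
port1, header = switch_forward(header)  # switch 1 exits on port 1
port2, header = switch_forward(header)  # switch 2 exits on port 0
port3, header = switch_forward(header)  # switch 3 exits on port 3
print(port1, port2, port3)              # 1 0 3
```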
Bridges and Extended LANs
A class of switches that is used to forward packets between shared-media LANs such as Ethernets. Such
switches are sometimes known by the name of LAN switches; historically they have also been referred to
as bridges. Bridges operate in both the physical and the data link layers.
LANs have physical limitations (e.g., 2500 m). A bridge is used to connect two or more LANs, as shown below.
It uses the 'store and forward' technique.
Extended LANs
a collection of LANs connected by one or more bridges is usually said to form an extended LAN.
A bridge maintains a forwarding table to forward the packets that it receives. The forwarding table contains
two fields: one is the host address field, and the other stores the port number of the bridge on which that
host is connected. For example,
Each packet carries a global address, and the bridge decides which output to send a frame on by looking
up that address in a table. The bridge inspects the source address of every frame it receives and records,
in the table, the port on which that address was seen. When a frame arrives, the bridge examines its
destination address and looks it up in the forwarding table. If the destination address is in the table, the
bridge forwards the frame on the output port recorded for that host. If the destination address is not in
the table, the bridge floods the frame out all of its other output ports, and it uses the frame's source
address to update the table. In this way the bridge learns the table entries it needs to decide whether to
forward or discard a frame and when to update the table.
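The learning-bridge behaviour just described can be sketched as a small Python class; the hosts, port numbers, and table layout here are illustrative.

```python
# Minimal sketch of the learning-bridge algorithm described above.
# The forwarding table maps a host (source) address to the port it was
# last seen on; unknown destinations are flooded to every other port.

class LearningBridge:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.table = {}                      # host address -> port number

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of output ports for this frame."""
        self.table[frame_src] = in_port      # learn from the source address
        if frame_dst in self.table:
            out = self.table[frame_dst]
            # discard rather than forward back out the arrival port
            return [] if out == in_port else [out]
        return [p for p in self.ports if p != in_port]   # flood

bridge = LearningBridge(num_ports=3)
print(bridge.receive("A", "B", in_port=0))   # B unknown: flood -> [1, 2]
print(bridge.receive("B", "A", in_port=1))   # A was learned on port 0 -> [0]
```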
Unit III
Circuit switching vs. packet switching / Packet switched networks – IP – ARP – RARP – DHCP – ICMP –
Queuing discipline – Routing algorithms – RIP – OSPF – Subnetting – CIDR – Interdomain routing – BGP
– Ipv6 – Multicasting – Congestion avoidance in network layer
• Datagram
• Virtual circuit
Datagram Packet Switching
Compare and contrast datagram and virtual circuit approaches for packet switching
• Virtual circuits
— Network can provide sequencing and error control
— Packets are forwarded more quickly
• No routing decisions to make
— Less reliable
• Loss of a node loses all circuits through that node
• Datagram
— No call setup phase
• Better if few packets
— More flexible
• Routing can be used to avoid congested parts of the network
Internetworking
Each of these is a single-technology network. The nodes that interconnect the networks are called routers.
They are also sometimes called gateways. The Internet Protocol is the key tool used today to build
scalable, heterogeneous internetworks. The IP datagram is fundamental to the Internet Protocol. A
datagram is a type of packet that happens to be sent in a connectionless manner over a network. Every
datagram carries enough information to let the network forward the packet to its correct destination;
there is no need for any advance setup mechanism to tell the network what to do when the packet
arrives.
Service Model
To build an internetwork, it is better to define its service model, that is, the host-to-host services you
want to provide. The IP service model can be thought of as having two parts:
• an addressing scheme, which provides a way to identify all hosts in the internetwork,
• a datagram (connectionless) model of data delivery.
This service model is sometimes called best effort because, although IP makes every effort to deliver
datagrams, it makes no guarantees.
The IP datagram is fundamental to the Internet Protocol. It is a connectionless model of data delivery.
Every datagram carries enough information so that network forwards the packet to its correct destination;
there is no need for any advance setup mechanism to tell the network what to do when the packet
arrives. If something goes wrong and the packet gets lost, corrupted, or in any way fails to reach its
intended destination, the network does nothing; it made its best effort. This is why IP is sometimes called
an unreliable service.
In internetwork, each hardware technology specifies the maximum amount of data that a frame can carry.
This is called the Maximum Transmission Unit (MTU). IP uses a technique called fragmentation to solve the
problem of heterogeneous MTUs. When a datagram is larger than the MTU of the network over which it
must be sent, it is divided into smaller fragments which are each sent separately. This process is
illustrated in the figure.
Each fragment becomes its own datagram and is routed independently of any other datagrams. At the
final destination, the process of re-constructing the original datagram is called reassembly
Datagram Forwarding in IP
Global addressing
The IP service model, that one of the things that it provides is an addressing scheme. If you want to be
able to send data to any host on any network, there needs to be a way of identifying all the hosts. Thus,
we need a global addressing scheme—in which no two hosts have the same address.
Addressing Scheme
An address is needed to uniquely and universally identify every device to allow global communication
Internet address or IP address is used in the network layer of the Internet model
Consists of a 32-bit binary address
IP Address Representation
Binary notation – IP address is displayed as 32 bits
Dotted-decimal notation – more compact and easier to read form of an IP address
o Each number is between 0 and 255
Types
Classful Addressing
In classful addressing, the address space is divided into five classes: A, B, C, D, and E. We can find the
class of an address when given the address in binary notation or dotted-decimal notation. If the address is
given in binary notation, the first few bits can immediately tell us the class of the address. If the address
is given in decimal-dotted notation, the first byte defines the class. Both methods are shown in figures
Example
Find the class of each address:
a.227.12.14.87
b.193.14.56.22
c.14.23.120.8
d.252.5.15.111
e.134.11.78.56
Solution
a.The first byte is 227 (between 224 and 239); the class is D.
b. The first byte is 193 (between 192 and 223); the class is C.
c.The first byte is 14 (between 0 and 127); the class is A.
d.The first byte is 252 (between 240 and 255); the class is E.
e.The first byte is 134 (between 128 and 191); the class is B.
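The first-byte rules used in the solution above translate directly into code; this small sketch reproduces the five answers.

```python
# The class of a dotted-decimal IPv4 address is determined by the value
# of its first byte, using the ranges given in the solution above.

def address_class(addr):
    first = int(addr.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

for a in ["227.12.14.87", "193.14.56.22", "14.23.120.8",
          "252.5.15.111", "134.11.78.56"]:
    print(a, address_class(a))     # D, C, A, E, B -- matching the solution
```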
One problem with classful addressing is that each class is divided into a fixed number
of blocks with each block having a fixed size as shown in Table
Network address
The network address is an address that defines the network itself. It can not be assigned to a host. A
network address has several properties:
All hostid bytes are 0
The network address is the first address in the block of addresses
Given the network address, we can find the class of address.
Example
Given the address 23.56.7.91, find the network address
Solution:
The class is A. Only the first byte defines the netid; the remaining 3 bytes define the hostid in class A.
We can find the network address by replacing the hostid bytes with 0s. Therefore, the network address is
23.0.0.0
Example
Given address is 132.6.17.85, find the network address
Solution
The class is B. The first 2 bytes define the netid in class B. We replace the last 2 bytes with 0s, so
the network address is 132.6.0.0
Example
Given the network address 17.0.0.0, find the class.
Solution
The class is A because the netid is only 1 byte
Example
Given the network address 17.0.0.0, find the class, the block, and the range of the addresses.
Solution
The class is A because the first byte is between 0 and 127. The block has a netid of 17. The addresses
range from 17.0.0.0 to 17.255.255.255.
Example
Given the network address 132.21.0.0, find the class, the block, and the range of the addresses.
Solution
The class is B because the first byte is between 128 and 191. The block has a netid of 132.21. The
addresses range from 132.21.0.0 to 132.21.255.255.
Subnetting
IP addressing is hierarchical
First it reaches network using its netid
Then it reaches the host itself using the second portion (hostid)
Since an organization may not have enough address, subnetting may be used to divide the
network into smaller networks or subnetworks
Subnetting creates an intermediate level of hierarchy
IP datagram routing then involves three steps: delivery to the network, delivery to the
subnetwork, and delivery to the host
A single IP class A, B, or C network is further divided into a group of hosts to form an IP sub-network.
Sub-networks are created for manageability, performance, and security of hosts and networks and to
reduce network congestion. The host ID portion of an IP address is further divided into a sub-network ID
part and a host ID part. The sub-network ID is used to uniquely identify the different sub-networks within
a network.
Mask
A mask is a 32-bit binary number that gives the first address in the block (the network address) when
bitwise ANDed with an address in the block. The masks for classes A, B, and C are shown in table.
The last column of table shows the mask in the form /n where n can be 8, 16, or 24 in classful addressing.
This notation is also called slash notation or Classless Interdomain Routing (CIDR) notation. The notation
is used in classless addressing.
Masking concept
AND Operation
The network address is the beginning address of each block. It can be found by applying the default mask
to any of the addresses in the block (including itself). It retains the netid of the block and sets the hostid
to zero.
Subnet Mask
o A process that extracts the address of the physical network (network/subnetwork portion) from
an IP address
Determining the network ID, sub-network ID, and host ID, given the IP address and the subnet mask
The network class (A, B, or C) of a given IP address can be easily determined by looking at the value of
the first few bits of the first byte. From the network class, the number of bytes used to represent the
network can be determined, and hence the network ID. By performing an AND operation on the IP address
and the subnet mask, the sub-network ID can be determined: in the result of the AND operation, after
removing the bytes used for the network ID, the remaining bits for which the corresponding bit in the
subnet mask is 1 represent the sub-network ID.
o Given an IP address, we can find the subnet address the same way we found the network
address. We apply the mask to the address.
• we use binary notation for both the address and the mask and then apply the AND
operation to find the subnet address.
Example
What is the subnetwork address if the destination address is 200.45.34.56 and the subnet mask is
255.255.240.0?
Solution
Applying the mask byte by byte: 200.45.34.56 AND 255.255.240.0 = 200.45.32.0 (since 34 AND 240 = 32).
The subnetwork address is 200.45.32.0.
Classless Addressing
Addressing mechanism in which the IP address space is not divided into classes
IP address block ranges are variable, as long as they are a power of 2 (2, 4, 8...)
Masking is used, as well as subnetting
CIDR
Classless Inter Domain Routing (CIDR) is a method for assigning IP addresses without using the standard
IP address classes like Class A, Class B or Class C. In CIDR, depending on the number of hosts present in
a network, IP addresses are assigned.
In CIDR notation, an IP address is represented as A.B.C.D /n, where "/n" is called the IP prefix or network
prefix. The IP prefix identifies the number of significant bits used to identify a network. For example,
192.9.205.22 /18 means, the first 18 bits are used to represent the network and the remaining 14 bits are
used to identify hosts.
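Python's standard-library ipaddress module implements the CIDR "/n" notation directly; this sketch checks the 192.9.205.22/18 example from the text.

```python
# The CIDR prefix "/18" means the first 18 bits identify the network and
# the remaining 14 bits identify hosts, so the block holds 2**14 addresses.
import ipaddress

net = ipaddress.ip_network("192.9.205.22/18", strict=False)
print(net)               # the network that the /18 prefix identifies
print(net.prefixlen)     # 18 bits of network prefix
print(net.num_addresses) # 2**14 = 16384 addresses in the block
```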
Advantages of CIDR
The difference between classful IP addressing and classless IP addressing is in selecting the number of bits
used for the network ID portion of an IP address. In classful IP addressing, the network ID portion can
take only the predefined number of bits 8, 16, or 24. In classless addressing, any number of bits can be
assigned to the network ID.
ARP
The Address Resolution Protocol (ARP) is used to resolve IP addresses to MAC addresses. This is important
because on a network, devices find each other using the IP address, but communication between devices
requires the MAC address.
When a computer wants to send data to another computer on the network, it must know the MAC address
of the destination system. To discover this information, ARP sends out a discovery packet to obtain the
MAC address. When the destination computer is found, it sends its MAC address to the sending computer.
The ARP-resolved MAC addresses are stored temporarily on a computer system in the ARP cache. Inside
this ARP cache is a list of matching MAC and IP addresses. This ARP cache is checked before a discovery
packet is sent on to the network to determine if there is an existing entry. Entries in the ARP cache are
periodically flushed so that the cache doesn't fill up with unused entries.
MAC address
A Media Access Control (MAC) address is an identifier assigned to most network adapters or Network
Interface Cards by the manufacturer for the purpose of identification. The MAC address is used in the MAC
protocol sublayer.
ARP is a protocol for mapping a network-layer (IP) address to a physical machine address that is
recognized in the local network. For example, in IP version 4, addresses are 32 bits long and hardware
independent, but they depend on the network to which a device is connected; in an Ethernet local area
network, addresses of attached devices are 48 bits long. In other words, the IP address of a device changes
when the device is moved, so we need a mapping mechanism to resolve IP addresses to Ethernet
addresses.
RARP
RARP is used to resolve an Ethernet MAC address to an IP address. All the mappings between the hardware
MAC addresses and the IP addresses of the hosts are stored in a configuration file in a host in the
network. This host is called the RARP server. This host responds to all the RARP requests. Normally, the IP
address of a system is stored in a configuration file in the local disk. When the system is started, it
determines its IP address from this file. In the case of a diskless workstation, its IP address cannot be
stored in the system itself. In this case, RARP can be used to get the IP address from a RARP server. RARP
uses the same packet format as ARP.
• Address Resolution Protocol is utilized for mapping IP network address to the hardware address
that uses data link protocol.
• Reverse Address Resolution Protocol is a protocol using which a physical machine in a LAN could
request to find its IP address from ARP table or cache from a gateway server.
• IP address of destination to physical address conversion is done by ARP, by broadcasting in LAN.
• Physical address of source to IP address conversion is done by RARP.
• ARP associates 32 bit IP address with 48 bit physical address.
DHCP
As its name indicates, DHCP provides dynamic IP address assignment. The Internet is a vast source of
information that is continuously updated and accessed via computers and other devices. For a device to
connect to the Internet, it is necessary that among other configurations, it must have an Internet Protocol
(IP) address. The IP address is the computer's address on the Internet. The Bootstrap Protocol
(BOOTP) was the first Transmission Control Protocol/Internet Protocol (TCP/IP) network configuration tool,
used to avoid having to manually assign IP addresses by automating the process. Dynamic Host
Configuration Protocol (DHCP) is an improvement on BOOTP.
DHCP relies on the existence of a DHCP server that is responsible for providing configuration information
to hosts. There is at least one DHCP server for an administrative domain. The DHCP server maintains a
pool of available addresses that it hands out to hosts on demand. When a network device is newly added
to the network, it contacts the DHCP server to obtain an IP address. To contact a DHCP server, a newly attached
host sends a DHCPDISCOVER message to a special IP address (255.255.255.255) that is an IP broadcast
address. This means it will be received by all hosts and routers on that network. There is at least one
DHCP relay agent on each network, and it is configured with the IP address of the DHCP server. When a
relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP server and awaits the
response, which it will then send back to the newly added host. The process of relaying a message from a
host to a remote DHCP server is shown in figure
ICMP
The Internet Control Message Protocol (ICMP) is a helper protocol that supports IP with facility for
– Error reporting
– Simple queries
ICMP messages are encapsulated as IP datagrams.
Type/Code: Description
8/0 Echo Request
0/0 Echo Reply
13/0 Timestamp Request
14/0 Timestamp Reply
10/0 Router Solicitation
9/0 Router Advertisement
Routing
Routing tables are used to store information identifying the location of nodes on the network
Several techniques are used to make the size of the routing table manageable and to handle
issues such as security
Dynamic Routing
Dynamic routing relies on the routing protocol. Routing Protocols can be Distant Vector or Link-State.
Each node constructs a vector containing the distances (costs) to all other nodes and distributes that
vector to its immediate neighbors.
1. The starting assumption for distance-vector routing is that each node knows the cost of the link
to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost
We can represent each node's knowledge about the distances to all other nodes as a table like the one
given in Table 1.
Note that each node only knows the information in one row of the table.
1. Every node sends a message to its directly connected neighbors containing its personal list of
distances. (For example, A sends its information to its neighbors B, C, E, and F.)
2. If any of the recipients of the information from A find that A is advertising a path shorter than the
one they currently know about, they update their list to give the new path length and note that
they should send packets for that destination through A. ( node B learns from A that node E can
be reached at a cost of 1; B also knows it can reach A at a cost of 1, so it adds these to get the
cost of reaching E by means of A. B records that it can reach E at a cost of 2 by going through
A.)
3. After every node has exchanged a few updates with its directly connected neighbors, all nodes
will know the least-cost path to all the other nodes.
4. In addition to updating their list of distances when they receive updates, the nodes need to keep
track of which node told them about the path that they used to calculate the cost, so that they
can create their forwarding table. ( for example, B knows that it was A who said " I can reach E
in one hop" and so B puts an entry in its table that says " To reach E, use the link to A.)
In practice, each node's forwarding table consists of a set of triples of the form (Destination, Cost, NextHop).
For example, Table 3 shows the complete routing table maintained at node B for the network in figure1.
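One round of the distance-vector update in steps 1-4 can be sketched in Python; the tables below reproduce the B-learns-about-E example from step 2.

```python
# Sketch of one distance-vector update: node B merges the distance list
# advertised by its neighbor A, keeping any shorter paths and recording
# which neighbor told it about each improved destination (the NextHop).

def merge_update(my_table, neighbor, link_cost, neighbor_table):
    """my_table / neighbor_table map destination -> cost.
    Returns (updated table, NextHop chosen for each improved destination)."""
    next_hop = {}
    for dest, cost in neighbor_table.items():
        via = link_cost + cost          # cost of reaching dest via the neighbor
        if via < my_table.get(dest, float("inf")):
            my_table[dest] = via
            next_hop[dest] = neighbor   # remember who told us about this path
    return my_table, next_hop

# B knows: itself at cost 0, A at cost 1.  A advertises: E reachable at cost 1.
b_table = {"B": 0, "A": 1}
b_table, hops = merge_update(b_table, "A", link_cost=1,
                             neighbor_table={"A": 0, "E": 1})
print(b_table["E"], hops["E"])   # 2 A  -- B reaches E at cost 2 via A
```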
Link State
Each node is assumed to be capable of finding out the state of the link to its neighbors (up or
down) and the cost of each link.
Strategy
1. Advertise about neighborhood: instead of sending its entire routing table, a router sends
information about its neighborhood only
2. Flooding: each router sends this information to every router on the internetwork, not just to its
neighbors. It does so by a process called flooding
3. Each router sends out information about the neighbor when there is a change in the table
4. Find and use the shortest path to reach any point in the network.
Two mechanisms needed
I. Reliable flooding
II. Route calculation using Dijkstra’s algorithm
LSP
Each router creates a packet called link state packet which contains the following information:
id of the node that created the LSP
cost of link to each directly connected neighbor
sequence number (SEQNO)
time-to-live (TTL) for this packet
Every router receives every LSP and then prepares a database, which represents a complete network
topology. This Database is known as Link State Database
I. Reliable Flooding
As the term "flooding" suggests, the basic idea is for a node to send its link-state information out on all of
its directly connected links, with each node that receives this information forwarding it out on all of its
links. This process continues until the information has reached all the nodes in the network
Strategy
Flooding works in the following way. Consider a node X that receives a copy of an LSP that originated at
some other node Y. Note that Y may be any other router in the same routing domain as X. X checks to see
if it has already stored a copy of an LSP from Y. If not, it stores the LSP. If it already has a copy, it
compares the sequence numbers; if the new LSP has a larger sequence number, it is assumed to be the
more recent, and that LSP is stored, replacing the old one. A smaller (or equal) sequence number would
imply an LSP older (or not newer) than the one stored, so it would be discarded and no further action
would be needed. If the received LSP was the newer one, X then sends a copy of that LSP to all of its
neighbors except the neighbor from which the LSP was just received. Since X passes the LSP on to all its
neighbors, who then turn around and do the same thing, the most recent copy of the LSP eventually
reaches all nodes
Each node maintains two lists, known as Tentative and Confirmed. Each of these lists contains a set of
entries of the form (Destination, Cost, NextHop). The algorithm works as follows:
1. Initialize the Confirmed list with an entry for myself; this entry has a cost of 0.
2. For the node just added to the Confirmed list (call it Next), select its LSP.
3. For each neighbor of Next, calculate the cost to reach that neighbor as the sum of the cost from
myself to Next and from Next to that neighbor. If the neighbor is currently on neither list, add
(Neighbor, Cost, NextHop) to the Tentative list; if it is already on the Tentative list and this cost
is lower than the currently listed cost, replace the entry.
4. If the Tentative list is empty, stop. Otherwise, move the lowest-cost entry from the Tentative list
to the Confirmed list and return to step 2.
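The route calculation can be sketched compactly with a priority queue standing in for the Tentative list; the three-node link-state database here is made up for illustration.

```python
import heapq

# Sketch of the route calculation (Dijkstra's algorithm): from the
# link-state database, a node computes the least-cost path, and hence
# the NextHop, to every other node.

def shortest_paths(lsdb, source):
    """lsdb: node -> {neighbor: link cost}. Returns node -> (cost, next_hop)."""
    confirmed = {source: (0, None)}
    tentative = [(0, source, None)]            # (cost, node, first hop)
    while tentative:
        cost, node, hop = heapq.heappop(tentative)
        if node in confirmed and confirmed[node][0] < cost:
            continue                           # stale, higher-cost entry
        confirmed[node] = (cost, hop)
        for nbr, link in lsdb[node].items():
            new_cost = cost + link
            if nbr not in confirmed or new_cost < confirmed[nbr][0]:
                # the first hop is the neighbor itself when leaving the source
                heapq.heappush(tentative,
                               (new_cost, nbr, hop if hop else nbr))
    return confirmed

lsdb = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 2},
        "C": {"A": 5, "B": 2}}
print(shortest_paths(lsdb, "A")["C"])   # (3, 'B'): reach C via B at cost 3
```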
Properties of Link State Routing
Stabilizes quickly
Keeps routing control traffic low
Responds rapidly to topology changes
The amount of information stored in each node is large.
In DV
–Talk only to neighbor
–Tell the neighbor everything that you learnt
In LS
– Talk to every node
– Tell them about the neighbor nodes link state
OSPF
This protocol is open, which means that its specification is in the public domain. It means that
anyone can implement it without paying license fees.
OSPF is based on the Dijkstra’s algorithm
The basic building block of link-state messages in OSPF is known as the link-state advertisement
(LSA).
OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs)
to all other routers within the same hierarchical area.
Figure shows the packet format for a link-state advertisement. LSAs advertise the cost of links between
routers. The LS Age is the equivalent of a time to live. The Type field tells us that this is a type 1 LSA, in
which the Link-state ID and the Advertising router fields are identical.
Each carries a 32-bit identifier for the router that created this LSA. The LS sequence number is used to
detect old or duplicate LSAs. The LS checksum is used to verify that data has not been corrupted. Length
is the length in bytes of the complete LSA. The Link ID, Metric, and Link Data fields are used to identify
the link; TOS is type of service information
OSPF specifies that all the exchanges between routers must be authenticated.
OSPF provides Load Balancing. When several equal-cost routes to a destination exist, traffic is
distributed equally among them
OSPF allows sets of networks to be grouped together. Such a grouping is called an Area. Each
Area is self-contained
OSPF uses different message formats to distinguish the information acquired from within the
network (internal sources) with that which is acquired from a router outside (external sources).
NAT
Network Address Translation (NAT) is the process whereby a network device assigns a public address to a
host inside a private network. To separate the addresses used inside the private network from the ones
used for the public network (Internet), the Internet authorities have reserved three sets of addresses as
private addresses, shown in the table
The private addressing scheme works well for computers that only have to access resources inside the
network. However, to access resources outside the network, like the Internet, these computers have to
have a public address in order for responses to their requests to return to them. This is where NAT comes
into play.
Address Translation
All the outgoing packets go through the NAT router, which replaces the source address in the packet with
the global NAT address. All incoming packets also pass through the NAT router, which replaces the
destination address in the packet with the appropriate private address
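The address translation above can be sketched as a small table-driven rewrite; all addresses and port numbers here are made up for illustration, and real NAPT also rewrites transport-layer ports and checksums.

```python
import itertools

# Sketch of NAT address translation: outgoing packets get their private
# source address replaced by the router's global address; the NAT table
# maps replies back to the right private host.

GLOBAL_ADDR = "203.0.113.5"          # the router's public address (made up)
nat_table = {}                       # global port -> (private host, port)
ports = itertools.count(5000)        # next unused global port number

def translate_out(src_host, src_port, dst):
    """Rewrite an outgoing packet's source to the global NAT address."""
    g_port = next(ports)
    nat_table[g_port] = (src_host, src_port)
    return (GLOBAL_ADDR, g_port, dst)

def translate_in(g_port):
    """Map an incoming reply back to the private host behind the NAT."""
    return nat_table[g_port]

pkt = translate_out("10.0.0.7", 1234, "198.51.100.9")
print(pkt[0])                        # 203.0.113.5 (the global address)
print(translate_in(pkt[1]))          # ('10.0.0.7', 1234)
```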
Intradomain and interdomain routing protocols are commonly known as interior gateway protocols and
exterior gateway protocols, respectively.
In a small and slowly changing network, the network administrator can establish or modify routes by hand,
i.e., manually. The administrator keeps a table of networks and updates the table whenever a network is
added to or deleted from the autonomous system. The disadvantage of the manual approach is obvious:
such systems are neither scalable nor adaptable to change. Automated methods must be used to improve
reliability and response to failure. To automate this task, interior routers (within an autonomous system)
usually communicate with one another, exchanging network routing information from which reachability
can be deduced. These routing methods are known as Interior Gateway Protocols (IGPs).
Two widely used interior gateway protocols are:
1. Routing Information Protocol (RIP)
2. Open Shortest Path First (OSPF)
Autonomous system
An autonomous system (AS) is a network or group of networks under a common administration and with
common routing policies. BGP is used to exchange routing information for the Internet and is the protocol
used between Internet service providers (ISPs), which are different ASes. The basic idea behind
autonomous systems is to provide an additional way to hierarchically aggregate routing information in a
large internet, thus improving scalability.
One feature of the autonomous system idea is that it enables some ASs to dramatically
reduce the amount of routing information they need to care about by using default routes. For example, if
a corporate network is connected to the rest of the Internet by a single router (this router is typically
called a border router since it sits at the boundary between the AS and the rest of the Internet), then it is
easy for a host or router inside the autonomous system to figure out where it should send packets that
are headed for a destination outside of this AS—they first go to the AS’s border router. This is the default
route. Similarly, a regional Internet service provider can keep track of how to reach the networks of all its
directly connected customers and can have a default route to some other provider (typically a backbone
provider) for everyone else.
We now divide the routing problem into two parts: routing within a single autonomous system and routing
between autonomous systems. Since another name for autonomous systems in the Internet is routing
domains, we refer to the two parts of the routing problem as interdomain routing and intradomain routing.
The interdomain routing problem is one of having different ASes share information with each other.
There have been two major interdomain routing protocols in the recent history of the Internet:
1. Exterior Gateway Protocol (EGP).
2. Border Gateway Protocol (BGP)
BGP
The Border Gateway Protocol (BGP) is an inter-autonomous system routing protocol. The replacement
for EGP is the Border Gateway Protocol. Today’s Internet consists of an interconnection of multiple
backbone networks (they are usually called service provider networks). The following figure illustrates the
BGP Model of the Internet
Classification of AS:
BGP Example
• Speaker for AS2 advertises reachability to P and Q
o network 128.96, 192.4.153, 192.4.32, and 192.4.3, can be reached directly from AS2
• Speaker for backbone advertises
o networks 128.96, 192.4.153, 192.4.32, and 192.4.3 can be reached along the path
(AS1, AS2).
• Speaker can cancel previously advertised paths
• Stub AS: The border router injects a default route into the intradomain routing protocol.
• Multihomed and Transit ASs: The border routers inject routes that they have learned from
outside the AS.
• Transit ASs: The information learned from BGP may be “too much” to inject into the intradomain
protocol: if a large number of prefixes is inserted, large link-state packets will be circulated and
path calculations will get very complex.
IP Version 6 (IPV6)
Features
–128-bit addresses (classless)
–multicast
–real-time service
–authentication and security
–autoconfiguration
–end-to-end fragmentation
–protocol extensions
Header
–40-byte “base” header
–extension headers (fixed order, mostly fixed length)
–fragmentation
–source routing
–authentication and security
–other options
IPv6 addresses do not have classes, but the address space is still subdivided in various ways based on the
leading bits. Rather than specifying different address classes, the leading bits specify different uses of the
IPv6 address. The current assignment of prefixes is listed in Table
“Link local use” addresses are useful for autoconfiguration, “site local use” addresses are intended to
allow valid addresses to be constructed within a site, and the multicast address space is for multicast,
thereby serving the same role as class D addresses in IPv4
Address Notation
An example would be
47CD:1234:4422:AC02:0022:1234:A456:0124
• When there are many consecutive 0s, they can be omitted:
47CD:0000:0000:0000:0000:0000:A456:0124 becomes
47CD::A456:0124 (the double colon stands for a run of zero groups)
• Two types of IPv6 address can contain embedded IPv4 addresses. For example, an IPv4 host address
128.96.33.81 becomes
::FFFF:128.96.33.81 (the last 32 bits are an IPv4 address)
This notation facilitates the extraction of an IPv4 address from an IPv6 address
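Both notation rules can be checked with Python's standard ipaddress module:

```python
import ipaddress

# Zero-compression: a run of all-zero groups collapses to "::"
addr = ipaddress.IPv6Address("47CD:0000:0000:0000:0000:0000:A456:0124")
print(addr.compressed)        # 47cd::a456:124

# IPv4-mapped IPv6 address: the last 32 bits hold an IPv4 address
mapped = ipaddress.IPv6Address("::FFFF:128.96.33.81")
print(mapped.ipv4_mapped)     # 128.96.33.81
```

Note that Python also strips leading zeros within each group, so 0124 is printed as 124.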
Packet Format
Congestion
In a packet switching network, packets are introduced in the nodes (i.e. offered load), and the nodes in-
turn forward the packets (i.e. throughput) into the network. When the “offered load” crosses certain limit,
then there is a sharp fall in the throughput. This phenomenon is known as congestion. In other words,
when too much traffic is offered, congestion sets in and performance degrades sharply.
Congestion affects two vital parameters of the network performance, namely throughput and delay. The
throughput can be defined as the percentage utilization of the network capacity.
Congestion control refers to the mechanisms and techniques used to control congestion and keep the
traffic below the capacity of the network. Congestion control techniques can be broadly classified into
two categories:
Open loop: Protocols to prevent or avoid congestion, ensuring that the system never enters a
Congested State.
Close loop: Protocols that allow system to enter congested state, detect it, and remove it.
1. Admission Control is one such closed-loop technique, used in virtual-circuit subnets, where action is
taken once congestion is detected in the network. Different approaches can be followed:
The first approach is, once congestion has been signaled, not to set up any new connections. This
type of approach is often used in normal telephone networks: when the exchange is overloaded,
no new calls are established.
The second approach is to allow new virtual connections, but to route them carefully so that none
of the congested routers (or problem areas) is part of the route.
The third approach is to negotiate different parameters between the host and the network when
the connection is set up. During setup, the host specifies the volume and shape of the traffic, the
quality of service, the maximum delay, and other parameters related to the traffic it will offer to
the network. Once the host specifies its requirements, the needed resources are reserved along
the path before the actual packets flow.
2. Choke Packet Technique is a closed-loop control technique that can be applied in both virtual-circuit
and datagram subnets.
Each router monitors the utilisation of its outgoing lines
Whenever the utilization rises above some threshold, a warning flag is set for that link
When a new data packet arrives for routing over that link, the router extracts the packet's
source address and sends a "choke packet" back to the source. This choke packet contains
the destination address
The original data packet is tagged so that it will not generate any more choke packets, then
forwarded
When the source host gets the choke packet, it is required to reduce the traffic sent to the
particular destination by X%; it ignores other choke packets for the same destination for a fixed
time interval
The following figure depicts the functioning of choke packets.
Figure depicts the functioning of choke packets, (a) Heavy traffic between nodes P and Q, (b) Node Q
sends the Choke packet to P, (c) Choke packet reaches P, (d) P reduces the flow and send a reduced flow
out, (e) Reduced flow reaches node Q
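As an illustration only (the class, threshold, and field names below are invented for the sketch, not taken from the text), the router-side choke-packet rules can be written as:

```python
# Hypothetical sketch of the router-side choke-packet rules described above.
THRESHOLD = 0.8          # utilisation level that sets the warning flag

class Router:
    def __init__(self):
        self.utilisation = 0.0
        self.warning = False

    def update_utilisation(self, sample, weight=0.1):
        # running average of the outgoing line's utilisation
        self.utilisation = (1 - weight) * self.utilisation + weight * sample
        self.warning = self.utilisation > THRESHOLD

    def forward(self, packet):
        """Return a choke packet to send back to the source, or None."""
        if self.warning and not packet.get("choked"):
            packet["choked"] = True      # tag: generate no more choke packets
            return {"type": "choke", "dest": packet["dest"], "to": packet["src"]}
        return None

r = Router()
for _ in range(20):
    r.update_utilisation(1.0)            # sustained heavy traffic on the line
choke = r.forward({"src": "P", "dest": "Q"})
print(choke)   # {'type': 'choke', 'dest': 'Q', 'to': 'P'}
```

The source-side behaviour (reduce traffic by X% and ignore further choke packets for a fixed interval) is omitted for brevity.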
3. Load Shedding
Another closed-loop technique is load shedding; it is one of the simplest and most effective. In this
method, whenever a router finds that there is congestion in the network, it simply starts dropping
packets. One such technique is Random Early Detection.
Random Early Detection (RED): There are different methods by which a router can decide which
packets to drop. The simplest is to choose the packets to drop at random. More effective
methods exist, but they require some cooperation from the sender. For many applications,
some packets are more important than others, so the sender can mark packets with priority
classes to indicate how important they are. If such a priority policy is implemented,
intermediate nodes can drop packets from the lower priority classes and use the available
bandwidth for the more important packets.
4. Jitter Control
Jitter is the variation in the arrival times of packets belonging to the same flow. For example, in a
real-time video broadcast, delivering 99% of the packets with a delay in the range of 24.5 ms to 25.5 ms
might be acceptable. The chosen range must be feasible.
When a packet arrives at a router, the router checks how far the packet is behind or ahead of its
schedule. If the packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is
behind schedule, the router tries to get it out quickly.
Unit IV
UDP – TCP – Adaptive Flow Control – Adaptive Retransmission - Congestion control – Congestion
avoidance – QoS
TCP
Stands for Transmission Control Protocol
- TCP provides a connection oriented, reliable, byte stream service. The term connection-oriented
means the two applications using TCP must establish a TCP connection with each other before
they can exchange data. It is a full duplex protocol, meaning that each TCP connection supports a
pair of byte streams, one flowing in each direction. TCP includes a flow-control mechanism for
each of these byte streams that allows the receiver to limit how much data the sender can
transmit. TCP also implements a congestion-control mechanism.
TCP data is encapsulated in an IP datagram. The figure shows the format of the TCP header. Its normal
size is 20 bytes unless options are present. Each of the fields is discussed below:
TCP vs UDP
TCP:
- Transmission Control Protocol
- Slower than UDP
- Reliable
- Connection oriented
- Provides flow control and error control
- Offers error correction and guaranteed delivery
- Sends larger packets
- Ordered message delivery
- Heavyweight: when a segment arrives in the wrong order, resend requests have to be sent, and
  all the out-of-sequence parts have to be put back together
- If any segments are lost, the receiver sends ACKs to indicate the lost segments
UDP:
- User Datagram Protocol
- Faster than TCP
- Unreliable
- Connectionless
- No flow control
- No error correction
- Sends smaller packets
- Unordered message delivery
- Lightweight: no ordering of messages, no tracking of connections, etc.
- No acknowledgement
Three-way handshake for connection establishment
It involves the exchange of three messages between client and server, as illustrated by the timeline
given in the figure:
1. The client sends a SYN segment with the starting sequence number (x) it plans to use.
2. The server responds with a SYN+ACK segment that acknowledges the client's sequence number
(x+1) and announces its own starting sequence number (y).
3. Finally, the client responds with a third segment that acknowledges the server's sequence
number (y+1).
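The handshake itself is carried out by the operating system: calling connect() on the client and accept() on the server triggers the SYN, SYN+ACK, ACK exchange. A minimal loopback sketch in Python:

```python
import socket
import threading

# The kernel performs the SYN / SYN+ACK / ACK exchange when the client
# calls connect() and the server calls accept().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()          # handshake completes server-side here
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # triggers SYN -> SYN+ACK -> ACK
data = client.recv(5)
print(data)                            # b'hello'
client.close(); t.join(); server.close()
```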
Three way handshake for connection termination
Three-way handshaking is used for connection termination, as shown in the figure:
1. The client TCP, after receiving a close command from the client process, sends the first segment,
a FIN segment. The FIN segment consumes one sequence number(x)
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends
the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from the
client and at the same time to announce the closing of the connection in the other direction. The
FIN +ACK segment consumes one sequence number(y)
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number(y+1), which
is 1 plus the sequence number received in the FIN segment from the server.
Client program
The client program can be in one of the following states:
CLOSED, SYN-SENT, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, and TIME-WAIT
1. The client TCP starts at CLOSED-STATE
2. The client TCP can receive an active open request from the client application program. It sends
SYN segment to the server TCP and goes to the SYN-SENT state.
3. While in this state, the client TCP can receive a SYN+ACK segment from the other TCP and goes
to the ESTABLISHED state. This is the data transfer state. The client remains in this state as long
as it is sending and receiving data.
4. While in this state, the client can receive a close request from the client application program. It
sends FIN segment to the other TCP and goes to FIN-WAIT1 state.
5. While in this state, the client TCP waits to receive an ACK from the server TCP. When the ACK is
received, it goes to the FIN-WAIT-2 state. It does not send anything. Now the connection is
closed in one direction.
6. The client remains in this state, waiting for the server TCP to close the connection from the other
end. If the client TCP receives FIN segment from the other end, it sends an ACK and goes to
TIME-WAIT state.
7. While the client is in this state, it starts the time-waited timer and waits until the timer goes off.
After the timeout, the client goes to the CLOSED state.
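The transitions above can be collected into a simple table-driven sketch (the event labels are informal descriptions, not protocol fields):

```python
# A table-driven sketch of the TCP client states listed above.
# Each entry maps (state, event) -> next state.
TRANSITIONS = {
    ("CLOSED",      "active open / send SYN"):  "SYN-SENT",
    ("SYN-SENT",    "recv SYN+ACK / send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close / send FIN"):        "FIN-WAIT-1",
    ("FIN-WAIT-1",  "recv ACK"):                "FIN-WAIT-2",
    ("FIN-WAIT-2",  "recv FIN / send ACK"):     "TIME-WAIT",
    ("TIME-WAIT",   "timer expires"):           "CLOSED",
}

state = "CLOSED"
for event in ["active open / send SYN", "recv SYN+ACK / send ACK",
              "close / send FIN", "recv ACK", "recv FIN / send ACK",
              "timer expires"]:
    state = TRANSITIONS[(state, event)]
    print(event, "->", state)
# Walking the full event sequence returns the client to CLOSED.
```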
Server program
The server program can be in one of the following states:
CLOSED, LISTEN, SYN-RECD, ESTABLISHED, CLOSE-WAIT, LAST-ACK
1. The server TCP starts in the CLOSED-STATE.
2. While in this state, the server TCP can receive a passive open request from the server application
program. It goes to LISTEN state.
3. While in this state, the server TCP can receive a SYN segment from the client TCP. It sends a
SYN+ACK segment to the client TCP and goes to the SYN-RECD state.
4. While in this state, the server TCP can receive an ACK segment from the client TCP. It then goes
to the ESTABLISHED state. This is the data transfer state. The server remains in this state as long
as it is receiving and sending data.
5. While in this state, the server TCP can receive a FIN segment from the client, which means that
the client wishes to close the connection. It can send an ACK segment to the client and goes to
the CLOSE-WAIT state.
6. While in this state, the server waits until it receives a close request from the server application
program. It then sends a FIN segment to the client and goes to LAST-ACK state.
7. While in this state, the server waits for the last ACK segment. It then goes to the CLOSED state.
Flow Control
– Flow control defines the amount of data a source can send before receiving an ACK from the
receiver.
– The sliding window protocol is used by TCP for flow control.
TCP Timers
The TCP uses the following timers during the data transmission
• Retransmission timer
• Persistence timer
• Keep alive timer
• Time waited timer
Retransmission timer: used to calculate the retransmission time of the next segment.
Retransmission time = 2 × RTT, where RTT is the round-trip time, estimated as
RTT = α (previous RTT) + (1 − α) (current RTT)
Persistence timer: When the sender receives an ACK with a window size of zero, it starts the persistence
timer. When the timer goes off, the sender sends a special segment called a probe. The probe alerts the
receiver that the window-update ACK may have been lost and should be resent.
Keep alive timer: is used to prevent a long idle connection between sender and receiver
Time waited timer: is used during the connection termination
Adaptive Retransmission
• The length of time we use for the retransmission timer is very important. If it is set too low, we
may start retransmitting a segment too early; if we set it too long, we waste time, which reduces
overall performance.
• The solution is to use a dynamic algorithm that constantly adjusts the timeout interval based on
continuous measurement of n/w performance
1. Original Algorithm
• Every time TCP sends a data segment, it records the time. When an ACK for that
segment arrives, TCP reads the time again and takes the difference between these
two times as SampleRTT
• EstimatedRTT = α × EstimatedRTT + (1 − α) × SampleRTT, where α is a smoothing
factor typically between 0.8 and 0.9
• TimeOut = 2 × EstimatedRTT
2. Karn/Partridge Algorithm
• One problem occurs with the dynamic (original) algorithm while estimating RTT dynamically.
The following figure illustrates this.
• Whenever a segment is retransmitted and then an ACK arrives at the sender, it is impossible to
determine whether this ACK should be associated with the first or the second transmission of the
segment for the purpose of measuring SampleRTT. To compute an accurate SampleRTT, it is
necessary to know which transmission the ACK is associated with.
• If you assume that the ACK is for the original transmission but it was really for the second, then
the SampleRTT is too large, as shown in (a).
• If you assume that the ACK is for the second transmission but it was actually for the first, then
the SampleRTT is too small, as shown in (b).
• The Karn/Partridge solution is to take SampleRTT measurements only for segments that have
been sent exactly once; in addition, each time TCP retransmits, it doubles the timeout value
(exponential backoff) rather than computing it from EstimatedRTT.
3. Jacobson/Karels Algorithm
• In the original algorithm, TimeOut = 2 × EstimatedRTT, where the constant value (2) was
inflexible because it failed to respond when the variance in the samples went up.
• In the new approach, the sender measures a new SampleRTT as before. It then folds the
new sample into the estimate as follows:
Difference = SampleRTT − EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ (|Difference| − Deviation)
where δ is a fraction between 0 and 1. That is, we calculate both the mean RTT and the
variation in that mean.
• TCP then computes the timeout value as a function of both EstimatedRTT and Deviation as
follows:
TimeOut = μ × EstimatedRTT + φ × Deviation
where, based on experience, μ is typically set to 1 and φ is set to 4.
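A small sketch of this computation, using the commonly quoted values δ = 1/8, μ = 1, φ = 4 (the sample RTTs below are made up for illustration):

```python
# A sketch of the Jacobson/Karels timeout computation described above.
def make_estimator(delta=0.125, mu=1.0, phi=4.0):
    est, dev, first = 0.0, 0.0, True
    def update(sample_rtt):
        nonlocal est, dev, first
        if first:
            # seed the estimate with the first sample
            est, dev, first = sample_rtt, 0.0, False
        else:
            diff = sample_rtt - est
            est += delta * diff                  # EstimatedRTT
            dev += delta * (abs(diff) - dev)     # Deviation
        return mu * est + phi * dev              # TimeOut
    return update

update = make_estimator()
for rtt in [100, 110, 90, 200, 100]:             # sample RTTs in milliseconds
    timeout = update(rtt)
print(round(timeout, 1))                         # 168.5
```

Note how the 200 ms outlier inflates the Deviation term, so the timeout reacts to variance rather than just the mean.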
Congestion
1. Slow start
2. Additive Increase and Multiplicative Decrease (AIMD)
3. Fast retransmit and Fast Recovery
4. Equation based congestion control
Additive Increase (AI)
To avoid congestion before it happens, it is necessary to stop the exponential growth of
slow start. TCP then performs another algorithm called additive increase.
o When the congestion window reaches the threshold value, the size of the congestion window
is increased by 1.
o TCP increases the congestion window additively until the time out occurs.
Multiplicative Decrease (MD)
o If time out occurs, the threshold must be set to one half of the last congestion window
size and congestion window size should start from 1 again.
o The threshold is reduced to one half of the previous congestion window size each time a
time out occurs means, the threshold is decreased in a multiplicative manner.
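The slow start, additive increase, and multiplicative decrease rules can be combined in a toy simulation (the threshold value and the round in which loss occurs are arbitrary choices for illustration):

```python
# A sketch of slow start plus additive increase / multiplicative decrease.
# cwnd doubles per round below the threshold (slow start), grows by 1 above
# it (additive increase); on a timeout the threshold becomes half the
# current cwnd and cwnd restarts from 1 (multiplicative decrease).
def aimd(rounds_with_loss, total_rounds, start_threshold=16):
    cwnd, threshold = 1, start_threshold
    history = []
    for r in range(total_rounds):
        history.append(cwnd)
        if r in rounds_with_loss:            # timeout detected this round
            threshold = max(cwnd // 2, 1)    # multiplicative decrease
            cwnd = 1
        elif cwnd < threshold:
            cwnd *= 2                        # slow start: exponential growth
        else:
            cwnd += 1                        # additive increase
    return history

print(aimd(rounds_with_loss={8}, total_rounds=12))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4]
```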
Fast retransmission
– The idea of fast retransmit is straightforward. Every time a data packet arrives at the
receiving side, the receiver responds with an acknowledgment.
– When a packet arrives out of order (i.e., an earlier packet has not yet been received), the
receiver resends the same acknowledgment that it sent the last time. This second
transmission of the same acknowledgment is called a duplicate ACK.
– When the sender sees a duplicate ACK, it knows that the receiver must have received a
packet out of order, which suggests that an earlier packet might have been lost.
– The sender waits until it sees some number of duplicate ACKs and then retransmits the
missing packet. In practice, TCP waits until it has seen three duplicate ACKs before
retransmitting the packet.
– After the retransmission of the lost segment, the receiver will send a cumulative ACK to the
sender.
The following figure illustrates this.
In this example, the destination receives packets 1 and 2, but packet 3 is lost in the network. Thus,
the destination will send a duplicate ACK for packet 2 when packet 4 arrives, again when packet 5
arrives, and so on. When the sender sees the third duplicate ACK for packet 2—the one sent because
the receiver had gotten packet 6—it retransmits packet 3. When the retransmitted copy of packet 3
arrives at the destination, the receiver then sends a cumulative ACK for everything up to and
including packet 6 back to the source.
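The duplicate-ACK behaviour in this example can be reproduced with a short sketch (the function name is invented for illustration):

```python
# A sketch of the cumulative-ACK behaviour described above.
def receiver_acks(arrivals):
    """Each arrival is acknowledged with the highest in-order packet so far."""
    expected, acks, received = 1, [], set()
    for pkt in arrivals:
        received.add(pkt)
        while expected in received:
            expected += 1
        acks.append(expected - 1)    # last in-order packet seen
    return acks

# Packet 3 is lost; packets 4, 5 and 6 arrive after 1 and 2.
acks = receiver_acks([1, 2, 4, 5, 6])
print(acks)                          # [1, 2, 2, 2, 2]

# The sender counts duplicate ACKs and retransmits after the third one.
dup_count = acks.count(2) - 1        # copies beyond the original ACK for 2
print("retransmit packet 3:", dup_count >= 3)
```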
Fast recovery
– When the fast retransmit mechanism signals congestion, rather than dropping the congestion
window and starting the slow start, it is possible to use the ACKs that are still in the pipe to
reduce the sending of packets. This mechanism is called fast recovery.
– It removes the slow start phase and cuts the congestion window in half and resumes additive
increase.
Equation-based congestion control computes the sending rate from a rate equation:
Rate ∝ 1 / (RTT × √ρ)
which says the transmission rate must be inversely proportional to the round-trip time (RTT) and the
square root of the loss rate (ρ).
Congestion Avoidance
Congestion avoidance mechanisms predict when congestion is about to happen and reduce the rate of
data transmission before packets are discarded. Congestion avoidance can be either
1. router-centric (Router Based Congestion Avoidance): a) DECbit and b) RED gateways
2. host-centric (Source Based Congestion Avoidance): c) TCP Vegas
a) DECbit
– The router sets the congestion bit if average queue length > 1
– The router attempts to balance throughput against delay
– The algorithm uses a threshold of 50%. If fewer than 50% of the ACKs for a connection have the
congestion bit set, the CongestionWindow is increased by 1.
– Once more than 50% of the ACKs have the congestion bit set, the CongestionWindow is
decreased to 0.875 times the previous value.
b) Random Early Detection (RED)
• This is a proactive approach in which the router discards one or more packets before the buffer
becomes completely full.
• In RED, First, each time a packet arrives, the RED algorithm computes the average queue length,
AvgLen.
• AvgLen is computed as
AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement is
made.
• Second, RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with
these two thresholds, according to the following rules:
if AvgLen ≤ MinThreshold
−→ queue the packet
if MinThreshold < AvgLen < MaxThreshold
−→ calculate probability P and drop the arriving packet with probability P
if MaxThreshold ≤ AvgLen
−→ drop the arriving packet
That is, if the average queue length is smaller than the lower threshold, no action is taken, and if
the average queue length is larger than the upper threshold, then the packet is always dropped.
If the average queue length is between the two thresholds, then the newly arriving packet is
dropped with some probability P. This situation is depicted in following figure
To summarize it,
– If AvgLen is lower than some lower threshold, congestion is assumed to be minimal or non-
existent and the packet is queued.
– If AvgLen is greater than some upper threshold, congestion is assumed to be serious and the
packet is discarded.
– If AvgLen is between the two thresholds, this might indicate the onset of congestion. The
probability of congestion is then calculated.
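A sketch of these computations (the threshold values and MaxP below are illustrative; the full RED algorithm additionally spaces drops out using a count of packets queued since the last drop, which is omitted here):

```python
# A sketch of the RED computations described above.
def avg_len(samples, weight=0.002):
    """Exponentially weighted average of sampled queue lengths."""
    avg = 0.0
    for s in samples:
        avg = (1 - weight) * avg + weight * s
    return avg

def red_action(avg, min_th=5, max_th=15, max_p=0.02):
    """Return the drop probability for a newly arriving packet."""
    if avg < min_th:
        return 0.0                       # queue the packet
    if avg >= max_th:
        return 1.0                       # always drop
    # between the thresholds: probability rises linearly towards max_p
    return max_p * (avg - min_th) / (max_th - min_th)

print(red_action(3))      # 0.0 -> queue the packet
print(red_action(10))     # between 0 and max_p -> probabilistic drop
print(red_action(20))     # 1.0 -> always drop
```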
c) TCP Vegas
TCP Vegas was the nickname used for the next version of the TCP/IP UNIX stack. These approaches are
host-centric. The basis of the technique is:
Compare the current RTT with the current window size:
(CurrentWindow – OldWindow)X(CurrentRTT-OldRTT)
When this value is greater than zero, assume congestion is approaching and incrementally decrease the
window. When the value is negative or zero, incrementally increase the window size.
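The rule as stated can be sketched as follows (the step size of 1 is an arbitrary illustrative choice):

```python
# A sketch of the window-adjustment rule stated above.
def adjust(window, old_window, rtt, old_rtt, step=1):
    product = (window - old_window) * (rtt - old_rtt)
    if product > 0:
        return window - step     # congestion approaching: back off
    return window + step         # zero or negative: probe for more bandwidth

print(adjust(10, 8, 120, 100))   # window grew and RTT grew -> 9
print(adjust(10, 8, 100, 100))   # RTT stable -> 11
```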
Techniques to Improve QoS
There are some techniques that can be used to improve the quality of service. Four common methods
are:
1. scheduling
2. traffic shaping
3. admission control
4. resource reservation
1. Scheduling
Several scheduling techniques are designed to improve the quality of service. Two of them are
1. FIFO queuing
2. Fair Queuing (FQ) (Weighted fair queuing).
a) FIFO queuing
If the average arrival rate is greater than the average processing rate, the queue will fill up and new
packets will be discarded.
2. Traffic shaping
The two techniques can be combined to credit an idle host and at the same time regulate
the traffic.
3. Admission control
4. Resource Reservation
Some of the approaches that have been developed to provide a range of qualities of service. These can be
divided into two broad categories:
• fine-grained approaches, which provide QoS to individual applications or flows
• coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic
INTEGRATED SERVICES
Integrated Services, sometimes called IntServ, is a flow-based QoS model, which means that a user needs
to create a flow, a kind of virtual circuit, from the source to the destination and inform all routers of the
resource requirement.
Signaling
Flow Specification
When a source makes a reservation, it needs to define a flow specification. A flow specification has two
parts:
Rspec (resource specification) and
Tspec (traffic specification).
Rspec defines the resource that the flow needs to reserve (buffer, bandwidth, etc.).
Tspec defines the traffic characterization of the flow.
Service Classes
Two classes of service are defined: guaranteed service and controlled-load service.
Guaranteed Service
This type of service is designed for real-time traffic that needs a guaranteed minimum end-to-end delay.
The end-to-end delay is the sum of the delays in the routers, the propagation delay in the media, and the
setup mechanism. This type of service guarantees that the packets will arrive within a certain delivery
time and are not discarded if the flow traffic stays within the boundary of Tspec. The guaranteed
services are quantitative services.
Controlled-Load Service
This type of service is designed for applications that can accept some delay, but are sensitive to an
overloaded network and to the danger of losing packets. Good examples of these types of applications
are file transfer, e-mail, and Internet access. The controlled-load service is a qualitative type of service
RSVP
In the Integrated Services model, an application program needs resource reservation. In IntServ model,
the resource reservation is for a flow.
This means that if we want to use IntServ at the IP level, we need to create a flow, a kind of virtual-circuit
network, out of the IP, which was originally designed as a datagram packet-switched network. A virtual-
circuit network needs a signaling system to set up the virtual circuit before data traffic can start. The
resource reservation protocol (RSVP) is a signaling protocol to help IP create a flow and consequently
make a resource reservation.
RSVP Messages
Path Messages
In Receiver Based Reservation, the receivers, not the sender, make the reservation. However, the
receivers do not know the path traveled by packets before the reservation is made. The path is needed for
the reservation. To solve the problem, RSVP uses Path messages. A Path message travels from the sender
and reaches all receivers in the multicast path. On the way, a Path message stores the necessary
information for the receivers. A Path message is sent in a multicast environment; a new message is
created when the path diverges. The following figure shows path messages.
Resv Messages After a receiver has received a Path message, it sends a Resv message. The Resv message
travels toward the sender (upstream) and makes a resource reservation on the routers that support RSVP.
If a router on the path does not support RSVP, it routes the packet using best-effort delivery. The
following figure shows the Resv messages.
Reservation Styles
When there is more than one flow, the router needs to make a reservation to accommodate
all of them. RSVP defines three types of reservation styles, as shown in the following figure
Wild Card Filter Style In this style, the router creates a single reservation for all senders. The reservation
is based on the largest request. This type of style is used when the flows from different senders do not
occur at the same time.
Fixed Filter Style In this style, the router creates a distinct reservation for each flow. This means that if
there are n flows, n different reservations are made. This type of style is used when there is a high
probability that flows from different senders will occur at the same time.
Shared Explicit Style In this style, the router creates a single reservation which can be shared by a set of
flows.
Soft State
The reservation information (state) stored in every node for a flow needs to be refreshed periodically. This
is referred to as a soft state. The default interval for refreshing is currently 30 s.
Problems with Integrated Services
There are at least two problems with Integrated Services that may prevent its full implementation in the
Internet: scalability and service-type limitation.
1. Scalability
The Integrated Services model requires that each router keep information for each flow. As the
Internet is growing every day, this is a serious problem.
2. Service-Type Limitation
The Integrated Services model provides only two types of services, guaranteed and controlled-load.
Those opposing this model argue that applications may need more than these two types of
services.
DIFFERENTIATED SERVICES
Differentiated Services (DS or Diffserv) was introduced by the IETF (Internet Engineering Task Force) to
handle the shortcomings of Integrated Services. Two fundamental changes were made:
1. The main processing was moved from the core of the network to the edge of the network. This solves
the scalability problem. The routers do not have to store information about flows. The applications, or
hosts, define the type of service they need each time they send a packet.
2. The per-flow service is changed to per-class service. The router routes the packet based on the class of
service defined in the packet, not the flow. This solves the service-type limitation problem. We can define
different types of classes based on the needs of applications.
DS Field
In Diffserv, each packet contains a field called the DS field. The value of this field is set at the boundary of
the network by the host or the first router designated as the boundary router.
The DS field contains two subfields: DSCP and CU.
• The DSCP (Differentiated Services Code Point) is a 6-bit subfield that defines the per-hop
behavior (PHB).
• The 2-bit CU (currently unused) subfield is not currently used.
Per-Hop Behavior
The Diffserv model defines per-hop behaviors (PHBs) for each node that receives a packet. Two PHBs
are defined:
a. EF PHB
b. AF PHB
EF: The EF (expedited forwarding) PHB delivers the packet with low loss and low latency, providing the
equivalent of a virtual leased line.
AF: The AF (assured forwarding) PHB delivers the packet with high assurance as long as the class traffic
does not exceed the traffic profile of the node.
Unit V
Email (SMTP, MIME, IMAP, POP3) – HTTP – DNS- SNMP – Telnet – FTP – Security – PGP - SSH
First, users interact with a mail reader when they compose, file, search, and read their email.
There are a number of mail readers available, just as there are a number of Web browsers.
Second, there is a mail daemon (or process) running on each host. This process plays the role of a post
office.
Mail readers give the daemon messages they want to send to other users, the daemon uses SMTP running
over TCP to transmit the message to a daemon running on another machine, and the daemon puts
incoming messages into the user’s mailbox at the receiver machine from where that user’s mail reader
can later find it.
The mail traverses one or more mail gateways on its route from the sender’s host to the receiver’s host.
The intermediate nodes are called “gateways”, their job is to store and forward email messages.
The reason the gateway stores the message is that the recipient's machine may not always be up; the
mail gateway holds the message until it can be delivered.
SMTP
MIME
POP and IMAP
IMAP State Transition Diagram
IMAP is similar to SMTP in many ways. It is a client/server protocol running over TCP, where the client
issues commands and the mail server responds. The exchange begins with the client authenticating him-
or herself, and identifying the mailbox he or she wants to access. This can be represented by the simple
state transition diagram shown in below figure.
In this diagram, LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and LOGOUT are example commands
that the client can issue, while OK is one possible server response.
Other common commands include FETCH, STORE, DELETE, and EXPUNGE, with the obvious meanings.
Additional server responses include NO and BAD.
HTTP
HTTP Messages
1. Request message
2. Response Message
HTTP uses the client-server model: An HTTP client opens a connection and sends a request message to an
HTTP server; the server then returns a response message, usually containing the resource that was
requested. After delivering the response, the server closes the connection.
HTTP Methods
HEAD: Used for retrieving meta-information written in response headers
GET: Used for requesting a specified resource.
POST: Used for submitting data to be processed
PUT: Used for uploading the specified resource.
DELETE: Used for deleting the specified resource.
HTTP Headers
Headers are name/value pairs that appear in both request and response messages. The name of the
header is separated from the value by a single colon. For example, this line in a request message:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
provides a header called User-Agent whose value is Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1).
The purpose of this particular header is to supply the web server with information about the type of
browser making the request.
A response message consists of a status line, headers, and a body, as shown below.
The status line also has three parts separated by spaces:
• The version of HTTP being used.
• A response status code that gives the result of the request.
• An English reason phrase describing the status code.
For an example, HTTP/1.0 200 OK
Or
HTTP/1.0 404 Not Found
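A status line like these can be observed with Python's standard http.client, here against a throwaway local server so the example is self-contained:

```python
import http.client
import http.server
import threading

# Serve the current directory on a loopback port picked by the OS.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Issue a GET for a resource that does not exist.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/no-such-page")
resp = conn.getresponse()
print(resp.status, resp.reason)        # status code 404 plus a reason phrase
print(resp.getheader("Content-Type"))  # one of the response headers
conn.close(); server.shutdown()
```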
An HTTP request can fail because of a network error or because of problems encountered while the
request is executing on the web server.
HTTP status codes are returned by web servers to describe if and how a request was processed. The codes
are grouped by the first digit:
1xx - Informational
Any code starting with '1' is an intermediate response and indicates that the server has received
the request but has not finished processing it.
2xx – Successful
These codes are used when a request has been successfully processed.
3xx – Redirection
Codes starting with a '3' indicate that the request was processed, but the browser should get the
resource from another location.
4xx - Client Error
The server returns these codes when there is a problem with the client's request.
5xx - Server Error
A status code starting with the digit 5 indicates that an error occurred on the server while
processing the request. For example:
Message          Description
100 Continue     Only a part of the request has been received by the server, but as long as it has not
                 been rejected, the client should continue with the request
202 Accepted     The request is accepted for processing, but the processing is not complete
302 Found        The requested page has moved temporarily to a new URL
303 See Other    The requested page can be found under a different URL
404 Not Found    The server cannot find the requested page
SNMP
Simple Network Management Protocol, running over UDP, is used to configure remote devices, monitor
network performance, audit network usage, and detect network faults or inappropriate access.
The SNMP is composed of three major elements
1. Managers are responsible for communicating with network devices that implement SNMP Agents
2. Agents reside in devices such as workstations, switches, routers, microwave radios, printers, and
provide information to Managers.
3. MIBs (Management Information Base) describe data objects to be managed by an Agent within a
device.
SNMP is based on the manager/agent model consisting of an SNMP manager, an SNMP agent, a database
of management information, managed SNMP devices and the network protocol.
The SNMP manager provides the interface between the human network manager and the management
system.
The SNMP agent provides the interface between the manager and the physical devices
The SNMP manager and agent use an SNMP Management Information Base (MIB) and a relatively small
set of commands to exchange information.
All SNMP objects are numbered. The top level after the root is ISO, which has the number “1”. The next
level, ORG, has the number “3”, since it is the 3rd object under ISO, and so on. OIDs are always written
in numerical form instead of text form, so the top three object levels are written as 1.3.6 rather than
iso.org.dod.
The MIB is extensible, which means that hardware and software manufacturers can add new objects to
the MIB.
These new MIB definitions must be added both to the network element and to the network management
system.
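The mapping between numeric OIDs and their named paths can be sketched with a small lookup table; a toy illustration covering only the standard iso.org.dod.internet prefix (the `oid_to_names` helper name is ours):

```python
# The top of the standard OID tree: each numeric prefix maps to a name.
OID_NAMES = {
    "1": "iso",
    "1.3": "org",
    "1.3.6": "dod",
    "1.3.6.1": "internet",
    "1.3.6.1.2": "mgmt",
    "1.3.6.1.2.1": "mib-2",
}

def oid_to_names(oid):
    """Translate a dotted numeric OID into its named path, where known."""
    parts = oid.split(".")
    prefixes = [".".join(parts[:i + 1]) for i in range(len(parts))]
    return ".".join(OID_NAMES.get(p, parts[i]) for i, p in enumerate(prefixes))

print(oid_to_names("1.3.6"))        # iso.org.dod
print(oid_to_names("1.3.6.1.2.1"))  # iso.org.dod.internet.mgmt.mib-2
```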
Applications
Here are some typical uses of SNMP:
Monitoring device performance
• Detecting device faults, or recovery from faults
• Collecting long term performance data
• Remote configuration of devices
• Remote device control
TELNET
Logging
To access the system, the user logs into the system with a user id or log-in name. The system checks the
password to prevent an unauthorized user from accessing the resources. The following figure shows the
logging process.
Local login
When a user logs into a local timesharing system, it is called local log-in. As a user types at a terminal or
at a workstation running a terminal emulator, the keystrokes are accepted by the terminal driver. The
terminal driver passes the characters to the operating system. The operating system, in turn, interprets
the combination of characters and invokes the desired application program or utility.
Remote login
When a user wants to access an application program or utility located on a remote machine, he/she
performs remote log-in. Here the TELNET client and server programs come into use. The user sends the
keystrokes to the terminal driver, where the local operating system accepts the characters but does not
interpret them. The characters are sent to the TELNET client, which transforms the characters to a
universal character set called network virtual terminal (NVT) characters and delivers them to the local
TCP/IP protocol stack.
The commands or text, in NVT form, travel through the Internet and arrive at the TCP/IP stack at the
remote machine. Here the characters are delivered to the operating system and passed to the TELNET
server, which changes the characters to the corresponding characters understandable by the remote
computer. However, the characters cannot be passed directly to the operating system because the remote
operating system is not designed to receive characters from a TELNET server: It is designed to receive
characters from a terminal driver. The solution is to add a piece of software called a pseudoterminal driver
which pretends that the characters are coming from a terminal. The operating system then passes the
characters to the appropriate application program.
In a heterogeneous network system, if we want to access any remote computer in the world, we must first
know what type of computer we will be connected to, and we must also install the specific terminal
emulator used by that computer. TELNET solves this problem by defining a universal interface called the
network virtual terminal (NVT) character set. Via this interface, the client TELNET translates characters
(data or commands) that come from the local terminal into NVT form and delivers them to the network.
The server TELNET, on the other hand, translates data and commands from NVT form into the form
acceptable by the remote computer. The following figure explains this concept.
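The client-side translation into NVT form can be sketched as follows; a toy illustration that handles only the end-of-line convention (NVT requires ASCII text with lines terminated by CR LF; the function names are ours):

```python
def to_nvt(text):
    """Translate local text to NVT form: ASCII with CR LF line endings."""
    # Normalise any local line ending to "\n", then emit CR LF on the wire.
    normalised = text.replace("\r\n", "\n").replace("\r", "\n")
    return normalised.replace("\n", "\r\n").encode("ascii")

def from_nvt(data):
    """Translate NVT bytes back to the local convention ("\n" here)."""
    return data.decode("ascii").replace("\r\n", "\n")

wire = to_nvt("ls -l\n")
print(wire)            # b'ls -l\r\n' travels through the Internet
print(from_nvt(wire))  # the remote side recovers 'ls -l\n'
```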
Security
Need of security
When systems are connected through the network, attacks are possible during transmission time.
Cryptography
It is the science of writing secret codes using mathematical techniques. The many schemes used for
enciphering constitute the area of study known as cryptography. The sender applies an encryption
function to the original plaintext message, the resulting ciphertext message is sent over the network, and
the receiver applies a reverse function (called decryption) to recover the original plaintext.
Encryption: The process of converting plaintext to ciphertext.
Decryption: The process of converting ciphertext to plaintext.
Cryptographic Algorithms
There are three types of cryptographic algorithms:
1. secret key algorithms,
2. public key algorithms,
3. Hashing(Message Digest) algorithms.
Symmetric/Secret/Private key
Secret key algorithms are symmetric in the sense that both participants in the communication share a
single key. The figure below illustrates the use of secret key encryption to transmit data over a network.
Examples: DES (Data Encryption Standard), IDEA (International Data Encryption Algorithm)
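A toy illustration of the symmetric idea, with XOR standing in for a real cipher such as DES or IDEA (this is for illustration only and offers no real security):

```python
from itertools import cycle

def xor_crypt(data, key):
    """XOR each byte with the repeating key; the same call also decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

shared_key = b"secret"  # both participants share this single key
ciphertext = xor_crypt(b"attack at dawn", shared_key)
plaintext = xor_crypt(ciphertext, shared_key)  # symmetric: same key recovers it
print(plaintext)  # b'attack at dawn'
```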
Asymmetric/Public Key
Public key cryptography involves each participant having a private key that is shared with no one else and
a public key that is published so everyone knows it. To send a secure message to a participant, you
encrypt the message using the widely known public key. The participant then decrypts the message using
his or her private key. This scenario is depicted in the figure below.
Examples: RSA
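The public/private key relationship can be demonstrated with a toy RSA computation using tiny primes (real RSA keys are thousands of bits long; the numbers here are purely illustrative):

```python
# Toy RSA: primes p=61, q=53 give modulus n=3233 and phi=3120.
p, q = 61, 53
n = p * q
phi = (p - 1) * (q - 1)
e = 17                # public exponent
d = pow(e, -1, phi)   # private exponent: the modular inverse of e mod phi

def encrypt(m, pub=(e, n)):
    """Anyone can encrypt with the widely known public key (e, n)."""
    return pow(m, pub[0], pub[1])

def decrypt(c, priv=(d, n)):
    """Only the holder of the private key (d, n) can decrypt."""
    return pow(c, priv[0], priv[1])

c = encrypt(65)
print(c, decrypt(c))  # 2790 65
```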
The third type of cryptographic algorithm is called a hash or message digest function. Unlike the preceding
two types of algorithms, cryptographic hash functions typically do not involve the use of keys; instead,
they compute a cryptographic checksum over a message. A cryptographic checksum protects the
receiver from malicious changes to the message, because all cryptographic hash algorithms are
carefully selected to be one-way functions.
Example: The most widely used cryptographic checksum algorithms are Message Digest version 5 (MD5)
and SHA.
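Python's standard hashlib module implements both families, so the keyless checksum idea is easy to demonstrate (MD5 produces a 128-bit digest, SHA-1 a 160-bit digest):

```python
import hashlib

msg = b"The quick brown fox"
md5_digest = hashlib.md5(msg).digest()
sha1_digest = hashlib.sha1(msg).digest()

print(len(md5_digest) * 8)   # 128 (bits)
print(len(sha1_digest) * 8)  # 160 (bits)

# One-way checksum: any change to the message changes the digest completely.
print(hashlib.md5(b"The quick brown fox!").digest() != md5_digest)  # True
```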
DES (Data Encryption Standard)
Its purpose is to provide a standard method for protecting sensitive commercial and unclassified data.
DES has three distinct phases:
1 The 64 bits in the block are permuted (shuffled).
2 Sixteen rounds of an identical operation are applied to the resulting data and the key.
3 The inverse of the original permutation is applied to the result.
Initial Permutation
The permutation shuffles the bits.
Final Permutation
The final permutation is the inverse of the initial permutation
Details of Each Round
Encryption maps the plaintext m to the ciphertext c; decryption maps the ciphertext c back to the
plaintext m.
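The 16 identical rounds of DES follow the Feistel structure, which can be sketched generically; a simplified illustration in which the round function f is a stand-in, not the real DES expansion/S-box/permutation logic:

```python
def f(half, round_key):
    # Stand-in round function; real DES expands the half-block, XORs it
    # with the round key, substitutes through S-boxes, and permutes.
    return (half * 31 + round_key) & 0xFFFFFFFF

def feistel_encrypt(block, round_keys):
    """Split a 64-bit block into 32-bit halves; one Feistel round per key."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return (left << 32) | right

def feistel_decrypt(block, round_keys):
    """Decryption is the same structure with the round keys reversed."""
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in reversed(round_keys):
        right, left = left, right ^ f(left, k)
    return (left << 32) | right

keys = list(range(1, 17))  # 16 per-round keys, as in DES
c = feistel_encrypt(0x0123456789ABCDEF, keys)
print(hex(feistel_decrypt(c, keys)))  # 0x123456789abcdef
```

Note that decryption never needs to invert f itself; reversing the key schedule is enough, which is the key property of the Feistel construction.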
Message digest
Message digest functions, also called hash functions, are used to produce digital summaries of information
called message digests. Message digests are commonly 128 bits to 160 bits in length and provide a digital
identifier for each digital file or document.
Message digest functions are mathematical functions that process information to produce a different
message digest for each unique document
The following figure shows the basic message digest process.
MD5
Stands for Message digest 5
Message digests are commonly used in conjunction with public key technology to create digital signatures
or "digital thumbprints" that are used for authentication, and integrity. Message digests also are
commonly used to provide data integrity for electronic files and documents. Two of the most commonly
used message digest algorithms today are MD5, and SHA-1.
The following five steps are performed to compute the message digest of the message.
Step 5. Output
where d0, d1, d2, d3 are four 32-bit digest words and m0, m1, ..., m15 are the sixteen 32-bit words of the
message block being digested. The function F(a, b, c) is a combination of bitwise operations (OR, AND,
NOT) on its arguments. The Ti's are constants. The left-rotate operator rotates the operand left by n bits.
In the second phase,
• F is replaced by a slightly different function G.
• The constants T1 through T16 are replaced by another set (T17 through T32).
In the third phase,
• G is replaced by yet another function H, which is just the XOR function.
• Another set of constants (T33 through T48) are used.
In the fourth phase,
• H is replaced by the function I, which is a combination of bitwise XOR, OR, and NOT
• Another set of constants (T49 through T64) are used.
The algorithm then proceeds to digest the next block of sixteen 32-bit words of the message until there is
no more to be digested; the output of the last stage is the message digest.
PGP
SSH
To run these applications over a secure SSH connection, SSH uses a technique called port forwarding. The idea is
illustrated in the above figure where we see a client on host A indirectly communicating with a server on
host B by forwarding its traffic through an SSH connection. The mechanism is called port forwarding
because when messages arrive at the well-known SSH port on the server, SSH first decrypts the contents,
and then “forwards” the data to the actual port at which the server is listening.
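The standard OpenSSH client sets up such a tunnel with its -L option; for example (the host names and port numbers below are illustrative):

```shell
# Forward local port 8080 on host A through the SSH connection to
# port 80 on host B; traffic between A and B travels encrypted.
ssh -L 8080:localhost:80 user@hostB

# The client on host A now reaches the server on B via the tunnel:
curl http://localhost:8080/
```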
Unit – I
In telecommunications and computer networks, a channel access method or multiple access method
allows several terminals connected to the same physical medium to transmit over it and to share its
capacity.
A multiple access method is based on a multiplex method that allows several data streams or signals to
share the same communication channel or physical media.
1) Channelization methods: The following are common channelization channel access methods:
– Frequency division multiple access (FDMA)
– Time division multiple access (TDMA)
– Code division multiple access (CDMA)
2) Packet mode methods: The following are examples of packet mode channel access methods:
– Carrier sense multiple access (CSMA)
– Carrier sense multiple access with collision detection (CSMA/CD)
– Carrier sense multiple access with collision avoidance (CSMA/CA)
Multiple Access techniques specify the way signals from different sources are to be combined efficiently for
transmission over a given radio frequency band and then separated at the destination without mutual
interference.
There are three basic multiple access techniques in use in cellular systems:
TDMA
The time division multiple access (TDMA) channel access scheme is based on the time division multiplex
(TDM) scheme
It allows several users to share the same frequency channel by dividing the signal into different time slots.
Time Division Multiple Access (TDMA) is used by several cellular communication systems.
Multiple access techniques such as TDMA enable many users to share the available spectrum in an
efficient way.
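Slot sharing in TDMA can be sketched as a round-robin frame; a minimal simulation (the frame length, user names, and helper name are made up for illustration):

```python
def tdma_schedule(users, frame_slots, n_frames):
    """Assign slot i of every frame to user i (round-robin TDMA)."""
    timeline = []
    for _ in range(n_frames):
        for slot in range(frame_slots):
            timeline.append(users[slot % len(users)])
    return timeline

# Three users sharing one frequency channel, 3 slots per frame, 2 frames.
print(tdma_schedule(["A", "B", "C"], 3, 2))
# ['A', 'B', 'C', 'A', 'B', 'C'] -- each user transmits only in its own slot
```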
FDMA
The frequency division multiple access (FDMA) channel-access scheme is based on the frequency-division
multiplex (FDM) scheme
FDMA is a channel access method that is used by radio systems to share a certain radio spectrum between
multiple users.
FDMA gives each user an individual allocation of one or several frequency bands or channels, allowing
users to access the radio system without interfering with each other.
CDMA:
The code division multiple access (CDMA) scheme is based on spread spectrum, in which all users are
permitted to transmit simultaneously, operate at the same nominal frequency, and use the entire
system's spectrum.
Because all users can transmit simultaneously throughout the entire system frequency spectrum, a
private code must be assigned to each user, so that each transmission can be identified. This privacy is
achieved by the use of spreading codes, also called pseudo-noise (PN) codes.
The information from an individual user is modulated by means of the unique PN code assigned to each
user. All the PN-modulated signals from different users are then transmitted over the entire CDMA
frequency channel.
At the receiving end, the desired signal is recovered by despreading the signal with a copy of the PN code
for the individual user. All the other signals (belonging to other users), whose PN-codes do not match that
of the desired signal, are not despread and as a result are perceived as noise by the receiver.
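Spreading and despreading can be demonstrated with orthogonal codes; a minimal sketch using 4-chip Walsh codes, with bits mapped to ±1 (the code values and function names are illustrative):

```python
# Two orthogonal 4-chip spreading codes (rows of a Walsh matrix).
CODE_A = [+1, +1, +1, +1]
CODE_B = [+1, -1, +1, -1]

def spread(bits, code):
    """Each data bit (+1/-1) is multiplied chip-by-chip by the user's code."""
    return [b * chip for b in bits for chip in code]

def despread(signal, code):
    """Correlate the received signal with a code to recover one user's bits."""
    n = len(code)
    out = []
    for i in range(0, len(signal), n):
        corr = sum(s * c for s, c in zip(signal[i:i + n], code))
        out.append(+1 if corr > 0 else -1)
    return out

# Both users transmit simultaneously; the channel simply adds their signals.
a_bits, b_bits = [+1, -1], [-1, -1]
channel = [x + y for x, y in zip(spread(a_bits, CODE_A), spread(b_bits, CODE_B))]

print(despread(channel, CODE_A))  # [1, -1]   -> user A's bits recovered
print(despread(channel, CODE_B))  # [-1, -1]  -> user B's bits recovered
```

Correlating with the wrong code averages to zero, which is why other users' signals are perceived only as noise.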
• The GSM cellular system combines the use of FDMA and TDMA
• The GPRS packet-switched service uses packet-mode access on top of GSM's TDMA frame structure
• Wireless LANs (IEEE 802.11) are based on CSMA/CA
• HIPERLAN/2 wireless networks combine FDMA with dynamic TDMA
• Bluetooth packet-mode communication combines frequency hopping with a master-driven polling scheme
• Most second-generation cellular systems are based on TDMA
• 3G cellular systems are primarily based upon CDMA
Guided Transmission Media: uses a "cabling" system that guides the data signals along a specific path.
Unguided: The medium transmits the waves but does not guide
Twisted Pair Cable
Its popularity can be attributed to the fact that it is lighter, more flexible, and easier to install
than coaxial or fiber-optic cable.
It is also cheaper and can achieve greater speeds than its coaxial competition, making it an ideal
solution for most network environments.
Two main types of twisted-pair cabling are:
Unshielded Twisted Pair (UTP)
more commonplace than STP and is used for most networks
Shielded Twisted Pair (STP)
used in environments in which greater resistance to EMI and attenuation is required, though the
greater resistance comes at a price: the extra shielding increases the distances that data signals
can travel over STP, but also increases the cost of the cabling.
UTP: one or more pairs of twisted copper wires insulated and contained in a plastic cover
Uses RJ-45 telephone connector
STP: Same as UTP but with an aluminium/polyester shield.
Connectors are more awkward to work with
Coaxial Cable
Sizes of Coax
RG-8, RG-11   50 ohm   Thick Ethernet
RG-58         50 ohm   Thin Ethernet
RG-59         75 ohm   Cable TV
Fiber Optic
Unguided Media
Provides a means for transmitting electro-magnetic signals through air but do not guide them.
Also referred to as wireless transmission
Wireless communications uses specific frequency bands which separates the ranges.
Main types: radio waves, microwaves, Bluetooth and Infrared.
Transmission and reception are achieved by means of antennas.
For transmission, an antenna radiates electromagnetic energy into the air.
For reception, the antenna picks up electromagnetic waves from the surrounding medium.
The antenna plays a key role; its characteristics, together with the frequency at which it operates,
determine the behaviour of the wireless link.
Unit II
Multicasting
A message can be unicast, multicast, or broadcast. Let us clarify these terms as they
relate to the Internet
In unicasting, the router forwards the received packet through only one of its interfaces.
In multicasting, the router may forward the received packet through several of its interfaces.
Applications
Multicasting has many applications today, such as access to distributed databases, broadcasting of news
and information, teleconferencing, and distance learning.
Multicast Routing
When a router receives a multicast packet, the situation is different from when it receives a unicast
packet. A multicast packet may have destinations in more than one network. Forwarding of a single
packet to members of a group requires a shortest path tree. If we have n groups, we may need n shortest
path trees. Two approaches have been used to solve the problem:
• Source-based trees
• group-shared trees
In the source-based tree approach, each router needs to have one shortest path tree for each group.
In the group-shared tree approach, only the core router, which has a shortest path tree for each group, is
involved in multicasting.
Authentication Protocols
The client and server authenticate each other using a simple three-way handshake
protocol. The following figure illustrates this.
1. The client first selects a random number x and encrypts it using its secret key, which we denote
as CHK (client handshake key). The client then sends E(x, CHK), along with an identifier
(ClientId), for itself to the server.
2. The server uses the server handshake key (SHK) to decrypt the random number, adds 1 to it, and
sends the result back to the client. It also sends back a random number y that has been encrypted
with SHK.
3. The client decrypts the random number y that the server sent, encrypts this number plus 1,
and sends the result to the server.
After the third message, each side has authenticated itself to the other.
The fourth message in the figure corresponds to the server sending the client a session key (SK),
encrypted using SHK. The advantage of using a session key is that it limits the use of the long-lived
handshake keys, making it harder for an attacker to gather enough data to break them.
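The three handshake steps can be sketched with a toy encryption function (XOR stands in for a real cipher, and for simplicity CHK and SHK are the same shared key; all values are illustrative):

```python
import random

def E(value, key):
    """Toy encryption: XOR with the key. The same operation decrypts."""
    return value ^ key

CHK = SHK = 0x5A5A  # shared handshake key, known to client and server

# 1. Client picks random x and sends E(x, CHK) along with its identity.
x = random.getrandbits(16)
msg1 = E(x, CHK)

# 2. Server decrypts x, returns E(x + 1, SHK) plus its own challenge E(y, SHK).
y = random.getrandbits(16)
msg2 = (E(E(msg1, SHK) + 1, SHK), E(y, SHK))

# 3. Client checks that the server knew the key, then answers E(y + 1, CHK).
assert E(msg2[0], CHK) == x + 1     # server is authenticated
msg3 = E(E(msg2[1], CHK) + 1, CHK)

# Server verifies the client the same way.
assert E(msg3, SHK) == y + 1        # client is authenticated
print("mutual authentication complete")
```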
2. Kerberos
Kerberos is an authentication service developed as part of Project Athena at MIT. In Greek mythology,
Kerberos is a three-headed watchdog that guards the entrance to the underworld.
Kerberos provides a centralized authentication server whose function is to provide authentication.
Kerberos is used when two participants want to communicate but know nothing about each other, while
both trust a third party. This third party is sometimes called an authentication server, and it uses the
Kerberos protocol to help the two participants authenticate each other. The following figure illustrates
this.
A Digital certificate
Digital certificates are the equivalents of a driver's license or any other form of identity. The only difference
is that a digital certificate is used in conjunction with a public key encryption system. Digital certificates
are electronic files that simply work as an online passport.
The most common use of a digital certificate is to verify that a user sending a message is who he or she
claims to be, and to provide the receiver with the means to encode a reply.
An individual wishing to send an encrypted message applies for a digital certificate from a Certificate
Authority (CA). The CA issues an encrypted digital certificate containing the applicant's public key and a
variety of other identification information. The CA makes its own public key readily available through print
publicity or perhaps on the Internet.
The recipient of an encrypted message uses the CA's public key to decode the digital certificate attached
to the message, verifies it as issued by the CA and then obtains the sender's public key and identification
information held within the certificate. With this information, the recipient can send an encrypted reply.
B Digital Signature
A digital signature (not to be confused with a digital certificate) is an electronic signature that can be used
to authenticate the identity of the sender of a message or the signer of a document, and possibly to
ensure that the original content of the message or document that has been sent is unchanged.
To verify the contents of digitally signed data, the recipient generates a new message digest from the data
that was received, decrypts the original message digest with the originator's public key, and compares the
decrypted digest with the newly generated digest. If the two digests match, the integrity of the message
is verified.
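The integrity-check step can be sketched with hashlib (the actual signing, i.e. encrypting the digest with the originator's private key, is abstracted away here; the document contents are made up):

```python
import hashlib

def digest(data):
    """Compute a message digest (SHA-1 here) of the data."""
    return hashlib.sha1(data).digest()

# Sender: compute a digest of the document to accompany it (encrypting it
# with the private key -- the signature proper -- is omitted for brevity).
document = b"Pay Alice 100 dollars"
signed_digest = digest(document)

# Recipient: recompute the digest from the received data and compare.
received = b"Pay Alice 100 dollars"
print(digest(received) == signed_digest)   # True  -> integrity verified

tampered = b"Pay Alice 900 dollars"
print(digest(tampered) == signed_digest)   # False -> message was altered
```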