
Computer Networks -21CS52 Module1

Dayananda Sagar Academy of Technology & Management


(Affiliated to Visvesvaraya Technological University, Belagavi & Approved by AICTE, New Delhi)
Opp. Art of Living, Udayapura, Kanakapura Road, Bangalore – 560082
Department of Information Science & Engineering
Accredited by NAAC A+ and Accredited by NBA, New Delhi
2023-2024

Computer Networks -21CS52


Module-1 Notes
Dr. Rajesh L,
Associate Professor
ISE, DSATM

Module 1: Introduction to Networks


Syllabus:
Introduction to networks: Network hardware, Network software, Reference models, Physical
Layer: Guided transmission media, Wireless transmission

1.1 Introduction
Each of the past three centuries was dominated by a single new technology. The 18th century
was the era of the great mechanical systems accompanying the Industrial Revolution. The 19th
century was the age of the steam engine. During the 20th century, the key technology was
information gathering, processing, and distribution. Among other developments, we saw the
installation of worldwide telephone networks, the invention of radio and television, the birth and
unprecedented growth of the computer industry, the launching of communication satellites, and, of
course, the Internet.
As a result of rapid technological progress, these areas are rapidly converging in the 21st
century and the differences between collecting, transporting, storing, and processing information are
quickly disappearing. Two computers are said to be interconnected if they are able to exchange
information. The connection need not be via a copper wire; fiber optics, microwaves, infrared, and
communication satellites can also be used. Networks come in many sizes, shapes and forms.

A computer network refers to interconnected computing devices that can exchange data and share
resources with each other. These networked devices use a system of rules, called communications
protocols, to transmit information over physical or wireless technologies.
Computer networking refers to connected computing devices (such as laptops, desktops, servers,
smartphones, and tablets) and an ever-expanding array of IoT devices (such as cameras, door locks,
doorbells, refrigerators, audio/visual systems, thermostats, and various sensors) that communicate
with one another.
A distributed system is a collection of independent computers that appears to its users as a single coherent system. It is simply any environment where multiple computers or devices are
working on a variety of tasks and components, all spread across a network. Components within
distributed systems split up the work, coordinating efforts to complete a given job more efficiently
than if only a single device ran it.

1.2 Network Hardware


There is no generally accepted taxonomy into which all computer networks fit, but two dimensions
stand out as important: transmission technology and scale.

There are two types of transmission technology that are in widespread use: broadcast links and point-
to-point links.
 Point-to-point links connect individual pairs of machines. To go from the source to the
destination on a network made up of point-to-point links, short messages, called packets in
certain contexts, may have to first visit one or more intermediate machines. Often multiple
routes, of different lengths, are possible, so finding good ones is important in point-to-point
networks. Point-to-point transmission with exactly one sender and exactly one receiver is
sometimes called unicasting.
 In a broadcast network, the communication channel is shared by all the machines on the
network; packets sent by any machine are received by all the others. An address field within
each packet specifies the intended recipient. A wireless network is a common example of a
broadcast link. Some broadcast systems also support transmission to a subset of the machines,
which is known as multicasting.
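To make the two transmission technologies concrete, here is a minimal sketch using Python's standard socket module; the address 192.0.2.10 and port 5005 are placeholders. Sending a unicast datagram targets exactly one receiver, while enabling SO_BROADCAST and sending to the broadcast address delivers the packet to every machine on the local network.

```python
import socket

MESSAGE = b"hello"

# Unicasting: exactly one sender and exactly one receiver.
unicast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
unicast.sendto(MESSAGE, ("192.0.2.10", 5005))          # one specific destination

# Broadcasting: the packet is delivered to all machines on the network;
# receivers use the address information to decide whether it is meant for them.
broadcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
broadcast.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast.sendto(MESSAGE, ("255.255.255.255", 5005))   # the local broadcast address
```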

An alternative criterion for classifying networks is by scale. Distance is important as a classification metric because different technologies are used at different scales.
In Fig. 1-1 we classify multiple processor systems by their rough physical size.

Figure 1-1. Classification of interconnected processors by scale.

1.2.1 Personal Area Networks


PANs (Personal Area Networks) let devices communicate over the range of a person. A common
example is a wireless network that connects a computer with its peripherals. Almost every computer
has an attached monitor, keyboard, mouse, and printer. Many new users have such a hard time finding the right cables and plugging them into the right little holes (even though they are usually color coded) that most computer vendors offer the option of sending a technician to the user’s home to do it. To
help these users, some companies got together to design a short-range wireless network called
Bluetooth to connect these components without wires.

Figure 1-2. Bluetooth PAN configuration.

In the simplest form, Bluetooth networks use the master-slave paradigm of Fig. 1-2. The system unit
(the PC) is normally the master, talking to the mouse, keyboard, etc., as slaves. The master tells the
slaves what addresses to use, when they can broadcast, how long they can transmit, what frequencies
they can use, and so on.
1.2.2 Local Area Networks
A LAN is a privately owned network that operates within and nearby a single building like a home,
office or factory. LANs are widely used to connect personal computers and consumer electronics to
let them share resources (e.g., printers) and exchange information. When LANs are used by
companies, they are called enterprise networks.

Figure 1-3. Wireless and wired LANs. (a) 802.11. (b) Switched Ethernet.
Wireless LANs are very popular these days. In most cases, each computer talks to a device in the ceiling, as shown in Fig. 1-3(a). This device, called an AP (Access Point), wireless router, or base station, relays packets between the wireless computers and also between them and the Internet. There is a standard for wireless LANs called IEEE 802.11, popularly known as WiFi. It runs at speeds anywhere from 11 to hundreds of Mbps.

Wired LANs use a range of different transmission technologies. Most of them use copper wires, but
some use optical fiber. Typically, wired LANs run at speeds of 100 Mbps to 1 Gbps, have low delay
(microseconds or nanoseconds), and make very few errors. The topology of many wired LANs is
built from point-to-point links. The most common wired LAN standard is IEEE 802.3, popularly called Ethernet.

1.2.3 Metropolitan Area Networks


A MAN (Metropolitan Area Network) covers a city. The best-known examples of MANs are the
cable television networks available in many cities.

These systems grew from earlier community antenna systems used in areas with poor over-the-air
television reception. In those early systems, a large antenna was placed on top of a nearby hill and a
signal was then piped to the subscribers’ houses.
Recent developments in high-speed wireless Internet access have resulted in another MAN, which has
been standardized as IEEE 802.16 and is popularly known as WiMAX.

Figure 1-4. A metropolitan area network based on cable TV.

1.2.4 Wide Area Networks


A WAN (Wide Area Network) spans a large geographical area, often a country or continent. We
will begin our discussion with wired WANs, using the example of a company with branch offices in
different cities.
The WAN in Fig. 1-5 is a network that connects offices in Perth, Melbourne, and Brisbane. Each of these offices contains computers (hosts) intended for running user programs.

Switching elements, or just switches, are specialized computers that connect two or more
transmission lines. When data arrive on an incoming line, the switching element must choose an
outgoing line on which to forward them. These switching computers have been called by various
names in the past; the name router is now most commonly used.

Figure 1-5. WAN that connects three branch offices in Australia.


A virtual private network, or VPN, is an encrypted connection over the Internet from a device to a
network. The encrypted connection helps ensure that sensitive data is safely transmitted and that traffic
remains private as it travels. It prevents unauthorized people from eavesdropping on the traffic and
allows the user to conduct work remotely. VPN technology is widely used in corporate
environments. An employee can work outside the office and still securely connect to the corporate
network. Even smartphones and tablets can connect through a VPN.

Figure 1-6. WAN using a virtual private network.

A second variation on the WAN is that the subnet may be run by a different company, as in Fig. 1-7. The subnet operator is
known as a network service provider and the offices are its customers.

There may be many paths in the network that connect a given pair of routers. How the network makes the
decision as to which path to use is called the routing algorithm. Many such algorithms exist. How
each router makes the decision as to where to send a packet next is called the forwarding algorithm.

Figure 1-7. WAN using an ISP network.


1.2.5 Internetworks
Many networks exist in the world, often with different hardware and software. People connected to
one network often want to communicate with people attached to a different one. The fulfillment of
this desire requires that different, and frequently incompatible, networks be connected. A collection
of interconnected networks is called an internetwork or internet.
Subnet refers to the collection of routers and communication lines owned by the network operator. A
network is formed by the combination of a subnet and its hosts. We know that an internet is formed
when distinct networks are interconnected. In our view, connecting a LAN and a WAN or connecting
two LANs is the usual way to form an internetwork, but there is little agreement in the industry over
terminology in this area.

There are two rules of thumb that are useful. First, if different organizations have paid to construct
different parts of the network and each maintains its part, we have an internetwork rather than a single
network. Second, if the underlying technology is different in different parts (e.g., broadcast versus
point-to-point and wired versus wireless), we probably have an internetwork. To go deeper, we need
to talk about how two different networks can be connected. The general name for a machine that
makes a connection between two or more networks and provides the necessary translation, both in
terms of hardware and software, is a gateway. Gateways are distinguished by the layer at which they
operate in the protocol hierarchy.

1.3 Network Software


1.3.1 Protocol Hierarchies
To reduce their design complexity, most networks are organized as a stack of layers or levels,
each one built upon the one below it. The number of layers, the name of each layer, the contents of
each layer, and the function of each layer differ from network to network. The purpose of each layer
is to offer certain services to the higher layers while shielding those layers from the details of how the
offered services are actually implemented. In a sense, each layer is a kind of virtual machine, offering
certain services to the layer above it. The fundamental idea is that a particular piece of software (or
hardware) provides a service to its users but keeps the details of its internal state and algorithms
hidden from them.

Basically, a protocol is an agreement between the communicating parties on how communication is to proceed. A five-layer network is illustrated in Fig. 1-8. The entities comprising
the corresponding layers on different machines are called peers. The peers may be software
processes, hardware devices, or even human beings. In other words, it is the peers that communicate
by using the protocol to talk to each other.

Figure 1-8. Layers, protocols, and interfaces.

In reality, no data are directly transferred from layer n on one machine to layer n on another machine.
Instead, each layer passes data and control information to the layer immediately below it, until the
lowest layer is reached. Below layer 1 is the physical medium through which actual communication
occurs. In Fig. 1-9, virtual communication is shown by dotted lines and physical communication by
solid lines.

Between each pair of adjacent layers is an interface. The interface defines which primitive operations
and services the lower layer makes available to the upper one. A set of layers and protocols is called a
network architecture. The specification of an architecture must contain enough information to allow
an implementer to write the program or build the hardware for each layer so that it will correctly obey
the appropriate protocol. A list of the protocols used by a certain system, one protocol per layer, is
called a protocol stack.
So an interface is the connection between systems or applications, while a protocol defines the rules
for data exchange between these systems or applications.
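A minimal sketch (not any real protocol stack) of how layering looks in code: each layer adds its own header on the way down, and its peer on the receiving machine strips that header off again, so layer n appears to talk directly to layer n on the other side. The header strings and layer names below are purely illustrative.

```python
# Hypothetical layered stack; requires Python 3.9+ for bytes.removeprefix.
def send_down(payload: bytes) -> bytes:
    segment = b"TRANSPORT|" + payload      # transport layer adds its header
    packet = b"NETWORK|" + segment         # network layer adds its header
    frame = b"DATALINK|" + packet          # data link layer adds framing
    return frame                           # handed to the physical layer as raw bits

def pass_up(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"DATALINK|")      # each layer removes the header
    segment = packet.removeprefix(b"NETWORK|")     # added by its peer and hands the
    payload = segment.removeprefix(b"TRANSPORT|")  # rest to the layer above it
    return payload

assert pass_up(send_down(b"hello")) == b"hello"
```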

Virtual Communication versus Actual Communication

The main service provided is to transfer data packets from the network layer on the sending machine
to the network layer on the receiving machine. The data link layer of the sending machine accepts data from the network layer and sends it to the data link layer of the destination machine, which hands it to the network layer there.

In actual communication, the data link layer transmits bits via the physical layer and the physical medium. Virtually, however, this can be visualized as the two data link layers communicating with each other directly using a data link protocol.

1.3.2 Design Issues for the Layers

Some of the key design issues that occur in computer networks will come up in layer after layer.
Below, we will briefly mention the more important ones.
Reliability is the design issue of making a network that operates correctly even though it is made up
of a collection of components that are themselves unreliable. Think about the bits of a packet
traveling through the network. There is a chance that some of these bits will be received damaged
(inverted) due to fluke electrical noise, random wireless signals, hardware flaws, software bugs and
so on. How can we find and fix these errors?
One mechanism for finding errors in received information uses codes for error detection.
Information that is incorrectly received can then be retransmitted until it is received correctly. More
powerful codes allow for error correction, where the correct message is recovered from the possibly
incorrect bits that were originally received.
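A small sketch of error detection using a CRC (one common error-detecting code), here computed with Python's zlib.crc32: a single inverted bit is enough for the receiver to notice the damage and request retransmission.

```python
import zlib

payload = b"packet payload"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")   # sender appends a 4-byte CRC

def is_intact(received: bytes) -> bool:
    data, crc = received[:-4], int.from_bytes(received[-4:], "big")
    return zlib.crc32(data) == crc                          # recompute and compare

damaged = bytearray(frame)
damaged[2] ^= 0x01                                          # flip one bit (fluke electrical noise)

print(is_intact(frame))           # True  -> deliver the data to the layer above
print(is_intact(bytes(damaged)))  # False -> discard and request retransmission
```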
Routing: Another reliability issue is finding a working path through a network. Often there are
multiple paths between a source and destination, and in a large network, there may be some links or
routers that are broken.
A second design issue concerns the evolution of the network. Over time, networks grow larger and new designs emerge that need to be connected to the existing network. The key structuring mechanism used to support change is protocol layering, which divides the overall problem and hides implementation details. Networks also need mechanisms for addressing or naming so that senders and receivers can be identified.
Internetworking: mechanisms for disassembling, transmitting, and then reassembling messages.
Scalable: Designs that continue to work well when the network gets large
Statistical multiplexing: meaning sharing based on the statistics of demand
Flow control: Feedback from the receiver to the sender
Congestion: overloading of the network
Real time: delivering data (e.g., live audio and video) in a timely fashion while still providing service to applications
Confidentiality: defending against eavesdropping threats
Authenticity: prevent someone from impersonating someone else
Integrity: prevent surreptitious changes to messages


QoS: Quality of service is the name given to mechanisms that reconcile these competing demands.

1.3.3 Connection-Oriented Versus Connectionless Service


Connection-oriented service is modeled after the telephone system. To talk to someone, you pick up
the phone, dial the number, talk, and then hang up. Similarly, to use a connection-oriented network
service, the service user first establishes a connection, uses the connection, and then releases the
connection. The essential aspect of a connection is that it acts like a tube: the sender pushes objects
(bits) in at one end, and the receiver takes them out at the other end. In most cases the order is
preserved so that the bits arrive in the order they were sent.

In some cases when a connection is established, the sender, receiver, and subnet conduct a
negotiation about the parameters to be used, such as maximum message size, quality of service
required, and other issues. Typically, one side makes a proposal and the other side can accept it, reject
it, or make a counterproposal. A circuit is another name for a connection with associated resources,
such as a fixed bandwidth. This dates from the telephone network in which a circuit was a path over
copper wire that carried a phone conversation.

In contrast to connection-oriented service, connectionless service is modeled after the postal system.
Each message (letter) carries the full destination address, and each one is routed through the
intermediate nodes inside the system independent of all the subsequent messages. There are different
names for messages in different contexts; a packet is a message at the network layer. When the
intermediate nodes receive a message in full before sending it on to the next node, this
is called store-and-forward switching. The alternative, in which the onward transmission of a
message at a node starts before it is completely received by the node, is called cut-through
switching. Normally, when two messages are sent to the same destination, the first one sent will be
the first one to arrive. However, it is possible that the first one sent can be delayed so that the second
one arrives first.

Usually, a reliable service is implemented by having the receiver acknowledge the receipt of each
message so the sender is sure that it arrived. The acknowledgement process introduces overhead and
delays, which are often worth it but are sometimes undesirable. Reliable connection-oriented service
has two minor variations: message sequences and byte streams. Unreliable (meaning not
acknowledged) connectionless service is often called datagram service, in analogy with telegram
service, which also does not return an acknowledgement to the sender.

Figure 1-10. Six different types of service.

1.3.4 Service Primitives


A service is formally specified by a set of primitives (operations) available to user processes to
access the service. These primitives tell the service to perform some action or report on an action
taken by a peer entity. If the protocol stack is located in the operating system, the primitives are
normally system calls. The set of primitives available depends on the nature of the service being
provided. The primitives for connection-oriented service are different from those of connectionless
service.
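The connection-oriented primitives of Fig. 1-11 below map naturally onto operating-system calls. A minimal sketch using Python's socket API (the two halves run in separate processes; the address 127.0.0.1 and port 6000 are placeholders):

```python
import socket

# --- Server process ---
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 6000))
server.listen(1)                       # LISTEN: announce willingness to accept connections
conn, addr = server.accept()           # ACCEPT: block until a connection request arrives
request = conn.recv(1024)              # RECEIVE: block waiting for an incoming message
conn.sendall(b"reply")                 # SEND: send a message to the peer
conn.close()                           # DISCONNECT: terminate the connection

# --- Client process ---
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 6000))    # CONNECT: establish a connection with the waiting peer
client.sendall(b"request")             # SEND
reply = client.recv(1024)              # RECEIVE
client.close()                         # DISCONNECT
```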

Figure 1-11. Six service primitives that provide a simple connection-oriented service.

Figure 1-12. A simple client-server interaction using acknowledged datagrams.

1.3.5 The Relationship of Services to Protocols


A service is a set of primitives (operations) that a layer provides to the layer above it. The service
defines what operations the layer is prepared to perform on behalf of its users, but it says nothing at
all about how these operations are implemented.
A protocol, in contrast, is a set of rules governing the format and meaning of the packets, or messages
that are exchanged by the peer entities within a layer. Entities use protocols to implement their
service definitions.
A service is like an abstract data type or an object in an object-oriented language. It defines
operations that can be performed on an object but does not specify how these operations are
implemented. In contrast, a protocol relates to the implementation of the service and as such is not
visible to the user of the service.
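The abstract-data-type analogy can be made literal. A small sketch in Python (the class and protocol names are illustrative): the abstract class is the service, stating which operations exist, and each concrete class is a protocol, stating how they are carried out; either protocol can be substituted without the user of the service noticing.

```python
from abc import ABC, abstractmethod

class ReliableTransferService(ABC):
    """The service: operations offered to the layer above, with no commitment
    about how they are implemented."""
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

class StopAndWaitProtocol(ReliableTransferService):
    """One protocol implementing the service: send one frame, wait for its acknowledgement."""
    def send(self, data: bytes) -> None:
        ...  # frame the data, transmit, retransmit until acknowledged
    def receive(self) -> bytes:
        return b""  # deliver the next correctly received frame

class SlidingWindowProtocol(ReliableTransferService):
    """A different protocol offering the same service with better link utilization."""
    def send(self, data: bytes) -> None:
        ...  # keep several unacknowledged frames outstanding at once
    def receive(self) -> bytes:
        return b""
```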

Figure 1-13. The relationship between a service and a protocol.

1.4 Reference Models


We will discuss two important network architectures: the OSI reference model and the TCP/IP
reference model.
1.4.1 The OSI Reference Model
The OSI model (minus the physical medium) is shown in Fig. 1-14. This model is based on a
proposal developed by the International Standards Organization (ISO) as a first step toward
international standardization of the protocols used in the various layers (Day and Zimmermann,
1983). It was revised in 1995 (Day, 1995). The model is called the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems—that is, systems that are open for communication with other systems.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers can be
briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown together in
the same layer out of necessity and small enough that the architecture does not become unwieldy.

Figure 1-14. The OSI reference model.

Layer 1. The Physical Layer


The physical layer is concerned with transmitting raw bits over a communication channel.
The physical layer transports data using electrical, mechanical or procedural interfaces. This layer is
responsible for sending computer bits from one device to another along the network. It determines
how physical connections to the network are set up and how bits are represented into predictable
signals as they're transmitted either electrically, optically or via radio waves.
Layer 2. The data-link layer
The data-link layer handles moving data into and out of a physical link in a
network. This layer handles problems that occur as a result of bit transmission errors. It accomplishes
this task by having the sender break up the input data into data frames (typically a few hundred or a
few thousand bytes) and transmit the frames sequentially. This layer also permits the transmission of
data to Layer 3, the network layer, where it's addressed and routed.

The data-link layer can be further divided into two sublayers. The higher layer, which is called logical
link control (LLC), is responsible for multiplexing, flow control, acknowledgement and notifying
upper layers if transmit/receive (TX/RX) errors occur.

The media access control sublayer is responsible for tracking data frames using MAC addresses of
the sending and receiving hardware. It's also responsible for organizing each frame, marking the
starting and ending bits and organizing timing regarding when each frame can be sent along the
physical layer medium.

Layer 3. The network layer


The network layer controls the operation of the subnet. A key design issue is determining how
packets are routed from source to destination. Routes can be based on static tables that are ‘‘wired
into’’ the network and rarely changed, or more often they can be updated automatically to avoid
failed components. From a TCP/IP perspective, this is where IP addresses are applied for routing
purposes. Handling congestion is also a responsibility of the network layer, in conjunction with
higher layers that adapt the load they place on the network. More generally, the quality of service
provided (delay, transit time, jitter, etc.) is also a network layer issue.

Layer 4. The transport layer


The transport layer is a true end-to-end layer; it carries data all the way from the source to the
destination. The basic function of the transport layer is to accept data from above it, split it up into
smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly
at the other end. The transport layer also determines what type of service to provide to the session
layer; the type of service is determined when the connection is established.

Layer 5. The session layer


The session layer allows users on different machines to establish sessions between them. Sessions
offer various services, including dialog control (keeping track of whose turn it is to transmit), token
management (preventing two parties from attempting the same critical operation simultaneously),
and synchronization. Examples of session layer protocols include X.225 and Zone Information
Protocol (ZIP).
Layer 6. The presentation layer
The presentation layer is concerned with the syntax and semantics of the information transmitted. In
order to make it possible for computers with different internal data representations to communicate,
the data structures to be exchanged can be defined in an abstract way, along with a standard encoding
to be used ‘‘on the wire.’’ The presentation layer manages these abstract data structures and allows
higher-level data structures (e.g., banking records) to be defined and exchanged. This layer also
handles the encryption and decryption that the application layer requires.
Layer 7. The application layer
The application layer contains a variety of protocols that are commonly needed by users.
The application layer enables the user -- human or software -- to interact with the application or
network whenever the user elects to read messages, transfer files or perform other network-related
tasks. Other application protocols are used for file transfer, electronic mail, and network news.
Pros and cons of the OSI model
The OSI model has a number of advantages, including the following:
 It's considered a standard model in computer networking.
 The model supports connectionless, as well as connection-oriented, services. Users can take
advantage of connectionless services when they need faster data transmissions over the
internet and the connection-oriented model when they’re looking for reliability.
 It has the flexibility to adapt to many protocols.


 The model is more adaptable and secure than having all services bundled in one layer.

The disadvantages of the OSI model include the following:

 It doesn't define any particular protocol.

 The session layer, which is used for session management, and the presentation layer, which deals
with data representation and translation, aren't as useful as other layers in the OSI model.

 Some services are duplicated at various layers, such as the transport and data-link layers.

 Layers can't work in parallel; each layer must wait to receive data from the previous layer.

1.4.2 The TCP/IP Reference Model

The ARPANET was a research network sponsored by the DoD (U.S. Department of Defense). It
eventually connected hundreds of universities and government installations, using leased telephone
lines. When satellite and radio networks were added later, the existing protocols had trouble
interworking with them, so a new reference architecture was needed. Thus, from nearly the
beginning, the ability to connect multiple networks in a seamless way was one of the major design
goals.
This architecture later became known as the TCP/IP Reference Model, after its two primary
protocols. It was first described by Cerf and Kahn (1974), and later refined and defined as a standard
in the Internet community (Braden, 1989). The design philosophy behind the model is discussed by
Clark (1988). The TCP/IP model helps determine how a specific computer should be connected to the internet and how data should be transmitted between computers. It helps create a virtual network when multiple computer networks are connected together. The purpose of the TCP/IP model is to allow communication over large distances.
TCP/IP stands for Transmission Control Protocol/Internet Protocol. The TCP/IP stack is specifically designed to offer a highly reliable, end-to-end byte stream over an unreliable internetwork.

Figure 1-15. TCP / IP reference model.

Application layer: On top of the transport layer is the application layer. It contains all the higher-
level protocols. This layer interacts with software applications to implement a communicating
component. Examples of application layer protocols include file transfer, email, and remote login.

 The application layer helps identify communication partners, determine resource availability, and synchronize communication.
 It allows users to log on to a remote host.
 This layer provides various e-mail services.
 This layer offers distributed database sources and access for global information about various objects and services.
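For instance, fetching a web page uses HTTP, an application layer protocol carried end to end by TCP. A minimal sketch with Python's standard library (example.org is just a placeholder host):

```python
import http.client

conn = http.client.HTTPSConnection("example.org")   # transport connection to the server
conn.request("GET", "/")                             # application-layer (HTTP) request message
response = conn.getresponse()                        # application-layer response from the peer
print(response.status, response.reason)
conn.close()
```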

Transport Layer: Transport layer builds on the network layer in order to provide data transport from
a process on a source machine to a process on a destination machine. Two end-to-end transport
protocols have been defined here. The first one, TCP (Transmission Control Protocol), is a reliable
connection-oriented protocol that allows a byte stream originating on one machine to be delivered
without error on any other machine in the internet. It segments the incoming byte stream into discrete
messages and passes each one on to the internet layer. At the destination, the receiving TCP process
reassembles the received messages into the output stream. TCP also handles flow control to make
sure a fast sender cannot swamp a slow receiver with more messages than it can handle.

The second protocol in this layer, UDP (User Datagram Protocol), is an unreliable, connectionless
protocol for applications that do not want TCP’s sequencing or flow control and wish to provide their
own.

Internet Layer: The internet layer is the second layer of the TCP/IP model. It is also known as the network layer. The main job of this layer is to allow hosts to inject packets into any network and have them travel to the destination independently, irrespective of the route they take.

The internet layer defines an official packet format and protocol called IP (Internet Protocol), plus a
companion protocol called ICMP (Internet Control Message Protocol) that helps it function. The
job of the internet layer is to deliver IP packets where they are supposed to go. Layer-management
protocols that belong to the network layer are:

1. Routing protocols
2. Multicast group management
3. Network-layer address assignment.
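Returning to the official packet format mentioned above, the sketch below unpacks the fixed 20-byte IPv4 header (without options) from a raw packet buffer; the field layout is the standard one defined for IP, and the function name is just illustrative.

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # The first 20 bytes of every IPv4 packet, in network (big-endian) byte order.
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,
        "header_length": (version_ihl & 0x0F) * 4,  # IHL is counted in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,                          # 1 = ICMP, 6 = TCP, 17 = UDP
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }
```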

Network Interface Layer: This layer is also called the network access layer. It defines the details of how data should be sent using the network, including how bits should be signaled by the hardware devices that directly interface with a network medium, such as coaxial, optical fiber, or twisted-pair cables.

This layer corresponds roughly to the combination of the data link and physical layers of the OSI reference model. It defines how the data should be sent physically through the network and is responsible for the transmission of data between two devices on the same network.

Differences between the OSI and TCP/IP models include the following:
1. The OSI model has seven layers, whereas the TCP/IP model has four.
2. The OSI model was devised before its protocols were designed, so it is not biased toward any particular protocol suite; with TCP/IP, the protocols came first and the model largely describes them.
3. The OSI model makes the distinctions among services, interfaces, and protocols explicit; the TCP/IP model did not originally do so.
4. The OSI model has separate session and presentation layers; in the TCP/IP model their functions are folded into the application layer.
5. The OSI network layer supports both connectionless and connection-oriented communication, but its transport layer is connection-oriented only; the TCP/IP internet layer is connectionless, while its transport layer offers both (TCP and UDP).

1.5 Guided Transmission Media


The purpose of the physical layer is to transport bits from one machine to another. Various physical
media can be used for the actual transmission. Each one has its own niche in terms of bandwidth,
delay, cost, and ease of installation and maintenance. Media are roughly grouped into guided media,
such as copper wire and fiber optics, and unguided media, such as terrestrial wireless, satellite, and
lasers through the air.
1.5.1 Magnetic Media
One of the most common ways to transport data from one computer to another is to write them onto
magnetic tape or removable media (e.g., recordable DVDs), physically transport the tape or disks to
the destination machine, and read them back in again.

1.5.2 Twisted Pairs


A twisted pair consists of two insulated copper wires, typically about 1 mm thick. The wires are
twisted together in a helical form, just like a DNA molecule. Twisting is done because two parallel
wires constitute a fine antenna. When the wires are twisted, the waves from different twists cancel
out, so the wire radiates less effectively. A signal is usually carried as the difference in voltage
between the two wires in the pair. This provides better immunity to external noise because the noise
tends to affect both wires the same, leaving the differential unchanged. Twisted pairs can be used for
transmitting either analog or digital information.
The bandwidth depends on the thickness of the wire and the distance traveled, but several
megabits/sec can be achieved for a few kilometers in many cases.
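A toy sketch of why differential signaling helps: the same noise couples onto both wires of the pair, so it disappears when the receiver takes the difference (the voltage and noise values below are arbitrary).

```python
import random

signal = 1.0                             # the voltage representing the transmitted bit
for _ in range(5):
    noise = random.uniform(-0.5, 0.5)    # external noise couples onto BOTH wires equally
    wire_a = +signal + noise
    wire_b = -signal + noise
    print(wire_a - wire_b)               # always 2.0: the common-mode noise cancels out
```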

Figure 1-16. Category 5 UTP cable with four twisted pairs.

Category 5 cables are similar to Category 3 cables and use the same connector, but have more twists per meter. More twists result in less crosstalk and a better-quality signal over longer distances, making the cables more suitable for high-speed computer communication, especially 100-Mbps and 1-Gbps Ethernet LANs.
A Category 5 (‘‘Cat 5’’) twisted pair consists of two insulated wires gently twisted together.
Four such pairs are typically grouped in a plastic sheath to protect the wires and keep them together.

Category 6 or even Category 7 cables have more stringent specifications to handle signals with greater bandwidths. Some cables in Category 6 and above are rated for signals of 500 MHz and can support the 10-Gbps links that will soon be deployed.

Through Category 6, these wiring types are referred to as UTP (Unshielded Twisted Pair) as they
consist simply of wires and insulators. In contrast to these, Category 7 cables have shielding on the
individual twisted pairs, as well as around the entire cable (but inside the plastic protective sheath).
Shielding reduces the susceptibility to external interference and crosstalk with other nearby cables to
meet demanding performance specifications.

1.5.3 Coaxial Cable


Coaxial cable has better shielding and greater bandwidth than unshielded twisted pairs, so it can span longer
distances at higher speeds. Two kinds of coaxial cable are widely used. One kind, 50-ohm cable, is
commonly used when it is intended for digital transmission from the start. The other kind, 75-ohm
cable, is commonly used for analog transmission and cable television. A coaxial cable consists of a
stiff copper wire as the core, surrounded by an insulating material. The insulator is encased by a
cylindrical conductor, often as a closely woven braided mesh. The outer conductor is covered in a
protective plastic sheath. A cutaway view of a coaxial cable is shown in Fig. 1-17.

Figure 1-17. A coaxial cable.

1.5.4 Power Lines


Power lines deliver electrical power to houses, and electrical wiring within houses distributes the
power to electrical outlets. Electrical signals are sent at 50–60 Hz and the wiring attenuates the much
higher frequency (MHz) signals needed for high-rate data communication. The electrical properties of
the wiring vary from one house to the next and change as appliances are turned on and off, which
causes data signals to bounce around the wiring. Transient currents when appliances switch on and
off create electrical noise over a wide range of frequencies.

Figure 1-18. A network that uses household electrical wiring.

1.5.5 Fiber Optics


Fiber optics are used for long-haul transmission in network backbones, high-speed LANs, and high-
speed Internet access such as FttH (Fiber to the Home). An optical transmission system has three
key components: the light source, the transmission medium, and the detector. Conventionally, a pulse
of light indicates a 1 bit and the absence of light indicates a 0 bit. The transmission medium is an
ultra-thin fiber of glass. The detector generates an electrical pulse when light falls on it. By attaching
a light source to one end of an optical fiber and a detector to the other, we have a unidirectional data
transmission system that accepts an electrical signal, converts and transmits it by light pulses, and
then reconverts the output to an electrical signal at the receiving end.

Fiber optics is the technology used by internet services such as Verizon Fios home internet to
transmit information as pulses of light through strands of fiber made of glass or plastic over long
distances. A fiber-optic cable contains anywhere from a few to hundreds of optical fibers within a
plastic casing. Also known as optic cables or optical fiber cables, they transfer data signals in the form of light and can carry them hundreds of miles with far less signal loss than traditional electrical cables. And because fiber-optic cables are non-metallic, they are not affected by electromagnetic interference (e.g., lightning) that can degrade transmission. Fiber cables are also safer as they
do not carry a current and therefore cannot generate a spark.

Comparison of Fiber Optics and Copper Wire

Fiber can handle much higher bandwidths than copper, and its low attenuation means repeaters are needed only about every 50 km on long lines, versus roughly every 5 km for copper. Fiber is not affected by power surges, electromagnetic interference, or corrosive chemicals, it is thin and lightweight, and because it does not leak light it is difficult to tap. On the other hand, fiber is a less familiar technology that requires special skills to install and splice, it can be damaged by being bent too sharply, and fiber interfaces cost more than electrical interfaces.

1.6 Wireless Transmission

It is noteworthy that modern wireless digital communication began in the Hawaiian Islands, where
large chunks of Pacific Ocean separated the users from their computer center and the telephone
system was inadequate. Our age has given rise to information junkies: people who need to be online
all the time. For these mobile users, twisted pair, coax, and fiber optics are of no use. They need to
get their ‘‘hits’’ of data for their laptop, notebook, shirt pocket, palmtop, or wristwatch computers
without being tethered to the terrestrial communication infrastructure. For these users, wireless
communication is the answer.

1.6.1 The Electromagnetic Spectrum


When electrons move, they create electromagnetic waves that can propagate through space (even in a
vacuum). These waves were predicted by the British physicist James Clerk Maxwell in 1865 and first
observed by the German physicist Heinrich Hertz in 1887. The number of oscillations per second of a
wave is called its frequency, f, and is measured in Hz (in honor of Heinrich Hertz). The distance
between two consecutive maxima (or minima) is called the wavelength, which is universally
designated by the Greek letter λ (lambda). When an antenna of the appropriate size is attached to an
electrical circuit, the electromagnetic waves can be broadcast efficiently and received by a receiver
some distance away. All wireless communication is based on this principle.
In a vacuum, all electromagnetic waves travel at the same speed, no matter what their frequency. This
speed, usually called the speed of light, c, is approximately 3 × 10^8 m/sec, or about 1 foot (30 cm)
per nanosecond. (A case could be made for redefining the foot as the distance light travels in a
vacuum in 1 nsec rather than basing it on the shoe size of some long-dead king.) In copper or fiber the
speed slows to about 2/3 of this value and becomes slightly frequency dependent. The speed of light
is the ultimate speed limit. No object or signal can ever move faster than it.
The fundamental relation between f, λ, and c (in a vacuum) is
λf = c    (2.4)
Since c is a constant, if we know f, we can find λ, and vice versa. As a rule of thumb, when λ is in meters and f is in MHz, λf ≈ 300. For example, 100-MHz waves are about 3 meters long, 1000-MHz waves are 0.3 meters long, and 0.1-meter waves have a frequency of 3000 MHz. The
electromagnetic spectrum is shown in Fig. 1-19. The radio, microwave, infrared, and visible light
portions of the spectrum can all be used for transmitting information by modulating the amplitude,
frequency, or phase of the waves.
Ultraviolet light, X-rays, and gamma rays would be even better, due to their higher frequencies, but
they are hard to produce and modulate, do not propagate well through buildings, and are dangerous to
living things. The bands listed at the bottom of Fig. 1-19 are the official ITU (International
Telecommunication Union) names and are based on the wavelengths, so the LF band goes from 1 km
to 10 km (approximately 30 kHz to 300 kHz). The terms LF, MF, and HF refer to Low, Medium, and
High Frequency, respectively. Clearly, when the names were assigned nobody expected to go above
10 MHz, so the higher bands were later named the Very, Ultra, Super, Extremely, and Tremendously
High Frequency bands. Beyond that there are no names, but Incredibly, Astonishingly, and
Prodigiously High Frequency (IHF, AHF, and PHF) would sound nice.
We know from Shannon’s theorem that the amount of information that a signal such as an electromagnetic wave can carry depends on the received power and is proportional to its bandwidth. From Fig. 1-19 it should now be obvious why networking people like fiber optics so much. Many GHz of bandwidth are available to tap for data transmission in the microwave band, and even more in fiber because it is further to the right on our logarithmic scale. As an example, consider the 1.30-micron fiber band, which has a width of 0.17 microns. If we use Eq. (2.4) to find the start and end frequencies from the start and end wavelengths, we find the frequency range to be about 30,000 GHz. With a reasonable signal-to-noise ratio of 10 dB, this is 300 Tbps.
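As a quick back-of-the-envelope check of the 30,000-GHz figure: differentiating λf = c gives Δf ≈ c·Δλ/λ², so
Δf ≈ (3 × 10^8 m/s × 0.17 × 10^-6 m) / (1.30 × 10^-6 m)² ≈ 3 × 10^13 Hz = 30,000 GHz.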

Figure 1-19. The electromagnetic spectrum and its uses for communication.

Most transmissions use a relatively narrow frequency band (i.e., Δf / f << 1). They concentrate their
signals in this narrow band to use the spectrum efficiently and obtain reasonable data rates by
transmitting with enough power. However, in some cases, a wider band is used, with three variations.
In frequency hopping spread spectrum, the transmitter hops from frequency to frequency hundreds
of times per second. It is popular for military communication because it makes transmissions hard to
detect and next to impossible to jam. It also offers good resistance to multipath fading and
narrowband interference because the receiver will not be stuck on an impaired frequency for long
enough to shut down communication.

A second form of spread spectrum, direct sequence spread spectrum, uses a code sequence to
spread the data signal over a wider frequency band. It is widely used commercially as a spectrally
efficient way to let multiple signals share the same frequency band. These signals can be given
different codes, a method called CDMA (Code Division Multiple Access) that we will return to later
in this chapter.
This method is shown in contrast with frequency hopping in Fig. 1-20. It forms the basis of 3G
mobile phone networks and is also used in GPS (Global Positioning System). Even without different
codes, direct sequence spread spectrum, like frequency hopping spread spectrum, can tolerate
narrowband interference and multipath fading because only a fraction of the desired signal is lost. It is
used in this role in older 802.11b wireless LANs.
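A toy sketch of the direct sequence idea: each data bit (±1) is multiplied by a chip code, spreading it over a band as many times wider as the code is long, and the receiver recovers the bit by correlating with the same code. (The code below is illustrative, not an actual CDMA code.)

```python
CODE = [+1, -1, +1, +1, -1, +1, -1, -1]   # spreading code (chips); other users get different codes

def spread(bits):
    # Each ±1 data bit is replaced by the whole chip sequence multiplied by that bit.
    return [b * c for b in bits for c in CODE]

def despread(chips):
    # Correlate each code-length block with the code; the sign of the sum recovers the bit.
    n = len(CODE)
    return [+1 if sum(x * c for x, c in zip(chips[i:i + n], CODE)) > 0 else -1
            for i in range(0, len(chips), n)]

assert despread(spread([+1, -1, +1])) == [+1, -1, +1]
```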

Figure 1-20. Spread spectrum and ultra-wideband (UWB) communication.

A third method of communication with a wider band is UWB (Ultra-Wide Band) communication.
UWB sends a series of rapid pulses, varying their positions to communicate information. The rapid
transitions lead to a signal that is spread thinly over a very wide frequency band. UWB is defined as
signals that have a bandwidth of at least 500 MHz or at least 20% of the center frequency of their
frequency band. UWB is also shown in Fig. 1-20. With this much bandwidth, UWB has the potential
to communicate at high rates. Because it is spread across a wide band of frequencies, it can tolerate a
substantial amount of relatively strong interference from other narrowband signals. Just as
importantly, since UWB has very little energy at any given frequency when used for short-range
transmission, it does not cause harmful interference to those other narrowband radio signals. It is said
to underlay the other signals.

1.6.2 Radio Transmission


Radio frequency (RF) waves are easy to generate, can travel long distances, and can penetrate
buildings easily, so they are widely used for communication, both indoors and outdoors. Radio waves
also are omnidirectional, meaning that they travel in all directions from the source, so the transmitter
and receiver do not have to be carefully aligned physically.
The properties of radio waves are frequency dependent. At low frequencies, radio waves pass through
obstacles well, but the power falls off sharply with distance from the source—at least as fast as 1/r²
in air—as the signal energy is spread more thinly over a larger surface. This attenuation is called path
loss. At high frequencies, radio waves tend to travel in straight lines and bounce off obstacles. Path
loss still reduces power, though the received signal can depend strongly on reflections as well. High-
frequency radio waves are also absorbed by rain and other obstacles to a larger extent than are low-
frequency ones. At all frequencies, radio waves are subject to interference from motors and other
electrical equipment.
It is interesting to compare the attenuation of radio waves to that of signals in guided media. With
fiber, coax and twisted pair, the signal drops by the same fraction per unit distance, for example 20
dB per 100m for twisted pair. With radio, the signal drops by the same fraction as the distance
doubles, for example 6 dB per doubling in free space.
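A rough worked comparison using the numbers above: over a 1-km run of twisted pair at 20 dB per 100 m, the signal drops by 200 dB; over radio in free space at about 6 dB per doubling of distance, going from 100 m to 1 km (a factor of 10, about 3.3 doublings) costs only about 3.3 × 6 ≈ 20 dB more. This is why radio signals can travel long distances, and also why interference between users is a problem.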

Figure 1-22. (a) In the VLF, LF, and MF bands, radio waves follow the curvature of the earth. (b) In the HF band, they
bounce off the ionosphere.

In the HF and VHF bands, the ground waves tend to be absorbed by the earth. However, the waves
that reach the ionosphere, a layer of charged particles circling the earth at a height of 100 to 500 km,
are refracted by it and sent back to earth, as shown in Fig. 1-22(b). Under certain atmospheric
conditions, the signals can bounce several times. Amateur radio operators (hams) use these bands to
talk long distance. The military also communicate in the HF and VHF bands.

1.6.3 Microwave Transmission


Above 100 MHz, the waves travel in nearly straight lines and can therefore be narrowly focused.
Concentrating all the energy into a small beam by means of a parabolic antenna (like the familiar
satellite TV dish) gives a much higher signal to-noise ratio, but the transmitting and receiving
antennas must be accurately aligned with each other.
Microwaves travel in a straight line, so if the towers are too far apart, the earth will get in the way
(think about a Seattle-to-Amsterdam link). Thus, repeaters are needed periodically. The higher the
towers are, the farther apart they can be. The distance between repeaters goes up very roughly with
the square root of the tower height. For 100-meter-high towers, repeaters can be 80 km apart.
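A rough check of that figure, assuming the standard line-of-sight horizon approximation d ≈ √(2kRh) with earth radius R ≈ 6371 km and an effective-radius factor k ≈ 4/3 for atmospheric refraction: for h = 100 m, d ≈ √(2 × 4/3 × 6371 km × 0.1 km) ≈ 41 km per tower, so two 100-meter towers can indeed be roughly 80 km apart.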
Unlike radio waves at lower frequencies, microwaves do not pass through buildings well. In addition,
even though the beam may be well focused at the transmitter, there is still some divergence in space.
Some waves may be refracted off low-lying atmospheric layers and may take slightly longer to arrive
than the direct waves. The delayed waves may arrive out of phase with the direct wave and thus
cancel the signal. This effect is called multipath fading and is often a serious problem. It is weather
and frequency dependent. Some operators keep 10% of their channels idle as spares to switch on
when multipath fading temporarily wipes out some frequency band.

The demand for more and more spectrum drives operators to yet higher frequencies. Bands up to 10
GHz are now in routine use, but at about 4 GHz a new problem sets in: absorption by water. These
waves are only a few centimeters long and are absorbed by rain. This effect would be fine if one were
planning to build a huge outdoor microwave oven for roasting passing birds, but for communication it
is a severe problem. As with multipath fading, the only solution is to shut off links that are being
rained on and route around them.
In summary, microwave communication is so widely used for long-distance telephone
communication, mobile phones, television distribution, and other purposes that a severe shortage of
spectrum has developed. It has several key advantages over fiber. The main one is that no right of
way is needed to lay down cables. By buying a small plot of ground every 50 km and putting a
microwave tower on it, one can bypass the telephone system entirely. This is how MCI managed to
get started as a new long-distance telephone company so quickly. (Sprint, another early competitor to
the deregulated AT&T, went a completely different route: it was formed by the Southern Pacific
Railroad, which already owned a large amount of right of way and just buried fiber next to the
tracks.)
Microwave is also relatively inexpensive. Putting up two simple towers (which can be just big poles
with four guy wires) and putting antennas on each one may be cheaper than burying 50 km of fiber
through a congested urban area or up over a mountain, and it may also be cheaper than leasing the
telephone company’s fiber, especially if the telephone company has not yet even fully paid for the
copper it ripped out when it put in the fiber.

The Politics of the Electromagnetic Spectrum


To prevent total chaos, there are national and international agreements about who gets to use which
frequencies. Since everyone wants a higher data rate, everyone wants more spectrum. National
governments allocate spectrum for AM and FM radio, television, and mobile phones, as well as for
telephone companies, police, maritime, navigation, military, government, and many other competing
users. Worldwide, an agency of ITU-R (WRC) tries to coordinate this allocation so devices that work
in multiple countries can be manufactured. However, countries are not bound by ITU-R’s
recommendations, and the FCC (Federal Communications Commission), which does the allocation for
the United States, has occasionally rejected ITU-R’s recommendations (usually because they required
some politically powerful group to give up some piece of the spectrum).
Even when a piece of spectrum has been allocated to some use, such as mobile phones, there is the
additional issue of which carrier is allowed to use which frequencies. Three algorithms were widely
used in the past. The oldest algorithm, often called the beauty contest, requires each carrier to
explain why its proposal serves the public interest best. Government officials then decide which of
the nice stories they enjoy most. Having some government official award property worth billions of
dollars to his favorite company often leads to bribery, corruption, nepotism, and worse. Furthermore,
even a scrupulously honest government official who thought that a foreign company could do a better
job than any of the national companies would have a lot of explaining to do.
This observation led to algorithm 2, holding a lottery among the interested companies. The problem
with that idea is that companies with no interest in using the spectrum can enter the lottery. If, say, a
fast food restaurant or shoe store chain wins, it can resell the spectrum to a carrier at a huge profit and
with no risk.
Bestowing huge windfalls on alert but otherwise random companies has been severely criticized by
many, which led to algorithm 3: auction off the bandwidth to the highest bidder. When the British
government auctioned off the frequencies needed for third-generation mobile systems in 2000, it
expected to get about $4 billion. It actually received about $40 billion because the carriers got into a
feeding frenzy, scared to death of missing the mobile boat. This event switched on nearby
governments’ greedy bits and inspired them to hold their own auctions. It worked, but it also left
some of the carriers with so much debt that they are close to bankruptcy. Even in the best cases, it
will take many years to recoup the licensing fee.
A completely different approach to allocating frequencies is to not allocate them at all. Instead, let
everyone transmit at will, but regulate the power used so that stations have such a short range that
they do not interfere with each other. Accordingly, most governments have set aside some frequency
bands, called the ISM (Industrial, Scientific, Medical) bands for unlicensed usage. Garage door
openers, cordless phones, radio-controlled toys, wireless mice, and numerous other wireless
household devices use the ISM bands. To minimize interference between these uncoordinated
devices, the FCC mandates that all devices in the ISM bands limit their transmit power (e.g., to 1
watt) and use other techniques to spread their signals over a range of frequencies. Devices may also
need to take care to avoid interference with radar installations. The location of these bands varies
somewhat from country to country. In the United States, for example, the bands that networking
devices use in practice without requiring an FCC license are shown in Fig. 1-23. The 900-MHz band
was used for early versions of 802.11, but it is crowded. The 2.4-GHz band is available in most
countries and widely used for 802.11b/g and Bluetooth, though it is subject to interference from
microwave ovens and radar installations. The 5-GHz part of the spectrum includes U-NII
(Unlicensed National Information Infrastructure) bands. The 5-GHz bands are relatively
undeveloped but, since they have the most bandwidth and are used by 802.11a, they are quickly
gaining in popularity.

Figure 1-23. ISM and U-NII bands used in the United States by wireless devices.
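One family of techniques for spreading a signal over a range of frequencies, as the FCC requires of ISM-band devices, is frequency hopping: both ends of a link jump between channels in a shared pseudorandom order, so interference on any one channel is only brief. The sketch below is illustrative only; the 79 one-MHz channels starting at 2402 MHz mirror classic Bluetooth's channel plan, but the hop sequence shown is not any standard's actual algorithm.

```python
import random

# Illustrative frequency hopping in the 2.4-GHz ISM band: 79 channels of
# 1 MHz starting at 2402 MHz (classic Bluetooth's plan). The pseudorandom
# hop sequence below is a toy example, not a real standard's algorithm.
BASE_MHZ = 2402
NUM_CHANNELS = 79

def hop_sequence(seed, hops):
    rng = random.Random(seed)   # both ends share the seed, so they hop in step
    return [BASE_MHZ + rng.randrange(NUM_CHANNELS) for _ in range(hops)]

# Paired devices derive the same sequence and meet on the same channel each
# time slot, while uncoordinated neighbors mostly land on other channels.
print(hop_sequence(seed=0x5EED, hops=10))
```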
The unlicensed bands have been a roaring success over the past decade. The ability to use the
spectrum freely has unleashed a huge amount of innovation in wireless LANs and PANs, evidenced
by the widespread deployment of technologies such as 802.11 and Bluetooth. To continue this
innovation, more spectrum is needed. One exciting development in the U.S. is the FCC decision in
2009 to allow unlicensed use of white spaces around 700 MHz. White spaces are frequency bands
that have been allocated but are not being used locally. The transition from analog to all-digital
television broadcasts in the U.S. in 2010 freed up white spaces around 700 MHz. The only difficulty
is that, to use the white spaces, unlicensed devices must be able to detect any nearby licensed
transmitters, including wireless microphones, that have first rights to use the frequency band.
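Detecting licensed transmitters is a form of spectrum sensing. The simplest version is energy detection: measure the power in a candidate channel and stay silent if it exceeds a threshold. The threshold and sample values in the sketch below are assumptions for illustration; deployed white-space devices also typically consult a geolocation database of licensed transmitters.

```python
import math

# Toy energy detector for a white-space device: average the measured power in
# a candidate channel and transmit only if it stays below a threshold.
# The -85 dBm threshold and the sample values are illustrative assumptions.
THRESHOLD_DBM = -85.0

def channel_is_free(samples_mw):
    """Return True if the average measured power (in mW) is below the threshold."""
    avg_mw = sum(samples_mw) / len(samples_mw)
    return 10 * math.log10(avg_mw) < THRESHOLD_DBM

quiet = [1e-12, 2e-12, 1.5e-12]   # about -118 dBm: nothing heard, channel usable
busy = [1e-7, 8e-8, 1.2e-7]       # about -70 dBm: e.g., a wireless microphone nearby
print(channel_is_free(quiet), channel_is_free(busy))   # True False
```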
Another flurry of activity is happening around the 60-GHz band. The FCC opened 57 GHz to 64 GHz
for unlicensed operation in 2001. This range is an enormous portion of spectrum, more than all the
other ISM bands combined, so it can support the kind of high-speed networks that would be needed
to stream high-definition TV through the air across your living room. At 60 GHz, radio waves are
absorbed by oxygen. This means that signals do not propagate far, making them well suited to short-
range networks. The high frequencies (60 GHz is in the Extremely High Frequency or "millimeter"
band, just below infrared radiation) posed an initial challenge for equipment makers, but products are
now on the market.
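The short range at 60 GHz can be made concrete with the standard Friis free-space formula: for fixed antenna gains, path loss grows with the square of the frequency, and oxygen absorption adds very roughly 10-15 dB per kilometer on top of that around 60 GHz. The distance and the absorption figure in the sketch below are illustrative assumptions.

```python
import math

C = 3e8  # speed of light in m/s

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss between isotropic antennas, in dB."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

d = 10.0  # metres, roughly across a living room
for f in (2.4e9, 60e9):
    print(f"{f / 1e9:4.1f} GHz over {d:.0f} m: {free_space_path_loss_db(d, f):.1f} dB")
# Output: about 60 dB at 2.4 GHz versus about 88 dB at 60 GHz, i.e. ~28 dB more
# loss before oxygen absorption (roughly 10-15 dB/km at 60 GHz) is even counted,
# which together keep 60-GHz links confined to short ranges.
```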


1.6.4 Infrared Transmission


Unguided infrared waves are widely used for short-range communication. The remote controls used
for televisions, VCRs, and stereos all use infrared communication. They are relatively directional,
cheap, and easy to build but have a major drawback: they do not pass through solid objects. (Try
standing between your remote control and your television and see if it still works.) In general, as we
go from long-wave radio toward visible light, the waves behave more and more like light and less and
less like radio. On the other hand, the fact that infrared waves do not pass through solid walls well is
also a plus. It means that an infrared system in one room of a building will not interfere with a similar
system in adjacent rooms or buildings: you cannot control your neighbor’s television with your
remote control. Furthermore, security of infrared systems against eavesdropping is better than that of
radio systems precisely for this reason. Therefore, no government license is needed to operate an
infrared system, in contrast to radio systems, which must be licensed outside the ISM bands. Infrared
communication has a limited use on the desktop, for example, to connect notebook computers and
printers with the IrDA (Infrared Data Association) standard, but it is not a major player in the
communication game.

1.6.5 Light Transmission


Unguided optical signaling or free-space optics has been in use for centuries. Paul Revere used
binary optical signaling from the Old North Church just prior to his famous ride. A more modern
application is to connect the LANs in two buildings via lasers mounted on their rooftops. Optical
signaling using lasers is inherently unidirectional, so each end needs its own laser and its own
photodetector. This scheme offers very high bandwidth at very low cost and is relatively secure
because it is difficult to tap a narrow laser beam. It is also relatively easy to install and, unlike
microwave transmission, does not require an FCC license. The laser’s strength, a very narrow beam,
is also its weakness here. Aiming a laser beam 1 mm wide at a target the size of a pin head 500
meters away requires the marksmanship of a latter-day Annie Oakley. Usually, lenses are put into the
system to defocus the beam slightly. To add to the difficulty, wind and temperature changes can
distort the beam and laser beams also cannot penetrate rain or thick fog, although they normally work
well on sunny days. However, many of these factors are not an issue when the use is to connect two
spacecraft.
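The aiming problem can be quantified with simple geometry: the received spot diameter grows roughly as the exit diameter plus range times the beam's divergence angle, which is why slightly defocusing the beam makes the link far easier to keep aligned. The exit diameter, range, and divergence values below are illustrative assumptions.

```python
# Rough geometry of a rooftop laser link: the spot diameter at the receiver
# grows linearly with range and the beam's (full) divergence angle.
# The 1 mm exit beam, 500 m range, and divergence values are assumptions.

def spot_diameter_m(exit_diameter_m, divergence_rad, range_m):
    return exit_diameter_m + divergence_rad * range_m

RANGE_M = 500.0
for divergence in (2e-6, 1e-3):   # near-perfect beam vs. slightly defocused
    d = spot_diameter_m(0.001, divergence, RANGE_M)
    print(f"divergence {divergence:.0e} rad -> spot ~{d * 100:.1f} cm at {RANGE_M:.0f} m")
# A 2-microradian beam arrives only ~2 mm wide (pinhead-level aim required),
# while defocusing to 1 mrad gives a ~50 cm spot that is much easier to hit.
```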


One of the authors (AST) once attended a conference at a modern hotel in Europe at which the
conference organizers thoughtfully provided a room full of terminals to allow the attendees to read
their email during boring presentations. Since the local PTT was unwilling to install a large number
of telephone lines for just 3 days, the organizers put a laser on the roof and aimed it at their
university’s computer science building a few kilometers away. They tested it the night before the
conference and it worked perfectly. At 9 A.M. on a bright, sunny day, the link failed completely and
stayed down all day. The pattern repeated itself the next two days. It was not until after the
conference that the organizers discovered the problem: heat from the sun during the daytime caused
convection currents to rise up from the roof of the building, as shown in Fig. 1-24. This turbulent air
diverted the beam and made it dance around the detector, much like a shimmering road on a hot day.
The lesson here is that to work well in difficult conditions as well as good conditions, unguided
optical links need to be engineered with a sufficient margin of error.

Figure 1-24. Convection currents can interfere with laser communication systems. A bidirectional system with two lasers is pictured here.
Unguided optical communication may seem
like an exotic networking technology today, but it might soon become much more prevalent. We are
surrounded by cameras (that sense light) and displays (that emit light using LEDs and other
technology). Data communication can be layered on top of these displays by encoding information in

Dr. Rajesh L, Associate Professor, ISE, DSATM Page 35


Computer Networks -21CS52 Module1

the pattern in which LEDs turn on and off, at rates below the threshold of human perception.
Communicating with visible light in this way is inherently safe and creates a low-speed network in
the immediate vicinity of the display. This could enable all sorts of fanciful ubiquitous computing
scenarios.
The flashing lights on emergency vehicles might alert nearby traffic lights and vehicles to help clear a
path. Informational signs might broadcast maps. Even festive lights might broadcast songs that are
synchronized with their display.
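A minimal way to encode data in an LED's on/off pattern is on-off keying: each bit maps to the light being on or off for one short symbol period, switched far faster than the eye can follow. The sketch below encodes and decodes a byte string this way; the 10 kHz symbol rate named in the comment is an assumed, illustrative figure rather than any standard's value.

```python
# Toy on-off keying (OOK) for visible light communication: each bit drives the
# LED on (1) or off (0) for one symbol period. A symbol rate of, say, 10 kHz is
# an illustrative assumption, fast enough that the display looks steadily lit.

def encode(data: bytes):
    """Turn bytes into a list of LED on/off symbols, most significant bit first."""
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def decode(symbols):
    """Reassemble bytes from the received on/off symbols."""
    out = bytearray()
    for i in range(0, len(symbols), 8):
        byte = 0
        for bit in symbols[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

symbols = encode(b"Hi")
print(symbols)          # the on/off pattern the LED would flash
print(decode(symbols))  # b'Hi' recovered at the camera or photodiode
```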
