Module 1 - Introduction To Networks
1.1 Introduction
Each of the past three centuries was dominated by a single new technology. The 18th century
was the era of the great mechanical systems accompanying the Industrial Revolution. The 19th
century was the age of the steam engine. During the 20th century, the key technology was
information gathering, processing, and distribution. Among other developments, we saw the
installation of worldwide telephone networks, the invention of radio and television, the birth and
unprecedented growth of the computer industry, the launching of communication satellites, and, of
course, the Internet.
As a result of rapid technological progress, these areas are rapidly converging in the 21st
century and the differences between collecting, transporting, storing, and processing information are
quickly disappearing. Two computers are said to be interconnected if they are able to exchange
information. The connection need not be via a copper wire; fiber optics, microwaves, infrared, and
communication satellites can also be used. Networks come in many sizes, shapes and forms.
A computer network is a set of interconnected computing devices that can exchange data and share
resources with each other. These networked devices use a system of rules, called communication
protocols, to transmit information over physical or wireless links.
Computer networking refers to connected computing devices (such as laptops, desktops, servers,
smartphones, and tablets) and an ever-expanding array of IoT devices (such as cameras, door locks,
doorbells, refrigerators, audio/visual systems, thermostats, and various sensors) that communicate
with one another.
A distributed system is a collection of independent computers that appears to its users as a single
coherent system. It is any environment where multiple computers or devices are
working on a variety of tasks and components, all spread across a network. Components within
distributed systems split up the work, coordinating efforts to complete a given job more efficiently
than if only a single device ran it.
There are two types of transmission technology that are in widespread use: broadcast links and point-
to-point links.
Point-to-point links connect individual pairs of machines. To go from the source to the
destination on a network made up of point-to-point links, short messages, called packets in
certain contexts, may have to first visit one or more intermediate machines. Often multiple
routes, of different lengths, are possible, so finding good ones is important in point-to-point
networks. Point-to-point transmission with exactly one sender and exactly one receiver is
sometimes called unicasting.
In a broadcast network, the communication channel is shared by all the machines on the
network; packets sent by any machine are received by all the others. An address field within
each packet specifies the intended recipient. A wireless network is a common example of a
broadcast link. Some broadcast systems also support transmission to a subset of the machines,
which is known as multicasting.
In the simplest form, Bluetooth networks use the master-slave paradigm of Fig. 1-2. The system unit
(the PC) is normally the master, talking to the mouse, keyboard, etc., as slaves. The master tells the
slaves what addresses to use, when they can broadcast, how long they can transmit, what frequencies
they can use, and so on.
1.2.2 Local Area Networks
A LAN is a privately owned network that operates within and nearby a single building like a home,
office or factory. LANs are widely used to connect personal computers and consumer electronics to
let them share resources (e.g., printers) and exchange information. When LANs are used by
companies, they are called enterprise networks.
Figure 1-3. Wireless and wired LANs. (a) 802.11. (b) Switched Ethernet.
Wireless LANs are very popular these days. In most cases, each computer talks to a device in the
ceiling, as shown in Fig. 1-3(a). This device, called an AP (Access Point), wireless router, or base
station, relays packets between the wireless computers and also between them and the Internet. The
standard for wireless LANs, IEEE 802.11, popularly known as WiFi, runs at speeds anywhere from
11 to hundreds of Mbps.
Wired LANs use a range of different transmission technologies. Most of them use copper wires, but
some use optical fiber. Typically, wired LANs run at speeds of 100 Mbps to 1 Gbps, have low delay
(microseconds or nanoseconds), and make very few errors. The topology of many wired LANs is
built from point-to-point links. The dominant wired LAN standard is IEEE 802.3, popularly called Ethernet.
The best-known example of a MAN (Metropolitan Area Network) is the cable television network.
These systems grew from earlier community antenna systems used in areas with poor over-the-air
television reception. In those early systems, a large antenna was placed on top of a nearby hill and a
signal was then piped to the subscribers’ houses.
Recent developments in highspeed wireless Internet access have resulted in another MAN, which has
been standardized as IEEE 802.16 and is popularly known as WiMAX.
Switching elements, or just switches, are specialized computers that connect two or more
transmission lines. When data arrive on an incoming line, the switching element must choose an
outgoing line on which to forward them. These switching computers have been called by various
names in the past; the name router is now most commonly used.
Another variation is that the subnet may be run by a different company. The subnet operator is
known as a network service provider and the offices are its customers.
There may be many paths in the network that connect these two routers. How the network makes the
decision as to which path to use is called the routing algorithm. Many such algorithms exist. How
each router makes the decision as to where to send a packet next is called the forwarding algorithm.
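To make the distinction concrete, here is a minimal Python sketch of Dijkstra's shortest-path algorithm, one classic way a routing algorithm can compute good paths. The four-router topology and link costs are invented for the example; real routers use more elaborate variants of the same idea.

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source; graph maps node -> {neighbor: link cost}."""
    dist = {source: 0}
    prev = {}                       # prev[n] = previous hop on the best path to n
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                # stale queue entry, already improved
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    return dist, prev

# Invented topology: four routers and symmetric link costs.
graph = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "D": 5},
         "C": {"A": 4, "B": 2, "D": 1}, "D": {"B": 5, "C": 1}}
dist, prev = dijkstra(graph, "A")
print(dist)   # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The forwarding decision at each router then reduces to a next-hop table lookup derived from prev.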
There are two rules of thumb that are useful. First, if different organizations have paid to construct
different parts of the network and each maintains its part, we have an internetwork rather than a single
network. Second, if the underlying technology is different in different parts (e.g., broadcast versus
point-to-point and wired versus wireless), we probably have an internetwork. To go deeper, we need
to talk about how two different networks can be connected. The general name for a machine that
makes a connection between two or more networks and provides the necessary translation, both in
terms of hardware and software, is a gateway. Gateways are distinguished by the layer at which they
operate in the protocol hierarchy.
In reality, no data are directly transferred from layer n on one machine to layer n on another machine.
Instead, each layer passes data and control information to the layer immediately below it, until the
lowest layer is reached. Below layer 1 is the physical medium through which actual communication
occurs. In Fig. 1-9, virtual communication is shown by dotted lines and physical communication by
solid lines.
Between each pair of adjacent layers is an interface. The interface defines which primitive operations
and services the lower layer makes available to the upper one. A set of layers and protocols is called a
network architecture. The specification of an architecture must contain enough information to allow
an implementer to write the program or build the hardware for each layer so that it will correctly obey
the appropriate protocol. A list of the protocols used by a certain system, one protocol per layer, is
called a protocol stack.
So an interface is the connection between systems or applications, while a protocol defines the rules
for data exchange between these systems or applications.
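As a toy illustration of a protocol stack, the sketch below shows each layer passing data to the one beneath it, wrapped in that layer's own header. The layer names and header strings are hypothetical, not any real protocol's format.

```python
# Hypothetical protocol stack: each layer wraps what the layer above
# hands it in its own header. The headers are illustrative only.
def app_layer(data):          return b"APP|" + data
def transport_layer(segment): return b"TCP|" + segment
def network_layer(packet):    return b"IP|" + packet
def link_layer(frame):        return b"ETH|" + frame

wire = link_layer(network_layer(transport_layer(app_layer(b"hello"))))
print(wire)   # b'ETH|IP|TCP|APP|hello'
```

The receiving stack strips the headers in the reverse order, each layer consuming only the header added by its peer layer on the sending machine.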
The main service provided is to transfer data packets from the network layer on the sending machine
to the network layer on the receiving machine. The data link layer of the sending machine accepts
data from the network layer and sends it to the data link layer of the destination machine, which
hands it to the network layer there.
In actual communication, the data link layer transmits bits via the physical layers and physical
medium. Virtually, however, this can be visualized as the two data link layers communicating with
each other using a data link protocol.
Some of the key design issues that occur in computer networks will come up in layer after layer.
Below, we will briefly mention the more important ones.
Reliability is the design issue of making a network that operates correctly even though it is made up
of a collection of components that are themselves unreliable. Think about the bits of a packet
traveling through the network. There is a chance that some of these bits will be received damaged
(inverted) due to fluke electrical noise, random wireless signals, hardware flaws, software bugs and
so on. How can such errors be found and fixed?
One mechanism for finding errors in received information uses codes for error detection.
Information that is incorrectly received can then be retransmitted until it is received correctly. More
powerful codes allow for error correction, where the correct message is recovered from the possibly
incorrect bits that were originally received.
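One widely used error-detecting code is the 16-bit one's-complement checksum carried by IP, ICMP, UDP, and TCP. The sketch below computes it over a packet; the sample payloads are invented, and real stacks also checksum protocol headers.

```python
def internet_checksum(data: bytes) -> int:
    """The 16-bit one's-complement checksum used by IP, ICMP, UDP, and TCP."""
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)      # fold the carry back in
    return ~total & 0xFFFF

original = b"some packet payload"
checksum = internet_checksum(original)
corrupted = b"some pbcket payload"                    # a damaged byte in transit
print(internet_checksum(corrupted) == checksum)       # False: error detected
```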
Routing: Another reliability issue is finding a working path through a network. Often there are
multiple paths between a source and destination, and in a large network, there may be some links or
routers that are broken.
A second design issue concerns the evolution of the network. Over time, networks grow larger and
new designs emerge that need to be connected to the existing network. The key structuring
mechanism used to support change by dividing the overall problem and hiding implementation
details is protocol layering. Because a network has many computers, every layer also needs a
mechanism for identifying senders and receivers, called addressing or naming.
Internetworking: mechanisms for disassembling, transmitting, and then reassembling messages when different networks are joined together.
Scalability: designs that continue to work well as the network grows large.
Statistical multiplexing: sharing network capacity dynamically, based on the statistics of demand.
Flow control: feedback from the receiver to the sender so that a fast sender does not swamp a slow receiver.
Congestion control: handling overloading of the network when too many senders transmit at once.
Real time: delivering data within time limits, as required by applications such as live audio and video.
Confidentiality: defending against threats such as eavesdropping on communications.
Authenticity: preventing someone from impersonating someone else.
In some cases when a connection is established, the sender, receiver, and subnet conduct a
negotiation about the parameters to be used, such as maximum message size, quality of service
required, and other issues. Typically, one side makes a proposal and the other side can accept it, reject
it, or make a counterproposal. A circuit is another name for a connection with associated resources,
such as a fixed bandwidth. This dates from the telephone network in which a circuit was a path over
copper wire that carried a phone conversation.
In contrast to connection-oriented service, connectionless service is modeled after the postal system.
Each message (letter) carries the full destination address, and each one is routed through the
intermediate nodes inside the system independent of all the subsequent messages. There are different
names for messages in different contexts; a packet is a message at the network layer. When the
intermediate nodes receive a message in full before sending it on to the next node, this
is called store-and-forward switching. The alternative, in which the onward transmission of a
message at a node starts before it is completely received by the node, is called cut-through
switching. Normally, when two messages are sent to the same destination, the first one sent will be
the first one to arrive. However, it is possible that the first one sent can be delayed so that the second
one arrives first.
Usually, a reliable service is implemented by having the receiver acknowledge the receipt of each
message so the sender is sure that it arrived. The acknowledgement process introduces overhead and
delays, which are often worth it but are sometimes undesirable. Reliable connection-oriented service
has two minor variations: message sequences and byte streams. Unreliable (meaning not
acknowledged) connectionless service is often called datagram service, in analogy with telegram
service, which also does not return an acknowledgement to the sender.
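A minimal sketch of this acknowledgement idea is the stop-and-wait scheme below. The lossy channel and the loss rate are simulated purely for illustration.

```python
import random

def lossy_send(message, loss_rate=0.3):
    """Simulated channel: drop the message with the given probability."""
    return None if random.random() < loss_rate else message

def reliable_send(message, max_tries=10):
    """Stop-and-wait: keep retransmitting until the receiver's ACK arrives."""
    for attempt in range(1, max_tries + 1):
        delivered = lossy_send(message)
        ack = lossy_send(b"ACK") if delivered is not None else None
        if ack is not None:
            return attempt        # the sender now knows the message got through
    raise TimeoutError("no acknowledgement received")

print("delivered after", reliable_send(b"hello"), "attempt(s)")
```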
Figure 1-11. Six service primitives that provide a simple connection-oriented service.
The OSI model is based on a proposal developed by the International Standards Organization (ISO)
as a first step toward international standardization of the protocols used in the various layers (Day
and Zimmermann, 1983). It was revised in 1995 (Day, 1995). The model is called the ISO OSI
(Open Systems Interconnection) Reference Model because it deals with connecting open systems,
that is, systems that are open for communication with other systems.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers can be
briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown together in
the same layer out of necessity and small enough that the architecture does not become unwieldy.
The data-link layer can be further divided into two sublayers. The higher sublayer, called logical
link control (LLC), is responsible for multiplexing, flow control, acknowledgement and notifying
upper layers if transmit/receive (TX/RX) errors occur.
The media access control sublayer is responsible for tracking data frames using MAC addresses of
the sending and receiving hardware. It's also responsible for organizing each frame, marking the
starting and ending bits and organizing timing regarding when each frame can be sent along the
physical layer medium.
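To make the MAC addressing concrete, here is a small sketch that unpacks the destination and source MAC addresses and the EtherType from the 14-byte Ethernet header. The frame bytes are hand-made for the example.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Extract destination MAC, source MAC, and EtherType (first 14 bytes)."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    as_mac = lambda raw: ":".join(f"{b:02x}" for b in raw)
    return as_mac(dst), as_mac(src), hex(ethertype)

# A hand-made frame: broadcast destination, invented source, IPv4 EtherType.
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
print(parse_ethernet_header(frame))
# ('ff:ff:ff:ff:ff:ff', '00:11:22:33:44:55', '0x800')
```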
The session layer, which is used for session management, and the presentation layer, which is
concerned with the syntax and semantics of the information transmitted, aren't considered as useful
as the other layers in the OSI model.
Some services are duplicated at various layers, such as the transport and data-link layers.
Layers can't work in parallel; each layer must wait to receive data from the previous layer.
The ARPANET was a research network sponsored by the DoD (U.S. Department of Defense). It
eventually connected hundreds of universities and government installations, using leased telephone
lines. When satellite and radio networks were added later, the existing protocols had trouble
interworking with them, so a new reference architecture was needed. Thus, from nearly the
beginning, the ability to connect multiple networks in a seamless way was one of the major design
goals.
This architecture later became known as the TCP/IP Reference Model, after its two primary
protocols. It was first described by Cerf and Kahn (1974), and later refined and defined as a standard
in the Internet community (Braden, 1989). The design philosophy behind the model is discussed by
Clark (1988). The TCP/IP model helps determine how a specific computer should be connected to
the internet and how data should be transmitted between computers. It helps create a virtual network
when multiple computer networks are connected together. The purpose of the TCP/IP model is to
allow communication over large distances.
TCP/IP stands for Transmission Control Protocol/Internet Protocol. The TCP/IP stack is specifically
designed to offer a highly reliable, end-to-end byte stream over an unreliable internetwork.
Application layer: On top of the transport layer is the application layer. It contains all the higher-
level protocols. This layer interacts with software applications to implement a communicating
component. Examples of application-layer protocols include file transfer, email, and remote login.
Transport Layer: Transport layer builds on the network layer in order to provide data transport from
a process on a source machine to a process on a destination machine. Two end-to-end transport
protocols have been defined here. The first one, TCP (Transmission Control Protocol), is a reliable
connection-oriented protocol that allows a byte stream originating on one machine to be delivered
without error on any other machine in the internet. It segments the incoming byte stream into discrete
messages and passes each one on to the internet layer. At the destination, the receiving TCP process
reassembles the received messages into the output stream. TCP also handles flow control to make
sure a fast sender cannot swamp a slow receiver with more messages than it can handle.
The second protocol in this layer, UDP (User Datagram Protocol), is an unreliable, connectionless
protocol for applications that do not want TCP’s sequencing or flow control and wish to provide their
own.
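As a hedged illustration of the connectionless service, the sketch below uses Python's standard socket API to send one UDP datagram over the loopback interface; the addresses and payload are arbitrary.

```python
import socket

# Connectionless, unacknowledged datagrams over the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", ("127.0.0.1", port))
data, addr = receiver.recvfrom(1024)
print(data)                                  # b'datagram'

# TCP would use SOCK_STREAM with connect()/accept() instead, giving a
# reliable, ordered byte stream between the two processes.
sender.close(); receiver.close()
```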
Internet Layer: The internet layer is the second layer of the TCP/IP model. It is also known as the
network layer. The main job of this layer is to move packets from any network or computer toward
their destination, irrespective of the route they take.
The internet layer defines an official packet format and protocol called IP (Internet Protocol), plus a
companion protocol called ICMP (Internet Control Message Protocol) that helps it function. The
job of the internet layer is to deliver IP packets where they are supposed to go. Layer-management
protocols that belong to the network layer are:
1. Routing protocols
2. Multicast group management
3. Network-layer address assignment.
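For illustration, here is a minimal sketch that decodes a few fixed fields of the 20-byte IPv4 base header; the sample header bytes are invented.

```python
import struct

def parse_ipv4_header(packet: bytes):
    """Decode a few fixed fields from the 20-byte IPv4 base header."""
    version_ihl, _, total_len = struct.unpack("!BBH", packet[:4])
    ttl, proto = packet[8], packet[9]
    src = ".".join(map(str, packet[12:16]))
    dst = ".".join(map(str, packet[16:20]))
    return {"version": version_ihl >> 4, "total_len": total_len,
            "ttl": ttl, "proto": proto, "src": src, "dst": dst}

# Hand-made header: version 4, total length 20, TTL 64, protocol 6 (TCP),
# 10.0.0.1 -> 10.0.0.2. All values are invented for the example.
hdr = bytes([0x45, 0, 0, 20, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 10, 0, 0, 2])
print(parse_ipv4_header(hdr))
```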
Network Interface Layer: This layer is also called the network access layer. It defines the details of
how data should be sent over the network, including how bits should be signaled by the hardware
devices that interface directly with a network medium, such as coaxial, optical fiber, or twisted-pair
cables. This layer corresponds to a combination of the data link and physical layers of the OSI
reference model. It defines how the data should be sent physically through the network and is
responsible for the transmission of data between two devices on the same network.
Differences between the OSI and TCP/IP models include the following:
1. The OSI model has seven layers, while the TCP/IP model has four.
2. OSI clearly distinguishes between services, interfaces, and protocols; TCP/IP does not.
3. The OSI model was devised before its protocols, whereas the TCP/IP protocols came first and the
model is essentially a description of them.
1.5 Guided Transmission Media
Various physical media can be used for the actual transmission. Media are roughly grouped into
guided media, such as copper wire and fiber optics, and unguided media, such as terrestrial wireless,
satellite, and lasers through the air.
1.5.1 Magnetic Media
One of the most common ways to transport data from one computer to another is to write them onto
magnetic tape or removable media (e.g., recordable DVDs), physically transport the tape or disks to
the destination machine, and read them back in again.
Category 5 cables are similar to Category 3 cables and use the same connector, but have more twists
per meter. More twists result in less crosstalk and a better-quality signal over longer distances,
making the cables more suitable for high-speed computer communication, especially 100-Mbps and
1-Gbps Ethernet LANs.
Category 5 cabling, or ‘‘Cat 5,’’ twisted pair consists of two insulated wires gently twisted together.
Four such pairs are typically grouped in a plastic sheath to protect the wires and keep them together.
Newer cabling is Category 6 or even Category 7. These categories have more stringent specifications to handle signals
with greater bandwidths. Some cables in Category 6 and above are rated for signals of 500 MHz and
can support the 10-Gbps links that will soon be deployed.
Through Category 6, these wiring types are referred to as UTP (Unshielded Twisted Pair) as they
consist simply of wires and insulators. In contrast to these, Category 7 cables have shielding on the
individual twisted pairs, as well as around the entire cable (but inside the plastic protective sheath).
Shielding reduces the susceptibility to external interference and crosstalk with other nearby cables to
meet demanding performance specifications.
Fiber optics is the technology used by internet services such as Verizon Fios home internet to
transmit information as pulses of light through strands of fiber made of glass or plastic over long
distances. A fiber-optic cable contains anywhere from a few to hundreds of optical fibers within a
plastic casing. Also known as optic cables or optical fiber cables, they transfer data signals in the
form of light that travel hundreds of miles, carrying far more data than traditional electrical
cables. And because fiber-optic cables are non-metallic, they are not affected by electromagnetic
interference (e.g., lightning) that can reduce the speed of transmission. Fiber cables are also safer as they
do not carry a current and therefore cannot generate a spark.
It is noteworthy that modern wireless digital communication began in the Hawaiian Islands, where
large chunks of Pacific Ocean separated the users from their computer center and the telephone
system was inadequate. Our age has given rise to information junkies: people who need to be online
all the time. For these mobile users, twisted pair, coax, and fiber optics are of no use. They need to
get their ‘‘hits’’ of data for their laptop, notebook, shirt pocket, palmtop, or wristwatch computers
without being tethered to the terrestrial communication infrastructure. For these users, wireless
communication is the answer.
The radio, microwave, infrared, and visible light portions of the spectrum can all be used for transmitting information by modulating the amplitude,
frequency, or phase of the waves.
Ultraviolet light, X-rays, and gamma rays would be even better, due to their higher frequencies, but
they are hard to produce and modulate, do not propagate well through buildings, and are dangerous to
living things. The bands listed at the bottom of Fig. 1-19 are the official ITU (International
Telecommunication Union) names and are based on the wavelengths, so the LF band goes from 1 km
to 10 km (approximately 30 kHz to 300 kHz). The terms LF, MF, and HF refer to Low, Medium, and
High Frequency, respectively. Clearly, when the names were assigned nobody expected to go above
10 MHz, so the higher bands were later named the Very, Ultra, Super, Extremely, and Tremendously
High Frequency bands. Beyond that there are no names, but Incredibly, Astonishingly, and
Prodigiously High Frequency (IHF, AHF, and PHF) would sound nice.
We know from Shannon [Eq. (2-3)] that the amount of information that a signal such as an
electromagnetic wave can carry depends on the received power and is proportional to its bandwidth.
From Fig. 1-19 it should now be obvious why networking people like fiber optics so much. Many
GHz of bandwidth are available to tap for data transmission in the microwave band, and even more in
fiber because it is further to the right in our logarithmic scale. As an example, consider the 1.30-
micron band of Fig. 2-7, which has a width of 0.17 microns. If we use Eq. (2-4) to find the start and
end frequencies from the start and end wavelengths, we find the frequency range to be about 30,000
GHz. With a reasonable signal-to-noise ratio of 10 dB, this is 300 Tbps.
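The 30,000 GHz figure can be checked with a short computation: since f = c/λ, the width of the band is approximately Δf ≈ cΔλ/λ².

```python
c = 3e8            # speed of light, m/s
lam = 1.30e-6      # center wavelength of the band, m
dlam = 0.17e-6     # width of the band, m

# Differentiating f = c/lambda gives |df| ~ c * dlam / lambda**2.
df = c * dlam / lam**2
print(df / 1e9)    # ~30,000 GHz, matching the figure quoted above
```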
Figure 1-19. The electromagnetic spectrum and its uses for communication.
Most transmissions use a relatively narrow frequency band (i.e., Δf / f << 1). They concentrate their
signals in this narrow band to use the spectrum efficiently and obtain reasonable data rates by
transmitting with enough power. However, in some cases, a wider band is used, with three variations.
In frequency hopping spread spectrum, the transmitter hops from frequency to frequency hundreds
of times per second. It is popular for military communication because it makes transmissions hard to
detect and next to impossible to jam. It also offers good resistance to multipath fading and
narrowband interference because the receiver will not be stuck on an impaired frequency for long
enough to shut down communication.
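A minimal sketch of the frequency-hopping idea: both ends derive the same pseudo-random hop schedule from a shared seed. The channel count and seed are invented; real systems derive the sequence from shared keys and synchronized clocks.

```python
import random

CHANNELS = range(79)    # e.g., 79 1-MHz channels, as in classic Bluetooth

def hop_sequence(shared_seed, hops):
    """Both ends seed identical PRNGs, so they hop channels in lockstep."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

print(hop_sequence(1234, 8))                           # the hop schedule
print(hop_sequence(1234, 8) == hop_sequence(1234, 8))  # True: both ends agree
```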
A second form of spread spectrum, direct sequence spread spectrum, uses a code sequence to
spread the data signal over a wider frequency band. It is widely used commercially as a spectrally
efficient way to let multiple signals share the same frequency band. These signals can be given
different codes, a method called CDMA (Code Division Multiple Access) that we will return to later
in this chapter.
This method is shown in contrast with frequency hopping in Fig. 1-20. It forms the basis of 3G
mobile phone networks and is also used in GPS (Global Positioning System). Even without different
codes, direct sequence spread spectrum, like frequency hopping spread spectrum, can tolerate
narrowband interference and multipath fading because only a fraction of the desired signal is lost. It is
used in this role in older 802.11b wireless LANs.
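A toy CDMA example, with two orthogonal four-chip sequences (values invented for illustration): each station's bit can be recovered from the summed channel signal by correlating with that station's code.

```python
# Two stations with orthogonal four-chip sequences (dot product is zero).
A = [+1, +1, +1, +1]
B = [+1, -1, +1, -1]

def transmit(code, bit):
    """A 1 bit is sent as the code itself, a 0 bit as its negation."""
    sign = 1 if bit else -1
    return [sign * chip for chip in code]

# Both stations transmit simultaneously; the channel adds the signals.
channel = [a + b for a, b in zip(transmit(A, 1), transmit(B, 0))]

def decode(code, signal):
    """Correlating with a station's code recovers that station's bit."""
    return sum(c * s for c, s in zip(code, signal)) / len(code) > 0

print(decode(A, channel), decode(B, channel))   # True False: A sent 1, B sent 0
```

Real CDMA systems use much longer chip sequences and must also cope with unsynchronized senders of unequal power.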
A third method of communication with a wider band is UWB (Ultra-Wide Band) communication.
UWB sends a series of rapid pulses, varying their positions to communicate information. The rapid
transitions lead to a signal that is spread thinly over a very wide frequency band. UWB is defined as
signals that have a bandwidth of at least 500 MHz or at least 20% of the center frequency of their
frequency band. UWB is also shown in Fig. 1-21. With this much bandwidth, UWB has the potential
to communicate at high rates. Because it is spread across a wide band of frequencies, it can tolerate a
substantial amount of relatively strong interference from other narrowband signals. Just as
importantly, since UWB has very little energy at any given frequency when used for short-range
transmission, it does not cause harmful interference to those other narrowband radio signals. It is said
to underlay the other signals.
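The quoted definition is easy to express as a check; this is a trivial sketch, and the sample figures are invented.

```python
def is_uwb(bandwidth_hz, center_hz):
    """The definition quoted above: >= 500 MHz wide, or >= 20% of center frequency."""
    return bandwidth_hz >= 500e6 or bandwidth_hz >= 0.20 * center_hz

print(is_uwb(1.5e9, 4e9))     # True: a 1.5-GHz-wide signal qualifies
print(is_uwb(100e6, 2.4e9))   # False: narrowband by this test
```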
Figure 1-22. (a) In the VLF, LF, and MF bands, radio waves follow the curvature of the earth. (b) In the HF band, they
bounce off the ionosphere.
In the HF and VHF bands, the ground waves tend to be absorbed by the earth. However, the waves
that reach the ionosphere, a layer of charged particles circling the earth at a height of 100 to 500 km,
are refracted by it and sent back to earth, as shown in Fig. 1-22(b). Under certain atmospheric
conditions, the signals can bounce several times. Amateur radio operators (hams) use these bands to
talk long distance. The military also communicate in the HF and VHF bands.
The demand for more and more spectrum drives operators to yet higher frequencies. Bands up to 10
GHz are now in routine use, but at about 4 GHz a new problem sets in: absorption by water. These
waves are only a few centimeters long and are absorbed by rain. This effect would be fine if one were
planning to build a huge outdoor microwave oven for roasting passing birds, but for communication it
is a severe problem. As with multipath fading, the only solution is to shut off links that are being
rained on and route around them.
In summary, microwave communication is so widely used for long-distance telephone
communication, mobile phones, television distribution, and other purposes that a severe shortage of
spectrum has developed. It has several key advantages over fiber. The main one is that no right of
way is needed to lay down cables. By buying a small plot of ground every 50 km and putting a
microwave tower on it, one can bypass the telephone system entirely. This is how MCI managed to
get started as a new long-distance telephone company so quickly. (Sprint, another early competitor to
the deregulated AT&T, went a completely different route: it was formed by the Southern Pacific
Railroad, which already owned a large amount of right of way and just buried fiber next to the
tracks.)
Microwave is also relatively inexpensive. Putting up two simple towers (which can be just big poles
with four guy wires) and putting antennas on each one may be cheaper than burying 50 km of fiber
through a congested urban area or up over a mountain, and it may also be cheaper than leasing the
telephone company’s fiber, especially if the telephone company has not yet even fully paid for the
copper it ripped out when it put in the fiber.
Governments have used several algorithms over the years to allocate frequency bands to carriers.
The oldest, often called the beauty contest, requires each carrier to explain why its proposal serves
the public interest best; government officials then decide which of the nice stories they enjoy most.
Having some government official award property worth billions of
dollars to his favorite company often leads to bribery, corruption, nepotism, and worse. Furthermore,
even a scrupulously honest government official who thought that a foreign company could do a better
job than any of the national companies would have a lot of explaining to do.
This observation led to algorithm 2, holding a lottery among the interested companies. The problem
with that idea is that companies with no interest in using the spectrum can enter the lottery. If, say, a
fast food restaurant or shoe store chain wins, it can resell the spectrum to a carrier at a huge profit and
with no risk.
Bestowing huge windfalls on alert but otherwise random companies has been severely criticized by
many, which led to algorithm 3: auction off the bandwidth to the highest bidder. When the British
government auctioned off the frequencies needed for third-generation mobile systems in 2000, it
expected to get about $4 billion. It actually received about $40 billion because the carriers got into a
feeding frenzy, scared to death of missing the mobile boat. This event switched on nearby
governments’ greedy bits and inspired them to hold their own auctions. It worked, but it also left
some of the carriers with so much debt that they are close to bankruptcy. Even in the best cases, it
will take many years to recoup the licensing fee.
A completely different approach to allocating frequencies is to not allocate them at all. Instead, let
everyone transmit at will, but regulate the power used so that stations have such a short range that
they do not interfere with each other. Accordingly, most governments have set aside some frequency
bands, called the ISM (Industrial, Scientific, Medical) bands for unlicensed usage. Garage door
openers, cordless phones, radio-controlled toys, wireless mice, and numerous other wireless
household devices use the ISM bands. To minimize interference between these uncoordinated
devices, the FCC mandates that all devices in the ISM bands limit their transmit power (e.g., to 1
watt) and use other techniques to spread their signals over a range of frequencies. Devices may also
need to take care to avoid interference with radar installations. The location of these bands varies
somewhat from country to country. In the United States, for example, the bands that networking
devices use in practice without requiring an FCC license are shown in Fig. 1-23. The 900-MHz band
was used for early versions of 802.11, but it is crowded. The 2.4-GHz band is available in most
countries and widely used for 802.11b/g and Bluetooth, though it is subject to interference from
microwave ovens and radar installations. The 5-GHz part of the spectrum includes U-NII
(Unlicensed National Information Infrastructure) bands. The 5-GHz bands are relatively
undeveloped but, since they have the most bandwidth and are used by 802.11a, they are quickly
gaining in popularity.
Figure 1-23. ISM and U-NII bands used in the United States by wireless devices.
The unlicensed bands have been a roaring success over the past decade. The ability to use the
spectrum freely has unleashed a huge amount of innovation in wireless LANs and PANs, evidenced
by the widespread deployment of technologies such as 802.11 and Bluetooth. To continue this
innovation, more spectrum is needed. One exciting development in the U.S. is the FCC decision in
2009 to allow unlicensed use of white spaces around 700 MHz. White spaces are frequency bands
that have been allocated but are not being used locally. The transition from analog to all-digital
television broadcasts in the U.S. in 2010 freed up white spaces around 700 MHz. The only difficulty
is that, to use the white spaces, unlicensed devices must be able to detect any nearby licensed
transmitters, including wireless microphones, that have first rights to use the frequency band.
Another flurry of activity is happening around the 60-GHz band. The FCC opened 57 GHz to 64 GHz
for unlicensed operation in 2001. This range is an enormous portion of spectrum, more than all the
other ISM bands combined, so it can support the kind of high-speed networks that would be needed
to stream high-definition TV through the air across your living room. At 60 GHz, radio waves are
absorbed by oxygen. This means that signals do not propagate far, making them well suited to short-
range networks. The high frequencies (60 GHz is in the Extremely High Frequency or ‘‘millimeter’’
band, just below infrared radiation) posed an initial challenge for equipment makers, but products are
now on the market.
One of the authors (AST) once attended a conference at a modern hotel in Europe at which the
conference organizers thoughtfully provided a room full of terminals to allow the attendees to read
their email during boring presentations. Since the local PTT was unwilling to install a large number
of telephone lines for just 3 days, the organizers put a laser on the roof and aimed it at their
university’s computer science building a few kilometers away. They tested it the night before the
conference and it worked perfectly. At 9 A.M. on a bright, sunny day, the link failed completely and
stayed down all day. The pattern repeated itself the next two days. It was not until after the
conference that the organizers discovered the problem: heat from the sun during the daytime caused
convection currents to rise up from the roof of the building, as shown in Fig. 1-24. This turbulent air
diverted the beam and made it dance around the detector, much like a shimmering road on a hot day.
The lesson here is that to work well in difficult conditions as well as good conditions, unguided
optical links need to be engineered with a sufficient margin of error.
Figure 1-24. Convection currents can interfere with laser communication systems. A bidirectional system with two lasers is pictured here.
Unguided optical communication may seem
like an exotic networking technology today, but it might soon become much more prevalent. We are
surrounded by cameras (that sense light) and displays (that emit light using LEDs and other
technology). Data communication can be layered on top of these displays by encoding information in
the pattern in which LEDs turn on and off, at rates below the threshold of human perception.
Communicating with visible light in this way is inherently safe and creates a low-speed network in
the immediate vicinity of the display. This could enable all sorts of fanciful ubiquitous computing
scenarios.
The flashing lights on emergency vehicles might alert nearby traffic lights and vehicles to help clear a
path. Informational signs might broadcast maps. Even festive lights might broadcast songs that are
synchronized with their display.
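One line code that fits the constant-brightness requirement described above is Manchester encoding, sketched here purely as an illustration, not as a claim about any particular product: every bit spends exactly half its time lit, so the average light level never changes.

```python
def manchester_encode(bits):
    """1 -> on/off, 0 -> off/on. Every bit spends half its time lit,
    so average brightness is constant and fast flicker stays invisible."""
    out = []
    for bit in bits:
        out += [1, 0] if bit else [0, 1]
    return out

print(manchester_encode([1, 0, 1, 1]))   # [1, 0, 0, 1, 1, 0, 1, 0]
```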