Computer Networks

This document provides an overview of computer networks. It defines a computer network as a collection of connected computer systems that allow for the exchange of data and resources. The document discusses different types of networks including personal area networks (PANs), local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs) like the Internet. It also describes the basic components needed for a network including nodes, connections between nodes, network cards in each system, and additional devices for larger networks. Network topology, protocols for communication, security, and a layered model for understanding how networks function are also summarized.
Computer networks

by Chris Woodford. Last updated: December 25, 2018.

Thank goodness for computer networks! If they'd never been invented,
you wouldn't be reading this now (using the Internet) and I wouldn't be writing it
either (using a wireless home network to link up my computer equipment).
There's no doubt that computer networking is extremely complex when you
delve into it deeply, but the basic concept of linking up computers so they can
talk to one another is pretty simple. Let's take a closer look at how it works!
Artwork: The basic concept of a computer network: a collection of computers (and related equipment) hooked
up with wired or wireless links so any machine can exchange information with any other.

What is a computer network?



Photo: Testing a small computer network linked to the Internet. Photo courtesy of NASA Glenn Research
Center (NASA-GRC).

You can do lots of things with a computer but, connect it up to other
computers and peripherals (the general name given to add-on bits of
computer equipment such as modems, inkjet and laser printers, and scanners)
and you can do an awful lot more. A computer network is simply a collection
of computer equipment that's connected with wires, optical fibers, or wireless
links so the various separate devices (known as nodes) can "talk" to one
another and swap data (computerized information).

Types of networks

Photo: A wireless router like this one, made by Netgear, is the heart of many home PANs.

Not all computer networks are the same. The network I'm using to link this
laptop to my wireless router, printer, and other equipment is the smallest
imaginable. It's an example of what's sometimes called a PAN (personal area
network)—essentially a convenient, one-person network. If you work in an
office, you probably use a LAN (local area network), which is typically a few
separate computers linked to one or two printers, a scanner, and maybe a
single, shared connection to the Internet. Networks can be much bigger than
this. At the opposite end of the scale, we talk about MANs (metropolitan
area networks), which cover a whole town or city, and WANs (wide area
networks), which can cover any geographical area. The Internet is a WAN that
covers the entire world but, in practice, it's a network of networks as well as
individual computers: many of the machines linked to the Net connect up
through LANs operated by schools and businesses.

Rules

Artwork: The three best-known computer network topologies: line (chain/bus), ring, and star.

Computers are all about logic—and logic is all about following rules. Computer
networks are a bit like the army: everything in a network has to be arranged
with almost military precision and it has to behave according to very clearly
defined rules. In a LAN, for example, you can't connect things together any old
how: all the nodes (computers and other devices) in the network have to be
connected in an orderly pattern known as the network topology. You can
connect nodes in a simple line (also called a daisy chain or bus), with each
connected to the next in line. You can connect them in a star shape with the
various machines radiating out from a central controller known as the network
server. Or you can link them into a loop (generally known as a ring). All the
devices on a network also have to follow clearly defined rules
(called protocols) when they communicate to ensure they understand one
another—for example, so they don't all try to send messages at exactly the
same time, which causes confusion.
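As a rough sketch, the three topologies just described can be modelled as adjacency lists, a mapping from each node to the nodes it links to. This Python illustration uses made-up node names and is not a real networking API:

```python
# Sketch of the three classic topologies as adjacency lists.
# Node names ("A", "B", ...) are invented for illustration.

def bus(nodes):
    """Daisy chain: each node connects to the next in line."""
    links = {n: [] for n in nodes}
    for a, b in zip(nodes, nodes[1:]):
        links[a].append(b)
        links[b].append(a)
    return links

def ring(nodes):
    """A bus whose two ends are joined into a loop."""
    links = bus(nodes)
    links[nodes[0]].append(nodes[-1])
    links[nodes[-1]].append(nodes[0])
    return links

def star(nodes, hub):
    """Every node radiates out from a central server (the hub)."""
    links = {hub: [n for n in nodes if n != hub]}
    for n in links[hub]:
        links[n] = [hub]
    return links

print(bus(["A", "B", "C"]))            # {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
print(star(["A", "B", "C"], hub="A"))  # {'A': ['B', 'C'], 'B': ['A'], 'C': ['A']}
```

Whatever the shape, the same set of nodes ends up with a different pattern of links, which is all a topology really is.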

Permissions and security

Just because a machine is on a network, it doesn't automatically follow that
every other machine and device has access to it (or can be accessed by it).
The Internet is an obvious example. If you're online, you get access to billions
of Web pages, which are simply files stored on other machines (servers)
dotted all over the network. But you can't access every single file on every
single computer hooked up to the Internet: you can't read my personal files
and I can't read yours, unless we specifically choose for that to happen.

Permissions and security are central to the idea of networking: you can
access files and share resources only if someone gives you permission to do
so. Most personal computers that connect to the Internet allow outgoing
connections (so you can, theoretically, link to any other computer), but block
most incoming connections or prohibit them completely. Servers (the
machines on the Internet that hold and serve up Web pages and other files)
operate a more relaxed policy to incoming connections. You've probably
heard of hacking, which, in one sense of the word, means gaining
unauthorized access to a computer network by cracking passwords or
defeating other security checks. To make a network more secure, you can
add a firewall (either a physical device or a piece of software running on your
machine, or both) at the point where your network joins onto another network
or the Internet to monitor and prohibit any unauthorized, incoming access
attempts.

What makes a network?


To make a network, you need nodes and connections (sometimes called
links) between them. Linking up the nodes means making some sort of a
temporary or permanent connection between them. In the last decade or so,
wireless connections have become one of the most popular ways of doing
this, especially in homes. In offices, wired connections are still more
commonplace—not least because they are generally faster and more secure
and because many newer offices have network cabling already in place.

Photo: If your laptop doesn't have a network card, you can simply plug in a PCMCIA adapter like this one. The
adapter has a network card built into it.

Apart from computers, peripherals, and the connections between them, what
else do you need? Each node on a network needs a special circuit known as
a network card (or, more formally, a network interface card or NIC) to tell it
how to interact with the network. Most new computers have network cards
built in as standard. If you have an older computer or laptop, you may have to
fit a separate plug-in circuit board (or, in a laptop, add a PCMCIA card) to
make your machine talk to a network. Each network card has its own separate
numeric identifier, known as a MAC (media access control) code or LAN
MAC address. A MAC code is a bit like a phone number: any machine on the
network can communicate with another one by sending a message quoting its
MAC code. In a similar way, MAC codes can be used to control which
machines on a network can access files and other shared resources. For
example, I've set up my wireless link to the Internet so that only two MAC
codes can ever gain access to it (restricting access to the network cards built
into my two computers). That helps to stop other people in nearby buildings
(or in the street) hacking into my connection or using it by mistake.
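A MAC filter like the one just described comes down to a simple membership test against an allow-list. Here is a minimal Python sketch; the two permitted addresses are invented for the example:

```python
# Invented addresses standing in for the two permitted network cards.
ALLOWED = {"a4:5e:60:f1:2b:9c", "00:1b:44:11:3a:b7"}

def normalize(mac):
    """Accept 'A4-5E-60-F1-2B-9C' and 'a4:5e:60:f1:2b:9c' alike."""
    return mac.lower().replace("-", ":")

def may_join(mac):
    """True only if the card's address is on the allow-list."""
    return normalize(mac) in ALLOWED

print(may_join("A4-5E-60-F1-2B-9C"))  # True
print(may_join("de:ad:be:ef:00:01"))  # False
```

Real routers do this check in firmware, but the logic is the same. It's worth knowing that MAC filtering alone is fairly weak security, because addresses can be spoofed, so it's best used alongside proper wireless encryption.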

The bigger you make a network, the more extra parts you need to add to
make it function efficiently. Signals can travel only so far down cables or over
wireless links so, if you want to make a big network, you have to add in
devices called repeaters—effectively signal boosters. You might also
need bridges, switches, and routers—devices that help to link together
networks (or the parts of networks, which are known as segments), regulate
the traffic between them, and forward traffic from one part of a network to
another part.

Understanding computer networks with layers

Photo: Computer architecture: We can think of computers in layers, from the hardware and the BIOS at the
bottom to the operating system and applications at the top. We can think of computer networks in a similar
way.

Computers are general-purpose machines that mean different things to
different people. Some of us just want to do basic tasks like word processing
or chatting to friends on Facebook and we couldn't care less how that
happens under the covers—or even that we're using a computer to do it (if
we're using a smartphone, we probably don't even think what we're doing is
"computing"—or that installing a new app is effectively computer
programming). At the opposite end of the spectrum, some of us like modifying
our computers to run faster, fitting quicker processors or more memory, or
whatever it might be; for geeks, poking around inside computers is an end in
itself. Somewhere in between these extremes, there are moderately
tech-savvy people who use computers to do everyday jobs with a reasonable
understanding of how their machines work. Because computers mean
different things to different people, it can help us to understand them by
thinking of a stack of layers: hardware at the bottom, the operating system
somewhere on top of that, then applications running at the highest level. You
can "engage" with a computer at any of these levels without necessarily
thinking about any of the other layers. Nevertheless, each layer is made
possible by things happening at lower levels, whether you're aware of that or
not. Things that happen at the higher levels could be carried out in many
different ways at the lower levels; for example, you can use a web browser
like Firefox (an application) on many different operating systems, and you can
run various operating systems on a particular laptop, even though the
hardware doesn't change at all.

Computer networks are similar: we all have different ideas about them and
care more or less about what they're doing and why. If you work in a small
office with your computer hooked up to other people's machines and shared
printers, probably all you care about is that you can send emails to your
colleagues and print out your stuff; you're not bothered how that actually
happens. But if you're charged with setting up the network in the first place,
you have to consider things like how it's physically linked together, what sort
of cables you're using and how long they can be, what the MAC addresses
are, and all kinds of other nitty gritty. Again, just like with computers, we can
think about a network in terms of its different layers—and there are two
popular ways of doing that.

The OSI model

Perhaps the best-known way is with what's called the OSI (Open Systems
Interconnect) model, based on an internationally agreed set of standards
devised by a committee of computer experts and first published in 1984. It
describes a computer network as a stack of seven layers. The lower layers
are closest to the computer hardware; the higher levels are closer to human
users; and each layer makes possible things that happen at the higher layers:

1. Physical: The basic hardware of the network, including cables
and connections, and how devices are hooked up into a certain
network topology (ring, bus, or whatever). The physical layer
isn't concerned in any way with the data the network carries
and, as far as most human users of a network are concerned, is
uninteresting and irrelevant.
2. Data link: This covers things like how data is packaged and
how errors are detected and corrected.
3. Network: This layer is concerned with how data is addressed
and routed from one device to another.
4. Transport: This manages the way in which data is efficiently
and reliably moved back and forth across the network, ensuring
all the bits of a given message are correctly delivered.
5. Session: This controls how different devices on the network
establish temporary "conversations" (sessions) so they can
exchange information.
6. Presentation: This effectively translates data produced by user-
friendly applications into computer-friendly formats that are sent
over the network. For example, it can include things like
compression (to reduce the number of bits and bytes that need
transmitting), encryption (to keep data secure), or converting data
between different character sets (so you can read emoticons
("smileys") or emojis in your emails).
7. Application: The top level of the model and the one closest to
the user. This covers things like email programs, which use the
network in a way that's meaningful to human users and the
things they're trying to achieve.

OSI was conceived as a way of making all kinds of different computers and
networks talk to one another, which was a major problem back in the 1960s,
1970s, and 1980s, when virtually all computing hardware was proprietary and
one manufacturer's equipment seldom worked with anyone else's.

The TCP/IP (DARPA) model

If you've never heard of the OSI model, that's quite probably because a
different way of hooking up the world's computers triumphed over it, delivering
the amazing computer network you're using right now: the Internet. The
Internet is based on a two-part networking system called TCP/IP in which
computers hook up over networks (using what's called TCP, Transmission
Control Protocol) to exchange information in packets (using the Internet
Protocol, IP). We can understand TCP/IP using four slightly simpler layers,
sometimes known as the TCP/IP model (or the DARPA model, for the US
government's Defense Advanced Research Projects Agency that sponsored
its development):

1. Network Access (sometimes called the Network Interface
layer): This represents the basic network hardware, and
corresponds to the Physical and Data link layers of the OSI
model. Your Ethernet or Wi-Fi connection to the Internet is an
example.
2. Internet (sometimes called the Network layer): This is how data
is sent over the network and it's equivalent to the Network layer
in the OSI model. IP (Internet Protocol) packet switching—
delivering actual packets of data to your computer from the
Internet—works at this level.
3. Transport: This corresponds to the Transport layer in the OSI
model. TCP (Transmission Control Protocol) works at this
level, administering the delivery of data without actually
delivering it. TCP converts transmitted data into packets (and
back again when they're received) and ensures those packets
are reliably delivered and reassembled in the same order in
which they were sent.
4. Application: Equivalent to the Session, Presentation, and
Application layers in the OSI model. Well-known Internet
protocols such as HTTP (the under-the-covers "conversation"
between web browsers and web servers), FTP (a way of
downloading data from servers and uploading them in the
opposite direction), and SMTP (the way your email program
sends mails through a server at your ISP) all work at this level.
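One way to see the four layers cooperating is to watch a message being wrapped, layer by layer, on its way down the stack. The Python sketch below uses simplified text headers rather than real TCP, IP, or Ethernet wire formats, and all the addresses and port numbers are invented:

```python
def app_layer(text):
    # Application: a human-meaningful message (a toy email body)
    return text.encode("utf-8")

def transport_layer(payload, src_port, dst_port):
    # Transport (TCP): tag the data with port numbers so the right
    # program at each end handles it
    return f"TCP {src_port}->{dst_port}|".encode() + payload

def internet_layer(segment, src_ip, dst_ip):
    # Internet (IP): address the packet to a particular machine
    return f"IP {src_ip}->{dst_ip}|".encode() + segment

def network_access_layer(packet, src_mac, dst_mac):
    # Network Access (Ethernet/Wi-Fi): frame it for the physical link
    return f"ETH {src_mac}->{dst_mac}|".encode() + packet

frame = network_access_layer(
    internet_layer(
        transport_layer(app_layer("Hello!"), 50000, 25),
        "192.168.0.2", "203.0.113.9"),
    "a4:5e:60:f1:2b:9c", "00:1b:44:11:3a:b7")

print(frame.decode())
# ETH a4:5e:60:f1:2b:9c->00:1b:44:11:3a:b7|IP 192.168.0.2->203.0.113.9|TCP 50000->25|Hello!
```

At the receiving end the same headers are stripped off in reverse order, each layer reading only its own header and passing the rest upward.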

Artwork: The TCP/IP model is easy to understand. In this example, suppose you're emailing someone over the
Internet. Your two devices are, in effect, connected by one long "cable" running between their network cards.
That's what the green Network Access layer at the bottom represents. Your email is transmitted as packets
(orange squares) using the Internet Protocol (IP), illustrated by the orange Internet layer. Transmission Control
Protocol (TCP) oversees this process in the blue Transport layer; and, in effect, TCP and IP work together. At
the top, in the Application layer, you sit at your computer using an email program (an application) that uses all
the layers below.

While the OSI model is quite an abstract and academic concept, rarely
encountered outside books and articles about computer networking, the
TCP/IP model is a simpler, easier-to-understand, and more practical
proposition: it's the bedrock of the Internet—and the very technology you're
using to read these words now.

As we saw above, higher levels of the basic computing models are
independent of the lower levels: you can run your Firefox browser on different
Windows operating systems or Linux, for example. The same applies to
networking models. So you can run many applications using Internet packet
switching, from the World Wide Web and email to Skype (VoIP) and Internet
TV. And you can hook your computer to the net using WiFi or wired broadband
or dialup over a telephone line (different forms of network access). In other
words, the higher levels of the model are doing the same jobs even though
the lower levels are working differently.

Networks on the fly


Like highways or railroad lines that connect towns and cities, computer
networks are often very elaborate, well-planned things. In the days when
computers were big static boxes that never shifted from data centers and
desktops, computer networks also tended to be fairly static things; often they
didn't change much from one week, month, or year to the next. The Internet,
for example, is based on a set of well-defined connections called the Internet
backbone including vast submarine cables that obviously have to stay in place for
years. That's computer networking at one extreme.

Increasingly, though, we're shifting to mobile devices that need to improvise
networks as they move around the world. Wi-Fi (wireless Ethernet) is one
example of how smartphones, tablets, and other mobile computers can join
and leave fixed networks (based around "hotspots," or access points) in a
very ad-hoc way. Bluetooth is even more improvised: nearby devices detect
one another, connect together (when you give them permission), and form a
(generally) short-lived computer network—before going their separate ways.
Ad-hoc technologies like these are still based on classic computer networking
concepts, but they also involve a range of new problems. How do mobile
devices discover one another? How does one device (such as a Wi-Fi router)
know when another abruptly joins or leaves the network? How can it maintain
the performance of the network when lots of people try to join at the same
time? What if all the network devices are using slightly different versions of
Wi-Fi or Bluetooth; will they still be able to connect? If communication is
entirely wireless, how can it be properly secured? We discuss these sorts of
issues in more detail in our main articles about Wi-Fi and Bluetooth.

How Ethernet works

Photo: A typical Ethernet networking cable.

Not so long ago, computers were all made by different companies, worked in
different ways, and couldn't communicate with one another. Often, they didn't
even have the same sorts of plugs and sockets on their cases! During the
1980s and 1990s, everything became much more standardized and it's now
possible to connect virtually any machine to any other and get them
exchanging data without too much effort. That's largely because most
networks now use the same system, called Ethernet. It was developed in May
1973 by US computer engineer Dr Robert ("Bob") Metcalfe (1946–), who
went on to found 3Com and later became a well-known computer-industry
pundit (perhaps, somewhat unfairly, best known for predicting a spectacular
collapse of the Internet in 1995 that never actually occurred).

As Metcalfe originally designed it, Ethernet was based on three very simple
ideas. First, computers would connect through the "ether" (a semi-serious,
semi-scientific name for the void of emptiness that separates them) using
standard coaxial cable (wires like the ones used in
a television antenna connection, made up of concentric metal layers). In
Ethernet-speak, the physical connection between the nodes (computers and
other devices) on the network is also known as the medium. Things have
moved on quite a bit since the early 1970s and the medium is now just as
often a wireless radio link (you've probably heard of Wi-Fi, which is the
wireless version of Ethernet). Second, all the computers and devices on a
network would stay silent except for when they were sending or receiving
messages. Finally, when they wanted to communicate, they'd do so by
breaking up messages into small packets of data and sending them around
the network by a highly efficient method known as packet
switching (discussed in much more detail in our article on the Internet).

If one machine wants to send a message to another machine on an Ethernet
network, it goes through a process a bit like sending a letter. The message
has to be packaged in a standard format called a frame (a bit like the
envelope that contains a letter). The frame includes a standard header, the
address of the device on the network it's intended for (like the address on an
envelope), the address of the machine that sent it (like an envelope's return-to
or sender's address), an indication of how much data it contains, the data
itself, some padding, and some error checking information at the end (used to
do a quick check on whether the data has transmitted correctly). Unlike a
letter, which goes only to the recipient, the frame goes to every machine and
device on the network. Each machine reads the destination address to figure
out whether the frame is intended for them. If so, they act on it; if not, they
ignore it. Any machine on the network can transmit messages through the
ether at any time, but problems will occur if two or more machines try to talk at
once (known as a collision). If that happens, the machines all fall silent for a
random period of time before trying again. Eventually, one will find the ether is
clear and get its message out first, followed by the other, so all messages will
get through eventually. Typical Ethernet equipment can handle thousands of
frames per second. In tech-speak, this method of using the network is
called carrier sense multiple access with collision detection (CSMA/CD):
that's a fancy way of saying that the nodes do their best to transmit when the
ether is clear ("carrier sense"), they can all theoretically send or receive at any
time ("multiple access"), and they have a way of sorting out the problem if two
happen to transmit at exactly the same time ("collision detection").
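The frame layout just described (destination and source addresses, a length field, the data with padding, and an error check at the end) can be sketched in Python. This is a simplified stand-in rather than a byte-accurate Ethernet implementation; the addresses are invented, and zlib.crc32 stands in for the real frame check sequence:

```python
import struct
import zlib

def make_frame(dst_mac, src_mac, payload):
    """Package data as described in the text: destination address,
    source address, length, data (padded), then an error check."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))   # 6 bytes
    src = bytes.fromhex(src_mac.replace(":", ""))   # 6 bytes
    data = payload.ljust(46, b"\x00")               # pad to the 46-byte minimum
    header = dst + src + struct.pack("!H", len(payload))
    fcs = struct.pack("!I", zlib.crc32(header + data))
    return header + data + fcs

def frame_ok(frame):
    """The receiver recomputes the check and compares it to the trailer."""
    body, fcs = frame[:-4], frame[-4:]
    return struct.pack("!I", zlib.crc32(body)) == fcs

frame = make_frame("00:1b:44:11:3a:b7", "a4:5e:60:f1:2b:9c", b"Hello")
print(len(frame))        # 64: 6 + 6 + 2 + 46 + 4, the minimum Ethernet frame size
print(frame_ok(frame))   # True
damaged = frame[:20] + b"X" + frame[21:]
print(frame_ok(damaged)) # False: the error check catches the flipped byte
```

A receiver that computes a different check value from the one in the trailer knows the frame was damaged in transit and can simply discard it.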

Find out more

 An interview with Bob Metcalfe: Manek Dubash offers this
fascinating interview with the Ethernet pioneer to mark
40 years of his world-changing invention.
 Oral history of Bob Metcalfe: A much longer (almost three-
hour) oral history interview with Len Shustek of The
Computer History Museum.

How do computer networks detect errors?


Suppose you order a book by mail order and it arrives, a few days later, with
the packaging ripped and the cover slightly creased or torn. That's a kind of
error of transmission. Fortunately, since a book is analog information, a bit of
damage to the cover doesn't stop you appreciating the story the book tells or
the information it contains. But what if you're downloading an ebook (electronic
book) and there's a blip in transmission so some of the data goes astray?
Maybe you won't be able to open the book file at all, rendering the whole thing
useless. Or what if a bank is sending an electronic payment to someone and
the data it transmits across its network is corrupted so the account number or
the amount to be paid gets scrambled? What if a military control center sends
a signal to a nuclear missile installation and a blip on the network alters the
data it contains so, instead of "power down," the rocket is told to "launch
immediately"? The point is a simple one: when we send data over computer
networks, we need to be absolutely certain that the information received is
identical to the information transmitted. But how can we do this when vast
amounts of data are being sent around the world all the time?

Artwork: Checking the integrity of a large download with an MD5 code: If you've ever downloaded
a Linux distribution (anything from a few hundred megabytes to several gigabytes of data), you've probably
done this—or you certainly should have done! On the original download page, you'll be given an MD5
checksum code matching the file you want to download. Once your download is complete, you simply run the
file through an MD5 calculator program (here I'm using winMd5sum) to calculate the MD5 code from the data
you've downloaded. If the two MD5 codes match, you can be reasonably confident your file downloaded
without any mistakes.

Computers and computer networks have all kinds of ingenious ways of
checking the information they send. One simple method is to send everything
twice and compare the two sets of data that are received; if they don't match,
you can ask for all the data to be resent. That's laborious and inefficient—
doubling the time it takes to transmit information—and there are far better
methods of keeping data straight. One of the simplest is called parity
checking (or parity bit checking). Suppose you're sending strings of binary
digits (bits, made up of zeros and ones) over a network. Every time you send
seven bits, you add up the number of ones you've sent. If you've sent an odd
number of ones (1, 3, 5, or 7 of them), you then send an extra 1 to confirm
this; if you've sent an even number of ones (0, 2, 4, or 6), you send a zero
instead. The receiver can do the same sums with the data it sees, check the
parity bit, and so detect if a mistake has been made. Unfortunately, with
simple parity checking, it's not possible to say where an error has been made
or to correct it on the spot, but the receiver can at least spot a batch of
incorrect data and ask for it to be sent again.
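The parity scheme described above takes only a few lines of code. A sketch in Python, using strings of '0' and '1' characters for readability:

```python
def add_parity(bits7):
    """Append a parity bit to seven data bits: a 1 if the count of
    ones is odd, a 0 if it is even (so the total is always even)."""
    return bits7 + ("1" if bits7.count("1") % 2 else "0")

def check_parity(bits8):
    """Receiver's check: an odd number of ones means an error crept in."""
    return bits8.count("1") % 2 == 0

sent = add_parity("1011001")     # four ones (even), so the parity bit is 0
print(sent)                      # 10110010
print(check_parity(sent))        # True
print(check_parity("00110010"))  # False: the first bit flipped in transit
```

As the text says, a single parity bit can only flag that something went wrong, not where; and if two bits flip at once, the errors cancel out and slip through unnoticed.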

More sophisticated ways of detecting errors are usually variants
of checksums where, every so often, you add together the numbers you've
previously sent and then transmit the total (the sum) as a check. The receiver
does the same calculation and compares it with the checksum. But what if
multiple errors occur (say, the checksum is transmitted incorrectly as well as
some of the original data), so they cancel one another out and go undetected?
There are much more sophisticated versions of checksums where, instead of
simply adding the data you've transmitted, you process it in more complex
ways that make it far harder for errors to slip through. When you download
large files, for example, you'll sometimes be given what's called an MD5 hash
code to check, which is a long number (often in hexadecimal or base 16
format, made up of the numbers 0–9 and the letters A–F) computed from the
original file by a complex mathematical algorithm. A typical MD5 hash code
would be 7b7c56c74008da7d97bd49669c8a045d or
ef6a998ac98a440b6e58bed8e7a412db. Once you've downloaded your file, you simply run it against a
hash-checking program to generate a code the same way. Comparing the
codes, you can see if the file downloaded correctly and, if not, try again. Some
forms of error checking not only allow you to detect errors but make it possible
to correct them without retransmitting all the data. Among the best known
are Hamming codes, invented in 1950 by US mathematician Richard
Hamming to improve the accuracy and reliability of all kinds of data
transmissions. They work by using more error detection bits so that the
position of an error in transmitted data can be figured out and not just the
simple fact that an error has occurred.
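The MD5 check shown in the artwork is easy to reproduce with Python's standard hashlib library. The "downloaded" data below is a short stand-in string rather than a real distribution image, and the expected code is computed locally for the sake of the example (in practice it comes from the download page):

```python
import hashlib

def md5_of(data):
    """The hexadecimal MD5 code of a chunk of bytes."""
    return hashlib.md5(data).hexdigest()

downloaded = b"pretend this is a large Linux distribution image"
expected = md5_of(downloaded)         # what the download page would publish

print(md5_of(downloaded) == expected) # True: the file arrived intact
corrupted = downloaded[:-1] + b"!"    # one byte damaged in transit
print(md5_of(corrupted) == expected)  # False: download it again
```

The same pattern works with stronger algorithms such as SHA-256 (hashlib.sha256), which modern download pages increasingly prefer because MD5 collisions can be engineered deliberately.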
