Handout 4 - Network Architecture
1. Introduction
In the first three handouts, we have looked at the physical aspects of a network. We
learned about cables and the various methods of connecting them so that we can share
data. Now that we can physically link computers, we need to learn how to gain access
to the wires and cables.
In this section, we will first examine how data is put together before it is sent on to the
wires of a computer network. Next, we examine the three principal methods used to
access the wires. The first method, called contention, is based on the principle of "first
come, first served." The second method, token passing, is based on the principle of
waiting to take turns. The third method, demand priority, is relatively new and is
based on prioritising access to the network. Last, we examine two of the most
common network systems (Ethernet and Token Ring).
2. Packets
Data usually exists as rather large files. However, networks cannot operate if
computers put large amounts of data on the cable at the same time. If a computer
sends large amounts of data it can cause other computers to wait (increasing the
frustration of the other users) while the data is being moved. There are two reasons
why putting large chunks of data on the cable at one time slows down the network:
Large amounts of data sent as one large unit tie up the network and make
timely interaction and communications impossible because one computer is
flooding the cable with data.
The impact of retransmitting large units of data further multiplies network
traffic.
These effects are minimized when the large data units are reformatted into smaller
packages. This way, only a small section of data is affected, and, therefore, only a
small amount of data must be retransmitted, making it relatively easy to recover from
the error. These packages are commonly called packets or frames, and are the basic
building blocks of network data communications.
When the operating system at the sending computer breaks the data into packets, it
adds special control information to each frame. This makes it possible to:
Send the original, disassembled data in small chunks
Reassemble the data in the proper order when it reaches its destination
Check the data for errors after it has been reassembled
Exactly what control information is added can vary, but all packets include at least the
source address, the data and the destination address. There are three different ways in
which packets can be addressed:
Unicast: packet is addressed to a single destination
Multicast: packet is addressed simultaneously to multiple destinations
Broadcast: packet is sent simultaneously to all stations on the network
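The way a sending computer breaks data into addressed, sequence-numbered packets and the receiver reassembles them can be sketched in a few lines of Python. This is an illustrative model only: the field names, packet size and addresses are assumptions for demonstration, not any real protocol's format.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # source address
    dst: str        # destination address
    seq: int        # sequence number, used to reassemble in the proper order
    payload: bytes  # a small chunk of the original data

def fragment(data: bytes, src: str, dst: str, size: int = 4) -> list[Packet]:
    """Break a large block of data into small, fixed-size packets."""
    return [Packet(src, dst, seq, data[i:i + size])
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets: list[Packet]) -> bytes:
    """Rebuild the original data, even if packets arrive out of order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

packets = fragment(b"hello, network!", src="1.2", dst="2.1")
packets.reverse()  # simulate out-of-order arrival
assert reassemble(packets) == b"hello, network!"
```

Because each packet carries its own addresses and sequence number, a lost or corrupted packet can be retransmitted individually instead of resending the whole file.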
Communication using smaller packets of data is known as packet switching, whereas
if a direct dedicated communication line is used for the duration of the transmission it
is known as circuit switching.
Connection-oriented (CO) packet switching has more ‘overheads’: before
transmission can start, time must be spent setting up a virtual connection across the
network, and after it has finished more time must be spent closing the connection.
However, once transmission has commenced, bandwidth can be reserved, so it is
possible to guarantee higher data rates, which is not possible with connectionless
(CL) packet switching. Therefore CO packet switching is well suited to real-time
applications such as streaming of video and/or sound. On the other hand, CL packet
switching, in which each packet is routed independently with no prior set-up, is
simpler, has fewer overheads, and allows multicast and broadcast addressing.
3. Routing
A routing table is stored in the RAM of a network device such as a bridge, switch or
router, and contains information about where to forward data to, based on its
destination address. For example, Figure 1 shows a simple network consisting of 3
switches and 6 computers. Each switch connects a different sub-network. Each
computer has an address consisting of a network number followed by a computer
number (e.g. computer A is in network number 1 and has computer number 2). The
routing table is shown for switch 3, and indicates where the next destination (or next
hop) should be for reaching each address on the network.
For instance, if computer C sends data to computer A, then switch 3 will first look at
the destination address (1, 2), and then look up this address in its routing table. It finds
that the next hop for this address is port 5 of the switch, and so sends the data to this
port and no other.
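The lookup described in this example can be modelled as a simple dictionary that maps destination addresses to outgoing ports. The entry for computer A (address (1, 2) reached via port 5) comes from the text; the remaining addresses and port numbers are illustrative assumptions, since Figure 1 is not reproduced here.

```python
# A sketch of the routing table of switch 3. Keys are destination addresses
# (network number, computer number); values are the outgoing port for the
# next hop. Only the (1, 2) -> port 5 entry is given in the text; the rest
# are hypothetical.
routing_table = {
    (1, 1): 5,  # network 1 is reached via port 5
    (1, 2): 5,  # computer A
    (2, 1): 6,  # network 2 is reached via port 6
    (2, 2): 6,
}

def next_hop(destination: tuple[int, int]) -> int:
    """Look up the outgoing port for a packet's destination address."""
    return routing_table[destination]

assert next_hop((1, 2)) == 5  # data from C to A leaves on port 5 only
```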
The next question is how the data in the routing table is determined. We will now
look at some of the common strategies used to route packets in packet switching
networks. We will first survey some of the key characteristics of such strategies, and
then examine some specific routing strategies.
The primary function of a packet switching network is to accept packets from a source
station and deliver them to a destination station. To accomplish this a route through
the network must be established. Often, more than one route is possible. Thus, the
‘best’ route must be determined. There are a number of requirements that this decision
should take into account:
Correctness
Simplicity
Robustness
Stability
Fairness
Optimality
Efficiency
The first two requirements are straightforward: correctness means that the route must
lead to the correct destination; and simplicity means that the algorithm used to make
the decision should not be too complex. Robustness has to do with the ability of the
network to cope with network failures and overloads. Ideally, the network should
react to such failures without losing packets and without breaking virtual circuits.
Stability means that the network should not overreact to such failures: the
performance of the network should remain reasonably stable over time. A tradeoff
exists between fairness and optimality. The optimal route for one packet is the
shortest route (measured by some performance criterion). However, giving one packet
its optimal route may adversely affect the delivery of other packets. Fairness means
that overall most packets should have a reasonable performance. Finally, any routing
strategy involves some overheads and processing to calculate the best routes.
Efficiency means that the benefits of these overheads should outweigh their cost.
With these requirements in mind, we are now in a position to assess the various
design elements that contribute to a routing strategy. Table 1 lists these elements.
Figure 2 – A weighted graph illustrating network connectivity
Two key characteristics of the routing decision are when and where it is made. The
decision time is determined by whether we are using a CO network or a CL network.
For CL networks the route is established independently for each packet. For CO
networks the route is established once, at the time the virtual circuit is set up. The
decision place refers to which node(s) are responsible for the routing decision. The
most common technique is distributed routing, in which each node has the
responsibility to forward each packet as it arrives. For centralised routing all routing
decisions are made by a single designated node. The danger of this approach is that if
this node is damaged or lost the operation of the network will cease. In source routing,
the routing path is established by the node that is sending the packet.
Almost all routing strategies will make their routing decisions based upon some
information about the state of the network. The network information source refers to
where this information comes from, and the network information update timing refers
to how often this information is updated. Local information means just using
information from outgoing links from the current node. An adjacent information
source means any node which has a direct connection to the current node. The update
timing of a routing strategy can be continuous (updating all the time), periodic (every
t seconds), or occur when there is a major load or topology change.
Now that we are familiar with some of the characteristics and elements of routing
strategies, we will examine some specific examples.
Fixed routing
Fixed routing is a simple scheme in which the route between each pair of nodes is
computed in advance and does not change. It works well in a reliable network with a
stable load. However, it does not respond to network failures, or changes in network
load (e.g. congestion).
Flooding
With flooding, all possible routes between the source and the destination are tried.
Therefore so long as a path exists at least one packet will reach the destination. This
means that flooding is a highly robust technique, and is sometimes used to send
emergency information. Furthermore, at least one packet will have used the least cost
route. This can make it useful for initialising routing tables with least cost routes.
Another property of flooding is that every node on the network will be visited by a
packet. This means that flooding can be used to propagate important information on
the network, such as routing tables.
A major disadvantage of flooding is the high network traffic that it generates. For this
reason it is rarely used on its own, but as described above it can be a useful technique
when used in combination with other routing strategies.
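A minimal sketch of flooding, using a small illustrative network (the topology below is an assumption for demonstration). It shows the two properties described above: every node is visited, and many more copies are transmitted than are strictly needed.

```python
from collections import deque

# An illustrative network: each node maps to the set of its neighbours.
network = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}

def flood(source: str) -> tuple[set, int]:
    """Flood a packet from `source`. Each node forwards one copy to every
    neighbour except the one the packet arrived from, and silently drops
    duplicates it has already seen. Returns (nodes reached, copies sent)."""
    seen = {source}
    copies = 0
    queue = deque([(source, None)])  # (current node, node it arrived from)
    while queue:
        node, came_from = queue.popleft()
        for neighbour in network[node]:
            if neighbour == came_from:
                continue
            copies += 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, node))
    return seen, copies

reached, copies = flood("A")
assert reached == {"A", "B", "C", "D"}  # every node is visited
assert copies > len(network) - 1        # far more copies than strictly needed
```

Dropping duplicates is what keeps this sketch from flooding forever; real flooding schemes achieve the same effect with hop counters or packet identifiers.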
Random routing
Random routing has the simplicity and robustness of flooding with far less traffic
load. With random routing, instead of each node forwarding packets to all outgoing
links, the node selects only one link for transmission. This link is chosen at random,
excluding the link on which the packet arrived. Often the decision is completely
random, but a refinement of this technique is to apply a probability to each link. This
probability could be based on some performance criterion, such as throughput.
Like flooding, random routing requires no information about the state of the network.
The traffic generated is much reduced compared to flooding. However, unlike
flooding, random routing is not guaranteed to find the shortest route from the source
to the destination.
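Random routing can be sketched in the same style. The network below is again an illustrative assumption; note that the packet does arrive, but usually not by the shortest route.

```python
import random

# An illustrative network: each node maps to a list of its neighbours.
network = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}

def random_route(source: str, destination: str, max_hops: int = 10_000):
    """Forward the packet along one randomly chosen outgoing link at each
    node, excluding the link it arrived on. Returns the hop count, or
    None if the packet has not arrived within `max_hops`."""
    node, came_from = source, None
    for hop in range(1, max_hops + 1):
        choices = [n for n in network[node] if n != came_from] or network[node]
        node, came_from = random.choice(choices), node
        if node == destination:
            return hop
    return None

random.seed(42)  # fixed seed so the sketch is repeatable
hops = random_route("A", "D")
assert hops is not None  # a connected network is eventually traversed
assert hops >= 2         # the shortest A-D route is 2 hops; this one may be longer
```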
Adaptive routing
In almost all packet switching networks some form of adaptive routing is used. The
term adaptive routing means that the routing decisions that are made change as
conditions on the network change. The two principal factors that can influence
changes in routing decisions are failure of a node or a link, and congestion (if a
particular link has a heavy load it is desirable to route packets away from that link).
For adaptive routing to be possible, information about the state of the network must
be exchanged among the nodes. This has a number of disadvantages. First, the routing
decision is more complex, thus increasing the processing overheads at each node.
Second, the information that is used may not be up-to-date. To get up-to-date
information requires continuous exchange of routing information between nodes,
thus increasing network traffic. Therefore there is a tradeoff between quality of
information and network traffic overheads. Finally, it is important that an adaptive
strategy does not react too slowly or too quickly to changes. If it reacts too slowly it
will not be useful. But if it reacts too quickly it may result in an oscillation, in which
all network traffic makes the same change of route at the same time.
However, despite these dangers, adaptive routing strategies generally offer real
benefits in performance, hence their popularity. Two examples of adaptive routing
strategies are distance-vector routing and link-state routing.
Distance-vector routing
In the distance-vector technique, each node maintains a table giving the cost
(distance) of the best known route to every destination, and periodically exchanges
this table with its immediate neighbours. When a node receives a neighbour's table, it
updates its own: the best route to a destination is via the neighbour offering the
lowest total cost.
Link-state routing
In the link-state technique, each network device periodically tests the speed of all of
its links. It then broadcasts this information to the entire network. Each device can
therefore construct a graph with weighted edges that represents the network
connectivity and performance (e.g. see Figure 2). The device can then use a shortest
path algorithm such as Dijkstra’s algorithm to compute the best route for a packet to
take.
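Below is a sketch of the shortest-path computation a link-state device might perform, using Dijkstra's algorithm. The weighted graph is an assumption for illustration, since the actual weights of Figure 2 are not reproduced here.

```python
import heapq

# An assumed weighted graph of link costs, in the spirit of Figure 2.
links = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}

def dijkstra(source: str, destination: str):
    """Dijkstra's algorithm: repeatedly settle the unvisited node with the
    lowest known cost, relaxing the costs of its neighbours.
    Returns (total cost, route taken)."""
    queue = [(0, source, [source])]  # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in links[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    raise ValueError("destination unreachable")

cost, path = dijkstra("A", "D")
assert cost == 4 and path == ["A", "B", "C", "D"]  # cheaper than direct A-C-D (cost 6)
```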
4. Access methods
In networking, to access a resource is to be able to use that resource. The set of rules
that defines how a computer puts data onto the network cable and takes data from the
cable is called an access method. Once data is moving on the network, access methods
help to regulate the flow of network traffic.
If data is to be sent over the network from one user to another, or accessed from
a server, there must be some way for the data to access the cable without running into
other data (a collision). And the receiving computer must have reasonable assurance
that the data has not been destroyed in a data collision during transmission. Access
methods need to be consistent in the way they handle data. If different computers
were to use different access methods, the network would fail because some methods
would dominate the cable. Access methods prevent computers from gaining
simultaneous access to the cable. By making sure that only one computer at a time can
put data on the network cable, access methods ensure that the sending and receiving
of network data is an orderly process.
There are three major access methods: carrier-sense multiple-access, token passing
and demand priority.
Carrier sense multiple access methods can be divided into two subtypes: carrier sense
multiple access with collision detection (CSMA/CD) and carrier sense multiple access
with collision avoidance (CSMA/CA).
In CSMA/CD, each computer on the network, including clients and servers, checks
the cable for network traffic. Only when a computer "senses" that the cable is free and
that there is no traffic on the cable can it send data. Once the computer has transmitted
data on the cable, no other computer can transmit data until the original data has
reached its destination and the cable is free again. Remember, if two or more
computers happen to send data at exactly the same time, there will be a data collision.
When that happens, the two computers involved stop transmitting for a random period
of time and then attempt to retransmit. Each computer determines its own waiting
period; this reduces the chance that the computers will once again transmit
simultaneously. The waiting time is calculated using an algorithm known as
exponential backoff: the first time a collision occurs, each computer waits a random
time t1, where 0 ≤ t1 ≤ d (d is a constant). If a second collision occurs with the same
packet, the wait time will be t2, where 0 ≤ t2 ≤ 2d. The third time the wait time will
be t3, where 0 ≤ t3 ≤ 4d, and so on: the maximum waiting time is doubled after each
successive collision. This doubling continues for a maximum of 10 times, when the
maximum waiting time reaches a peak of 2¹⁰d (= 1024d). After 16 successive
collisions, transmission of the packet is aborted and an error is reported.
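The backoff rule just described can be expressed as a short function. This is a sketch of the simplified scheme given above, not of the full IEEE 802.3 algorithm (which works in discrete slot times rather than a continuous window):

```python
import random

D = 1.0  # the constant d, in arbitrary time units

def backoff_wait(collisions: int) -> float:
    """Random waiting time after the nth successive collision: the window
    starts at d, doubles with each collision, stops doubling after 10
    collisions, and transmission is aborted after 16."""
    if collisions > 16:
        raise RuntimeError("transmission aborted: too many collisions")
    max_wait = (2 ** min(collisions - 1, 10)) * D
    return random.uniform(0.0, max_wait)

# The upper bound of the window after collisions 1, 2, 3, ... is
# d, 2d, 4d, ..., capped at 1024d:
assert (2 ** min(0, 10)) * D == D
assert (2 ** min(2, 10)) * D == 4 * D
assert (2 ** min(12, 10)) * D == 1024 * D
```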
With CSMA/CD, the more computers there are on the network, the more network
traffic there will be. With more traffic, collisions tend to increase, which slows the
network down, so CSMA/CD can be a slow-access method. After each collision, both
computers will have to try to retransmit their data. If the network is very busy, there is
a chance that the attempts by both computers will result in collisions with packets
from other computers on the network. If this happens, four computers (the two
original computers and the two computers whose transmitted packets collided with
the original computer's retransmitted packets) will have to attempt to retransmit.
These retransmissions can slow the network to a near standstill.
CSMA/CA is the least popular of the major access methods. In CSMA/CA, each
computer signals its intent to transmit before it actually transmits data. In this way,
computers sense when a collision might occur; this allows them to avoid transmission
collisions. Unfortunately, broadcasting the intent to transmit data increases the
amount of traffic on the cable and slows down network performance.
CSMA/CA is not commonly used in wired networks, but it has become the standard
for wireless networking. We will return to wireless networking standards later in this
handout.
A collision domain is a part of a LAN (or an entire LAN) where two computers
transmitting at the same time will cause a collision. Because switches, bridges and
routers do not forward unnecessary packets, the different ports of these devices
operate in different collision domains. Repeaters and hubs broadcast all packets to all
ports, so their ports are in the same collision domain.
Figure 3 shows a simple network with one repeater (‘R’), two hubs, a switch and 10
computers (‘C’). Because hubs broadcast all packets to all ports, if computers 2 and 4
attempted to send at the same time there would be a collision, hence they are in the
same collision domain. However, because a switch will only forward a packet if it is
intended for the other subnet, every port of the switch is in a separate collision
domain. So if computer 2 tried to send to computer 4 at the same time as computer 7
tried to send to computer 10, there would be no collision.
In token passing, a special packet known as a token circulates around the ring from
computer to computer. When any computer on the ring needs to send data across the
network, it must wait for a free token. When a free token is detected, the computer
will take control of it if the computer has data to send. The computer can now
transmit data.
Data is transmitted in frames, which consist of the data to be sent, plus some
additional information, such as addressing.
While the token is in use by one computer, other computers cannot transmit data.
Because only one computer at a time can use the token, no contention and no collision
take place, and no time is spent waiting for computers to resend tokens due to network
traffic on the cable.
4.4 Access methods summary
The following table summarises the major features of each access method:

Feature/function        CSMA/CD          CSMA/CA          Token passing    Demand priority
Type of communication   Broadcast based  Broadcast based  Token based      Hub based
Type of access method   Contention       Contention       Non-contention   Contention
5. Token ring
The token ring architecture was developed in the mid-1980s by IBM. It is the
preferred method of networking by IBM and is therefore found primarily in large
IBM mini- and mainframe installations.
We introduced the token ring architecture in Handout 1. The table below gives a
summary of the features of token ring LANs.
Feature Description
Physical topology Star
Logical topology Ring
Type of communication Baseband
Access method Token passing
Transfer speeds 4-16 Mbps
Cable type STP or UTP
Hardware for token ring networks is centred on the hub, which houses the actual ring.
This combination of a logical ring and a physical star topology is sometimes referred
to as a “star-shaped ring”. A token ring network can have multiple hubs. STP or UTP
cabling connects the computers to the hubs. Fibre-optic cable, together with repeaters,
can be used to extend the range of token ring networks. Token ring networks are not
that commonly used these days.
6. Ethernet
Ethernet has become the most popular way of networking desktop computers and is
still very commonly used today in both small and large network environments.
Standard specifications for Ethernet networks are produced by the Institute of
Electrical and Electronics Engineers (IEEE) in the USA, and there have been a large
number over the years. The original Ethernet standard used a bus topology,
transmitted at 10 Mbps, and relied on CSMA/CD to regulate traffic on the main cable
segment. The Ethernet media was passive, which means it required no power source
of its own and thus would not fail unless the media was physically cut or improperly
terminated. More recent Ethernet standards have different specifications.
Data on an Ethernet network is transmitted in frames. Each frame begins with a
7-byte preamble. Each byte has the identical pattern 10101010, which is used to help
the receiving computer synchronise with the sender. This is followed by a 1-byte
start frame delimiter (SFD), which has the pattern 10101011. Next are the destination
and source addresses, which take up 6 bytes each. The data can be of variable length
(46-1500 bytes), so before the data itself there is a 2-byte field that indicates the
length of the following data field. Finally there is a 4-byte frame check sequence,
used for cyclic redundancy checking. Therefore the minimum and maximum lengths
of an Ethernet frame are 72 bytes and 1526 bytes respectively.
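The frame-length arithmetic above can be checked with a few lines of Python:

```python
# Sizes (in bytes) of the fields of an original Ethernet frame.
PREAMBLE, SFD, ADDRESS, LENGTH, FCS = 7, 1, 6, 2, 4
OVERHEAD = PREAMBLE + SFD + 2 * ADDRESS + LENGTH + FCS  # 26 bytes in total

def frame_size(data_bytes: int) -> int:
    """Total size of a frame carrying `data_bytes` of data (46-1500 bytes)."""
    if not 46 <= data_bytes <= 1500:
        raise ValueError("data field must be 46-1500 bytes")
    return OVERHEAD + data_bytes

assert frame_size(46) == 72      # minimum frame length
assert frame_size(1500) == 1526  # maximum frame length
```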
Although there have been a number of different standards for the Ethernet architecture
over the years, a number of features have remained the same. The table below
summarises the general features of Ethernet LANs.
Feature Description
Traditional topology Linear bus
Other topologies Star bus
Type of communication Baseband
Access method CSMA/CD
Transfer speeds 10/100/1000 Mbps
Cable type Thicknet/thinnet coaxial or UTP
The first phase of Ethernet standards had a transmission speed of 10Mbps. Three of
the most common of these are known as 10Base2, 10Base5 and 10BaseT. The
following table summarises some of the features of each specification.
ETHERNET STANDARDS

                           10Base2            10Base5            10BaseT
Topology                   Bus                Bus                Star bus
Cable type                 Thinnet coaxial    Thicknet coaxial   UTP (Cat. 3 or higher)
Simplex/half/full duplex   Half duplex        Half duplex        Half duplex
Data encoding              Manchester,        Manchester,        Manchester,
                           asynchronous       asynchronous       asynchronous
Connector                  BNC                DIX or AUI         RJ45
Max. segment length        185 metres         500 metres         100 metres
Note that although the 10BaseT standard uses a physical star-bus topology, it still
uses a logical bus topology. This combination is sometimes referred to as a “star-
shaped bus”. In addition to these three, a number of standards existed for use with
fibre-optic cabling, namely 10BaseFL, 10BaseFB and 10BaseFP.
The next phase of Ethernet standards was known as fast Ethernet, and increased
transmission speed up to 100Mbps. Fast Ethernet is probably the most common
standard in use today. The Manchester encoding technique used in the original
Ethernet standards is not well suited to high frequency transmission so new encoding
techniques were developed for fast Ethernet networks. Three of the most common fast
Ethernet standards are summarised below, although others do exist (e.g. 100BaseT2).
The most recent phase of Ethernet standards has increased transmission speeds up to
1000Mbps, although sometimes at the expense of some other features, such as
maximum segment length. Because of the transmission speed, it has become known
as Gigabit Ethernet, and the most common standards are summarised below.
Finally, the IEEE has also published a number of standards for wireless Ethernet
networks. The original standard, known simply as 802.11, was very slow (around
2 Mbps) and was quickly superseded by more efficient standards. 802.11 now usually
refers to the family of standards that followed this original standard.
                        802.11b   802.11a   802.11g
Max. distance indoors   60m       12m       20m
Broadcast frequency     2.4GHz    5GHz      2.4GHz
The CSMA/CA access method has become the standard access method for use in
wireless networking.
Summary of Key Points
Ethernet and token ring are two of the most popular network architectures
There have been many standards published by the IEEE for Ethernet
networks: the original standards had a transmission speed of 10Mbps; fast
Ethernet has a speed of 100Mbps; and Gigabit Ethernet has a speed of
1000Mbps
Wireless networking is becoming increasingly popular – the three wireless
Ethernet standards are known as 802.11b, 802.11a and 802.11g