Unit V

Network design and management


Chapter 8

Technical Considerations in Network Design and Planning

8.1 Overview of the Network Design Process


Network design is a painstaking, iterative process. The first step of this
process is to define the requirements the network must satisfy. This
involves collecting information on anticipated traffic loads, traffic types
(e.g., data, video, etc.), and sources and destinations of traffic. This
information is used, in turn, to estimate the network capacity needed.
These requirements are used as input to the second step, the design
process. In this step, various design techniques and algorithms are used
to produce a network topology. The design process involves specifying
link and node placements, traffic routing paths, and equipment sizing.
After a candidate network solution is developed, it must be analyzed to
determine its cost, reliability, and delay characteristics. This third step is
called performance analysis. When these three steps are completed, the
first design iteration is finished. Then the entire process is repeated, either
with revised input data (e.g., using revised traffic estimates, etc.) or by
using a new design approach. This process is summarized in Figure 8.1.
[Figure 8.1 Overview of network design process. Design inputs (traffic requirements, link and node costs, design parameters, utilization constraints) feed the design process (link and node selection, traffic routing, link sizing), which produces the design outputs (a network topology). Performance analysis (cost, reliability, delay, utilization) identifies good designs (low cost, high performance, robust, easy to manage); fine-tuning (refining link sizing, routing, and node placement) yields the final design, or the design inputs are modified and the cycle repeats.]

The basic idea of this iterative process is to produce a variety of
networks from which to choose. Unfortunately, for most realistic design
problems, it is not possible from a mathematical perspective to know
what the optimal network should look like. To compensate for this inability

to derive an analytically perfect solution, the network designer must use
a judicious form of trial and error to determine his or her best options.
After surveying a variety of designs, the designer can select the one that
appears to provide the best performance at the lowest cost.
Because network design involves exploring as many alternatives as
possible, automated heuristic design tools are often used to produce quick,
approximate solutions. Once the overall topology and major design aspects
have been decided, it may be appropriate to use additional, more exact
solution techniques to refine the details of the network design. This fine-
tuning represents the final stage of the network design process.
This chapter takes the reader through each of the major steps involved
in network design: requirements analysis, topological design, and perfor-
mance analysis.

8.2 Data Collection


The network requirements must be known before the network can be
designed. However, it is not easy to collect all the information needed to
design a network. Often, this is one of the most time-consuming aspects
of the design process. Data must be collected to determine the sources
and destinations of traffic, the volume and character of the traffic flows,
and the types and associated costs of the line facilities to transport the
traffic. It is rare that these statistics are readily available in the succinct
summary form needed by network design algorithms and procedures.
Most typically, a considerable volume of data is collected from a variety
of sources. For existing networks, it may be possible to collect information on:

 Session type(s)
 Length of session(s): average, minimum, and maximum time
 Source and destination of data transmissions
 Number of packets, characters sent, etc.
 Application type
 Time of session
 Traffic direction and routing

To relate these findings to the user requirements, interviews and data
collection activities may be needed to determine the following:

 Required response time


 Current and planned services and applications to be supported
(i.e., e-mail, database transactions and updates, file transfer, etc.)
 Anticipated future services and applications needs
 Communications protocols, including network management proto-
cols, to be supported
 Network management functions to be supported
 Network reliability requirements
 Implementation and maintenance budget

This data must then be related, culled, and summarized if it is to be useful.
Ultimately, the traffic data must be consolidated into a single estimate of
the source–destination traffic for each node. Between each source and
destination of traffic, a line speed must be selected that will have sufficient
capacity1 to transport the traffic requirement.
The line speed and the line endpoints are used, in turn, to determine
the line cost. Usually, only one line speed is used when designing a
network (or a significant sub-network portion), because multiple line
speeds complicate the network manageability and may introduce incom-
patibilities in transporting the data. Table 8.1 illustrates the data in the
summarized form needed by most network design procedures.
Table 8.1 Sample Traffic and Cost Data Needed for Network Design

  Traffic      Traffic        Estimated         Usable Line        Estimated Line
  Source       Destination    Traffic (Bytes)   Capacity (Bytes)   Cost ($ Monthly)
  (1)          (2)            (3)               (4)                (5)
  City A       City B         80,000            1,000,000 (T1)     1,000.00
  City A       City C         770,000           1,000,000 (T1)     3,500.00
  City B       City N         500,000           1,000,000 (T1)     6,006.00
  City B       City C         30,500            1,000,000 (T1)     5,135.00

Thus far the issue of node2 costs has not been addressed. The reason
for this is that these costs seldom play a major role in the design of the
network topology. After a network has been designed, the node costs are
factored into the total network cost (see Section 8.6.4).
When collecting traffic and cost data, it is helpful to maintain a
perspective on the level of design needed. The level of design — that is,
be it high-level or finely detailed — helps to determine the amount of
data that should be collected, and when more detailed data is required
and when it is not. It may be necessary to develop multiple views of the
traffic requirements and candidate node sets, in order to perform sensitivity
analysis. It is easier to develop strategies for dealing with missing or
inconsistent data when the design objective is clear.
To the extent that it is practical, the traffic and cost data collected
should be current and representative. In general, it is easier to collect
static data than to collect dynamic data. Static data is data that remains
fairly constant over time, while dynamic data may be highly variable over
time. If the magnitude of the dynamic traffic is large, then it makes sense
to concentrate more effort on trying to estimate it more accurately, because
it may have a substantial impact on the network performance. Wherever
possible, automated tools and methods should be used to collect the data.
However, this may not be feasible, and sometimes the data must be
collected manually. However, a manual process increases the likelihood
of errors and limits the amount of data that can be analyzed.

8.2.1 Automated Data Generation


There may be situations where no data is available on existing traffic and
cost patterns. This may be the case when an entirely new network is
being implemented. When actual data is unavailable, an option is to use
traffic and cost generators. Traffic and cost generators can also be used
to augment actual data (particularly when it is missing or inconsistent) or
to produce data for benchmark studies.

For a traffic and cost matrix similar to Table 8.1, for a network with
n sources and destinations, the number of potential table entries is:3

(n choose 2) = n(n − 1)/2

As stated in [CAHN98]:

There is only one thing certain about a table with 5000 or
10,000 entries. If you create such a table by hand it will
contain thousands and thousands of errors and take
weeks of work.

Thus, there is strong motivation for using automated tools to generate or
augment the traffic and cost data needed to design a network.

8.2.2 Traffic Generators


A traffic generator, as its name implies, is used to automatically produce
a traffic matrix (i.e., the first three columns of Table 8.1) based on a
predetermined model of the traffic flow. Many design tools have traffic
generators built into them. Alternatively, stand-alone software routines are
available that can be used to produce traffic matrices. A traffic generator
can easily produce matrices representative of increasing or decreasing
traffic volumes. Each of these matrices can be used to design a network,
and the results studied to analyze how well a design will handle increasing
traffic loads over time. The traffic generator might also be used to produce
traffic matrices based on a uniform or random traffic model. While these
are not realistic traffic distributions for most networks, they may, none-
theless, provide useful results for benchmarking studies. Other, more
realistic traffic models can be used to produce traffic matrices that more
accurately conform to observed or expected traffic flows. For example, it
may be appropriate to use a model that adjusts traffic flows as a function
of node population, geographic distance between other sites, and antici-
pated link utilization levels. [KERS89] Other models might be based on
the type of traffic (i.e., e-mail, World Wide Web traffic, and client/server
traffic) that is to be carried by the network. The interested reader is referred
to [CAHN98], which lists a number of exemplar software routines for
producing traffic matrices to conform to a variety of model conditions
and assumptions. The major decision when using a traffic generator is
the selection of the traffic flow model that will be used to produce the
traffic matrix.
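As a rough illustration of the idea (not taken from the text), the following Python sketch produces a symmetric traffic matrix from a simple gravity-style model in which the flow between two sites grows with their populations and shrinks with distance; the site populations, coordinates, and scaling constant are all hypothetical.

```python
import math

# Hypothetical site data: name -> (population, x, y). Coordinates are in arbitrary units.
sites = {
    "City A": (500_000, 0, 0),
    "City B": (250_000, 30, 40),
    "City C": (100_000, 80, 10),
}

def gravity_traffic(sites, k=0.001):
    """Return {(src, dst): traffic} using a simple gravity model: traffic is
    proportional to the product of the populations divided by the squared
    distance between the sites (an assumed model, for illustration only)."""
    matrix = {}
    names = sorted(sites)
    for i, src in enumerate(names):
        for dst in names[i + 1:]:
            p1, x1, y1 = sites[src]
            p2, x2, y2 = sites[dst]
            dist = math.hypot(x1 - x2, y1 - y2) or 1.0   # avoid division by zero
            matrix[(src, dst)] = k * p1 * p2 / dist ** 2
    return matrix

for (src, dst), traffic in gravity_traffic(sites).items():
    print(f"{src} -> {dst}: {traffic:,.0f}")
```

Scaling the populations up or down, or swapping in a uniform or random model, yields the family of matrices described above for growth and benchmarking studies.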

8.2.3 Cost Generators and Tariff Data


Cost generators are similar in concept to traffic generators. Cost generators
are used to produce a cost matrix (i.e., columns (1), (2), and (5) in Table
8.1). In an actual design situation, it may be necessary to produce several
cost matrices representing the various line and connectivity options. In
this case, each circuit or line option should be represented in a separate
traffic/cost matrix.
A major challenge in creating cost matrices is that the tariff4 structures
(which determine the costs for lines between two points) are complex,
and contain many anomalies and inconsistencies. For example, tariffs are
not based solely on a simple distance calculation. Thus, it is possible that
a longer circuit may actually cost less than a shorter circuit. Two circuits
of the same length may have different costs, depending on where they
begin and terminate. In the United States, circuits within a LATA5 may
cost less than circuits that begin and end across LATA boundaries. Depend-
ing on the geographic scope of the network, domestic (United States) and
international tariffs may apply. Given the complexities of the tariff struc-
tures, the “ideal” way of keeping track of them is to have a single look-
up table containing all the published tariff rates between all points.
Although comprehensive, accurate tariff information is available, it is very
expensive, costing thousands of dollars. Thus, the “ideal” solution for
gathering tariff data may be too costly to be practical.
Access to a comprehensive tariff database does not guarantee that the
desired cost data is obtainable for all points. For example, tariff information
is only available for direct services that are actually provided. If no direct
service is available between two points, then obviously a tariff will not
be published for these points. Furthermore, even if accurate tariff infor-
mation is available, it may still be too complex to use easily.
When the tariff data is either too costly or complex to use directly,
alternative methods must be employed to estimate the line costs. It should
be noted that while these alternative methods may be easier and cheaper
than the purchase of a commercial tariff tool, they are not as accurate.
The need for accuracy must be weighed against the trade-off of using a
simpler scheme to estimate line costs.
One simple cost model is based on a linear distance function. In this
model, line costs between two nodes i and j are estimated by a function
containing a fixed cost component, F, and a variable distance based
component, V. [KERS89] It should be noted that the F and V cost
components will vary by link type. The linear cost function can be
summarized as:

Costij = F + V (distij) (8.1)



where:
Costij = cost for line between two nodes i and j
F = fixed cost component
V = variable cost based on distance
distij = distance between nodes i and j

When the locations of nodes i and j are expressed as V and H6
coordinates (i.e., (Vi, Hi), and (Vj, Hj), respectively), the distance, distij,
between the nodes is easily calculated using a standard distance formula:
[KERS89]

distij = sqrt[ ((Vi − Vj)^2 + (Hi − Hj)^2) / 10 ]   (8.2)

This simple model can also be used to simplify a complex tariff structure.
Linear regression can be used to transform selected points from the tariff
table into a linear cost relationship. The fixed cost component, F, and
the variable cost component, V, can be derived by taking the partial
derivatives of the cost function with respect to F, and with respect to V,
respectively. [CAHN98, p.151] This simplified model can perform well in
special cases where the tariff structure is highly linear.
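A minimal Python sketch of Equations 8.1 and 8.2 follows; it also shows one simple way to fit the F and V components to a handful of sample tariff points by ordinary least squares. The sample tariff points and the V&H coordinates used in the example are made up for illustration.

```python
import math

def vh_distance(v1, h1, v2, h2):
    """Distance between two points given in V and H coordinates (Equation 8.2)."""
    return math.sqrt(((v1 - v2) ** 2 + (h1 - h2) ** 2) / 10.0)

def linear_cost(dist, fixed, variable):
    """Linear cost model of Equation 8.1: Cost = F + V * distance."""
    return fixed + variable * dist

def fit_linear_cost(samples):
    """Least-squares fit of (F, V) to (distance, cost) pairs drawn from a tariff table."""
    n = len(samples)
    sum_d = sum(d for d, _ in samples)
    sum_c = sum(c for _, c in samples)
    sum_dd = sum(d * d for d, _ in samples)
    sum_dc = sum(d * c for d, c in samples)
    variable = (n * sum_dc - sum_d * sum_c) / (n * sum_dd - sum_d ** 2)
    fixed = (sum_c - variable * sum_d) / n
    return fixed, variable

# Hypothetical tariff sample points: (distance in miles, monthly cost in $).
tariff_points = [(10, 150), (40, 240), (90, 370), (140, 520)]
F, V = fit_linear_cost(tariff_points)

d = vh_distance(4410, 1248, 5077, 1406)   # made-up V&H coordinates
print(f"F = {F:.2f}, V = {V:.2f}, dist = {d:.1f}, cost = {linear_cost(d, F, V):.2f}")
```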
A somewhat more realistic estimate of cost may be possible using a
piece-wise linear function. The piece-wise linear cost function is very
similar to the linear cost function, except that the F (fixed) and V
(variable) cost components vary according to distance. An example of a
piece-wise linear function is presented below and in Figure 8.2.

Costij = $100 + ($3/mile × distij)   (for distij between 0 and 50 miles)
Costij = $100 + ($2/mile × distij)   (for distij between 50 and 100 miles)
Costij = $100 + ($1/mile × distij)   (for distij between 100 and 150 miles)   (8.3)

[Figure 8.2 Example of piece-wise linear cost function: cost in dollars versus distance in miles, with the per-mile rate decreasing in successive distance bands.]

where:
Costij = cost for line between two nodes i and j
distij = distance between nodes i and j

Many service providers use this model to price private lines. Note that
there are no additional usage fees in the model presented here. With this
type of cost model, there is an economic incentive to fill the line with as
much traffic as possible for as much time as possible.
A step-wise linear function is illustrated in Figure 8.3. A hypothetical
step-wise linear function is given below. Note that in this function, the
fixed costs are only constant within a given range, and there is no longer
a variable cost component.

Costijk = $100   (for distijk between 0 and 50 miles)
Costijk = $200   (for distijk between 50 and 100 miles)
Costijk = $300   (for distijk between 100 and 150 miles)   (8.4)

[Figure 8.3 Example of step-wise linear cost function: cost in dollars versus distance in miles, constant within each distance band.]

where:
Costijk = cost for a line between two nodes i and j whose distance falls in band k
distijk = distance between nodes i and j
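Both approximations can be coded directly. The sketch below mirrors Equations 8.3 and 8.4, with the band boundaries and rates taken from the hypothetical examples above; it is illustrative only.

```python
def piecewise_linear_cost(dist):
    """Piece-wise linear cost (Equation 8.3): a $100 fixed charge plus a per-mile
    rate that drops in successive distance bands."""
    if dist <= 50:
        return 100 + 3.0 * dist
    if dist <= 100:
        return 100 + 2.0 * dist
    if dist <= 150:
        return 100 + 1.0 * dist
    raise ValueError("no rate defined beyond 150 miles")

def stepwise_cost(dist):
    """Step-wise cost (Equation 8.4): a flat charge per distance band."""
    if dist <= 50:
        return 100
    if dist <= 100:
        return 200
    if dist <= 150:
        return 300
    raise ValueError("no rate defined beyond 150 miles")

for d in (25, 75, 125):
    print(d, piecewise_linear_cost(d), stepwise_cost(d))
```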

When international circuits must be priced, the cost models may need
to be extended to provide more realistic estimates. For example, if a line
is installed across international boundaries, adjustments may be needed
in the cost model to account for differences in the tariff structures of each
respective country. In addition, lines installed across international bound-
aries are usually priced as the sum of two half-circuits. A communication
supplier in one country supplies half of the line, while a communication
supplier in the other country supplies the other half of the line.

8.3 Technical Requirements Specification


8.3.1 Node Selection, Placement, and Sizing
This section discusses matters relating to the selection, placement, and
sizing of network nodes. This discussion begins with a review of the
devices commonly used as network nodes:

 Bridge. This device is used to interconnect networks or subnetwork
components that share the same protocols. Bridges are fairly simple
and do not examine data as it passes through them.
 Concentrator. These devices allow multiple devices to dynamically
share a smaller number of lines than would otherwise be possible.
Concentrators are sometimes called MAUs (Multiple Access Units).
Concentrators only allow one device at a time to use the commu-
nications channel, although many devices can connect to the
concentrator. This is a significant difference from multiplexers.
 Digital Service Units (DSUs) and Channel Service Units (CSUs).
These devices are used to connect digital devices to digital circuits.
 Gateway. Gateways are used to interconnect otherwise incompat-
ible devices using protocol conversion techniques.
 Hub. A device that is used to interconnect workstations or devices.
 Modem. A MODulator/DEModulator device is used to connect
digital devices to analog circuits.
 Multiplexer. These devices are also known as MUXes. They allow
multiple devices to transmit signals over a shared channel, thereby
reducing line and hardware costs. In general, multiplexers do not
require the devices on the line to operate using the same protocols.
There are two types of multiplexers. One type uses Frequency
Division Multiplexing (FDM). This type of multiplexing divides the
channel frequency band into two or more narrower bands, each
acting as a separate channel. The second type of multiplexer uses
Time Division Multiplexing (TDM). This type of multiplexing
divides the line into separate channels by assigning each channel
to a repeating time slot.
 Intelligent multiplexer, or statistical time division multiplexer
(STDM). These devices use statistical sampling techniques to allo-
cate time slots for data transmission based on need, thereby improv-
ing the line efficiency and capacity. The design of STDMs is based
on the fact that most devices transmit data for only a relatively
small percentage of the time they are actually in use. STDMs
aggregate (both synchronous and asynchronous) traffic from mul-
tiple lower-speed devices onto a single higher-speed line.
 Router. This is a protocol-specific device that transmits data from
sender to receiver over the best route, where “best” may mean the
cheapest, fastest, or least congested route.
 Switch. These devices are used to route transmissions to specific
destinations. Two of the most common types of switches are circuit
switches and packet switches.

The selection of a specific device depends on many factors. These
factors may include the node cost and the requirements that have been
established for protocol compatibility and network functionality. This is a
context-specific decision that must be made on a case-by-case basis.
As previously discussed, the selection of a particular type of node
device has little impact on the design of the network topology. However,
the placement of nodes within the network does impact the network
topology. Typically, nodes are placed near major sources and destinations
of traffic. However, this is not always true, as sometimes node placements
are based on organizational or functional requirements that do not strictly
relate to traffic flow. For example, a node can be placed at the site of a
corporate headquarters, which may or may not be a major source of traffic.
If the node locations must be taken as given and cannot be changed,
then the decisions on node placement are straightforward.
However, in other cases, the network designer is asked to suggest
optimal node placements. One node placement algorithm — the Center
of Mass algorithm — suggests candidate locations based on traffic and
cost considerations. A potential shortcoming of the Center of Mass (COM)
algorithm is that it may suggest node placements in areas that are not
feasible or practical. Alternatively, the ADD and DROP algorithms can be
used to select an optimal subset of node locations, based on a predefined
set of candidate nodes. Thus, potential sites for node placements must
be known in advance when using these latter two algorithms. These
algorithms are discussed in Chapter 2.
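The Center of Mass idea can be illustrated in a few lines of Python: a suggested node location is the traffic-weighted centroid of the sites it serves. This is only a sketch of the intuition under assumed data; the actual COM algorithm described in Chapter 2 also takes cost considerations into account.

```python
def center_of_mass(sites):
    """Suggest a node location as the traffic-weighted centroid of the sites.

    sites: list of (x, y, traffic) tuples; coordinates and weights are illustrative.
    """
    total = sum(w for _, _, w in sites)
    x = sum(x * w for x, _, w in sites) / total
    y = sum(y * w for _, y, w in sites) / total
    return x, y

# Hypothetical sites with traffic weights (e.g., bytes per day).
print(center_of_mass([(0, 0, 80_000), (30, 40, 500_000), (80, 10, 30_500)]))
```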
Once the node placements and the network topology have been
established, the traffic flows through the node can be estimated. This is
needed to size the node. Measuring the capacity of a node is generally
more difficult than measuring the capacity of a link. Depending on the
device, the node capacity may depend upon the processor speed, the
amount of memory available, the protocols used, and software implemen-
tation. It may also reflect constraints on the type, amount, and mix of
traffic that can be processed by the device. These factors should be
considered when estimating the rated versus the actual usable node
capacity.
Queuing models can be used to test whether or not the actual node
capacity is sufficient to handle the predicted traffic flow. Queuing analysis
allows the network designer to estimate the processing times and delays
expected for traffic flowing through the node, for various node capacities
and utilization rates. Section 8.6.2 provides an introduction to the queuing
models needed to perform this analysis. The node should be sized so
that it is adequate to support current and future traffic flows. If the node
capacity is too low, or the traffic flows are too high, the node utilization
and traffic processing times will increase correspondingly. Queuing anal-
ysis can be used to determine an acceptable level of node utilization that
will avoid excessive performance degradation.
In sizing the node’s throughput requirement, it may also be necessary
to estimate the number of entry points or ports needed on the node. A
straightforward way of producing a preliminary estimate is given below:

Number of Ports = (Total Traffic Through Ports in bps) / (Usable Port Capacity in bps)

This estimate is likely to be too low because it does not allow for excess
capacity to handle unanticipated traffic peaks. Queuing models similar to
those used for node sizing can be used to adjust the number of ports
upward to a more appropriate figure. A queuing model allows one to
examine the cost of additional ports versus the improvements in through-
put. Queuing analysis is a very useful tool for port sizing.
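As a back-of-the-envelope illustration (not from the text), the sketch below computes the preliminary port count, then adds ports until a target utilization is met and checks the resulting delay with the simple M/M/1 time-in-system formula, treating each port as an independent single server. The traffic figures, packet size, and 70 percent utilization target are assumptions.

```python
import math

def preliminary_ports(total_traffic_bps, usable_port_capacity_bps):
    """Preliminary port count: total traffic divided by usable capacity, rounded up."""
    return math.ceil(total_traffic_bps / usable_port_capacity_bps)

def mm1_delay(arrival_bps, capacity_bps, avg_packet_bits):
    """Average M/M/1 time in system (seconds) for one port; None if saturated."""
    lam = arrival_bps / avg_packet_bits          # packets per second offered
    mu = capacity_bps / avg_packet_bits          # packets per second served
    if lam >= mu:
        return None
    return 1.0 / (mu - lam)

total_traffic = 5_000_000          # bps, assumed aggregate load through the node
port_capacity = 1_544_000          # bps, assumed usable capacity per port (T1)
packet_bits = 8_000                # assumed average packet size (1000 bytes)

ports = preliminary_ports(total_traffic, port_capacity)
while True:
    per_port_load = total_traffic / ports
    delay = mm1_delay(per_port_load, port_capacity, packet_bits)
    if delay is not None and per_port_load / port_capacity <= 0.7:   # target utilization
        break
    ports += 1                      # add ports until utilization and delay are acceptable

print(f"ports = {ports}, utilization = {per_port_load / port_capacity:.2f}, "
      f"delay = {delay * 1000:.2f} ms")
```

Comparing the delay and utilization obtained for successive port counts against the cost of each additional port is exactly the throughput-versus-cost trade-off described above.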

8.3.2 Network Topology


The network topology defines the logical and physical configuration of
the network components. The basic types of network topologies are listed
below. Figure 8.4 provides an illustrative example of each network type.

 Star. In this topology, all links are connected through a central node.
 Ring. In this topology, all the links are connected in a logical or
physical circle.
 Tree. In this topology, there is exactly one path between every pair of nodes.
 Mesh. In this topology, nodes can connect directly to each other,
although they do not necessarily all have to interconnect.

The selection of an appropriate network topology depends on a
number of factors — including protocol and technology requirements.
Chapter 2 discusses in detail the issues involved in selecting a network
topology.
[Figure 8.4 Sample network topologies: mesh, ring, star, and tree.]

Table 8.2 Taxonomy of Network Routing

Routing Characteristics
One route between nodes versus multiple routes between nodes
Fixed routing versus dynamic routing
Minimum hop routing versus minimum distance routing versus arbitrary routing
Bifurcated routing versus non-bifurcated routing

8.3.3 Routing Strategies


A number of schemes are used to determine the routing of traffic in a
network. These routing schemes can be described in the terms listed in
Table 8.2. [CAHN98, p. 249]
The meanings of “one route between nodes,” “multiple routes between
nodes,” and “arbitrary routing” are self-explanatory. Fixed routing means
the traffic routing is predetermined and invariant, irrespective of conditions
in the network. In contrast, dynamic routing may change, depending on
network conditions (i.e., traffic congestion, node or link failures, etc.).
Minimum hop routing attempts to send the traffic through the least possible
number of intermediate nodes. Minimum distance routing is used to send
traffic over the shortest possible path. Bifurcated routing splits traffic into
two or more streams that may be carried over different paths through the
network. Nonbifurcated routing requires that all the traffic associated with
a given transmission be sent over the same path.
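To make the minimum hop versus minimum distance distinction concrete, here is a small Dijkstra-based sketch (illustrative only, with a made-up network): running the shortest-path algorithm with every link weight set to 1 yields minimum hop routes, while running it with mileage (or cost) weights yields minimum distance routes.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over an adjacency dict {node: {neighbor: weight}}.
    Returns (total_weight, path) or (None, []) if dst is unreachable."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None, []

# Hypothetical network: link mileage between nodes.
miles = {
    "A": {"B": 100, "C": 30},
    "B": {"A": 100, "D": 20},
    "C": {"A": 30, "D": 25, "E": 40},
    "D": {"B": 20, "C": 25, "E": 90},
    "E": {"C": 40, "D": 90},
}
hops = {n: {m: 1 for m in nbrs} for n, nbrs in miles.items()}

print("minimum distance:", shortest_path(miles, "A", "D"))   # A-C-D, 55 miles
print("minimum hop:     ", shortest_path(hops, "A", "D"))    # A-B-D, 2 hops
```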
The actual routing scheme used depends on the characteristics of the
devices and the technology employed in the network. This is demonstrated
in the examples below: [CAHN98, p. 159]

 Router networks frequently use a fixed minimum routing or minimum
hop routing scheme.
 SNA uses a static, arbitrary, multiple, bifurcated routing scheme.
 Multiplexer-based networks generally use minimum distance or
minimum hop routing schemes.

8.3.4 Architectures
Network architectures define the rules and structures for the network
operations and functions. Before the introduction of network architectures,
application programs interacted directly with the communications devices.
This meant that programs were written explicitly to work with specific
devices and network technologies. If the network was changed, the
application programs were modified accordingly. The purpose of a net-
work architecture is to shield application programs from specific network
details and device operations. As long as the application programs adhere
to the standards defined by the architecture, the architecture can handle
specific device and network implementation details.
Most networks are organized as a series of layers, each built upon its
predecessor. The number of layers used and the function of each network
layer vary by protocol and by vendor. The purpose of each layer is to
off-load work from successively higher layers. This divide and conquer
strategy is designed to reduce the complexity of managing the various
network functions. Layer n on one machine operates with layer n on
another machine. The rules and conventions used in this interaction are
collectively known as the layer n protocol. Peer processes represent
entities operating at the same layer on different machines. Interacting peer
processes pass data and control information to the layer immediately
below, until the lowest level is reached. Between each adjacent pair of
layers is an interface that defines the operations and services the lower
layer offers to the upper one. The network architecture defines these
layers and protocols. Among other things, the network architecture pro-
vides a means for: establishing and terminating device connections, spec-
ifying traffic direction (i.e., simplex, full-duplex, half-duplex), error control
handling, and methods of controlling congestion and traffic flow.
There are both open and proprietary network architectures. IBM’s
proprietary network architecture — Systems Network Architecture (SNA)
— is an example of one of the earliest network architectures. It is based
on a layered architecture. Although SNA was originally designed for
centralized, mainframe communications, it has been continually updated
over the years and supports distributed and peer-to-peer communications.
Xerox’s XNS (Xerox’s Network Services), Digital’s DecNet, Novell’s NetWare,
and Banyan’s VINES are examples of proprietary LAN network architectures.
One of the primary issues in using a proprietary architecture is that it
tends to lock the user into a single vendor vision or solution. This may
also make it more difficult to incorporate other third-party products and
services into the network. In response to strong market pressures and
technological evolution, proprietary architectures have gradually evolved
toward more open standards.
The Open Systems Interconnection (OSI) reference model was developed
by the International Organization for Standardization (ISO) to provide an
open network architecture. The OSI model is based on seven layers. The
lowest level is the physical layer, which specifies how bits are physically
transmitted over a communications channel. The next layer is the data-
link layer. This layer creates and converts data into frames so that trans-
missions between adjacent network nodes can be synchronized. The third
layer is the network layer. This layer determines how packets received
from the transport layer are routed from source to destination within the
network. The fourth layer is the transport layer. This layer is responsible
for providing end-to-end control and information exchange. It accepts
data from the session layer and passes it to the network layer. The next
higher layer is the session layer. The session layer allows users on different
machines to establish sessions between each other. The presentation layer
performs syntax and semantics checks on the data transmissions, and
structures the data in required display and control formats. It may also
perform cryptographic functions. The highest layer, the application layer,
provides an interface to the end user. It also employs a variety of protocols
(e.g., a file transfer protocol). Appendix B elaborates on this discussion
and presents a comparison of the OSI architecture to TCP/IP.
Network architecture considerations come into play when the network
is being implemented because the network architecture has profound
impacts on the types of devices, systems, and services the network can
support and on how the network can be interconnected with other systems
and networks. The network architecture also has a significant impact on
what and how new products and services can be integrated into the
network, because any additions must be compatible with the architecture
in use. Some of the key decisions involved in selecting a network archi-
tecture include:

 Open or proprietary architecture. In making this decision, it is
helpful to keep in mind that the full promise of open architectures
has yet to be achieved. Although there is steady progress toward
open architectures, they are not fully implemented in the marketplace.
An expedient compromise might be to select a network or
network components that encompass a subset of OSI functionality.
 Selection of network management protocol. Network management
protocols come in both proprietary and open varieties. The require-
ments for network management may dictate the selection of one
over the other. For example, it may be necessary to manage a
diverse array of third-party devices. This might influence the deci-
sion to adopt a network management protocol that can successfully
integrate device management across multiple platforms.
 Specification of application requirements. Depending on the
requirements at hand, a particular network architecture may be
selected because it facilitates important applications that must be
supported by the network.
 Special device requirements. A requirement for specific devices that
are supported by a network architecture may influence its selection.
 Selection of communication services. All network architectures sup-
port traditional digital (e.g., T1, fractional T1, and T3 lines) and
analog lines. However, the use of various other network services
may dictate specific network architecture requirements. For exam-
ple, the network architecture must explicitly support satellite data
links, if satellite services are to be used. As another example,
networks offering Frame Relay, SMDS, or ATM services also require
specialized network architectures.
 Future plans and expected growth. Plans for future network migra-
tions may influence the selection of a network architecture, par-
ticularly if the network architecture under consideration is moving
in a direction consistent with the evolution of the organization’s
needs.

8.3.5 Network Management


The importance of being able to manage the network after it is imple-
mented is gaining increasing recognition in the marketplace. This is
reflected in the emergence of both proprietary and open (e.g., SNMP7 and
CMIP8) network management protocols. These protocols provide a means
to collect information and to perform network management functions
relating to:

 Configuration management. This involves collecting information
on the current network configuration, and managing changes to
the network configuration.
 Performance management. This involves gathering data on the
utilization of network resources, analyzing this data, and acting on
the insights provided by the data to maintain optimal system
performance.
 Fault management. This involves identifying system faults as they
occur, isolating the cause of the fault(s), and correcting the fault(s),
if possible.
 Security management. This involves identifying locations of sensi-
tive data, and securing the network access points as appropriate
to limit the potential for unauthorized intrusions.
 Accounting management. This involves gathering data on resource
utilization. It may also involve setting usage quotas and generating
billing and usage reports.

The selection of a network management protocol can have significant
impacts on the network costs and on the selection of the network devices
and systems. For example, IBM’s proprietary network management system
— NetView — is expensive and requires an IBM operating system;
however, it provides comprehensive network management functionality
for both SNA and non-SNA networks. In selecting a network management
approach, the benefits must be weighed against costs, compatibility issues,
and the network requirements.
In general, network management is easier when the network is simple,
homogeneous, and reliable. Designing a network with this in mind means
that complexity should be avoided unless it serves a good purpose.
Network complexity should reflect a requirement for services and functions
that cannot be provided by simpler solutions. All other things being equal,
it is better to have a network comprised of similar, compatible components
and services. A network that is robust, reliable, and engineered to support
growth will be easier to maintain than a network with limited capacity.
Thus, a network with good manageability characteristics should be given
preference over designs that are more difficult to manage, particularly
when these benefits can be achieved without incurring significantly higher
costs. Network manageability can also be enhanced by careful vendor
selection. In this context, vendors that guarantee the quality and continuity
of network products and services are preferable to those that do not.
Network management encompasses all the processes needed to keep
the network up and running at agreed-upon service levels. Network
management involves the use of various management instruments to
optimize the network operation at a reasonable cost. Network management
is most effective when a single department or organization controls it.
The major players and functions in the network management process are:

 Clients
 Client contact point(s)
 Operations support
 Fault tracking
 Performance monitoring
 Change control
 Planning and design
 Billing and finance

Clients represent internal or external customers or any other users of
management services. Clients may report problems, request changes, order
equipment or facilities, or ask for information through an assigned contact
point. Ideally, this should be a single point of contact. The principal
activities of the contact point include:

 Receiving problem reports


 Handling calls
 Handling and processing inquiries
 Receiving change requests
 Handling orders
 Making service requests
 Opening and referring trouble tickets
 Closing trouble tickets

The contact point forwards trouble tickets (i.e., problem reports) to
operations support. In turn, operations support may respond with the
following types of activities:

 Problem determination by handling trouble tickets


 Problem diagnosis
 Corrective actions
 Repair and replacement of software or equipment
 Referrals to third parties
 Backup and reconfiguration activities
 Recovery processes
 Logging and documenting events and actions

It is possible that various troubleshooting activities by clients or operations
support may result in change control requests. Problem reports and
change requests should be managed by a designated group (usually in
operations support) assigned to fault monitoring. The principal functions
of fault monitoring include:

 Manual tracking of reported or monitored faults


 Tracking progress on status of problem resolution and escalating
the level of intervention, if necessary
 Information distribution to appropriate parties
 Referral to other groups for resolution and action

Fault monitoring is a key aspect of correcting service- and quality-related
problems. Fault monitoring often results in requests for various
system changes. These requests are typically handled by a change control
group. Change control deals with:

 Managing, processing, and tracking service orders


 Routing service orders
 Supervising the handling of changes

After the change requests have been processed and validated, they
should be reviewed and acted upon by the group designated to perform
planning and design. Planning and design performs the following tasks:

 Needs analysis
 Projecting application load
 Sizing resources
 Authorizing and tracking changes
 Raising purchase orders
 Producing implementation plans
 Establishing company standards
 Quality assurance

The recommendations made by planning and design are generally then
passed on to finance and billing and to implementation and maintenance.
Implementation and maintenance make changes and process work orders
approved by planning and design and by change control. In addition, this
area is in charge of:

 Implementing change requests and work orders


 Maintaining network resources
 Performing periodic inspections
 Maintaining database(s) to track various network components and
their configuration
 Performing network provisioning

Network status and performance information should be continuously
monitored. Ideally, fault monitoring should be proactive in detecting
problems, and in opening and referring trouble tickets to the appropriate
departments for resolution. Performance monitoring deals with:

 Monitoring the system and network performance


 Monitoring service level agreements and how well they have been
satisfied
 Monitoring third-party and vendor performance
 Performing optimization, modeling, and network tuning activities
 Reporting usage statistics and trends to management and to users
 Reporting service quality status to Finance and Billing

Security management is also a vital part of network management. It is
responsible for ensuring secure communication and protecting the network
operations. It supports the following functions:

 Threat analysis
 Administration (access control, partitioning, authentication)
 Detection (evaluating services and solutions)
 Recovery (evaluating services and solutions)
 Protecting the network and network management systems

Systems administration is responsible for administering such functions as:

 Software version control


 Software distribution
 Systems management (upgrades, disk space management, job control)
 Administering the user-definable tables (user profiles, router tables,
security servers)
 Local and remote configuration of resources
 Names and address management
 Applications management

Finance and billing is the focal point for receiving status reports regarding
service level violations, network plans, designs, and changes, and invoices
from third parties. Finance and billing is responsible for:

 Asset management
 Costing services
 Billing clients
 Usage and outage collection
 Calculating rebates to clients
 Bill verification
 Software license control

The instruments available to support each of the network management
functions are highly varied in sophistication, scope, and ease of use. The
tools and organizational processes needed to support the network man-
agement functions are dependent upon the business context in which the
network is being operated.

8.3.6 Security
Network security requirements are not explicitly considered during the
execution of topological network design algorithms. Nonetheless, security
considerations may have a considerable impact on the choice of network
devices and services. For example, an often-cited reason for a private
network, as opposed to a public network, is the need for control and
security.
There are many ways to compromise a network’s security, either
inadvertently or deliberately. Therefore, to be effective, network security
must be comprehensive and should operate on several levels. Threats can
occur from both internal and external sources, and can be broadly grouped
into the following categories:

 Unauthorized access to information. This type of threat includes
wiretapping, and people correctly guessing a password to gain
access to a system they are not authorized to use.
 Masquerading. This type of threat occurs when someone gains
access to the network by pretending to be someone else. An
example of this type of threat is a Trojan horse. An example of a
Trojan horse is a software routine that appears to be benign and
legitimate, but is not. A Trojan horse masquerading as a log-on
procedure can prompt a network user to supply the password
required to gain entry to the system. The network user may never
even know that he has given away his password to the Trojan
horse!
 Unauthorized access to physical network components. This type of
threat might occur if someone were to cut through a communica-
tions link while making building repairs or if a bomb were to
explode where it could disrupt the network.
 Repudiation. This threat occurs when someone denies having used
the network facilities in an inappropriate or improper manner. An
example of this is someone sending harassing e-mail to another
person, while denying it.
 Unauthorized denial of service. This threat occurs when a user
prevents other users from gaining the access to the network to
which they are entitled. This might occur if the network is inundated
with traffic, thus blocking entry to the system. This type of threat can
be caused by intentional or unintentional acts.

One level of security is offered by protocol security, and thus it is
important to assess the level of vulnerability posed by the presence or
lack of good protocol security in the network. For example, SNMP and
other network management protocols that have been designed with secu-
rity in mind can be used to identify and protect the network against
unauthorized use. IP9 networks, on the other hand, are potentially vul-
nerable to source address spoofing. Spoofing is a form of masquerading
where packets appear to come from a source that they did not. IP networks
are also susceptible to packet flooding caused by an open connection.
This creates system overloads that may lead to a denial of service on the
network. A good defense against this type of attack is to configure routers
and firewalls in the network to filter out incoming packets that are not
from approved sources. In future versions of IP, new security provisions
will undoubtedly become available. For example, in IP Version 6, IP
Authentication Headers and IP Encapsulating Security Payload are pro-
vided. Other security protocols to protect Internet traffic include Secure
Socket Layer (SSL) and Secure Hypertext Transport Protocol (SHTTP).
Operational security provides a second level of network security.
Operational security involves disabling network services that are not
necessary or appropriate for various types of users. In this context, remote
log-in and file transfer protocols may be disabled or controlled so that
viruses and unauthorized personnel are prevented from gaining entry to
the network. Operational security also involves such good practices as
changing passwords regularly, constant use of updated anti-virus pro-
grams, ongoing monitoring of anonymous and guest system access, and
enabling and reviewing security logs and alerts.
Network security can also be implemented at the physical level. This
approach attempts to safeguard access to the network by securing network
components and limiting access to authorized personnel only.
Network security can be implemented at the data level. This involves
the use of encryption technology to protect the confidentiality of data
transmissions. Use of encryption technology implies that both the sender
and the receiver must employ compatible procedures to encrypt and
decrypt data. This, in turn, has implications on the management and
implementation of the network services.
There are two major forms of encryption: single-key and public/private
key. An overview of single key cryptography is provided in Figure 8.5.
[Figure 8.5 Secret key example: plaintext is enciphered (E) with a key to produce ciphertext, and the ciphertext is deciphered (D) with the same key to recover the plaintext.]

DES is a widely used single key encryption scheme. With DES, the data
to be encrypted is subjected to an initial permutation. The data is then
broken down into 64-bit data fields that are in turn split. The two resulting
32-bit fields are then further permuted in a number of iterations. Like all
secret key encryption schemes, DES uses the same key to perform both
encryption and decryption. The DES key, by convention, is 56 bits long.
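As a hedged illustration of the single-key idea shown in Figure 8.5, the sketch below uses Fernet from the third-party `cryptography` package (an AES-based scheme, standing in for the now-obsolete DES): the same shared key performs both encipherment and decipherment.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the single shared secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"confidential traffic report")   # encipherment
plaintext = cipher.decrypt(ciphertext)                        # decipherment with the same key
print(plaintext)
```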
Seminal work by Diffie and Hellman, and by Rivest, Shamir, and
Adleman (RSA), led to the development of public/private key cryptogra-
phy. In this scheme, data is encrypted with a public key that can be
known by many, and is decrypted by a private key known only to one.
The beauty of public key encryption is that it is computationally infeasible
to derive the decipherment algorithm from the encipherment algorithm.
Therefore, dissemination of the encryption key does not compromise the
confidentiality of the decryption process. Because the encryption key can
be made public, anyone wishing to send a secure message can do so.
This is in contrast to secret key schemes that require both the sender and
receiver to know and safeguard the key. Public key encryption is illustrated
in Figure 8.6.
One application of public key cryptography is the generation of digital
signatures. A digital signature assures the receiver that the message is
authentic; that is, the receiver knows the true identity of the sender and
that the contents of the message cannot be modified without leaving a
trace. A digital signature is very useful for safeguarding contractual and
business-related transmissions because it provides a means for third-party
arbitration and validation of the digital signature. Public and private keys
belong to the sender, who creates keys based on an initial random number
selection (or random seed). The message recipient applies the encipher-
ment function using the sender’s public key. If the result is plaintext, then
the message is considered valid. Digital signatures are illustrated in Figure
8.7.
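As a modern illustration of the sign-and-verify idea (using the third-party `cryptography` package rather than the textbook construction of Figure 8.7), the sketch below has the sender sign a message with an RSA private key and the receiver verify it with the matching public key; verification fails if the message is altered in transit.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Sender generates a key pair; the public key can be distributed freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Purchase order: 100 units"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Sender signs with the private key.
signature = private_key.sign(message, pss, hashes.SHA256())

# Receiver verifies with the sender's public key; a modified message raises an error.
try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```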
In summary, comprehensive network security involves active use of
protocol, operational, and encryption measures. Good management over-
sight and employee training complement these security measures.

8.4 Representation of Networks Using Graph Theory


There are numerous rigorous mathematical techniques for solving network
design problems based on graph theory. Graph theory provides a conve-
nient and useful notation for representing networks. This, in turn, makes
it easier to computerize the implementation of network design algorithms.
[Figure 8.6 Public key cryptography: a key pair is generated (F) from a random seed; the sender enciphers plaintext (E) with the public key, and the receiver deciphers the ciphertext (D) with the matching secret key.]

[Figure 8.7 Digital signature: the sender applies the decipherment function (D) with the sender's secret key to produce a signed message; the receiver applies the encipherment function (E) with the sender's public key to restore the message.]

8.4.1 Definitions and Nomenclature


When introducing network design algorithms in later sections, the follow-
ing definitions and nomenclature relating to graph theory are necessary.

8.4.1.1 Definition: Graph


A graph G is defined by its vertex set (nodes) V and its edges (links) E.

8.4.1.2 Definition: Link


A link is a bi-directional edge in which the ordering of the nodes attached
to the link does not matter. A link can be used to represent network
traffic flowing in either direction. Full-duplex lines in a communications
network support traffic in both directions simultaneously and are often
represented as links in a graph.

[Figure 8.8 Example of an undirected and a directed graph.]

8.4.1.3 Definition: Undirected Graph


An undirected graph contains only bi-directional links. See Figure 8.8 for
an illustration.

8.4.1.4 Definition: Arc


An arc is a link with a specified direction between two nodes. Half-duplex
lines in a communications network handle traffic in only one direction at
a time and can be represented as arcs in a graph.

8.4.1.5 Definition: Directed Graph


A directed graph is a graph containing arcs. See Figure 8.8 for an illustration
and comparison with an undirected graph.

8.4.1.6 Definition: Self-Loop


A self-loop is a link that begins and ends with the same node. See Figure
8.9 for an illustration of a self-loop.

8.4.1.7 Definition: Parallel Link


Two links are considered parallel if they start and terminate on the same
nodes. See Figure 8.9 for an illustration of parallel links and a comparison
with a self-loop.
[Figure 8.9 Parallel links versus self-loop.]

8.4.1.8 Definition: Simple Graph


A simple graph is a graph without parallel links or self-loops. Most network
design algorithms assume that the network is represented as a simple
graph.

8.4.1.9 Definition: Adjacency


Two nodes i and j are adjacent if there exists a link (i, j) between them.
Adjacent nodes are also called neighbors.

8.4.1.10 Definition: Degree


The degree of a node is the number of links incident on the node or the
number of neighbors the node has.

8.4.1.11 Definition: Incident Link


A link is said to be incident on a node if the node is one of the link’s
endpoints.

8.4.1.12 Definition: Path


A path is a sequence of links that begins at an initial node, s, and ends
at a specified node, t. A path is sometimes designated as (s, t).

8.4.1.13 Definition: Cycle


A cycle exists if the starting node, s, in a path (s, t) is the same as the
terminating node, t.
[Figure 8.10 Example of a path without cycles and a path with a cycle.]

[Figure 8.11 Tree graph versus star graph.]

8.4.1.14 Definition: Simple Cycle


A simple cycle exists if the starting node s is the same as the terminating
node t and all intermediate nodes between s and t appear only once. See
Figure 8.10 for an example of a graph with a cycle and a graph with no
cycles.

8.4.1.15 Definition: Connected Graph


A graph is considered connected if at least one path exists between every
pair of nodes.

8.4.1.16 Definition: Strongly Connected Graph


A directed graph with a directed path from every node to every other
node is considered a strongly connected graph.

8.4.1.17 Definition: Tree


A tree is a connected graph that does not contain cycles. Any tree with n nodes will
contain (n−1) edges. See Figure 8.11 for an example of a tree graph.

8.4.1.18 Definition: Minimal Spanning Tree


A minimal spanning tree is a connected graph that links all nodes with
the least possible total cost or length and does not contain cycles. (Assume
that a weight is associated with each link in the graph. This weight might
represent the length or cost of the link.)

8.4.1.19 Definition: Star


A graph is considered a star if only one node has a degree greater than
1. See Figure 8.11 for an example of a star graph and a comparison with
a tree graph.
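The definitions above map naturally onto a simple adjacency structure. The following sketch (not from the text) represents an undirected, weighted simple graph as a dictionary of dictionaries and derives a few of the properties just defined.

```python
class Graph:
    """Undirected simple graph stored as an adjacency dict: node -> {neighbor: weight}."""

    def __init__(self):
        self.adj = {}

    def add_link(self, i, j, weight=1):
        if i == j:
            raise ValueError("self-loops are not allowed in a simple graph")
        self.adj.setdefault(i, {})[j] = weight   # storing the link under both endpoints
        self.adj.setdefault(j, {})[i] = weight   # makes it bi-directional

    def degree(self, node):
        """Number of links incident on the node (its number of neighbors)."""
        return len(self.adj.get(node, {}))

    def is_connected(self):
        """True if at least one path exists between every pair of nodes."""
        if not self.adj:
            return True
        start = next(iter(self.adj))
        seen, stack = {start}, [start]
        while stack:
            for nbr in self.adj[stack.pop()]:
                if nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        return len(seen) == len(self.adj)

g = Graph()
for i, j, w in [("A", "B", 1), ("B", "C", 5), ("C", "D", 2)]:
    g.add_link(i, j, w)
print(g.degree("B"), g.is_connected())   # 2 True
```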

8.5 Introduction to Algorithms


Many network design problems are solved using special techniques called
“algorithms.” Given the importance of algorithms in solving network design
problems, it is worthwhile at this juncture to formally define and review
important properties of algorithms.

8.5.1 Definition of Algorithms


An algorithm is a well-defined procedure for solving a problem in a finite
number of steps. An algorithm is based on a model that characterizes the
essential features of the problem. The algorithm specifies a methodology
to solve the problem using the model representation.
Algorithms are characterized by a number of properties. These prop-
erties are necessary to ensure that the algorithm correctly solves the
problem for which it is intended, or correctly identifies when a solution
is impossible to find, in a finite number of steps. These properties include:

 Specified inputs. The inputs to an algorithm must be from a prespecified set.
 Specified outputs. For each set of input values, the algorithm must
produce outputs from a prespecified set. The output values pro-
duced by the algorithm comprise the solution to the problem.
 Finiteness. An algorithm must produce a desired output after a
finite number of steps.
 Effectiveness. It must be possible to perform each step of the
algorithm exactly as specified.
 Generality. The algorithm should be applicable to all problems of
the desired form. It should not be limited to a particular set of
input values or special cases.

Table 8.3 List of Potential Link Costs

From Node    To Node    Link Cost
A            B          1
A            C          6
A            D          +∞
B            C          5
B            D          +∞
C            D          2

An example of a “greedy” algorithm is now presented. The algorithm is
considered greedy because it selects the best choice immediately available
at each step, without regard to the long-term consequences of the selections
taken as a whole.
We use the greedy algorithm to find the cheapest set of links to connect
all of a given set of terminals. The graphical representation in Table 8.3
is used to model this network design problem. Using this representation,
all terminal devices are modeled as nodes (A, B, C, and D), and all
communications lines are modeled as links. Associated with each possible
link (i, j) — where i is the starting node and j is the terminating node —
is a weight, representing the cost of the link if it is used in the network.
A cost of “+∞” is used to indicate when a link is prohibitive in cost or is
not available.

8.5.2 Greedy Algorithm (Also Known as Kruskal’s Algorithm)

1. Sort all possible links in ascending order and put in a link list.
2. Check to see if all the nodes are connected.
– If all the nodes are connected, then terminate the algorithm,
with the message “Solution Complete.”
– If all the nodes are not connected, continue to the next step.
3. Select the link at the top of the list.
– If no links remain on the list, the remaining nodes cannot be
connected; terminate the algorithm with the message “Solution
Cannot Be Found.”
4. Check to see if the link selected creates a cycle in the network.
– If the link creates a cycle, remove it from the list. Return to
Step 2.
– If the link does not create a cycle, add it to the network, and
remove link from link list. Return to Step 2.

Table 8.4 Sorted Link List

From Node    To Node    Link Cost
A            B          1
C            D          2
B            C          5
A            C          6
A            D          +∞
B            D          +∞


Figure 8.12 First link selected by greedy algorithm.

Now solve the sample problem using the algorithm specified above:

1. Sort all possible links in ascending order and put in a link list (see
Table 8.4).
2. Check to see if all the nodes are connected. Because none of the
nodes are connected, proceed to the next step.
3. Select the link at the top of the list. This is link AB.
4. Check to see if the link selected creates a cycle in the network. It
does not, so add link AB to the solution and remove it from the
link list. One obtains the partial solution shown in Figure 8.12 and
then proceeds with the algorithm.
5. Check to see if all the nodes are connected. They are not, so
proceed to the next step of the algorithm.
6. Select the link at the top of the list. This is link CD.
7. Check to see if the link selected creates a cycle in the network. It
does not, so add link CD to the solution and remove it from the
link list. One obtains the partial solution shown in Figure 8.13 and
then proceeds with the algorithm.
8. Check to see if all the nodes are connected. They are not, so
proceed to the next step.
9. Select the link at the top of the list. This is link BC.
10. Check to see if the link selected creates a cycle in the network. It
does not, so add link BC to the solution and remove it from the
link list. One obtains the partial solution shown in Figure 8.14 and
then proceeds with the algorithm.

Figure 8.13 Second link selected by greedy algorithm.

Figure 8.14 Third and final link selected by greedy algorithm.
11. Check to see if all the nodes are connected. All the nodes are now
connected, so terminate the algorithm with the message “Solution
Complete.” Thus, Figure 8.14 represents the final network solution.
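The same steps can be expressed as a short program. The following Python
sketch is an added illustration and not part of the original algorithm
statement: the function name and the union-find bookkeeping used for the
cycle check are implementation choices, and links with a cost of +∞ are
simply left off the candidate list.

# Minimal sketch of the greedy (Kruskal) algorithm applied to Table 8.3.
def greedy_spanning_tree(nodes, links):
    """links: list of (cost, node_i, node_j); returns selected links or None."""
    parent = {n: n for n in nodes}          # union-find structure for cycle checks

    def find(n):                            # representative of n's component
        while parent[n] != n:
            n = parent[n]
        return n

    solution = []
    for cost, i, j in sorted(links):        # Step 1: sort links in ascending cost order
        ri, rj = find(i), find(j)
        if ri != rj:                        # Step 4: link does not create a cycle
            parent[ri] = rj                 # merge the two components
            solution.append((i, j, cost))
        if len(solution) == len(nodes) - 1: # Step 2: all nodes connected
            return solution
    return None                             # "Solution Cannot Be Found"

nodes = ["A", "B", "C", "D"]
links = [(1, "A", "B"), (6, "A", "C"), (5, "B", "C"), (2, "C", "D")]
print(greedy_spanning_tree(nodes, links))
# [('A', 'B', 1), ('C', 'D', 2), ('B', 'C', 5)] -- the links shown in Figure 8.14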

Use the checklist below to verify that the greedy algorithm exhibits all
the necessary properties defined above.

 Specified inputs. The inputs to the algorithm are the prespecified


nodes, potential links, and potential link costs.
 Specified outputs. The algorithm produces outputs (link selections)
from the prespecified link set. The outputs produced by the algo-
rithm comprise the solution to the problem.
 Finiteness. The algorithm produces a desired output after a finite
number of steps. The algorithm stops when all the nodes ar e
connected, or after all the candidate links have been examined,
whichever comes first.
 Effectiveness. It is possible to perform each step of the algorithm
exactly as specified.

 Generality. The algorithm is applicable to all problems of the


desired form. It is not limited to a particular set of input values or
special cases.

The greedy algorithm just described provides an optimal network


solution when there are no restrictions on the amount of traffic that can
be placed on the links. In this form, the algorithm is called an “uncon-
strained optimization technique.” However, it is not realistic to assume
that links can carry an indefinite amount of traffic. When steps to check
the line capacity restrictions are added to the algorithm, the algorithm
becomes a constrained optimization technique. When the algorithm is
constrained, and the restrictions on the algorithm are active (e.g., this
would occur if the traffic limit is reached at some point, and therefore a
link selected for inclusion in the design must be rejected), there is no
longer a guarantee that the algorithm will produce an optimal result.
Many studies have been conducted on the constrained form of Kruskal’s
greedy algorithm. These studies show that although the greedy algorithm
does not necessarily produce an optimal, best-case result, in general it
produces very good results that are close to optimal. [KERS93]
In summary, an algorithm is considered “good” if it always provides
a correct answer to the problem or indicates when a correct answer cannot
be found. A good algorithm is also efficient. The next section discusses
what it means to be an efficient algorithm.

8.5.3 Introduction to Computational Complexity Analysis


The efficiency of an algorithm can be measured in several ways. One
estimate of efficiency is based on the amount of computer time needed
to solve the problem using the algorithm. This is also known as the time
complexity of the algorithm. A second estimate of efficiency is the amount
of computer memory needed to implement the algorithm. This is also
referred to as the space complexity of the algorithm. Space complexity is
very closely tied to the particular data structures used to implement the
algorithm.
In general, the actual running time of an algorithm implemented in
software will largely depend upon how well the algorithm was coded,
the computer used to run the algorithm, and the type of data used by
the program. However, in complexity analysis one seeks to evaluate an
algorithm’s performance independent of its actual implementation. To do
this, one must consider factors that remain constant irrespective of the
algorithm’s implementation.
Because one wants a measure of complexity that does not depend on
processing speed, space complexity is ignored because it is so closely

tied with implementation details. Instead, one can use time complexity as
a measure of an algorithm’s efficiency. One can measure time complexity
in terms of the number of operations required by the algorithm instead
of the actual CPU time required by the algorithm. Expressing time com-
plexity in these units allows one to compare the efficiency of algorithms
that are very different.
For example, let N be the number of inputs to an algorithm. If Algorithm
A requires a number of operations proportional to N2, and Algorithm B
requires a number of operations proportional to N, one can see that
Algorithm B is more efficient. If N is 4, then Algorithm B will require
approximately four operations, while Algorithm A will require 16. As N
becomes larger, the difference in efficiency between Algorithms A and B
becomes more apparent.
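To make this comparison concrete, the following small Python sketch
(added for illustration; the input sizes chosen are arbitrary) tabulates
the operation counts implied by several common growth rate functions.

# Sketch: compare how several growth rate functions scale with input size N.
import math

def operations(n):
    return {"N": n,
            "N log2 N": n * math.log2(n),
            "N^2": n ** 2,
            "N^3": n ** 3,
            "2^N": 2 ** n}

for n in (4, 16, 64):
    print(n, operations(n))
# For N = 4 the differences are modest; by N = 64, 2^N is astronomically
# larger than N^2, which is why exponential algorithms are impractical.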
If Algorithm B requires time proportional to f(N), this also implies that
given any reasonable computer implementation of the algorithm, there is
some constant of proportionality C such that Algorithm B requires no
more than (C * f(N)) operations to solve a problem of size N. Algorithm
B is said to be of order f (N) — which is denoted O (f (N)) — and f (N)
is the algorithm’s growth rate function. Because this notation uses the
capital letter O to denote order, it is called the Big O notation.
Some examples and explanations for the Big O notation are given
below:

 If an algorithm is O(1), this means it requires a constant time that


is independent of the problem’s input size N.
 If an algorithm is O(N), this means it requires time that is directly
proportional to the problem’s input size N.
 If an algorithm is O(N2), this means it requires time that is directly
proportional to the square of the problem’s input size, N2.

Table 8.5 summarizes commonly used terminology describing computational
complexity. The terms listed below are sorted from low to high complexity.
In general, network design algorithms are considered efficient if they are
of O(N2) complexity or less. The greedy algorithm examined in the previous
section can be shown to be O(N log N), where N is the number of edges
examined, and is considered very computationally efficient. [KERS93] Most
of the effort expended in executing the greedy algorithm goes toward
creating the sorted link list in the first step.
Table 8.5 Computational Complexity Terminology

Complexity                Terminology
O(1)                      Constant complexity
O(log n)                  Logarithmic complexity
O(n)                      Linear complexity
O(n log n)                n log n complexity
O(nb)                     Polynomial complexity
O(bn), where b > 1        Exponential complexity
O(n!)                     Factorial complexity

Figure 8.15 Growth rate function comparison: number of operations versus
number of inputs N, for 2N, N3, N2, and N * log2 N.

Brute force and exhaustive search algorithms are considered strategies
of last resort. Using these methods, all the potential solution candidates
are examined one by one, even after the best one has been found. This
is because these methods do not recognize the optimal solution until the
end, after all the candidates have been compared. Brute force and exhaustive
search methods are usually O(bN) or worse. The worst computational
complexity listed is factorial complexity. Exponential and factorial growth
rates are generally associated with NP-complete (nondeterministic polynomial
time complete) problems and other intractable problems; the Traveling
Salesman problem is a classic example, and the presumed intractability of
related problems underlies public key cryptography. In general, when the
input size is large, such problems are exceedingly difficult to solve and
require very large amounts of computing time. Figure 8.15 shows the effect
of increasing computational complexity on the number of operations
required by the algorithm to solve the problem.
A Big “O” estimate of an algorithm’s time complexity expresses how
the time required to solve the problem changes as the input grows in size.
Big “O” estimates do not directly translate into actual computer time
because the Big “O” method uses a simplified reference function to estimate
complexity. The simplified reference function omits constants and other
terms that affect actual computer time, so a Big “O” estimate characterizes
the growth of running time only up to a constant of proportionality. This is
illustrated in the examples that follow.

Example 1: If an algorithm is O(N3 + N2 + 3N), one can ignore the low-order
           terms in the growth rate function. Therefore, the algorithm’s
           complexity can be expressed by a simplified growth function: O(N3).
Example 2: If an algorithm is O(5N3 + 2N2 + 3N), one can ignore multiplicative
           constants in the high-order terms of the growth rate function.
           Therefore, the algorithm’s complexity can be expressed by a
           simplified growth function: O(N3).
Example 3: If an algorithm is O(N3) + O(N3), one can combine the growth rate
           functions. Therefore, the algorithm’s complexity can be expressed
           by a simplified growth function: O(2 * N3) = O(N3).

A formal generalization of these examples is given below.

If:              one is given f(x) and g(x), functions from the set of real numbers
Then:            one can say that f(x) = O(g(x))
If and only if:  there are constants C and k such that |f(x)| ≤ C |g(x)| whenever x > k

An example to illustrate this generalization is given by:

Show:      f(x) = x2 + 2x + 1 is O(x2)
Solution:  Because 0 ≤ (x2 + 2x + 1) ≤ (x2 + 2x2 + x2) = 4x2 whenever x > 1,
           the definition is satisfied with k = 1 and C = 4.
Then it follows: f(x) = O(x2)

Complexity analysis can be used to examine worst-case, best-case, and


average scenarios. Worst-case analysis tries to determine the largest num-
ber of operations needed to guarantee a solution will be found. Best-case
analysis seeks to determine the smallest number of operations needed to
find a solution. Average-case analysis is used to determine the average
number of operations used to solve the problem, assuming all inputs are
of a given size.
Note that time complexity is not the only valid criterion to evaluate
algorithms. For example, other important criteria might include the style and
ease of the algorithm’s implementation. The appeal of time complexity, as

described here, is the fact that it is independent of specific implementation


details and provides a very robust way of characterizing algorithms so
they can be compared.
Also note that different orders of complexity do not always matter
much in an actual situation. For example, a more complex algorithm might
run faster than a less complex algorithm with a high constant of propor-
tionality, especially if the number of inputs to the problem is small. Time
complexity, as presented here, is particularly important when solving big
problems. When there are very few inputs, an algorithm of any complexity
will usually run quickly.
In summary, computational complexity is important because it provides
a measure of the computer time required to solve a problem using a given
algorithm. If it takes too long to solve a problem, the algorithm may not
be useful or practical. In the context of network design, one needs
algorithms that can provide reasonable solutions in a reasonable amount
of time. Consider the fact that if there are n potential node locations in
a network, then there are 2^(n * (n − 1)/2) potential topologies, one for each
subset of the n(n − 1)/2 possible links. Thus, brute force
and computationally complex design techniques are simply not suitable
for network problems of any substantial size.
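As a rough, added illustration of this combinatorial explosion (the node
counts below are arbitrary):

# Number of potential topologies on n nodes: one per subset of the
# n(n - 1)/2 possible links (see the discussion above).
for n in (5, 10, 20):
    links = n * (n - 1) // 2
    print(n, "nodes:", 2 ** links, "potential topologies")
# Even at n = 20 there are 2^190 (about 10^57) candidate topologies,
# far beyond what any exhaustive search could examine.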

8.5.4 Network Design Techniques


Three major types of techniques are used in network design: heuristic,
exact, and simulation techniques. Heuristic algorithms are the preferred
method for solving many network design problems because they provide
a close to optimal solution in a reasonable amount of time. For this reason,
the bulk of our attention in this book concentrates on heuristic solution
techniques or algorithms.
Linear programming is a powerful, exact solution technique that is also
used in network design. Linear programming methods are based on the
simplex method. The simplex method guarantees that an optimal solution
will be found in a finite amount of time. Otherwise, the simplex algorithm
will show that an optimal solution does not exist or is not feasible, given
the constraints that have been specified. Linear programming models
require that both the objective function and the constraint functions be
linear. Linear programming also requires that the decision variables be
continuous, as opposed to discrete.
In the context of network design, one might want to find a low-cost
network (i.e., one wants to minimize a cost function based on links costs
or tariffs), subject to constraints on where links can be placed and
constraints on the amount of traffic that the links can carry. Although
linear functions may provide useful approximations for the costs and
constraints to be modeled in a network design problem, in many cases
they do not. The tariffs that determine the line costs are usually nonlinear
and may exhibit numerous discontinuities. A discontinuity exists when
there is a sharp price jump from one point to the next, or when certain
price/line combinations are not available, as illustrated in Figure 8.16.

Figure 8.16 Graph with discontinuities (cost in dollars versus distance in miles).
Linear programming can be used successfully when the cost and constraint
functions can be approximated by a linear or piece-wise linear function
that is accurate within the range of interest. This implies that the linear
programming approach is best applied, in the context of network design,
when the neighborhood of the solution can be approximated a priori.
When the decision variables for designing the network are constrained
to discrete integer values, the linear programming problem becomes an
integer programming problem. Integer programming problems, in general,
are much more difficult to solve than linear programming problems, except
in selected special cases. In the case of network design, it might be
desirable to limit the constraints to zero (0) or one (1) values to reflect
whether or not a link is being used. However, these restrictions complicate
the problem a great deal. In this context, “complicated” means that the
computational complexity of the integer programming technique increases
to the point where it may be impractical to use.
Simulation is a third commonly used technique in network design. It
is often used when the design problem cannot be expressed in an analytical
form that can be solved easily or exactly. A simulation tool is used to
build a simplified representation of the network. Experiments are then
conducted to see how the proposed network will perform as the traffic
and other inputs are modified. In general, simulation approaches are very
time-consuming and computationally intensive. However, they are helpful
in examining the system performance under a variety of conditions to test
the robustness of the design. There are many software packages available
that are designed exclusively for simulation and modeling studies.

In summary, as network size increases, so does the computational


complexity of the network design process. This occurs to such an extent
that for most problems of practical interest, exact optimal solution tech-
niques are impractical, except in special cases. Heuristic design techniques
providing approximate solutions are usually used instead. Although in
many cases, good solutions can be obtained with heuristic techniques,
they do not necessarily guarantee that the best solution will be found.
After an approximate solution is found using a heuristic method, it may
be helpful to use an exact solution technique or simulation to fine-tune
the design.

8.6 Performance Analysis


Once a network has been designed, its performance characteristics can
be analyzed. The most common measures of network performance are
cost, delay, and reliability. This section presents an overview of the
methods used to calculate these performance measures. However, other
measures — for example, link, memory, and node utilization — of network
performance are possible and may also be helpful in evaluating the quality
of the network design. For more information on other network perfor-
mance measures and their calculation, the reader is referred to [CAHN98]
and [KERS93].

8.6.1 Queuing Essentials


This section summarizes some of the major results from queuing theory
that are applicable to the analysis of network delay and reliability. It
concentrates on results that are easily grasped using basic techniques of
algebra and probability theory. Readers who want more information on
this subject are referred to [KERS93], [GROSS74], and [WAGN75]. In addition
to providing detailed theoretical derivations and proofs, these references
also discuss other more complex queuing models that are beyond the
scope of this book.
Queuing theory was originally developed to help design and analyze
telephone networks. Because the rationale for a network is to provide a
means to share resources, there is an implicit assumption that the number
of resources available in the network will be less than the total number
of potential system users. Therefore, it is possible that some users may
have to wait until others relinquish their control over a telecommunications
facility or device. Queuing analysis is used to estimate the waiting times
and delays that system users will experience, given a particular network
configuration.

Queuing theory is very broadly applicable to the analysis of systems


characterized by a stochastic input process, a stochastic service mecha-
nism, and a queue discipline. The key descriptive features of the input
process are the size of the input population, the source of the input
population, inter-arrival times between inputs, and how the inputs are
grouped, if at all. The features of interest for the service mechanism are
the number of servers, the number of inputs that can be served simulta-
neously, and the average service time. The queue discipline describes the
behavior of the inputs once they are within a queue. Given information
on these characteristics, queuing theory can be used to estimate service
times, queue lengths, delays (while in the queue and while being serviced),
and the required number of service mechanisms.
Some queuing notation is needed for the discussion that follows,
including some essential definitions and nomenclature.

8.6.1.1 Standard Queuing Notation


The following notation specifies the assumptions made in the queuing
model:

Arrival Process/Service Process/Number of Parallel Servers/Optional Additions

Optional additions to this notation are:

Limit on Number in the System/Number in the Source Population/Type of Queue Discipline

The arrival and service processes are defined by probability distribu-


tions that describe the expected time between arrivals and the expected
service time. The number of servers or channels operating in parallel must
be a positive integer. Table 8.6 summarizes abbreviations and assumptions
commonly used in queuing models.
Table 8.7 provides some examples to illustrate these abbreviations.
A Markovian distribution is synonymous with an exponential distribu-
tion. This distribution has the interesting property that the probability a
system input that has already waited T time units must wait another X
time units before being served is the same as the probability that an input
just arriving to the system will wait X time units. Thus, the system is
“memoryless,” in that the time an arrival has already spent in the system
does not in any way influence the time the arrival will remain in the system.
Other widely used queuing notation is summarized in Table 8.8.

Table 8.6 Summary of Standard Queueing Nomenclature

Standard Abbreviation    Meaning
M                        Markovian or exponential distribution
Ek                       Erlangian or Gamma distribution with k identical phases
D                        Deterministic distribution
G                        General distribution
c                        Number of servers or channels in parallel
FCFS                     First come, first served
LCFS                     Last come, first served
RSS                      Random selection for service
PR                       Priority service

Table 8.7 Queuing Abbreviations

Queuing Model Abbreviation    Meaning
M/M/c                         Markovian input process and Markovian service
                              distribution with c parallel servers
M/M/1/n/m                     Markovian input process and Markovian service
                              distribution with one server, a maximum system
                              capacity of n, and a total potential universe of
                              m customers
M/C/3/m/m                     Markovian input process and constant service
                              distribution with three servers, a maximum system
                              capacity of m, and a total potential universe of
                              m customers

Using the notation in Table 8.8, one can introduce Little’s law. This is
a very powerful relationship that holds for many queuing systems. Little’s
law says that the average number waiting in the queuing system is equal
to the average arrival rate of the inputs to the system multiplied by the
average time spent in the queue. Mathematically, this is expressed as:
[GROSS74, p. 60]
Lq = λ * Wq (8.5)

Another important queuing relationship derived from Little’s law says


that the average number of inputs to the system is equal to the average

Table 8.8 Queuing Notation

Notation Meaning
1/λ Mean inter-arrival time between system inputs
1/µ Mean service time for server(s)
λ Mean arrival rate for system inputs
µ Mean service rate for server(s)
ρ Traffic intensity = (λ/(c * µ)), where c = number of service
channels = utilization factor measuring maximum rate at which
work entering system can be processed
L Expected number in the system, at steady state, including those
in service
Lq Expected number in the queue, at steady state, excluding those
in service
W Expected time spent in the system, including service time, at
steady state
Wq Expected time spent in the queue, excluding service time, at
steady state
N(t) Number of units in the system at time t
Nq(t) Number of units in the queue at time t

arrival rate of inputs to the system multiplied by the average time spent
in the system. Mathematically, this is expressed as: [GROSS74, p. 60]

L = λ* W (8.6)

The intuitive explanation for these relationships goes along the fol-
lowing lines. An arrival, A, entering the system will wait an average of
Wq time units before entering service. Upon being served, the arrival can
count the number of new arrivals behind it. On average, this number will
be Lq. The average time between each of the new arrivals is 1/λ units of
time, because by definition this is the mean inter-arrival time. Therefore, the total
time it took for the Lq arrivals to line up behind A must equal A’s waiting
time Wq. A similar logical analysis holds for the calculation of L in Equation
(8.6).
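As a simple added illustration of Little’s law (the arrival rate and waiting
times below are hypothetical):

# Hypothetical illustration of Little's law (Equations 8.5 and 8.6).
lam = 2.0       # average arrival rate (requests per second)
Wq  = 3.0       # average time spent waiting in the queue (seconds)
service = 0.25  # average service time (seconds)

Lq = lam * Wq               # Equation 8.5: average number waiting in queue = 6
L  = lam * (Wq + service)   # Equation 8.6: average number in system = 6.5
print(Lq, L)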
A number of steady-state models have been developed to describe
queuing systems that are applicable to network analysis. A steady-state
model is used to describe the long run behavior of a queuing system after
it has stabilized. It also represents the average frequency with which the
system will occupy a given state over a long period of time.

8.6.2 Analysis of Loss and Delay in Networks


The following discussion provides examples of queues relating to network
analysis, and illustrates how the queuing models can be used to calculate
delays and service times. Before using any queuing model, it is important
to first identify the type of queue being modeled (e.g., is it an M/M/2
queue or some other type of queue?). Next, the assumptions of the queue
should be examined to see if they are reasonable for the situation being
modeled. A fundamental assumption inherent in the models presented
here is that the system will eventually reach a steady state. This is not a
reasonable assumption for a system that is always in flux and changing.
One way that this assumption can be checked is to examine the utilization
factor ρ (see definition in Table 8.8). As ρ approaches unity (1), a queuing
system will become congested with infinitely long queues, and will not
reach a steady state. The steady-state queuing models presented below
also assume that the inter-arrival times and the service times can be
modeled by exponential probability distributions.
Sometimes not enough is known about the arrival and service mech-
anism to specify a probability distribution, or perhaps what is known
about these processes is too complex to be analyzed. In this case, one
may have to make do with the first and second moments of the probability
distributions. This corresponds to the mean and variance, respectively.
These measures allow one to calculate the squared coefficient of variation
of the distribution. The coefficient of variation, C2, is defined as:

C2 = V(X) / (X)2

where:
V(X) = variance of probability distribution
X = mean of probability distribution

When the inter-arrival times are deterministic and constant, C2 is equal to
zero. When the inter-arrival times are exponentially distributed, C2 is equal
to one. When C2 is small, the arrivals tend to be evenly spaced. As C2
approaches one, the arrival process approaches a Poisson process, with
exponential inter-arrival times, and the process is said to be random. When
C2 exceeds one, the arrival process becomes bursty, with large intermittent
peaks.
Except where explicitly stated otherwise, all the steady-state queuing
models presented herein assume an infinite population awaiting service. In
reality, this is seldom the case. However, these formulas still provide a
good approximation when the population exceeds 250 and the number

of servers is small. When the population is less than 50, a finite population
model should be used. [MINO96]
The steady-state models introduced here also assume that traffic pat-
terns are consistent and do not vary according to the time of day and
source of traffic. This, too, is an unrealistic assumption for most telecom-
munication systems. Despite the fact that this assumption is rarely satisfied
in practice, the steady state models still tend to give very good results.
The steady-state models also assume that all the inputs are independent
of each other. In a network, it is entirely likely that problems in one area
of the network will contribute to other failures in the network. This might
occur, for example, when too much traffic from one source creates a system
overload that causes major portions of the network to overload and fail as
well. Despite the fact that this assumption is also rarely satisfied in practice,
the models still provide useful results in the context of network design.
One of the compelling reasons for using steady-state queuing models
is that, despite their inherent inaccuracies and simplifications of reality,
they often yield robust, good results. The models are also useful because
of their simplicity and closed form solution. If one tries to interject more
realism in the model, the result is often an intractable formula that cannot
be solved (at least as easily). The requirements for realism in a model
must always be weighed against the resulting effort that will be required
to solve a more complex model.

8.6.2.1 M/M/1 Model


This type of queue might be used to represent the flow of jobs to a single
print server, or the flow of traffic on a single T1 link. Using standard
queuing notation, the major steady-state relationships for M/M/1 queues
are given below. For these relationships to hold, λ/µ must be less than
1. When the (λ/µ) ratio equals or exceeds one (1), the queue will grow
without bound and there will be no system steady state.

Probability that the system will be empty = 1 − (λ / µ) (8.7)

Probability that there will be n inputs in the system = ρn (1 − ρ) (8.8)

Expected number of inputs in the system = L = ρ / (1 − ρ) (8.9)

Expected number of inputs in the queue = Lq = ρ2 / (1 − ρ) (8.10)

Expected total time in system = W = 1/(µ − λ) (8.11)

Expected delay time in queue = Wq = ρ/ ((1 − ρ) ∗ µ) (8.12)
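These relationships are simple enough to evaluate directly. The following
Python sketch is an added illustration (the function name is arbitrary); it
packages Equations (8.7) through (8.12) and reproduces the waiting time
computed in the first example below.

# Sketch of the M/M/1 steady-state formulas (Equations 8.7 through 8.12).
def mm1(lam, mu):
    rho = lam / mu
    if rho >= 1:
        raise ValueError("No steady state: lambda/mu must be less than 1")
    return {"P0": 1 - rho,                  # probability the system is empty
            "L":  rho / (1 - rho),          # expected number in the system
            "Lq": rho ** 2 / (1 - rho),     # expected number in the queue
            "W":  1 / (mu - lam),           # expected total time in the system
            "Wq": rho / ((1 - rho) * mu)}   # expected delay in the queue

# Database access example below: lambda = 2 requests/sec, mu = 5 requests/sec.
print(mm1(2, 5)["Wq"])   # 0.1333... seconds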



8.6.2.1.1 M/M/1 Example of Database Access Delay


A very large number of users request information from a database man-
agement system. The average inter-arrival time of each request is 500
milliseconds. The database look-up program requires an exponential
service time averaging 200 milliseconds. How long will each request have
to wait on average before being processed?

Answer:
As given in the problem, λ = 1/(0.50 seconds) = 2 requests per second;
µ = 1/(0.20 seconds) = 5 requests per second; and λ/µ = ρ = 2/5
Therefore, the expected delay waiting in queue is:

Wq = ρ / ((1 − ρ) ∗ µ) = 0.4/((1 − 0.4) * 5) = 0.1333 seconds

8.6.2.1.2 M/M/1 Example of Delay on Communications Link


A very large number of users share a communications link. The users
generate, on average, one transmission per minute. The message lengths
are exponentially distributed, with an average of 10,000 characters. The
communications link has a capacity of 9600 bps. One wants to know:

1. What is the average service time to transmit a message?


2. What is the average line utilization?
3. What is the probability that there are no messages in the system?

Answer:
(1) The average service time is the message length divided by the channel
speed:

1/µ = (10,000 characters * 8 bits per character)/9600 bits per second


= 8.3 seconds

(2) The average line utilization is λ / µ = ρ = (0.0167)/(0.12) or the line is


utilized at an average rate of 13.9 percent.

λ = (1 message per minute) / (60 seconds per minute) = 0.0167 messages per second

µ = 1/8.3 = 0.12 messages per second



(3) The probability that the system will be empty is 1 − (λ / µ) = 1 −


(0.139) = 0.861, or 86.1 percent of the time the line is empty.

8.6.2.2 M/M/1/k Model


This type of queue is used to represent a system that has a maximum
total capacity of k positions, including those in queue and those in service.
This type of queue might be used to model a network node with a limited
number of ports or buffer size. Using standard queuing notation, the major
steady-state relationships for a M/M/1/k queue are summarized as:

Probability that the system will be empty = (1 − ρ)/(1 − ρk+1) (8.13)

where ρ ≠ 1

Probability that there will be n inputs in the system = pn
= ρn (1 − ρ)/(1 − ρk+1) for n = 0, 1, …, k (8.14)

Expected number of inputs in the system = L =
[ρ − (k + 1)ρk+1 + kρk+2] / [(1 − ρ)(1 − ρk+1)] (8.15)

Expected number of inputs in the queue = Lq = L − ρ (1 − pk) (8.16)

Expected total time in system = W = L/(λ ∗ (1 − pk)) (8.17)

Expected delay time in queue = Wq = Lq/(λ ∗ (1 − pk)) (8.18)

8.6.2.2.1 M/M/1/5 Example of Jobs Lost in Front-End Processor due


to Limited Buffering Size
Assume the buffers in a front-end processor (FEP) can handle, at most,
five (5) input streams (i.e., k = 5). When the FEP is busy, up to a maximum
of four (4) jobs are buffered in queue. The average number of jobs arriving
per minute is five (5), while the average number of jobs the FEP processes
per minute is six (6). How many jobs, on average, are lost, or turned
away, due to inadequate buffering capacity?

Answer:
A job will be turned away when it arrives at the system and there are
already five jobs (one in service and four in the queue) ahead of it.
Thus, to find the number of jobs that are turned away, calculate the
probability that a job will arrive when there are already four jobs in the
queue and one in service, and multiply this by the arrival rate.

λ = 5 per minute ; µ = 6 per minute; λ/µ = ρ = 5/6 = .833


Probability that there will be 5 inputs in the system = p5 = ρ5 (1 − ρ)/(1 − ρ6)
= (0.833)5 (1 − 0.833)/(1 − (0.833)6)
= 0.10

Therefore, λ * p5 = (5 per minute) * (0.1) = 0.5 jobs per minute are turned
away.
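The same result can be checked numerically. The short sketch below
(added for illustration, using Equations 8.13 and 8.14) computes the
blocking probability and the lost-job rate for this example.

# Sketch: blocking in an M/M/1/k queue, applied to the front-end
# processor example with k = 5, lambda = 5 per minute, mu = 6 per minute.
def mm1k_pn(lam, mu, k, n):
    rho = lam / mu
    return (rho ** n) * (1 - rho) / (1 - rho ** (k + 1))

lam, mu, k = 5.0, 6.0, 5
p_block = mm1k_pn(lam, mu, k, k)       # probability the system is full
print(p_block, lam * p_block)          # about 0.10 and 0.5 jobs per minute lost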

8.6.2.3 M/M/c Model


This type of queue might be used to represent the flow of traffic to c
dial-up ports, or the flow of calls to a PBX with c lines, or traffic through
a communications link with c multiple trunks. Using standard queuing
notation, the major steady-state relationships for M/M/c queues are listed
below. For these relationships to hold, λ/(c * µ) must be less than 1. When
this ratio equals or exceeds 1, the queue will grow without bound and
there will be no system steady state. Although the equations are more
complicated than the ones introduced thus far, they are nonetheless easily
solved using a calculator or computer.

The probability that the system will be empty = P0 =

[ Σ (from n = 0 to c − 1) (λ/µ)n / n!  +  (λ/µ)c / (c! (1 − λ/(c µ))) ]−1 (8.19)

The probability that there will be n in the system = Pn =

[(λ/µ)n / n!] * P0 for 0 ≤ n ≤ c
[(λ/µ)n / (c! cn−c)] * P0 for n > c (8.20)


Expected number of inputs in the system = L = (λ/µ) + Lq (8.21)

Expected number of inputs in the queue = Lq = [(λ/µ)c ρ P0] / [c! (1 − ρ)2] (8.22)
Expected total time in system = W = 1/µ + Wq (8.23)

Expected delay time in queue = Wq = Lq /λ (8.24)
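Although these expressions look imposing, they are easy to evaluate with a
short program. The sketch below is an added illustration (the function name
is arbitrary); it implements Equations (8.19) through (8.24) and reproduces
the two-server figures derived in the next example.

# Sketch of the M/M/c steady-state formulas (Equations 8.19 through 8.24).
from math import factorial

def mmc(lam, mu, c):
    a = lam / mu                       # offered load, lambda/mu
    rho = a / c                        # utilization, lambda/(c*mu)
    if rho >= 1:
        raise ValueError("No steady state: lambda/(c*mu) must be less than 1")
    p0 = 1.0 / (sum(a ** n / factorial(n) for n in range(c))
                + a ** c / (factorial(c) * (1 - rho)))
    lq = (a ** c) * rho * p0 / (factorial(c) * (1 - rho) ** 2)
    wq = lq / lam
    return {"P0": p0, "Lq": lq, "L": lq + a, "Wq": wq, "W": wq + 1 / mu}

# Print server example below: lambda = 24 jobs/hour, mu = 30 jobs/hour, c = 2.
r = mmc(24, 30, 2)
print(r["P0"], r["Wq"] * 60, r["W"] * 60)   # about 0.43, 0.38 min, 2.38 min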

8.6.2.3.1 M/M/c Example of Print Server Processing Times


Two print servers are available to handle incoming print jobs. Arriving
print jobs form a single queue from which they are served on a first-
come, first-served (FCFS) basis. The accounting records show that the
print servers handle an average of 24 print jobs per hour. The mean service
time per print job has been measured at two (2) minutes. It seems reasonable,
based on the accounting data, to assume that the inter-arrival times and
service times are exponentially distributed. The network manager is thinking
of removing and relocating one of the print servers to another location.
However, the system users want a third print server added to speed up
the processing of their jobs. What service levels should be expected with
one, two, or three print servers?

Answer:
The appropriate model to use for the current configuration is (M/M/2)
with λ = 24 per hour, and µ = 30 per hour. The probability that both
servers are expected to be idle is equal to the probability that there are
no print jobs in the system. From Equation 8.19, this is calculated as:

P0 = [ Σ (from n = 0 to 1) (24/30)n / n!  +  (24/30)2 / (2! (1 − 24/60)) ]−1
   = [1 + 0.8 + 0.5333]−1 = 0.43

One server will be idle when there is only one print job in the system.
The fraction of time this will occur is:

 24 
P1 =   * P0 = 0.344
 30 

Both servers will be busy whenever two or more print jobs are in the
system. This is computed as:

P (Busy ) = 1 − P0 − P1 = 0.226

The expected number of jobs in the queue is given in Equation (8.22):

Lq = [(24/30)2 * (24/60) * P0] / [2! * (1 − 24/60)2] = 0.153

The expected number of jobs in the system is calculated according to
Equation (8.21):

L = (λ/µ) + Lq = 0.8 + 0.153 = 0.953

The corresponding Wq and W waiting times, as computed from Equation
(8.24) and Equation (8.23), are:

Wq = Lq/λ = 0.153/24 = 0.006375 hours * 60 minutes per hour = 0.3825 minutes
W = 1/µ + Wq = 0.0333 + 0.0064 = 0.0397 hours * 60 minutes per hour
= 2.3825 minutes

Similar calculations can be performed for a single print server and for
three print servers. The results of these calculations are summarized in
Table 8.9.

Table 8.9 Comparison of M/M/1, M/M/2, and M/M/3 Queues

       1 Print Server    2 Print Servers    3 Print Servers
P0     0.2               0.43               0.44
Wq     8 minutes         0.3825 minutes     0.1043 minutes
W      10 minutes        2.3825 minutes     2.1043 minutes

8.6.2.3.2 M/M/1 versus M/M/c Example and Comparison of Expected


Service Times
Given the same situation as presented in the previous example, what
would happen to the expected waiting times if a single, upgraded print
server were to replace the two print servers currently used? The network
manager is thinking of installing a print server that would have the capacity
to process 60 jobs per hour.

Answer:
The new option being considered equates to an M/M/1 queue. The new
print server has an improved service rate of µ = 60 jobs per hour. The
calculations for the expected waiting times are shown below.

P0 = 1 − (λ/µ) = 1 − (24/60) = 0.6
W = 1/(µ − λ) = 1/(60 − 24) = 0.0278 hours * 60 minutes/hour = 1.67 minutes
Wq = ρ/((1 − ρ) ∗ µ) = 0.4/((1 − 0.4) * 60) = 0.0111 hours * 60 minutes/hour
= 0.666 minutes

These calculations demonstrate a classic result: with respect to the total
time spent in the system, it is better to have a single, more powerful server
whose service rate equals the combined rate of c slower servers than it is
to have the c slower servers. Another important implication of this is that
when a network is properly configured from a queuing perspective, it may
be possible to provide better service at no additional cost.

8.6.2.4 M/G/1 Model


In the models presented thus far, it is assumed that all the message lengths
are randomly and exponentially distributed. In the case of packet-switched
networks, this is not a valid assumption. Now consider the effects that
packet-switched data has on the delay in the network. Consider an M/G/1
queue in which arrivals are Poisson and independent, a single server is
present, and the service time distribution is general. Using the
Pollaczek-Khinchine formula, it can be shown that the average waiting
time in an M/G/1 system is: [GROSS74, p. 226]

Wq = λ E[1/µ2] / (2 (1 − ρ)) (8.25)

where E[1/µ2] is the second moment of the service distribution, defined as:

E[1/µ2] = Σ (over j) Pj (1/µj)2 (8.26)

where:
Pj = probability of a message being type j
1/µj = service time for a message of type j

The variance V of the service distribution is given by:

V = E[(1/µ)2] − (E[1/µ])2 (8.27)

8.6.2.4.1 M/G/1 Example of Packet-Switched Transmission Delay


There are two networks. Both networks use 56-Kbps lines that are, on
average, 50 percent utilized. Both networks transmit 1000-bit messages,
on average. The first network transmits exponentially distributed message
lengths. The second network transmits packets of constant message length.
Compare the waiting times for message processing in the two network
configurations.

Answer:
First consider the case when the message length is constant. From the
data, the mean service time and the average arrival rate can be computed as:

1/µ = 1,000 bits / 56,000 bits per second = 0.0179 seconds per message

λ = (56,000 * 0.5) / 1,000 = 28 messages per second

For a constant message length, the second moment of the service time is:

E[(1/µ)2] = (0.0179)2 ≈ 0.00032 sec2

The waiting time can now be computed from Equation (8.25) as:

Wq = λ E[1/µ2] / (2 (1 − ρ)) = (28 * 0.00032) / (2 (1 − 0.5)) ≈ 0.0089 sec/msg

Now consider the case when the message length is exponentially
distributed. This corresponds to an M/M/1 queue. The waiting time
calculation for this queue is given by Equation (8.12) and is calculated as:

µ = 1 / 0.0179 = 56 messages per second

Wq = ρ / ((1 − ρ) ∗ µ) = 0.5 / ((1 − 0.5) * 56) ≈ 0.0179 sec/msg

Thus, the delay when the message lengths vary according to an
exponential distribution is twice as long as when the message lengths
are constant. Note that in both of these cases, the average message
length is the same. This is an important result that demonstrates why, all
other things being equal, packet-switched networks that transmit fixed-length
packets are more efficient than networks that transmit messages of varying
length.
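A short numerical check of this comparison, using Equation (8.25), is
sketched below (added for illustration).

# Sketch: M/G/1 waiting time for constant versus exponentially distributed
# message lengths on a 56-Kbps link at 50 percent utilization.
line_speed = 56000.0          # bits per second
msg_bits   = 1000.0           # average message length in bits
rho        = 0.5              # utilization

service = msg_bits / line_speed          # mean service time, about 0.0179 sec
lam = rho / service                      # arrival rate, 28 messages per second

second_moment_constant    = service ** 2         # deterministic lengths
second_moment_exponential = 2 * service ** 2     # exponential lengths

wq_constant    = lam * second_moment_constant / (2 * (1 - rho))
wq_exponential = lam * second_moment_exponential / (2 * (1 - rho))
print(wq_constant, wq_exponential)   # about 0.0089 sec versus 0.0179 sec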

8.6.2.5 M/M/c/k Model


For the M/M/c/k model, one can assume that k is greater than or equal to c.
This model corresponds to a system that has room for at most k customers,
including those in service; the waiting room is limited to k − c positions.
When the system is full, arrivals are turned away and denied service.
Telephone systems that cannot buffer calls exhibit this behavior (the special
case k = c, treated at the end of this section). The steady-state equations
for this queue are given below.
The probability that the system will be empty = P0 =

[ Σ (from n = 0 to c − 1) (λ/µ)n / n!  +  ((λ/µ)c / c!) * ((1 − ρk−c+1) / (1 − ρ)) ]−1 (8.28)

where:
λn = λ for 0 ≤ n < k, and λn = 0 for n ≥ k
µn = nµ for 1 ≤ n ≤ c, and µn = cµ for c < n ≤ k
ρ = λ/(c µ)


The probability that the system will contain n inputs = Pn =

[(λ/µ)n / n!] * P0 for 0 ≤ n ≤ c
[(λ/µ)n / (c! cn−c)] * P0 for c < n ≤ k (8.29)


The effective arrival rate (the rate of arrivals actually admitted to the
system) is less than the offered arrival rate λ, and is calculated as:

λe = λ (1 − Pk) (8.30)

The expected queue length, Lq, is calculated from the use of sum calculus
and its definition as:

Lq = [(λ/µ)c ρ P0] / [c! (1 − ρ)2] * {1 − ρk−c − (k − c)(1 − ρ) ρk−c} (8.31)

where:
ρ = λ/(c µ)

Because the carried load is the same as the mean number of busy
servers, one can calculate the expected number in the system as:

L = Lq + (λ/µ)(1 − pk) (8.32)

From Little’s law, it follows that the waiting times are:

Wq = Lq / (λ (1 − pk)) (8.33)

and

W = Wq + 1/µ (8.34)

We now consider a special case of the M/M/c/k model: the M/M/c/c


queue. For this queue, the effective arrival and service rates are:

λn = λ for 0 ≤ n < c, and λn = 0 for n ≥ c (8.35)

µn = nµ for 1 ≤ n ≤ c, and µn = 0 elsewhere (8.36)

The probability that the system will be empty = P0 =

[ Σ (from n = 0 to c) (λ/µ)n / n! ]−1 (8.37)

The probability that there will be n in the system = Pn =

[e−cρ (cρ)n / n!] / [ Σ (from j = 0 to c) e−cρ (cρ)j / j! ] (8.38)

where: ρ = λ/(c µ)

8.6.2.6 Erlang’s Loss Formula


In the M/M/c/c model, the system is saturated when all the channels are
busy (i.e., for Pc). By multiplying the numerator and the denominator of
Equation (8.38) by e-cρ, one can obtain a truncated Poisson distribution
with parameter values (c * ρ = λ/µ). These values can be obtained from
tables of the Poisson distribution, or from an Erlang B table, thereby
simplifying the calculations for Pc below. This formula is perhaps better
known as Erlang’s loss formula. In the context of a telephone system, it
describes the probability that an incoming call will receive a busy signal
and will be turned away. This formula was used to design telephone
systems that satisfy predefined levels of acceptable service.

Pc = [e−cρ (cρ)c / c!] / [ Σ (from j = 0 to c) e−cρ (cρ)j / j! ] (8.39)

8.6.2.6.1 M/M/c/c/Example of Blocked Phone Lines


An office has four shared phone lines. Currently, half the time a call is
attempted, all the phone lines are busy. The average call lasts two minutes.
What arrival rate of call attempts is implied by this level of blocking?

Answer:
In this problem, one is given the fact that P4 = 0.5. Therefore, using
Equation (8.39) and an Erlang loss or Poisson table, the value P4 = 0.5 is
obtained when cρ = λ/µ = 6.5. Because the average call lasts two minutes,
µ = 0.5 calls per minute, and the implied arrival rate is:

λ = 6.5 * µ = 6.5 * 0.5 = 3.25 calls per minute
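In place of a printed Erlang B table, the blocking probability can also be
computed with the standard Erlang B recursion, as in the added sketch
below (the recursion is a well-known identity and is not taken from the
text).

# Sketch: Erlang B blocking probability via the standard recursion
# B(0, a) = 1;  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a)).
def erlang_b(c, a):
    b = 1.0
    for n in range(1, c + 1):
        b = a * b / (n + a * b)
    return b

# An offered load of a = lambda/mu = 6.5 Erlangs blocks about half of the
# attempts on c = 4 lines, matching the example above.
print(erlang_b(4, 6.5))   # about 0.50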

8.6.2.7 Total Network Delay


Thus far, queuing models have been used to estimate delay on a single
network component. This section illustrates how these calculations can
be extended to estimate the overall network delay or response time. Note
that, in general, multiple calculations are needed to compute the overall
network delay, as demonstrated below. These calculations can be involved
and tedious — especially for a large network — and in actual practice it
is best to use software routines to automate these calculations. Many
network design tools offer built-in routines to estimate delay using tech-
niques similar to those described here.
By way of example, one can construct a delay model for a packet-
switched network. In packet-switched networks, propagation delay, link
delay, and node delay are the major components of network delay. This
can be summarized as:

Delay Total = Total average link delay + Total average node delay
+ Total average propagation delay (8.40)

Assuming an M/M/1 service model to approximate traffic routing on


the network, total average link delay on the network can be estimated by:

Dlink = [ Σ (over all links l) Dl * Fl ] / [ Σ (from i = 1 to I) Σ (from j = 1 to J) Rij ] (8.41)

where:
Rij = traffic requirement from node i to node j
Fl = flow on link l
Dl = delay on link l = (P / Sij) / (1 − (Fl / Cij))
P = packet length
Sij = link speed from node i to node j
Cij = link capacity from node i to node j

Figure 8.17 Nested shortest paths: the shortest path from (i, j) is through
the shortest path from (i, k) and the shortest path from (k, j).

Thus, the total average network delay is the sum of the expected delay
on all the links. The unknown variable in the above equation for Dlink is
link flow. A shortest path algorithm can be used at this stage to assign
link flows to solve for this variable.
A shortest path algorithm computes the shortest path between two
given points, and is based on the insight that the shortest paths are contained
within each other. This is illustrated in Figure 8.17. Thus, as described in
[KERS93, p. 157]:

“If a node, k, is part of the shortest path from i to j, then the


shortest i, j-path must be the shortest i, k-path followed by the

shortest j, k-path. Thus we can find shortest paths using the


following recursion:

dij = mink (dik + dkj)

where dxy is the length of the shortest path from x to y.”


Dijkstra’s algorithm and Bellman’s algorithm, two well-known examples
of shortest path algorithms, specify ways to initiate the recursion equation
given above so that a solution can be found.
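For illustration, a compact version of Dijkstra’s algorithm is sketched
below (an added example; the small graph and its link lengths are
hypothetical).

# Sketch of Dijkstra's shortest path algorithm on a small hypothetical graph.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, length), ...]}; returns shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry; a shorter path was found
        for v, length in graph.get(u, []):
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": [("B", 1), ("C", 6)], "B": [("C", 5)], "C": [("D", 2)], "D": []}
print(dijkstra(graph, "A"))   # {'A': 0.0, 'B': 1.0, 'C': 6.0, 'D': 8.0}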
Node delay is a function of node technology. Assume, for the purposes
of illustration, that this delay is a constant 120 milliseconds. Thus, total
average node delay is estimated by:

Dnode = Cnode * A (8.42)

where:
Dnode = total average node delay
Cnode = 120 milliseconds/node
A = average number of nodes per shortest routing path

Propagation delay is the time it takes electromagnetic waves to traverse


the transmission link. Propagation delay is proportional to the physical
distance between the node originating a transmission and the node receiv-
ing the transmission. If the tariff database is used to compute link costs,
it is also likely that V (vertical) and H (horizontal) distance coordinates
for each node are available so that the mileage between nodes can be
estimated. This mileage multiplied by the physical limit of electronic speed
— 8 microseconds per mile — can be used to obtain an estimate of the
total average propagation delay, as indicated below. In general, propaga-
tion delays are small compared to the queuing and transmission delays
calculated in the previous section.

Dprop = [ Cprop * Σ (over all links l) Fl * Ml ] / [ Σ (from i = 1 to I) Σ (from j = 1 to J) Rij ] (8.43)

where:
Dprop = total average propagation delay
Rij = traffic requirement from node i to node j
Cprop = 8 microseconds/mile

Fl = flow on link l
Ml = length in miles of link l

8.6.2.8 Other Types of Queues


There are other types of queues that have not been discussed thus far
but which are useful in network analysis. These queues include:

 Queues with state-dependent services. In this model, the service


rate is not constant and may change according to the number in
the system, either by slowing up or by increasing in speed.
 Queues with multiple traffic types. These types of queues are com-
mon in network applications. To handle the arriving traffic, one
may wish to employ a priority service regime, to give faster service
to some kinds of arrivals relative to the others.
 Queues with impatience. This type of queue is designed to model
the situation in which an arrival joins the queue but then becomes
impatient and leaves before it receives service. This is called
reneging. Another type of impatience, called balking, occurs when
the arrival leaves upon finding that a queue exists. A third type of
impatience is associated with jockeying, in which an arrival will
switch from one server to the next to try to gain an advantage in
being served.

These models interject more realism in the network analysis, at the expense
of simplicity. The interested reader is referred to [KERS93] and [KLIE75]
for more information in this area.

8.6.2.9 Summary
Queuing is broadly applicable to many network problems. This section
presented examples of queuing analysis for the following types of network
configurations:

 Delay in a T1-based WAN


 Transmission speed in a packet-switched network
 Printer-server on a LAN with limited buffering
 Expected call blockage in a PBX
 Total network delay in an APPN packet-switched network

The queueing models presented thus far are based on steady-state


assumptions. These assumptions were made to keep the models simple
and easy to solve. It is important to maintain a perspective on what degree

of accuracy is needed in estimating the network delays. If further refine-


ments to the delay estimates will require substantially more time and
effort, then this may well temper the decision to attempt to introduce
more realism into the models. The models introduced here provide a
high-level approximation of reality, and thus there may well be discrep-
ancies between the anticipated (i.e., calculated) system performance and
actual observed performance.

8.6.3 Analysis of Network Reliability


There are numerous ways that network reliability can be estimated. This
section surveys three commonly used methods for analyzing reliability:
(1) component failure analysis, (2) graphical tree analysis, and (3) k-
connectivity analysis. The interested reader is referred to [KERS93] for a
comprehensive treatment of this subject.
One measure of network reliability is the fraction of time the network
and all of its respective components are operational. This can be expressed
as:

Reliability = 1 − (MTTR)/(MTBF) (8.44)

where MTTR is the mean time to repair failure, and MTBF is the mean
time before failure. For example, if the above equation is used to compute
a reliability of 0.98, this means that the network and its components are
operational 98 percent of the time.
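
As a small numerical illustration of Equation 8.44 (with hypothetical repair and failure statistics), a component that takes an average of 10 hours to repair and runs an average of 500 hours before failing would be counted as operational 98 percent of the time:

# Equation 8.44 with assumed (hypothetical) repair and failure statistics.
mttr_hours = 10.0    # mean time to repair
mtbf_hours = 500.0   # mean time before failure
reliability = 1 - mttr_hours / mtbf_hours
print(f"Reliability = {reliability:.2f}")   # prints 0.98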
Because the network is comprised of multiple components, each
component contributes to the possibility that something will fail. One
commonly used model of component failure assumes that failures will
occur according to the exponential probability distribution. An exponential
random variable with parameter λ is a continuous random variable whose
probability density function is given for some λ > 0 by:

f(x) = λe^(−λx) if x ≥ 0, and f(x) = 0 if x < 0     (8.45)

The cumulative distribution function F of an exponential variable is


given by:

F(a) = 1 − e^(−λa) for a ≥ 0     (8.46)

These definitions are now used in an illustrative sample problem.


Assume that the failure of a network component can be modeled by an
exponential distribution with parameter λ = 0.001, where x is measured in days. Thus, for this component:

f(x) = 0.001e^(−0.001x)   and   F(x) = 1 − e^(−0.001x)

Using the cumulative distribution function F(x) above, one can
compute the probability of failure over various time periods:

Probability that the network component will fail within 100 days = 1 − e^(−0.1) ≈ 0.10
Probability that the network component will fail within 1,000 days = 1 − e^(−1) ≈ 0.63
Probability that the network component will fail within 10,000 days = 1 − e^(−10) ≈ 1.0
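
These figures can be reproduced directly from the cumulative distribution function, as in the short Python sketch below (λ = 0.001 per day, as assumed above); it simply prints the same probabilities to three decimal places.

import math

lam = 0.001   # failure rate per day, as in the example above

def prob_fail_within(days):
    """F(t) = 1 - exp(-lambda * t): probability the component fails by time t."""
    return 1 - math.exp(-lam * days)

for days in (100, 1_000, 10_000):
    print(f"P(failure within {days:>6,} days) = {prob_fail_within(days):.3f}")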

This model describes the probability of failure for a single network


component. A more generalized model of network failure is given in the
simple model below. This model says that the network is connected only
when all the nodes and links in the network are working. It also assumes
that all the nodes have the same probability of failure p, and all the links
have the same probability of failure p′. A final assumption made by this
model is that the network is a tree containing n nodes and, therefore, n − 1 links. Under these
assumptions, the probability that the network is fully operational, and hence the probability
that it will fail, can be written as: [CAHN98]

Probability (operational) = (1 − p)^n × (1 − p′)^(n−1)
Probability (failure) = 1 − (1 − p)^n × (1 − p′)^(n−1)     (8.47)
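
A direct coding of this model might look like the sketch below; the node and link failure probabilities used in the example are illustrative assumptions.

# Tree network with n nodes and n - 1 links, where each node fails with
# probability p and each link with probability p_prime (Equation 8.47).
def tree_network_probabilities(n, p, p_prime):
    p_operational = (1 - p) ** n * (1 - p_prime) ** (n - 1)
    return p_operational, 1 - p_operational

# Hypothetical example: 10 nodes, 1% node failure, 2% link failure.
p_up, p_fail = tree_network_probabilities(10, 0.01, 0.02)
print(f"P(operational) = {p_up:.3f}   P(failure) = {p_fail:.3f}")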

When the networks involved are more complex than a simple tree,
this formula no longer holds. When the network is a tree, there is only
one path between nodes. However, in a more complex network, there
may be more than one path, even many paths, between nodes that would
allow the network to continue to function if one path were disconnected.
With the previous approach, all combinations of paths between nodes
should be examined to determine the probabilities of link failures asso-
ciated with a network failure. For a network of any substantial size, this
gives rise to many combinations; that is, there is a combinatorial explosion
in the size of the solution space. This type of problem is, in general, very
computationally intensive.
The following discussion focuses on alternative strategies for estimating
the reliability of complex networks containing cycles and multiple point-
to-point connections. Except for the smallest of networks, the calculations
are sufficiently involved as to require a computer.
Graph reduction is one technique used to simplify the reliability analysis
of complex networks. The idea of graph reduction is to replace all or
Figure 8.18 Example of one-connected (k = 1) and two-connected (k = 2) graphs.

part of the network with an equivalent, yet simpler graphical tree repre-
sentation. For example, one type of reduction is parallel reduction. If there
are parallel edges between two nodes and the two edges are operational
with probabilities p and p′, respectively, then it is possible to replace the
two edges with a single edge whose probability is equal to the probability
that either or both of the edges are operational. Other transformations are
possible (e.g., to reduce a series of edges, etc.) and are likely necessary
to sufficiently transform a complex network so that its reliability can be
calculated. For more information on graph reduction techniques, the
reader can refer to [KERS93].
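
The two basic reductions can be written in a few lines. In the sketch below, probabilities refer to the chance that an edge is operational; the function names and example values are illustrative and do not follow any particular published algorithm's interface.

# Elementary graph-reduction steps on edge "up" probabilities.
def parallel_reduction(p, p_prime):
    """Two parallel edges: the pair works if either (or both) works."""
    return 1 - (1 - p) * (1 - p_prime)

def series_reduction(p, p_prime):
    """Two edges in series: the pair works only if both work."""
    return p * p_prime

# Hypothetical example: two 0.95 links in parallel, in series with a 0.99 link.
combined = series_reduction(parallel_reduction(0.95, 0.95), 0.99)
print(f"Equivalent single-edge probability: {combined:.4f}")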
“K-connectivity” analysis is also useful in assessing network reliability.
It strives to characterize the survivability of a network in the event of a
single-component failure, a two-component failure, or a k-component
failure. If a network contains k separate, disjoint paths between every
pair of nodes in the network, then the network is said to be k-connected.
Disjoint paths have no elements in common. Paths are edge-disjoint if
they have no edges in common. Likewise, paths are node-disjoint if they
have no nodes in common. An example of a one- and a two-connected
graph is provided in Figure 8.18.
It is possible to test for k-connectivity, either node or edge, by solving
a sequence of maximum flow problems. This is a direct result of work
by Kleitman, who showed that: [KLEI69]

Given a graph G with a set of vertices and edges (V, E), G is


said to be k-connected if for any node v ∈ V there are k node-
disjoint paths from v to each other node and the graph G’
formed by removing v and all its incident edges from G is (k−
1) connected.

Thus, it is possible to determine the level of k-connectivity in a network


by performing the following iterative procedure described in [KERS93].
The computational complexity of this process is O(k²N²).
…it is only necessary to find k paths from any node, say v1, to
all others, k−1 paths from another node, say v2 to all others in
the graph with v1 removed, and k−2 paths from v3 to all others
in the graph with v1 and v2 removed, etc. [KERS93]
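
A minimal sketch of the max-flow building block is shown below. With unit capacities on each direction of every edge, the maximum flow between two nodes equals the number of edge-disjoint paths between them; checking the flow from one fixed node to every other node then gives the graph's edge connectivity, while node-disjoint paths require the split-node transformation described next. The routine and the five-node example are illustrative, not code from [KLEI69] or [KERS93].

from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Integer max flow by BFS augmenting paths, pushing one unit per path
    (adequate for the unit-capacity graphs used here).
    cap: dict {(u, v): capacity}; residual arcs are created as needed."""
    cap = defaultdict(int, cap)
    adj = defaultdict(set)
    for u, v in list(cap):
        adj[u].add(v)
        adj[v].add(u)            # also traverse residual (reverse) arcs
    flow = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:            # no augmenting path left
            return flow
        v = t
        while parent[v] is not None:   # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def edge_disjoint_paths(edges, s, t):
    """Edge-disjoint s-t paths in an undirected graph: unit capacity each way."""
    cap = {}
    for u, v in edges:
        cap[(u, v)] = 1
        cap[(v, u)] = 1
    return max_flow(cap, s, t)

# Hypothetical example: a 5-node ring. Every pair of nodes is joined by
# exactly two edge-disjoint paths, so the minimum below is 2.
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(min(edge_disjoint_paths(ring, 0, u) for u in range(1, 5)))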

Thus far, the discussion has focused on link failures. Clearly, if there is
a node failure, the network will not be fully operational. However, a network
failure caused by a node cannot be corrected by the network topology,
because the topology deals strictly with the interconnections between
nodes. To compensate for possible link failures, one can design a topology
that provides alternative routing paths. When a node is out of service,
however, the only way to restore the network is to put the node (or some
replacement) back in service. In practice, back-up or
redundant node capacity is designed into the network to compensate for
this potential vulnerability.
Let us demonstrate how k-connectivity can be used to assess the impact
of node failures. One begins by transforming the network representation
to an easier one to analyze. Suppose one is given the undirected graph
in Figure 8.19. To transform the network, begin by selecting a target node.
Then transform all the incoming and outgoing links from that node into
directed links, as shown in Step 2 in Figure 8.19. Finally, split the target
node into two nodes: i and i′. We connect nodes i and i′ with a new link.
All incoming links stay with node i and all the outgoing links stay with
node i′. This is shown in Step 3 of Figure 8.19. Once the nodes are
represented as links, the k-connectivity algorithm presented above can be
used to determine the level of k-node connectivity in the network.
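
The transformation in Figure 8.19 is easy to mechanize. In the sketch below (which reuses the max_flow routine from the previous sketch), each node i is split into an "in" copy and an "out" copy joined by a unit-capacity arc, and each undirected link {u, v} becomes the arcs u_out → v_in and v_out → u_in; the maximum flow from s_out to t_in is then the number of node-disjoint s-t paths. The naming and the example graph are illustrative assumptions.

def node_disjoint_paths(edges, nodes, s, t):
    """Count node-disjoint s-t paths by splitting each node into an 'in' and
    an 'out' copy joined by a unit-capacity arc (the Figure 8.19 transform),
    then running the max_flow() sketch above on the resulting directed graph."""
    cap = {}
    for v in nodes:
        cap[((v, "in"), (v, "out"))] = 1        # a node can carry only one path
    for u, v in edges:
        cap[((u, "out"), (v, "in"))] = 1
        cap[((v, "out"), (u, "in"))] = 1
    return max_flow(cap, (s, "out"), (t, "in"))

# Hypothetical example: in the 5-node ring, any two nodes are joined by two
# node-disjoint paths (one clockwise, one counterclockwise).
nodes = range(5)
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(node_disjoint_paths(ring, nodes, 0, 2))   # prints 2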
This section concludes with some guiding principles. Single points of
failure should be avoided in the network. To prevent a single line failure
from disabling the network, the network should be designed to provide
two edge-disjoint paths between every pair of nodes (i.e., k = 2 edge
connectivity) or better. This provides an alternative route for traffic if
one link should fail. However, higher levels of connectivity do not come
cheaply. In general, a multi-connected network is substantially more
expensive than a similar network of lower connectivity. A common target
is to strive for 2-connectivity, and to compensate for weakness in the
topology by using more reliable network components.
However, k-connectivity alone does not guarantee that the network will
be reliable. Consider the network illustrated in Figure 8.20. In this network,
there are two paths for routing traffic between any pair of nodes.
However, should the center node fail, the entire network is disconnected.
A single node whose removal will disconnect the network is called an
articulation point. One solution to this problem is to avoid a design where
any one link or node would have this impact. It is apparent that the
failure of some nodes and links may have more impact on the network
Figure 8.19 k-Connectivity reliability analysis. (Step 0: initial network configuration; Step 1: target node selected for transformation; Step 2: undirected links transformed into directed links; Step 3: target node split into nodes i and i′.)

Figure 8.20 Example of two-connected graph with single node point of failure.
than the failure of others. To the extent possible, one wants to design
networks with excess capacity strategically located in the network. Excess
capacity is desirable for performance reasons, so that traffic of varying
intensity can be carried without excessive delay, but it should also be
added where it will make the most difference to the overall network
reliability. [KERS93]
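
Articulation points can also be located mechanically. The sketch below uses the networkx package's articulation_points function (assuming the library is available) on a small hypothetical topology in the spirit of Figure 8.20: two rings that meet only at a central hub node.

import networkx as nx

# Two triangles joined only at "hub": every pair of nodes has two
# edge-disjoint paths, yet removing the hub disconnects the network.
G = nx.Graph()
G.add_edges_from([("a", "b"), ("b", "hub"), ("hub", "a"),    # left ring
                  ("c", "d"), ("d", "hub"), ("hub", "c")])   # right ring
print(sorted(nx.articulation_points(G)))   # prints ['hub']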

8.6.4 Network Costs


Many of the costs associated with the network are obtainable from
manufacturers and organizations leasing equipment and providing ser-
vices. Important costs associated with the network include:

 Tariff costs for all network links (these are location-specific charges)
 Monthly charges for other network expenses (including device costs, software costs, etc.)
 Installation charges
 Usage-sensitive charges

All network expenses should be reduced to the same units and time
scale. One-time costs, such as installation charges and one-time purchases,
should be converted to monthly costs by amortization. For example, a
$9000 device with an expected useful life of three years has a straight-line
amortized monthly cost of $250.
Likewise, usage-sensitive charges should be converted to consistent
time and dollar units. Usage charges, as the name implies, vary according
to actual system usage. When calculating the network costs, a decision
must be made as to whether an average-cost or a worst-case cost
calculation is needed. In the former case, the average monthly usage fee,
and in the latter case the largest possible monthly usage fee, should be
used in the final cost calculation, as shown in Equation 8.49.
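
The following short sketch illustrates putting costs on a common monthly basis; the dollar figures are hypothetical except for the $9000 device and 36-month life used in the amortization example above.

# Normalizing one-time and usage-sensitive costs to monthly figures.
def monthly_amortized(one_time_cost, useful_life_months):
    return one_time_cost / useful_life_months      # straight-line amortization

device_monthly = monthly_amortized(9000, 36)        # -> 250.0, as in the text

usage_history = [410.0, 385.0, 505.0, 450.0]        # hypothetical past monthly usage fees
average_usage = sum(usage_history) / len(usage_history)   # average-cost basis
worst_case_usage = max(usage_history)                      # worst-case basis
print(device_monthly, average_usage, worst_case_usage)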
A tariff database gives the cost of individual links in the network, based
on their type and distance. It is the most accurate source of information
on monthly line charges. However, the fees for accurate, up-to-date tariff
information can be substantial. Alternatively, monthly link costs can be
estimated using the techniques described in Section 8.2.3. Once the
individual link costs have been tabulated, by whatever means, they are
summed to obtain the total link operating cost:
Total monthly line costs = Σ (n=1 to N) Σ (i=1 to I) Σ (j=1 to J) Onij * Mnij     (8.48)

where:
Onij = cost of link type n, from node i to node j
Mnij = number of type n links between nodes i and j

A similar calculation can be performed for the node costs. The total
monthly network cost is computed as the sum of all monthly charges, as
indicated below:

Total Monthly Costs = Monthly Line Costs + Monthly Amortized Costs +
Monthly Usage Costs + Other Monthly Service Costs     (8.49)
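
The sketch below strings Equations 8.48 and 8.49 together for a small, entirely hypothetical cost table; the dictionary keys and dollar amounts are illustrative assumptions.

# Equation 8.48: total monthly line costs summed over link types and node pairs.
# link_cost[(n, i, j)]  = monthly cost Onij of one type-n link from node i to node j
# link_count[(n, i, j)] = number Mnij of such links installed
link_cost  = {("T1", "NY", "CHI"): 1200.0, ("T1", "CHI", "LA"): 1500.0,
              ("56K", "NY", "LA"): 300.0}
link_count = {("T1", "NY", "CHI"): 2, ("T1", "CHI", "LA"): 1,
              ("56K", "NY", "LA"): 3}

monthly_line_costs = sum(cost * link_count.get(key, 0)
                         for key, cost in link_cost.items())

# Equation 8.49: total monthly cost as the sum of all monthly charges.
monthly_amortized_costs = 250.0     # e.g., the amortized device from the earlier example
monthly_usage_costs     = 437.5     # average- or worst-case usage fee
other_monthly_services  = 600.0     # management, maintenance contracts, etc.

total_monthly_costs = (monthly_line_costs + monthly_amortized_costs +
                       monthly_usage_costs + other_monthly_services)
print(monthly_line_costs, total_monthly_costs)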

Notes
1. Capacity: the capacity of a line refers to the amount of traffic it can carry.
Traffic is usually expressed in “bits per second,” which is abbreviated as
bps. The actual carrying capacity of a line depends on technology, because
the technology determines the amount and nature of “overhead” traffic,
which must be carried on the line.
2. Node: in the context of the network topology, a connection point between
lines. It is a very general term for a terminal, a processor, a multiplexer, etc.
3. The notation refers to the number of combinations of n sources and
destination nodes taken two at a time, with each set containing two
different nodes and no set containing exactly the same two nodes.
4. Tariff: a published rate for a specific communications service, equipment,
or facility that is legally binding on the communications carrier or supplier.
5. LATA: A Local Access and Transport Area (LATA) defines geographic regions
within the United States within which the Bell Operating Companies
(BOCs) can offer services. Different LATAs have different tariff rates.
6. V and H: the Vertical and Horizontal coordinate system was developed by
AT&T to provide a convenient method of computing the distance between
two points using a standard distance formula.
7. SNMP (Simple Network Management Protocol): protocol defined to work
with TCP/IP and establish standards for collecting information and for per-
forming security, performance, fault, accounting, and configuration func-
tions associated with network management.
8. CMIP (Common Management Information Protocol): protocol designed,
like SNMP, to support network management functions. However, it is more
comprehensive in scope and is designed to work with all systems con-
forming to OSI standards. It also requires considerably more overhead to
implement than does SNMP.
9. IP (Internet Protocol): controls the network layer protocol of the TCP/IP


protocol suite.
10. Stochastic process: a process with events that can be described by a proba-
bility distribution function. This is in contrast to a deterministic process
whose behavior is certain and completely known.
11. State: the state of a queuing system refers to the total number in the system,
both in queue and in service. The notation for describing the number of
units in the system (including those in queue and those in service) is N(t).
Similarly, the notation for describing the number of units in the queue at
time t is: Nq(t).

References
[CAHN98] Cahn, R., The Art and Science of Network Design, Morgan Kaufmann,
1998.
[GIFF78] Giffen, W., Queuing Basic Theory and Applications, Grid Series in
Industrial Engineering, Columbus, OH, 1978.
[GROSS74] Gross, D. and Harris, C., Fundamentals of Queuing Theory, John
Wiley & Sons, New York, 1974.
[KERS89] Kershenbaum, A., Interview with T. Rubinson on April 27, 1989.
[KERS93] Kershenbaum, A., Telecommunications Network Design Algorithms,
McGraw-Hill, New York, 1993.
[KLEI69] Kleitman, D., Methods of investigating connectivity of large graphs, IEEE
Transactions on Circuit Theory (Corresp.), CT-16:232–233, 1969.
[KLIE75] Kleinrock, L., Queuing Systems, Volumes 1 and 2, Wiley-Interscience,
New York, 1975.
[MINO96] Minoli, D., Queuing fundamentals for telecommunications, Datapro,
McGraw-Hill Companies, Inc., New York, June 1996.
[PILI97] Piliouras, B., Interview with T. Rubinson on August 6, 1997.
[ROSS80] Ross, S., Introduction to Probability Models, second edition, Academic
Press, New York, 1980.
[RUBI92] Rubinson, T., A Fuzzy Multiple Attribute Design and Decision Procedure
for Long Term Network Planning, Ph.D. dissertation, June 1992.
[SHOO91] Shooman, A. and Kershenbaum, A., Exact graph-reduction algorithms
for network reliability analysis, IEEE Proceedings from Globecomm ‘91, August
19, 1991.
[WAGN75] Wagner, H., Principles of Operations Research, 2nd edition, Prentice
Hall, Inc., Englewood Cliffs, NJ, 1975.
