Unit V
Chapter 8
Technical Considerations in Network Design and Planning
[Figure: The network design process. Design inputs (traffic requirements, link and node costs, design parameters, utilization constraints) feed the design process, which produces design outputs (a network topology). Performance analysis identifies good designs in terms of cost, reliability, delay, and utilization; fine-tuning refines link sizing, routing, and node placement, modifying the design inputs as needed, until a final design is reached that is low cost, high performance, robust, and easy to manage.]
Table 8.1 Sample Traffic and Cost Data Needed for Network Design
Session type(s)
Length of session(s): average, minimum, and maximum time
Source and destination of data transmissions
Number of packets, characters sent, etc.
Application type
Time of session
Traffic direction and routing
network topology. After a network has been designed, the node costs are
factored into the total network cost (see Section 8.6.4).
When collecting traffic and cost data, it is helpful to maintain a
perspective on the level of design needed. The level of design, whether
high-level or finely detailed, determines how much data should be
collected and when more detailed data is required. It may be necessary
to develop multiple views of the traffic requirements and candidate node
sets in order to perform sensitivity analysis. It is also easier to develop
strategies for dealing with missing or inconsistent data when the design
objective is clear.
To the extent that it is practical, the traffic and cost data collected
should be current and representative. In general, it is easier to collect
static data than dynamic data. Static data remains fairly constant over
time, while dynamic data may be highly variable. If the magnitude of the
dynamic traffic is large, it makes sense to concentrate more effort on
estimating it accurately, because it may have a substantial impact on
network performance. Wherever possible, automated tools and methods
should be used to collect the data. This may not always be feasible, and
sometimes the data must be collected manually; however, a manual process
increases the likelihood of errors and limits the amount of data that can
be analyzed.
For a traffic and cost matrix similar to Table 8.1, for a network with
n sources and destinations, the number of potential table entries is:³

C(n, 2) = n(n − 1) / 2
As stated in [CAHN98], line costs can be approximated with a simple linear
model consisting of a fixed charge plus a charge that varies linearly with
distance:

Costij = F + V × distij

[Figure: Linear cost function, showing cost in $ (0 to 400) versus distance in miles (1 to 150).]
where:
Costij = cost for line between two nodes i and j
F = fixed cost component
V = variable cost based on distance
distij = distance between nodes i and j
distij = sqrt[ (Vi − Vj)² / 10 + (Hi − Hj)² / 10 ]    (8.2)

where Vi, Hi and Vj, Hj are the vertical and horizontal (V and H) coordinates of nodes i and j.
This simple model can also be used to simplify a complex tariff structure.
Linear regression can be used to transform selected points from the tariff
table into a linear cost relationship. The fixed cost component, F, and
the variable cost component, V, are obtained by setting the partial
derivatives of the squared-error function with respect to F and with respect
to V to zero. [CAHN98, p.151] This simplified model can perform well in
special cases where the tariff structure is highly linear.
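To make the regression step concrete, the sketch below fits F and V to a handful of sample tariff points by ordinary least squares. The tariff points are hypothetical placeholders; a spreadsheet or statistics package would serve equally well.

```python
# Fit Cost = F + V * distance to sample tariff points by ordinary least squares.
# The tariff points below are hypothetical placeholders, not actual carrier rates.

def fit_linear_cost(points):
    """points: list of (distance_in_miles, monthly_cost) tuples."""
    n = len(points)
    sum_d = sum(d for d, _ in points)
    sum_c = sum(c for _, c in points)
    sum_dd = sum(d * d for d, _ in points)
    sum_dc = sum(d * c for d, c in points)
    # Normal equations obtained by setting the partial derivatives of the
    # squared-error function with respect to F and V to zero.
    V = (n * sum_dc - sum_d * sum_c) / (n * sum_dd - sum_d ** 2)
    F = (sum_c - V * sum_d) / n
    return F, V

tariff_points = [(10, 120), (50, 180), (100, 260), (150, 340)]  # hypothetical
F, V = fit_linear_cost(tariff_points)
print(f"Fixed cost F = ${F:.2f}, variable cost V = ${V:.2f} per mile")
```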
A somewhat more realistic estimate of cost may be possible using a
piece-wise linear function. The piece-wise linear cost function is very
similar to the linear cost function, except that the F (fixed) and V
(variable) cost components vary according to distance. An example of a
piece-wise linear function is presented below and in Figure 8.2.
Costij = Fk + Vk × distij   for distances within band k

where:
Costij = cost for line between two nodes i and j
distij = distance between nodes i and j
Fk, Vk = fixed and variable cost components applying to distance band k

[Figure 8.2 Piece-wise linear cost function, showing cost in $ versus distance in miles.]
Many service providers use this model to price private lines. Note that
there are no additional usage fees in the model presented here. With this
type of cost model, there is an economic incentive to fill the line with as
much traffic as possible for as much time as possible.
A step-wise linear function is illustrated in Figure 8.3. A hypothetical
step-wise linear function has the general form given below; a short coded
sketch of both the piece-wise and step-wise models follows. Note that in
this function, the fixed costs are only constant within a given range, and
there is no longer a variable cost component.

Costijk = Fk   for distances from breakpoint k − 1 up to breakpoint k

where:
Costijk = cost for line between two nodes i and j up to point k
distij = distance between nodes i and j
Fk = fixed charge applying to distance band k
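As an illustration of both models, the sketch below implements a hypothetical piece-wise linear cost function and a hypothetical step-wise cost function. The distance bands and dollar figures are invented for the example and do not reflect any actual tariff.

```python
# Hypothetical cost models for a line between nodes i and j as a function of
# distance. The breakpoints and dollar amounts are illustrative only.

def piecewise_linear_cost(dist):
    """F and V change with the distance band: Cost = F_k + V_k * dist."""
    bands = [  # (upper_distance, fixed_F, variable_V_per_mile)
        (50, 50.0, 2.00),
        (100, 100.0, 1.00),
        (float("inf"), 150.0, 0.50),
    ]
    for upper, F, V in bands:
        if dist <= upper:
            return F + V * dist

def stepwise_cost(dist):
    """Flat charge within each distance band; no variable component."""
    steps = [(50, 100.0), (100, 175.0), (150, 225.0), (float("inf"), 275.0)]
    for upper, charge in steps:
        if dist <= upper:
            return charge

print(piecewise_linear_cost(75))  # 100 + 1.00 * 75 = 175.0
print(stepwise_cost(75))          # 175.0
```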
When international circuits must be priced, the cost models may need
to be extended to provide more realistic estimates. For example, if a line
is installed across international boundaries, adjustments may be needed
in the cost model to account for differences in the tariff structures of each
respective country. In addition, lines installed across international bound-
aries are usually priced as the sum of two half-circuits. A communication
supplier in one country supplies half of the line, while a communication
supplier in the other country supplies the other half of the line.
allows the network designer to estimate the processing times and delays
expected for traffic flowing through the node, for various node capacities
and utilization rates. Section 8.6.2 provides an introduction to the queuing
models needed to perform this analysis. The node should be sized so
that it is adequate to support current and future traffic flows. If the node
capacity is too low, or the traffic flows are too high, the node utilization
and traffic processing times will increase correspondingly. Queuing anal-
ysis can be used to determine an acceptable level of node utilization that
will avoid excessive performance degradation.
In sizing the node’s throughput requirement, it may also be necessary
to estimate the number of entry points or ports needed on the node. A
straightforward way of producing a preliminary estimate is given below:
Number of Ports = (Total Traffic Through Ports in bps) / (Usable Port Capacity in bps)
This estimate is likely to be too low because it does not allow for excess
capacity to handle unanticipated traffic peaks. Queuing models similar to
those used for node sizing can be use to adjust the number of ports
upward to a more appropriate figure. A queuing model allows one to
examine the cost of additional ports versus the improvements in through-
put. Queuing analysis is a very useful tool for port sizing.
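A minimal sketch of the preliminary port estimate is shown below, along with a variant that reserves headroom by targeting a maximum port utilization. The traffic and capacity figures are illustrative assumptions, not values from the text.

```python
import math

# Preliminary port-count estimate, with an optional utilization target to
# leave headroom for traffic peaks. The figures below are illustrative.

def ports_needed(total_traffic_bps, usable_port_capacity_bps, target_utilization=1.0):
    effective_capacity = usable_port_capacity_bps * target_utilization
    return math.ceil(total_traffic_bps / effective_capacity)

print(ports_needed(3_500_000, 1_544_000))                          # raw estimate: 3
print(ports_needed(3_500_000, 1_544_000, target_utilization=0.7))  # with headroom: 4
```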
[Figure: Common network topologies: mesh, ring, star, and tree.]
Routing Characteristics
One route between nodes versus multiple routes between nodes
Fixed routing versus dynamic routing
Minimum hop routing versus minimum distance routing versus arbitrary
routing
Bifurcated routing versus non-bifurcated routing
8.3.4 Architectures
Network architectures define the rules and structures for the network
operations and functions. Before the introduction of network architectures,
application programs interacted directly with the communications devices.
This meant that programs were written explicitly to work with specific
devices and network technologies. If the network was changed, the
application programs were modified accordingly. The purpose of a net-
work architecture is to shield application programs from specific network
details and device operations. As long as the application programs adhere
to the standards defined by the architecture, the architecture can handle
specific device and network implementation details.
Most networks are organized as a series of layers, each built upon its
predecessor. The number of layers used and the function of each network
layer vary by protocol and by vendor. The purpose of each layer is to
off-load work from successively higher layers. This divide-and-conquer
strategy is designed to reduce the complexity of managing the various
network functions. Layer n on one machine communicates with layer n on
another machine. The rules and conventions used in this interaction are
collectively known as the layer n protocol. Peer processes represent
entities operating at the same layer on different machines. Interacting peer
processes pass data and control information to the layer immediately
below, until the lowest level is reached. Between each adjacent pair of
layers is an interface that defines the operations and services the lower
layer offers to the upper one. The network architecture defines these
layers and protocols. Among other things, the network architecture pro-
vides a means for: establishing and terminating device connections, spec-
ifying traffic direction (i.e., simplex, full-duplex, half-duplex), error control
handling, and methods of controlling congestion and traffic flow.
There are both open and proprietary network architectures. IBM’s
proprietary network architecture — Systems Network Architecture (SNA)
— is an example of one of the earliest network architectures. It is based
on a layered architecture. Although SNA was originally designed for
Clients
Client contact point(s)
Operations support
Fault tracking
Performance monitoring
Change control
Planning and design
Billing and finance
After the change requests have been processed and validated, they
should be reviewed and acted upon by the group designated to perform
planning and design. Planning and design performs the following tasks:
Needs analysis
Projecting application load
Sizing resources
Authorizing and tracking changes
Raising purchase orders
Producing implementation plans
Establishing company standards
Quality assurance
Threat analysis
Administration (access control, partitioning, authentication)
Detection (evaluating services and solutions)
Recovery (evaluating services and solutions)
Protecting the network and network management systems
Finance and billing is the focal point for receiving status reports regarding
service level violations, network plans, designs, and changes, and invoices
from third parties. Finance and billing is responsible for:
Asset management
Costing services
Billing clients
Usage and outage collection
Calculating rebates to clients
Bill verification
Software license control
8.3.6 Security
Network security requirements are not explicitly considered during the
execution of topological network design algorithms. Nonetheless, security
considerations may have a considerable impact on the choice of network
devices and services. For example, an often-cited reason for a private
network, as opposed to a public network, is the need for control and
security.
There are many ways to compromise a network’s security, either
inadvertently or deliberately. Therefore, to be effective, network security
must be comprehensive and should operate on several levels. Threats can
occur from both internal and external sources, and can be broadly grouped
into the following categories:
[Figure: Encipherment and decipherment. Plaintext is transformed into ciphertext by the encryption function (E) and recovered by the decryption function (D). Legend: Plaintext = data before encryption; Ciphertext = data after encryption; E = encryption function; D = decryption function; Key = parameter used in the cipher to ensure secrecy; Random Seed = randomly selected number used to generate public and secret keys.]
DES is a widely used single key encryption scheme. With DES, the data
to be encrypted is subjected to an initial permutation. The data is then
broken down into 64-bit data fields that are in turn split. The two resulting
32-bit fields are then further permuted in a number of iterations. Like all
secret key encryption schemes, DES uses the same key to perform both
encryption and decryption. The DES key, by convention, is 56 bits long.
Seminal work by Diffie and Hellman, and by Rivest, Shamir, and
Adleman (RSA), led to the development of public/private key cryptogra-
phy. In this scheme, data is encrypted with a public key that can be
known by many, and is decrypted by a private key known only to one.
The beauty of public key encryption is that it is computationally infeasible
to derive the decipherment key from the encipherment key.
Therefore, dissemination of the encryption key does not compromise the
confidentiality of the decryption process. Because the encryption key can
be made public, anyone wishing to send a secure message can do so.
This is in contrast to secret key schemes that require both the sender and
receiver to know and safeguard the key. Public key encryption is illustrated
in Figure 8.6.
One application of public key cryptography is the generation of digital
signatures. A digital signature assures the receiver that the message is
authentic; that is, the receiver knows the true identity of the sender and
that the contents of the message cannot be modified without leaving a
trace. A digital signature is very useful for safeguarding contractual and
business-related transmissions because it provides a means for third-party
arbitration and validation of the digital signature. Public and private keys
belong to the sender, who creates keys based on an initial random number
selection (or random seed). The message recipient applies the encipher-
ment function using the sender’s public key. If the result is plaintext, then
the message is considered valid. Digital signatures are illustrated in Figure
8.7.
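The sketch below illustrates the idea with textbook RSA and deliberately tiny numbers; production systems use much larger keys and sign a cryptographic hash of the message rather than the message itself.

```python
# Toy illustration of a digital signature using textbook RSA with tiny numbers.
# Real systems use keys of 2048 bits or more and sign a hash of the message.

p, q = 61, 53
n = p * q              # 3233, the public modulus
e = 17                 # public exponent (the "encipherment" key)
d = 2753               # private exponent (the "decipherment" key); e*d = 1 mod (p-1)(q-1)

message = 1234                     # a message encoded as a number < n

signature = pow(message, d, n)     # sender "deciphers" with the private key
recovered = pow(signature, e, n)   # receiver "enciphers" with the sender's public key

print(signature)                   # the signed value (an integer < n)
print(recovered == message)        # True: the result is the original plaintext,
                                   # so the message is accepted as authentic
```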
In summary, comprehensive network security involves active use of
protocol, operational, and encryption measures. Good management over-
sight and employee training complement these security measures.
[Figure: Secret key encryption. Plaintext is enciphered (E) into ciphertext and deciphered (D) back into plaintext using the same secret key.]
[Figure 8.7 Digital signatures. The sender applies the decipherment function (D) with the sender's secret key; the receiver applies the encipherment function (E) with the sender's public key to recover the plaintext. Legend: Plaintext = data before encryption; Ciphertext = data after encryption; E = encryption function; D = decryption function; Key = parameter used in the cipher to ensure secrecy; Random Seed = randomly selected number used to generate public and secret keys (i.e., F and G).]
[Figure 8.10 Example of a path without cycles and a path with cycles, shown for an undirected and a directed graph from source s to terminal t.]
[Figure: Tree and star topologies.]
1. Sort all possible links in ascending order of cost and put them in a link list.
2. Check to see if all the nodes are connected.
– If all the nodes are connected, then terminate the algorithm,
with the message “Solution Complete.”
– If all the nodes are not connected, continue to the next step.
3. Select the link at the top of the list.
– If no links remain on the list, check to see if all the nodes are connected; if they are not, terminate the algorithm with the message “Solution Cannot Be Found.”
4. Check to see if the link selected creates a cycle in the network.
– If the link creates a cycle, remove it from the list. Return to
Step 2.
– If the link does not create a cycle, add it to the network, and
remove link from link list. Return to Step 2.
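A minimal sketch of this greedy procedure is given below, using a union-find structure for the cycle check in Step 4. The costs for links AB, CD, and BC are taken from the worked example that follows; the remaining link costs are hypothetical stand-ins for Table 8.4.

```python
# A minimal sketch of the greedy (Kruskal-style) tree-building procedure
# described above, using a union-find structure for the cycle check.

def greedy_tree(nodes, links):
    """links: list of (cost, node1, node2). Returns selected links or None."""
    parent = {v: v for v in nodes}

    def find(v):                      # find the root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    solution = []
    for cost, a, b in sorted(links):  # Step 1: ascending order of cost
        if len(solution) == len(nodes) - 1:
            break                     # Step 2: all nodes connected, solution complete
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                  # Step 4: link would create a cycle, discard it
        parent[ra] = rb               # otherwise add the link to the network
        solution.append((a, b, cost))
    if len(solution) == len(nodes) - 1:
        return solution
    return None                       # "Solution Cannot Be Found"

links = [(1, "A", "B"), (2, "C", "D"), (5, "B", "C"),
         (6, "A", "C"), (7, "B", "D"), (8, "A", "D")]   # last three are hypothetical
print(greedy_tree(["A", "B", "C", "D"], links))
# [('A', 'B', 1), ('C', 'D', 2), ('B', 'C', 5)], the tree of Figure 8.14
```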
[Figure: Partial solution showing nodes A and B connected by link AB (cost 1).]
Now solve the sample problem using the algorithm specified above:
1. Sort all possible links in ascending order and put in a link list (see
Table 8.4).
2. Check to see if all the nodes are connected. Because none of the
nodes are connected, proceed to the next step.
3. Select the link at the top of the list. This is link AB.
4. Check to see if the link selected creates a cycle in the network. It
does not, so add link AB to the solution and remove it from the
link list. One obtains the partial solution shown in Figure 8.12 and
then proceeds with the algorithm.
5. Check to see if all the nodes are connected. They are not, so
proceed to the next step of the algorithm.
6. Select the link at the top of the list. This is link CD.
7. Check to see if the link selected creates a cycle in the network. It
does not, so add link CD to the solution and remove it from the
link list. One obtains the partial solution shown in Figure 8.13 and
then proceeds with the algorithm.
8. Check to see if all the nodes are connected. They are not, so
proceed to the next step.
9. Select the link at the top of the list. This is link BC.
10. Check to see if the link selected creates a cycle in the network. It
does not, so add link BC to the solution and remove it from the
[Figure 8.13 Partial solution: links AB (cost 1) and CD (cost 2).]
[Figure 8.14 Final solution: links AB (cost 1), BC (cost 5), and CD (cost 2).]
link list. One obtains the partial solution shown in Figure 8.14 and
then proceeds with the algorithm.
11. Check to see if all the nodes are connected. All the nodes are now
connected, so terminate the algorithm with the message “Solution
Complete.” Thus, Figure 8.14 represents the final network solution.
Use the checklist below to verify that the greedy algorithm exhibits all
the necessary properties defined above.
tied with implementation details. Instead, one can use time complexity as
a measure of an algorithm’s efficiency. One can measure time complexity
in terms of the number of operations required by the algorithm instead
of the actual CPU time required by the algorithm. Expressing time com-
plexity in these units allows one to compare the efficiency of algorithms
that are very different.
For example, let N be the number of inputs to an algorithm. If Algorithm
A requires a number of operations proportional to N², and Algorithm B
requires a number of operations proportional to N, one can see that
Algorithm B is more efficient. If N is 4, then Algorithm B will require
approximately four operations, while Algorithm A will require 16. As N
becomes larger, the difference in efficiency between Algorithms A and B
becomes more apparent.
If Algorithm B requires time proportional to f(N), this also implies that
given any reasonable computer implementation of the algorithm, there is
some constant of proportionality C such that Algorithm B requires no
more than (C * f(N)) operations to solve a problem of size N. Algorithm
B is said to be of order f (N) — which is denoted O (f (N)) — and f (N)
is the algorithm’s growth rate function. Because this notation uses the
capital letter O to denote order, it is called the Big O notation.
Some examples and explanations for the Big O notation are given
below:
Complexity Terminology
O(1) Constant complexity
O(log n) Logarithmic complexity
O(n) Linear complexity
O(n log n) n log n complexity
O(n^b) Polynomial complexity
O(b^n), where b > 1 Exponential complexity
O(n!) Factorial complexity
[Figure 8.15 Number of operations required as a function of input size N, for growth rates 2^N, N³, N², and N log₂ N.]
end after all the candidates have been compared. Brute-force and exhaus-
tive search methods are usually O(b^N) or worse. The worst computational
complexity shown here is factorial complexity, which is generally associated
with NP-complete (nondeterministic polynomial-time complete) and other
intractable problems. The Traveling Salesman Problem is a classic NP-complete
problem, and the integer factoring underlying public key cryptography is also
believed to be computationally intractable. In general, when the input size is
large, such problems are exceedingly difficult to solve and require very large
amounts of computing time. Figure 8.15 shows the effect of increasing
computational complexity on the number of operations required by the
algorithm to solve the problem.
A Big “O” estimate of an algorithm’s time complexity expresses how
the time required to solve the problem changes as the input grows in size.
Big “O” estimates do not directly translate into actual computer time
because the Big “O” method uses a simplified reference function to estimate
complexity. The simplified reference function omits constants and other
terms that affect actual computer time. Thus, the Big “O” method
characterizes how running time grows with the size of the input rather
than predicting the actual running time. This is illustrated in the
examples that follow.
If: one is given f(x) and g(x), functions from the set of real numbers
Then: one can say that f(x) = O(g(x))
If and only if: there are constants C and k such that |f(x)| ≤ C |g(x)| whenever x > k
[Figure 8.16 Example of a discontinuous tariff structure, showing cost in $ versus distance in miles.]
they do not. The tariffs that determine the line costs are usually nonlinear
and may exhibit numerous discontinuities. A discontinuity exists when
there is a sharp price jump from one point to the next, or when certain
price/line combinations are not available, as illustrated in Figure 8.16.
Linear programming can be used successfully when the cost and constraint
functions can be approximated by a linear or piece-wise linear function
that is accurate within the range of interest. This implies that the linear
programming approach is best applied, in the context of network design,
when the neighborhood of the solution can be approximated a priori.
When the decision variables for designing the network are constrained
to discrete integer values, the linear programming problem becomes an
integer programming problem. Integer programming problems, in general,
are much more difficult to solve than linear programming problems, except
in selected special cases. In the case of network design, it might be
desirable to limit the decision variables to zero (0) or one (1) values to reflect
whether or not a link is being used. However, these restrictions complicate
the problem a great deal. In this context, “complicated” means that the
computational complexity of the integer programming technique increases
to the point where it may be impractical to use.
Simulation is a third commonly used technique in network design. It
is often used when the design problem cannot be expressed in an analytical
form that can be solved easily or exactly. A simulation tool is used to
build a simplified representation of the network. Experiments are then
conducted to see how the proposed network will perform as the traffic
and other inputs are modified. In general, simulation approaches are very
time-consuming and computationally intensive. However, they are helpful
in examining the system performance under a variety of conditions to test
the robustness of the design. There are many software packages available
that are designed exclusively for simulation and modeling studies.
Standard Queuing Notation
Abbreviation Meaning
M Markovian or exponential distribution
Ek Erlangian or Gamma distribution with k identical phases
D Deterministic distribution
G General distribution
c Number of servers or channels in parallel
FCFS First come, first served
LCFS Last come, first served
RSS Random selection for service
PR Priority service
Queuing Model Examples
Abbreviation Meaning
M/M/c Markovian input process and Markovian service
distribution with c parallel servers
M/M/1/n/m Markovian input process and Markovian service
distribution with one server, a maximum system capacity
of n, and a total potential universe of m customers
M/D/3/m/m Markovian input process and constant (deterministic) service
distribution with three servers, a maximum system capacity of m, and a
total potential universe of m customers
Using the notation in Table 8.8, one can introduce Little’s law. This is
a very powerful relationship that holds for many queuing systems. Little’s
law says that the average number waiting in the queuing system is equal
to the average arrival rate of the inputs to the system multiplied by the
average time spent in the queue. Mathematically, this is expressed as:
[GROSS74, p. 60]
Lq = λ × Wq    (8.5)
Notation Meaning
1/λ Mean inter-arrival time between system inputs
1/µ Mean service time for server(s)
λ Mean arrival rate for system inputs
µ Mean service rate for server(s)
ρ Traffic intensity = λ/(c × µ), where c = number of service channels;
also called the utilization factor, measuring the fraction of the maximum
rate at which work entering the system can be processed
L Expected number in the system, at steady state, including those
in service
Lq Expected number in the queue, at steady state, excluding those
in service
W Expected time spent in the system, including service time, at
steady state
Wq Expected time spent in the queue, excluding service time, at
steady state
N(t) Number of units in the system at time t
Nq(t) Number of units in the queue at time t
arrival rate of inputs to the system multiplied by the average time spent
in the system. Mathematically, this is expressed as: [GROSS74, p. 60]
L = λ × W    (8.6)
The intuitive explanation for these relationships goes along the fol-
lowing lines. An arrival, A, entering the system will wait an average of
Wq time units before entering service. Upon being served, the arrival can
count the number of new arrivals behind it. On average, this number will
be Lq. The average time between each of the new arrivals is 1/λ units of
time, because by definition this is the mean inter-arrival time. Therefore, the total
time it took for the Lq arrivals to line up behind A must equal A’s waiting
time Wq. A similar logical analysis holds for the calculation of L in Equation
(8.6).
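A small numeric illustration of Little's law, using made-up arrival and service figures rather than any example from the text:

```python
# Little's law: Lq = lambda * Wq and L = lambda * W. The numbers below are
# illustrative placeholders.

arrival_rate = 10.0     # lambda: arrivals per second
wait_in_queue = 0.25    # Wq: average seconds spent waiting before service
service_time = 0.05     # 1/mu: average seconds in service

Lq = arrival_rate * wait_in_queue                  # average number waiting
L = arrival_rate * (wait_in_queue + service_time)  # average number in the system

print(Lq, L)   # 2.5 waiting, 3.0 in the system on average
```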
A number of steady-state models have been developed to describe
queuing systems that are applicable to network analysis. A steady-state
model is used to describe the long run behavior of a queuing system after
it has stabilized. It also represents the average frequency with which the
system will occupy a given state11 over a long period of time.
C² = V(X) / X²

where:
V(X) = variance of the probability distribution
X = mean of the probability distribution
of servers is small. When the population is less than 50, a finite population
model should be used. [MINO96]
The steady-state models introduced here also assume that traffic pat-
terns are consistent and do not vary according to the time of day and
source of traffic. This, too, is an unrealistic assumption for most telecom-
munication systems. Despite the fact that this assumption is rarely satisfied
in practice, the steady state models still tend to give very good results.
The steady-state models also assume that all the inputs are independent
of each other. In a network, it is entirely likely that problems in one area
of the network will contribute to other failures in the network. This might
occur, for example, when too much traffic from one source creates a system
overload that causes major portions of the network to overload and fail as
well. Despite the fact that this assumption is also rarely satisfied in practice,
the models still provide useful results in the context of network design.
One of the compelling reasons for using steady-state queuing models
is that, despite their inherent inaccuracies and simplifications of reality,
they often yield robust, good results. The models are also useful because
of their simplicity and closed form solution. If one tries to interject more
realism in the model, the result is often an intractable formula that cannot
be solved (at least as easily). The requirements for realism in a model
must always be weighed against the resulting effort that will be required
to solve a more complex model.
Answer:
As given in the problem, λ = 1/0.50 seconds = 2 arrivals per second; µ = 1/0.20
seconds = 5 services per second; and λ/µ = ρ = 2/5.
Therefore, the expected delay waiting in queue is:
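The numeric result is not shown above. Assuming the standard single-server M/M/1 model, the expected wait in queue is Wq = ρ/(µ(1 − ρ)) = 0.4/(5 × 0.6) ≈ 0.133 seconds, as the short check below confirms.

```python
# Check of the M/M/1 waiting-time calculation, assuming the standard formulas
# Wq = rho / (mu * (1 - rho)) and W = 1 / (mu - lambda).

lam = 1 / 0.50   # 2 arrivals per second
mu = 1 / 0.20    # 5 services per second
rho = lam / mu   # 0.4

Wq = rho / (mu * (1 - rho))   # expected wait in queue
W = 1 / (mu - lam)            # expected time in system (queue plus service)

print(round(Wq, 4), round(W, 4))   # 0.1333 seconds, 0.3333 seconds
```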
Answer:
(1) The average service time is the message length divided by the channel
speed:
Pn = (1 − ρ) ρ^n / (1 − ρ^(k+1))   for n = 0, 1, …, k, where ρ ≠ 1
Answer:
A job will be turned away when it arrives at the system and there are
already four jobs (one in service and three in the queue) ahead of it.
Thus, to find the number of jobs that are turned away, calculate the
probability that a job will arrive when there are already four jobs in the
queue and one in service, and multiply this by the arrival rate.
Therefore, λ * p5 = (5 per minute) * (0.1) = 0.5 jobs per minute are turned
away.
Pn = [(λ/µ)^n / n!] × P0   for 0 ≤ n ≤ c
Pn = [(λ/µ)^n / (c! × c^(n−c))] × P0   for n > c    (8.20)
Answer:
The appropriate model to use for the current configuration is (M/M/2)
with λ = 24 per hour, and µ = 30 per hour. The probability that both
servers are idle is equal to the probability that there are
no print jobs in the system. From Equation 8.19, this is calculated as:
P0 = [ Σ (n = 0 to 1) (24/30)^n / n! + (24/30)² / (2! × (1 − 24/60)) ]^(−1)
   = [1 + 0.8 + 0.5333]^(−1) = 0.43
One server will be idle when there is only one print job in the system.
The fraction of time this will occur is:
P1 = (24/30) × P0 = 0.344
Both servers will be busy whenever two or more print jobs are in the
system. This is computed as:
P (Busy ) = 1 − P0 − P1 = 0.226
Lq = [ (24/30)² × (24/60) × P0 ] / [ 2! × (1 − 24/60)² ] = 0.153
Similar calculations can be performed for a single print server and for
three print servers. The results of these calculations are summarized in
Table 8.9.
Answer:
The new option being considered equates to an M/M/1 queue. The new
print server has an improved service rate of µ = 60 jobs per hour. The
calculations for the expected waiting times are shown below.
P0 = 1 − (λ/µ) = 1 − (24/60) = 0.6
W = 1/(µ − λ) = 1/(60 − 24) = 0.0278 hours × 60 minutes/hour = 1.67 minutes
Wq = ρ/((1 − ρ) × µ) = 0.4/((1 − 0.4) × 60) = 0.0111 hours × 60 minutes/hour = 0.67 minutes
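The sketch below reproduces this comparison with the standard M/M/c steady-state formulas; it treats the proposed single fast server as the special case c = 1.

```python
import math

# Expected waits for the print-server comparison, using the standard M/M/c
# steady-state formulas (this reproduces the figures worked out above).

def mmc_metrics(lam, mu, c):
    a = lam / mu                    # offered load
    rho = a / c                     # server utilization (must be < 1)
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = (a**c * rho * p0) / (math.factorial(c) * (1 - rho) ** 2)
    wq = lq / lam                   # expected wait in queue (hours here)
    w = wq + 1 / mu                 # expected time in system
    return p0, lq, wq, w

# Current configuration: two servers, lambda = 24/hour, mu = 30/hour each.
print(mmc_metrics(24, 30, 2))   # P0 ~ 0.43, Lq ~ 0.15
# Proposed configuration: one faster server, mu = 60/hour (an M/M/1 queue).
print(mmc_metrics(24, 60, 1))   # P0 = 0.6, Wq ~ 0.0111 hours ~ 0.67 minutes
```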
Wq = λ E[1/µ²] / (2 (1 − ρ))    (8.25)
E[1/µ²] = Σ (over message types j) Pj × (1/µj)²    (8.26)
where:
Pj = probability of a message being type j
1/µj = service time for a message of type j
V = E[(1/µ)²] − (E[1/µ])²    (8.27)
Answer:
First consider the case when the message length is constant. From the
data, the average arrival rate can be computed as:
λ = (56,000 × 0.5) / 1,000 = 28 messages/second
The waiting time can now be computed from Equation (8.25) as:
Wq = λ E[1/µ²] / (2 (1 − ρ)) = (28 msg/s) × (0.018 s)² / (2 × (1 − 0.5)) ≈ 0.009 sec/msg
1/µ = 1,000 bits / 56,000 bits/sec ≈ 0.018 seconds, so the service rate is µ ≈ 56 messages/second

Wq = ρ / (µ (1 − ρ)) = 0.5 / (56 × (1 − 0.5)) ≈ 0.018 sec/msg
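As a quick numerical check, the sketch below applies the Pollaczek-Khinchine formula of Equation (8.25) to a 56-kbps channel carrying 1,000-bit messages at 50 percent utilization, for both constant and exponentially distributed message lengths.

```python
# Pollaczek-Khinchine check (Equation 8.25) for a 56-kbps channel carrying
# 1,000-bit messages at 50 percent utilization.

channel_bps = 56_000
msg_bits = 1_000
rho = 0.5

service_time = msg_bits / channel_bps          # 1/mu ~ 0.0179 seconds
lam = rho * channel_bps / msg_bits             # 28 messages/second

# E[1/mu^2] is the second moment of the service time:
second_moment_constant = service_time ** 2         # deterministic lengths
second_moment_exponential = 2 * service_time ** 2  # exponential lengths

wq_constant = lam * second_moment_constant / (2 * (1 - rho))        # ~0.009 s
wq_exponential = lam * second_moment_exponential / (2 * (1 - rho))  # ~0.018 s

print(round(wq_constant, 4), round(wq_exponential, 4))
```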
P0 = [ Σ (n = 0 to c−1) (λ/µ)^n / n! + ((λ/µ)^c / c!) × (1 − ρ^(k−c+1)) / (1 − ρ) ]^(−1)    (8.28)
where:
λn = λ for 0 ≤ n < k, and 0 for n ≥ k
µn = nµ for 1 ≤ n ≤ c, and cµ for c < n ≤ k
ρ = λ / (cµ)
Pn = [(λ/µ)^n / n!] × P0   for 0 ≤ n ≤ c
Pn = [(λ/µ)^n / (c! × c^(n−c))] × P0   for c < n ≤ k    (8.29)
The effective arrival rate is less than the service rate under steady-state
conditions, and is calculated as:

λeff = λ (1 − Pk)    (8.30)
The expected queue length, Lq, is calculated from the use of sum calculus
and its definition as:

Lq = [ (λ/µ)^c × ρ × P0 / (c! × (1 − ρ)²) ] × [ 1 − ρ^(k−c+1) − (k − c + 1)(1 − ρ) ρ^(k−c) ]    (8.31)

where:
ρ = λ / (cµ)
Because the carried load is the same as the mean number of busy
servers, one can calculate the expected number in the system as:
L = Lq + (λ/µ)(1 − Pk)    (8.32)

Wq = Lq / (λ (1 − Pk))    (8.33)

and

W = Wq + 1/µ    (8.34)
λn = λ for 0 ≤ n < c, and 0 for n ≥ c    (8.35)

µn = nµ for 1 ≤ n ≤ c, and 0 elsewhere    (8.36)

P0 = [ Σ (n = 0 to c) (λ/µ)^n / n! ]^(−1)    (8.37)

Pn = [ e^(−cρ) (cρ)^n / n! ] / [ Σ (j = 0 to c) e^(−cρ) (cρ)^j / j! ]    (8.38)

where: ρ = λ / (cµ)

Pc = [ e^(−cρ) (cρ)^c / c! ] / [ Σ (j = 0 to c) e^(−cρ) (cρ)^j / j! ]    (8.39)
Answer:
In this problem, one is given the fact that P4 = 0.5. Therefore, using
Equation (8.39) and an Erlang loss or Poisson table, the value P4 = 0.5 is
obtained when cρ = λ/µ = 6.5. Therefore, the implied arrival rate is:
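The implied arrival rate itself depends on µ, which is not shown above. The sketch below simply verifies the table lookup: with c = 4 servers and an offered load of λ/µ = 6.5, Equation (8.39) gives a blocking probability of about 0.5.

```python
import math

# Erlang loss (Equation 8.39): probability that all c servers are busy and an
# arriving job is turned away, for offered load a = lambda/mu.

def erlang_b(c, a):
    terms = [a**j / math.factorial(j) for j in range(c + 1)]
    return terms[c] / sum(terms)

print(round(erlang_b(4, 6.5), 3))   # ~0.5, matching the example above
```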
Delay Total = Total average link delay + Total average node delay
+ Total average propagation delay (8.40)
Dlink = [ Σ (l = 1 to k) (Dl × Fl) ] / [ Σ (i = 1 to I) Σ (j = 1 to J) Rij ]    (8.41)
where:
Rij = traffic requirement from node i to node j
Fl = flow on link l
Dl = delay on link l = (P / Sij) / (1 − (Fl / Cij))
P = packet length
Sij = link speed from node i to node j
Cij = link capacity from node i to node j
Thus, the total average network delay is the sum of the expected delay
on all the links. The unknown variable in the above equation for Dlink is
link flow. A shortest path algorithm can be used at this stage to assign
link flows to solve for this variable.
A shortest path algorithm computes the shortest path between two
given points, and is based on the insight that subpaths of shortest paths
are themselves shortest paths. This is illustrated in Figure 8.17. Thus, as
described in [KERS93, p. 157]:
Dnode = Cnode × A    (8.42)

where:
Dnode = total average node delay
Cnode = 120 milliseconds/node
A = average number of nodes per shortest routing path
Dprop = [ Cprop × Σ (l = 1 to k) (Fl × Ml) ] / [ Σ (i = 1 to I) Σ (j = 1 to J) Rij ]    (8.43)

where:
Dprop = total average propagation delay
Rij = traffic requirement from node i to node j
Cprop = 8 microseconds/mile
Fl = flow on link l
Ml = length in miles of link l
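A sketch combining the link, node, and propagation delay components (Equations 8.40 through 8.43) for a small hypothetical network is shown below. The traffic total, link flows, capacities, and mileages are placeholder values chosen only to illustrate the arithmetic.

```python
# Sketch combining Equations 8.40 through 8.43 for a small hypothetical network.
# Traffic, link speeds, and mileages below are placeholders, not real data.

PACKET_BITS = 1_000
C_NODE = 0.120      # 120 milliseconds of processing delay per node
C_PROP = 8e-6       # 8 microseconds of propagation delay per mile

# Each link: (flow in bps, capacity in bps, length in miles)
links = [(30_000, 56_000, 400), (20_000, 56_000, 250)]
total_traffic = 50_000          # sum of the requirements R_ij, in bps
avg_nodes_per_path = 2.0        # A: average nodes traversed on a shortest path

def link_delay(flow, capacity):
    # Per-link term from Equation 8.41: (P / S) / (1 - F / C)
    return (PACKET_BITS / capacity) / (1 - flow / capacity)

d_link = sum(link_delay(f, c) * f for f, c, _ in links) / total_traffic
d_node = C_NODE * avg_nodes_per_path
d_prop = C_PROP * sum(f * miles for f, _, miles in links) / total_traffic

print(round(d_link + d_node + d_prop, 4), "seconds total average delay")
```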
These models interject more realism in the network analysis, at the expense
of simplicity. The interested reader is referred to [KERS93] and [KLIE75]
for more information in this area.
8.6.2.9 Summary
Queuing is broadly applicable to many network problems. This section
presented examples of queuing analysis for the following types of network
configurations:
Reliability = MTBF / (MTBF + MTTR)    (8.44)

where MTTR is the mean time to repair a failure, and MTBF is the mean
time between failures. For example, if the above equation is used to compute
a reliability of 0.98, this means that the network and its components are
operational 98 percent of the time.
Because the network is comprised of multiple components, each
component contributes to the possibility that something will fail. One
commonly used model of component failure assumes that failures will
occur according to the exponential probability distribution. An exponential
random variable with parameter λ is a continuous random variable whose
probability density function is given for some λ > 0 by:
f(x) = λ e^(−λx) for x ≥ 0, and f(x) = 0 for x < 0    (8.45)

F(a) = 1 − e^(−λa) for a ≥ 0    (8.46)
f(x) = 0.001 e^(−0.001x)

and

F(x) = 1 − e^(−0.001x)
Using the cumulative probability density function F (x) above, one can
compute the probabilities of failure over various time periods:
Probability that the network component will fail within 100 days = 1 − e^(−0.1) = 0.1
Probability that the network component will fail within 1,000 days = 1 − e^(−1) = 0.63
Probability that the network component will fail within 10,000 days = 1 − e^(−10) = 0.99
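These probabilities follow directly from the cumulative distribution F(x) = 1 − e^(−λx) with λ = 0.001 failures per day, as the short sketch below shows.

```python
import math

# Probability that a component with failure rate lambda = 0.001 per day
# fails within x days, using F(x) = 1 - exp(-lambda * x).

def prob_failure_within(days, rate=0.001):
    return 1 - math.exp(-rate * days)

for days in (100, 1_000, 10_000):
    print(days, round(prob_failure_within(days), 2))   # 0.1, 0.63, 1.0
```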
Probability (failure) = 1 − (1 − p)^n × (1 − p′)^(n−1)    (8.47)
When the networks involved are more complex than a simple tree,
this formula no longer holds. When the network is a tree, there is only
one path between nodes. However, in a more complex network, there
may be more than one path, even many paths, between nodes that would
allow the network to continue to function if one path were disconnected.
With the previous approach, all combinations of paths between nodes
should be examined to determine the probabilities of link failures asso-
ciated with a network failure. For a network of any substantial size, this
gives rise to many combinations; that is, there is a combinatorial explosion
in the size of the solution space. This type of problem is, in general, very
computationally intensive.
The following discussion focuses on alternative strategies for estimating
the reliability of complex networks containing cycles and multiple point-
to-point connections. Except for the smallest of networks, the calculations
are sufficiently involved as to require a computer.
Graph reduction is one technique used to simplify the reliability analysis
of complex networks. The idea of graph reduction is to replace all or
part of the network with an equivalent, yet simpler graphical tree repre-
sentation. For example, one type of reduction is parallel reduction. If there
are parallel edges between two nodes and the two edges are operational
with probabilities p and p′, respectively, then it is possible to replace the
two edges with a single edge whose probability is equal to the probability
that either or both of the edges are operational. Other transformations are
possible (e.g., to reduce a series of edges, etc.) and are likely necessary
to sufficiently transform a complex network so that its reliability can be
calculated. For more information on graph reduction techniques, the
reader can refer to [KERS93].
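A minimal sketch of the parallel reduction described above, together with the companion series reduction, is shown below; p and p′ denote the probabilities that each edge is operational.

```python
# Parallel and series reduction of edge reliabilities. p and p_prime are the
# probabilities that each edge is operational.

def parallel_reduction(p, p_prime):
    # Probability that either or both parallel edges are operational.
    return 1 - (1 - p) * (1 - p_prime)

def series_reduction(p, p_prime):
    # Probability that both edges on a series path are operational.
    return p * p_prime

print(parallel_reduction(0.9, 0.95))   # 0.995
print(series_reduction(0.9, 0.95))     # 0.855
```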
“K-connectivity” analysis is also useful in assessing network reliability.
It strives to characterize the survivability of a network in the event of a
single-component failure, a two-component failure, or a k-component
failure. If a network contains k separate, disjoint paths between every
pair of nodes in the network, then the network is said to be k-connected.
Disjoint paths have no elements in common. Paths are edge-disjoint if
they have no edges in common. Likewise, paths are node-disjoint if they
have no nodes in common. An example of a one- and a two-connected
graph is provided in Figure 8.18.
It is possible to test for k-connectivity, either node or edge, by solving
a sequence of maximum flow problems. This is a direct result of work
by Kleitman, who showed that: [KLEI69]
…it is only necessary to find k paths from any node, say v1, to
all others, k−1 paths from another node, say v2 to all others in
the graph with v1 removed, and k−2 paths from v3 to all others
in the graph with v1 and v2 removed, etc. [KERS93]
Thus far, the discussion has focused on link failures. Clearly, if there is
a node failure, the network will not be fully operational. However, a network
failure caused by a node can not be corrected by the network topology,
because the topology deals strictly with the interconnections between
nodes. To compensate for possible link failures, one can design a topology
that provides alternative routing paths. In the case of a node failure, if
the node is out of service, the only way to restore the network is to put
the node (or some replacement) back in service. In practice, back-up or
redundant node capacity is designed into the network to compensate for
this potential vulnerability.
Let us demonstrate how k-connectivity can be used to assess the impact
of node failures. One begins by transforming the network representation
to an easier one to analyze. Suppose one is given the undirected graph
in Figure 8.19. To transform the network, begin by selecting a target node.
Then transform all the incoming and outgoing links from that node into
directed links, as shown in Step 2 in Figure 8.19. Finally, split the target
node into two nodes: i and i′. We connect nodes i and i′ with a new link.
All incoming links stay with node i and all the outgoing links stay with
node i′. This is shown in Step 3 of Figure 8.19. Once the nodes are
represented as links, the k-connectivity algorithm presented above can be
used to determine the level of k-node connectivity in the network.
This section concludes with some guiding principles. Single points of
failure should be avoided in the network. To prevent a single line failure
from disabling the network, the network should be designed to provide
two edge-disjoint paths between nodes (k = 2 edge connectivity) or better. This will provide an alternative
route to transmit traffic if one link should fail. However, multiple k-
connectivity does not come cheaply. In general, a multi-connected network
is substantially more expensive than a similar network of lower connectivity.
A common target is to strive for 2-connectivity, and to compensate for
weakness in the topology by using more reliable network components.
However, k-connectivity alone does not guarantee that the network will
be reliable. Consider the network illustrated in Figure 8.20. In this network,
there are two paths for routing traffic between any two pairs of nodes.
However, should the center node fail, the entire network is disconnected.
A single node whose removal will disconnect the network is called an
articulation point. One solution to this problem is to avoid a design where
any one link or node would have this impact. It is apparent that the
failure of some nodes and links may have more impact on the network
Figure 8.20 Example of two-connected graph with single node point of failure.
than the failure of others. To the extent that is possible, one wants to
design networks with excess capacity strategically located in the network.
While it is desirable to have excess capacity in the network — for
performance reasons so that traffic of varying intensity can be easily carried
without excessive delays — one would also want to add network capacity
where it will make the most difference on the overall network reliability.
[KERS93]
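A compact way to check a candidate topology for such weaknesses is to search for articulation points directly. The sketch below uses the standard depth-first search method and, applied to a hub-like topology in the spirit of Figure 8.20, flags the central node.

```python
# Find articulation points (single node points of failure) in an undirected
# graph, using the standard depth-first search (Hopcroft-Tarjan) method.

def articulation_points(graph):
    """graph: dict mapping each node to a list of its neighbors."""
    disc, low, points = {}, {}, set()
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # A non-root node is an articulation point if some subtree
                # cannot reach back above it; the root is one if it has
                # two or more DFS children.
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
            else:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            points.add(u)

    for node in graph:
        if node not in disc:
            dfs(node, None)
    return points

# Two rings joined at a center node: every node pair has two edge-disjoint
# paths, yet the center disconnects the network if it fails.
network = {"center": ["n1", "n2", "n3", "n4"],
           "n1": ["n2", "center"], "n2": ["n1", "center"],
           "n3": ["n4", "center"], "n4": ["n3", "center"]}
print(articulation_points(network))   # {'center'}
```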
All network expenses should be reduced to the same units and time
scale. For example, one time costs — such as installation costs and one
time purchases — should be converted to monthly costs by amortization.
For example, if a network device costs $9000, this lump sum should be
converted to a monthly charge. Thus, a $9000 device with an expected
useful life of three years has a straight-line amortized monthly cost of $250.
Likewise, usage-sensitive charges should be converted to consistent
time and dollar units. Usage charges, as the name implies, vary according
to the actual system usage. When calculating the network costs, a decision
must be made as to whether an average cost or a worst-case cost
calculation is needed. In the former case, the average monthly usage fee
should be used in the final cost calculation, and in the latter case the
largest possible monthly usage fee should be used, as shown in Equation 8.49.
A tariff database gives the cost of individual links in the network, based
on their type and distance. It is the most accurate source of information
on monthly line charges. However, the fees for accurate, up-to-date tariff
information can be substantial. Alternatively, monthly link costs can be
estimated using the techniques described in Section 8.2.3. Once the
individual link costs have been tabulated, by whatever means, they are
summed to obtain the total link operating cost:
Total Link Cost = Σ (n = 1 to N) Σ (i = 1 to I) Σ (j = 1 to J) (Onij × Mnij)
where:
Onij = cost of link type n, from node i to node j
Mnij = number of type n links between nodes i and j
A similar calculation can be performed for the node costs. The total
monthly network cost is computed as the sum of all monthly charges, as
indicated below:
Notes
1. Capacity: the capacity of a line refers to the amount of traffic it can carry.
Traffic is usually expressed in “bits per second,” which is abbreviated as
bps. The actual carrying capacity of a line depends on technology, because
the technology determines the amount and nature of “overhead” traffic,
which must be carried on the line.
2. Node: in the context of the network topology, a connection point between
lines. It is a very general term for a terminal, a processor, a multiplexer, etc.
3. The notation refers to the number of combinations of n sources and
destination nodes taken two at a time, with each set containing two
different nodes and no set containing exactly the same two nodes.
4. Tariff: a published rate for a specific communications service, equipment,
or facility that is legally binding on the communications carrier or supplier.
5. LATA: A Local Access and Transport Area (LATA) defines geographic regions
within the United States within which the Bell Operating Companies
(BOCs) can offer services. Different LATAs have different tariff rates.
6. V and H: the Vertical and Horizontal coordinate system was developed by
AT&T to provide a convenient method of computing the distance between
two points using a standard distance formula.
7. SNMP (Simple Network Management Protocol): protocol defined to work
with TCP/IP and establish standards for collecting information and for per-
forming security, performance, fault, accounting, and configuration func-
tions associated with network management.
8. CMIP (Common Management Information Protocol): protocol designed,
like SNMP, to support network management functions. However, it is more
comprehensive in scope and is designed to work with all systems con-
forming to OSI standards. It also requires considerably more overhead to
implement than does SNMP.
References
[CAHN98] Cahn, R., The Art and Science of Network Design, Morgan Kaufmann,
1998.
[GIFF78] Giffen, W., Queuing Basic Theory and Applications, Grid Series in
Industrial Engineering, Columbus, OH, 1978.
[GROSS74] Gross, D. and Harris, C., Fundamentals of Queuing Theory, John
Wiley & Sons, New York, 1974.
[KERS89] Kershenbaum, A., Interview with T. Rubinson on April 27, 1989.
[KERS93] Kershenbaum, A., Telecommunications Network Design Algorithms,
McGraw-Hill, New York, 1993.
[KLEI69] Kleitman, D., Methods of investigating connectivity of large graphs, IEEE
Transactions on Circuit Theory (Corresp.), CT-16:232–233, 1969.
[KLIE75] Kleinrock, L., Queuing Systems, Volumes 1 and 2, Wiley-Interscience,
New York, 1975.
[MINO96] Minoli, D., Queuing fundamentals for telecommunications, Datapro,
McGraw-Hill Companies, Inc., New York, June 1996.
[PILI97] Piliouras, B., Interview with T. Rubinson on August 6, 1997.
[ROSS80] Ross, S., Introduction to Probability Models, second edition, Academic
Press, New York, 1980.
[RUBI92] Rubinson, T., A Fuzzy Multiple Attribute Design and Decision Procedure
for Long Term Network Planning, Ph.D. dissertation, June 1992.
[SHOO91] Shooman, A. and Kershenbaum, A., Exact graph-reduction algorithms
for network reliability analysis, IEEE Proceedings from Globecomm ‘91, August
19, 1991.
[WAGN75] Wagner, H., Principles of Operations Research, 2nd edition, Prentice
Hall, Inc., Englewood Cliffs, NJ, 1975.