VNF Placement and Resource Allocation For The Support of Vertical Services in 5G Networks
VNF Placement and Resource Allocation For The Support of Vertical Services in 5G Networks
Abstract— One of the main goals of 5G networks is to support These decisions interact with each other in ways that are
the technological and business needs of various industries (the complex and often counterintuitive. In this paper, we focus
so-called verticals), which wish to offer to their customers a wide on the allocation of computational and network resources,
range of services characterized by diverse performance require- and make such decisions jointly, accounting for (i) the require-
ments. In this context, a critical challenge lies in mapping in an
ments of each VNF and vertical; (ii) the capabilities of the
automated manner the requirements of verticals into decisions
concerning the network infrastructure, including VNF placement, network operator’s infrastructure; (iii) the capacity and latency
resource assignment, and traffic routing. In this paper, we seek of the links between network nodes. A key aspect of our
to make such decisions jointly, accounting for their mutual work, often disregarded by previous literature on 5G and VNF
interaction, efficiently. To this end, we formulate a queuing-based placement, is that our approach allows flexible allocation of
model and use it at the network orchestrator to optimally match the computational capabilities of each host among the VNFs
the vertical’s requirements to the available system resources. it runs.
We then propose a fast and efficient solution strategy, called We identify queuing theory as the best tool to model 5G
MaxZ, which allows us to reduce the solution complexity. Our networks, owing to the nature of their traffic and the processing
performance evaluation, carried out an accounting for multiple
scenarios representing the real-world services, shows that MaxZ such a traffic needs. Indeed:
performs substantially better than the state-of-the-art alternatives • much of 5G traffic, especially that coming from
and consistently close to the optimum. Internet-of-things (IoT) and machine-type communica-
tion (MTC) applications, will consist of REST-ful, atomic
Index Terms— 5G mobile communication, queuing theory,
resource allocation. (in principle) requests, as opposed to long-standing
connections [1];
I. I NTRODUCTION • such requests will traverse one or more processing stages,
as implemented in the emerging multi-access edge com-
5 G NETWORKS are envisioned to provide the compu-
tational, memory, and storage resources needed to run
multiple third parties (referred to as vertical industries or
puting (MEC) implementation Amazon Greengrass [2],
and can trigger additional requests in the process;
• the time it takes to process each request depends on the
verticals) with diverse communication and computation needs.
Verticals provide network operators with the specification of capabilities of the computational entity serving it [2].
the services they want to provide, e.g., the virtual network Requests and processing stages naturally map onto clients and
functions (VNFs) they want to use to process their data and queues they have to traverse. Furthermore, the fact that queues
the associated quality of service. can be assigned different service rates aptly models our flexible
Mobile network operators are in charge of mapping the allocation of computational resources.
requirements of the verticals into infrastructure management We take service delay as our main key performance indi-
decisions. This task is part of the network orchestration, and cator (KPI), and we formulate an optimization problem that
includes making decisions concerning (i) the placement of minimizes the maximum ratio between actual and maximum
the VNFs needed by the verticals across the infrastructure; allowed end-to-end latency, across all services. Furthermore,
(ii) the assignment of CPU, memory and storage resources to and without loss of generality, we focus on CPU as the
the VNFs; (iii) the routing of data across network nodes. resource to assign to VNFs. In light of the complexity of
the problem, we then propose an efficient solution strategy,
Manuscript received February 15, 2018; revised July 13, 2018; accepted closely matching the optimum: based on (i) decoupling the
December 28, 2018; approved by IEEE/ACM T RANSACTIONS ON N ET- VNF placement and CPU assignment decisions, while keeping
WORKING Editor T. Spyropoulos. This work was supported by the European
Commission under the H2020 projects 5G-TRANSFORMER (Project
track of their interdependence, and (ii) sequentially making
ID 761536) and 5G-EVE (Project ID 815074). (Corresponding author: such decisions for each VNF. Traffic routing decisions are
Francesco Malandrino.) simply derived once all placement and assignment decisions
S. Agarwal is with Department of Electrical Engineering, IIT Ropar, Ropar are made. Although made in a decoupled and sequential
140001, India. fashion, our decisions are joint as their mutual impact is
F. Malandrino and C. F. Chiasserini are with the Department of Electronics
and Telecommunications, Politecnico di Torino, 10129 Turin, Italy, also with properly accounted for, e.g., we consider how deploying a
the Institute of Electronics, Computer and Communication Engineering, 10129 new VNF on a host impacts the possible CPU assignments
Turin, Italy, and also with the National Research Council, 10129 Turin, Italy therein.
(e-mail: [email protected]; [email protected]). Our main contributions can be summarized as follows:
S. De is with the Department of Electrical Engineering, IIT Delhi, New
Delhi 10016, India. • our model accounts for the main resources of 5G net-
Digital Object Identifier 10.1109/TNET.2018.2890631 works, namely, hosts and links;
1063-6692 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
•we model the diverse requirements of different VNFs, and the user experience. References [21], [22] study the VNF
allow them to be composed in arbitrarily complex graphs, placement problem, accounting for the computational capabil-
as mandated by [3, Sec. 6.5], instead of simpler chains ities of hosts as well as network delays. Bhamare et al. [23]
or directed acyclic graphs (DAGs); consider inter-cloud latencies and VNF response times, and
• unlike existing work, we allow flexible allocation of CPU solve the resulting ILP through an affinity-based heuristic.
to VNFs, and model the resulting impact on service times; Virtual EPC: The Evolved Packet Core (EPC) is a prime
• we propose a solution strategy, called MaxZ, that is able example of a service that can be provided through soft-
to efficiently and effectively make VNF placement and ware defined networking and network function virtualiza-
CPU allocation decisions, and show how it consistently tion (SDN/NFV). Interestingly, different works use different
performs very close to the optimum across a variety of VNF graphs to implement EPC, e.g., splitting user- and
traffic requirements; control-plane entities [13], [24], [25] or joining together the
• focusing on the special case of fully-load conditions, packet and service gateways (PGW and SGW) [26], [27].
we state and prove several properties of the optimal CPU Our model and algorithms work with any VNF graph, which
allocation decisions, and use them to further speed up the allows us to model any real-world service, including all
decision process. implementations of vEPC.
The remainder of the paper is organized as follows. Sec. II
reviews related work, highlighting the novelty of our con- A. Novelty
tribution. Sec. III positions our work within the context of The closest works to ours, in terms of approach and/or
the ETSI management and orchestration (MANO) framework. methodology, are [21]-[23] and [27].
Sec. IV describes the system model, while Sec. V introduces In particular, [21], [22], and [26] model the assignment
the problem formulation and analyzes its complexity. Sec. VI of VNFs to servers as a generalized assignment problem,
presents our solution concept, while Sec. VII describes how a resource-constrained shortest path problem and a MILP
we deal with the special case of full-load conditions. Sec. VIII problem, respectively. This implies that either a server has
addresses scenarios with multiple VNF instances. Finally, enough spare CPU capacity to offer a VNF, or it does not.
Sec. IX presents performance evaluation results, while Sec. X Our queuing model, instead, is the first to account for the
concludes the paper. flexible allocation of CPU to the VNFs running on a host,
e.g., the fact that VNFs will work faster if placed at a
II. R ELATED W ORK scarcely-utilized server. Furthermore, [21] and [26] have as
Network Slicing and Orchestration: A first body of works objective the minimization of costs and server utilization,
concerns the network slicing paradigm and its role within 5G. respectively. Our objective, instead, is to minimize the delay
Several works, including [4]–[6], focus on the architecture incurred by requests of different classes, which changes the
of 5G networks based on network slicing, pointing out their solution strategy that can be adopted. The work [22] aims
opportunities and challenges. Other works, e.g., [7] and [8], at solving essentially the same problem as ours, albeit in the
address decision-making in 5G networks and the associ- specific scenario where all traffic flows through a deterministic
ated challenges, including computational complexity. Finally, sequence of VNFs, i.e., VNF graphs are chains.
orchestration, including the decision-making involved enti- The queuing model used in [23] is similar (in principle) to
ties and the arising security concerns have been tackled in, ours; however, [23] does not address overlaps between VNF
e.g., [9] and [10], respectively. graphs and only considers DAGs, i.e., requests cannot visit
Network-Centric Optimization: Many works, includ- the same VNF more than once. Furthermore, in both [22]
ing [11]–[15], tackle the problems of VNF placement and and [23] no CPU allocation decisions are made, and the
routing from a network-centric viewpoint, i.e., they aim at objective is to minimize a global metric, ignoring the different
minimizing the load of network resources. In particular, [11] requirements of different service classes. Finally, the affinity-
seeks to balance the load on links and servers, while [12] based placement heuristic proposed in [23] neglects the inter-
studies how to optimize routing to minimize network host latencies, and this, as confirmed by our numerical results
utilization. The above approaches formulate mixed-integer in Sec. IX, can yield suboptimal performance.
linear programming (MILP) problems and propose heuristic Finally, it is worth mentioning that a preliminary version
strategies to solve them. References [13], [14], and [15] of this paper appeared in [28]. While sharing the same basic
formulate ILP problems, respectively aiming at minimizing solution concept, this version includes a substantial amount of
the cost of used links and network nodes, minimizing new and revised material, including a discussion on how our
resource utilization subject to QoS requirements, and work fits in the 5G MANO framework (Sec. III), an extended
minimizing bitrate variations through the VNF graph. discussion of full-load conditions (Sec. VII), and new results
Service Provider’s Perspective: Several recent works take for large-scale scenarios.
the viewpoint of a service provider, supporting multiple ser-
vices that require different, yet overlapping, sets of VNFs, III. O UR W ORK AND THE ETSI MANO F RAMEWORK
and seek to maximize its revenue. The works [16], [17] aim ETSI has standardized [3] the management and orchestra-
at minimizing the energy consumption resulting from VNF tion (MANO) framework, including a set of functional blocks
placement decisions. References [18], [19] study how to place and the reference points, i.e., the interfaces between functional
VNFs between network-based and cloud servers so as to blocks (akin to a REST API) that they use to communicate.
minimize the cost, and [20] studies how to design the VNF Its high-level purpose is to translate business-facing KPIs
graphs themselves, in order to adapt to the network topology. chosen by the vertical (e.g., the type of processing needed
User-Centric Perspective: Closer to our own approach, sev- and the associated end-to-end delay) into resource-facing deci-
eral works take a user-centric perspective, aiming at optimizing sions such as virtual resource instantiation, VNF placement,
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 3
and traffic routing. In this section, we first present a brief the processing each service requires, e.g., the VNFs that the
overview of the ETSI MANO framework; then, in Sec. III-A, traffic has to traverse, in the form of a VNF Forwarding
we focus on the NFV orchestrator and detail the decisions it Graph (VNFFG) descriptor [3, Sec. 6.5.1]. They contain
has to make and the input data at its disposal. deployment flavor information, including the delay require-
Fig. 1 presents the functions composing the MANO frame- ments associated with every service [3, Sec. 6.2.1.3]. Addition-
work (within the blue area) as well as the functions outside ally, from the virtual infrastructure manager (VIM), the NFVO
the framework they interact with. Operation and business sup- fetches information on the state and availability of network
port (OSS/BSS) service block, which represent the interface infrastructure, including VMs able to run the VNFs and the
between verticals and mobile operators. High-level, end-to-end links connecting them.
requirements and KPIs are conveyed, through the Os-Ma-nfvo With such information, the NFVO can make what ETSI
reference point, to the NFV orchestrator (NFVO). The NFVO calls lifecycle management decisions [3, Sec. 7.2] about the
is in charge of deciding the number and type of VNFs to VNFs composing each network slice, i.e., how many instances
instantiate as well as the capacity of virtual links (VLs) of these VNFs to instantiate, where to host them, and how
connecting them. much resources to assign to each of them. Such decisions
Such decisions are conveyed, via the Or-vnfm interface, will correspond to decision variables in our system model,
to the VNF manager (VNFM) function, which is in charge as detailed next.
of actually instantiating the required VNFs. The VNFM
requests from the virtual infrastructure manager (VIM) any IV. S YSTEM M ODEL
resource, e.g., virtual machine (VM) or VL needed by the
VNFs themselves. The VNFM also interacts with the element We model VNFs as M/M/1 queues, belonging to set Q,
management (EM) function, a non-MANO entity that is in whose customers correspond to service requests. The class
charge of Fault, Configuration, Accounting, Performance and of each customer corresponds to the service with which each
Security (FCAPS) management for the functional part of the request is associated; we denote the set of such classes by K.
VNFs, i.e., for the actual tasks they perform. The service rate μ(q) of each queue q reflects the amount of
Finally, the VIM interacts with the NFV infrastruc- CPU (expressed in, e.g., ticks or microseconds of CPU-time)
ture (NFVI), which includes the hardware (e.g., physical each VNF is assigned to. Thus, μ(q) influences the time taken
servers, network equipment, etc.) and software (e.g., hyper- to process one service request. Notice that μ(q) does not
visors) running the VNFs. depend on the class k; that is, CPU is assigned on a per-
VNF rather than per-class basis. This models those scenarios
where the same VNF instance can serve requests belonging to
A. The NFVO: Input, Output, and Decisions multiple services.
As its name suggests, the main entity in charge of orchestra- Arrival rates at queue q ∈ Q are denoted by λk (q). Note
tion decisions is the NFV orchestrator (NFVO), which belongs that these values are class-specific, and reflect the amount of
to the MANO framework depicted in Fig. 1. In the following, traffic of different services. Class-specific transfer probabili-
we provide more details on the decisions the NFVO has to ties P(q2 |q1 , k) indicate the probability that a service request
make and the information it can rely upon, which correspond of class k enters VNF q2 after being served by VNF q1 .
(respectively) to the output and input of our algorithms. Furthermore, P(q|◦, k) indicates the probability that a request
The NFVO receives from the OSS/BSS a data struc- of class k starts its processing at VNF q.
ture called network service descriptor (NSD), defined Physical, or more commonly virtual, hosts are represented
in [3, Sec. 6.2.1]. NSDs include a graph-like description of by set H. Each host h has a finite CPU capacity κh .
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
TABLE I
N OTATION
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 5
of class k that enter queue q, either from outside the system latency δ(h, l) if q and r are deployed at hosts h and l,
or from other queues. For any k ∈ K, we have: respectively (i.e., if A(h, q) = 1 and A(l, r) = 1).
The average total delay of requests of the generic service
λ̂k (q)= λk,q + P(q|p, k)λ̂k (p). (3) class k is therefore given by:
q∈Q p∈Q
Dk = γk (q)Rk (q) + γk (q)P(r|q, k)
We can then define another auxiliary variable Λ(q), expressing q∈Q q,r∈Q,q=r
the total arrival rate of requests of any class entering queue q:
× A(h, q)A(l, r)δ(h, l). (8)
Λ(q) = λ̂k (q). h,l∈H
k∈K
Link Capacity: Given the finite link capacity C(h, l), which
Using Λ(q), we can impose system stability, requesting that, limits the number of requests that move from any VNF at
for each queue, the arrival rate does not exceed the service host h to any VNF at host l,we have:
rate:
λ̂k (q)P(r|q, k)A(q, h)A(r, l)≤C(h, l). (9)
Λ(q) < μ(q), ∀q ∈ Q. (4) k∈K q,r∈Q
In other words, each VNF should receive at least enough Constraint (9) contains a summation over all classes k and all
CPU to deal with the incoming traffic. If additional CPU is VNFs q, r ∈ Q, such that q is deployed at h and r is deployed
available at the host, it will be exploited to further speed up at l, as expressed by the A-variables. For each of such pair of
the processing of requests. VNFs, λ̂k (q) is the rate of the requests of class k that arrive
Latency: The previous constraints ensure that individual at q. Multiplying it by P(r|q, k), we get the rate at which
VNFs are stable, i.e., they process incoming requests in a requests move from VNF q to VNF r, hence from host h to
finite time. We can now widen our focus, and study how the host l.
processing times of different VNFs and the network times Objective: Dk defined above represents the average delay
combine to form our main metric of interest, i.e., the delay incurred by requests of class k. In our objective function,
each request is subject to. we have to combine these values in a way that reflects the
The processing time, i.e., the time it takes for a request of differences between such classes, most notably, their different
service k to traverse VNF q is represented by an auxiliary QoS limits. Thus, we consider for each class k the ratio of the
variable Rk (q). For FCFS (first come, first serve) and PS actual delay Dk to the limit delay DkQoS , and seek to minimize
(processor sharing) queuing disciplines, we have: the maximum of such ratios:
1 Dk
Rk (q) = , ∀q ∈ Q (5) min max QoS . (10)
μ(q) − Λ(q) A,μ k∈K D
k
Note that the right-hand side of (5) does not depend on class k; Importantly, the above objective function not only ensures
intuitively, this is because the queuing disciplines we consider fairness among service classes while accounting for their limit
are unaware of service classes. The response times for other delay, but it also guarantees that the optimal solution will
queuing disciplines, including those accounting for priority match all QoS limits if possible. More formally:
levels and/or preemption, cannot be expressed in closed form. Property 1: If there is a non-empty set of solutions that
It is also worth stressing that present-day implementations of meet constraints (1)–(9) and honor the services QoS limits,
multi-access edge computing (MEC) [2] are based on FIFO then the optimal solution to (10) falls in such a set.
discipline, and do not support preemption. Proof: We prove the property by contradiction, and
To compute the network latency that requests incur when assume that there is a feasible solution such that Dk ≤ DkQoS
transiting between hosts, we first need the expected number for all service classes, but that the optimal solution has Dk̂ >
of times, γk (q), that a request of class k visits VNF q ∈ Q,
i.e., Dk̂QoS for at least one class k̂ ∈ K.
In this case, the optimal value of the objective (10) would
γk (q) = P(q|◦, k) + P(q|p, k)γk (p). (6) Dk̂
be at least DQoS > 1. However, we know by hypothesis
p∈Q\{q} k̂
that there is a feasible solution where Dk ≤ DkQoS for all
In the right-hand side of (6), the first term is the probability classes, which would result in an objective function value of
that requests start their processing at queue q, and the second Dk
mink∈K DQoS ≤ 1. It follows that the solution we assumed to
is the probability that requests arrive there from another k
queue p. Note that γk (q) is not an auxiliary variable, but be optimal cannot be so.
a quantity that can be computed offline given the transfer Furthermore, when no solution meeting all QoS limits
probabilities P. Using γk (q), the expected network latency exists, the solution optimizing (10) will minimize the damage
incurred by requests of service class k is: by keeping all delays as close as possible to their limit values.
γk (q)P(r|q, k) δ(h, l)A(h, q)A(l, r). (7) A. Problem Complexity
q,r∈Q h,l∈H
The VNF placement/CPU assignment problem is akin to
We can read (7) from left to right, as follows. Given a max-flow problem; however, it has a much higher complexity
service request of class k, it will be processed by VNF q due to the following: (i) binary variables control whether
for γk (q) number of times. Every time, it will move to edges and nodes are activated, and (ii) the cost associated
VNF r with probability P(r|q, k). So doing, it will incur with edges changes according to the values of said variables.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
More formally, the problem of maximizing (10) subject to our analysis. We instead opt to keep the model unchanged and
constraints (1)–(9) includes both binary (A(h, q)) and continu- present an efficient, decoupled solution strategy, leveraging on
ous (μ(q)) variables. More importantly, constraints (1) and (9), sequential decision making.
as well as objective (10) (see also (8)), are nonlinear and non-
VI. S OLUTION S TRATEGY
convex, as both include products between different decision
variables. Our solution strategy is based on decoupling the problems
Below we prove that such a problem is NP-hard, through a of VNF placement and CPU allocation, and then sequentially –
reduction from the generalized assignment problem (GAP). and yet jointly, i.e., accounting for their mutual impact –
Theorem 1: The problem of joint VNF placement and CPU making these decisions. We begin by presenting our VNF
assignment is NP-hard. placement heuristic, called MaxZ, in Sec. VI-A, and then
Proof: It is possible to reduce the GAP, which is discuss CPU allocation in Sec. VI-B.
NP-hard [30], to ours. In other words, we show that (i) for each
A. The MaxZ Placement Heuristic
instance of the GAP problem, there is a corresponding instance
of our VNF placement problem, and (ii) that the translation As mentioned earlier, the two main sources of problem
between them can be done in polynomial time. complexity are binary variables and non-convex functions in
GAP instance: The GAP instance includes items i1 , . . . , iN both objective (10) and constraints (1) and (9). In order to
and bins b1 , . . . , bM . Each bin b has a budget (size) sb ; placing solve the VNF placement problem, our heuristic walks around
item i at bin b consumes a budget (weight) wbi and yields these issues by:
a cost pbi . The decision variables are binary flags xbi stating 1) formulating a convex version of the problem;
whether item i shall be assigned to bin b; also, each item shall 2) solving it through an off-the-shelf solver;
be assigned to exactly one bin. The objective is to minimize 3) computing, for each VNF q and host h, a score Z(h, q),
the cost. expressing how confident we feel about placing q in h;
Reduction: In our problem, items and bins correspond to 4) considering the maximum score Z(h , q ) and placing
VNFs and hosts respectively, and the decision variables xbi VNF q at host h ;
correspond to VNF placement decisions A(i, b). The capacity 5) repeating steps 2–4 until all VNFs are placed.
of each host is equal to the size sb of the corresponding bin. The name of the heuristic comes from step 4, where we seek
Furthermore, we must ensure that: for the highest score Z.
• the weight wbi of item i when placed at bin b corresponds 1) Steps 1–2: Convex Formulation: To make the problem
to the quantity of CPU assigned to VNF i, i.e., wbi = formulation in Sec. V convex, first we need to get rid of
μb (i); binary variables; specifically, we replace the binary vari-
• the cost pbi coming from placing item i in bin b corre- ables A(h, q) ∈ {0, 1} with continuous variables Ã(h, q) ∈
sponds to the opposite1 of the processing time at VNF i, [0, 1].
1
i.e., wbi = − μb (i)−Λ(i) , or equivalently, with a linear We also need to remove the products between Ã-variables
(e.g., in (7), (8), and (9)), by replacing them with a new
equation, Λ(i) − μ (i) = p1bi .
b
variable. To this end, for each pair of VNFs q and r and hosts h
Finally, we set all inter-host delays to zero.
and l, we introduce a new variable Φ(h, l, q, r) ∈ [0, 1], and
Complexity of the Reduction: Performing the reduction
impose that:
described above only requires to solve a linear system of equa-
tions in the μb (i) and Λ(i) variables, which can be performed Φ(h, l, q, r) ≤ Ã(h, q), ∀h, l ∈ H, q, r ∈ Q; (11)
in polynomial (indeed, cubic) time [31]. We have therefore Φ(h, l, q, r) ≤ Ã(l, r), ∀h, l ∈ H, q, r ∈ Q; (12)
presented a polynomial-time reduction of any instance of the
GAP problem to our problem. It follows that our problem is Φ(h, l, q, r) ≥ Ã(h, q)+Ã(l, r)−1, ∀h, l ∈ H, q, r ∈ Q.
NP-hard, q.e.d. (13)
It is interesting to notice how, in the proof of Theorem 1, The intuition behind constraints (11)–(13) is that Φ(h, l, q, r)
we obtain a simplified version of our problem, with non- mimics the behavior of the product Ã(h, q)Ã(l, r):
flexible CPU assignment (if VNF i is placed at host b it gets
if either Ã(h, q) or Ã(l, r) are close to 0, then (11)
exactly μb (i) CPU) and no network delay. This suggests that
and (12) guarantee that Φ(h, l, q, r) will also be close to
our problem is indeed more complex than GAP.
zero; if both values are close to one, then (13) allows also
The NP-hardness of the problem rules out not only the
Φ(h, l, q, r) to be close to one.
possibility to directly optimize the problem through a solver,
Another product between variables, i.e., a term in the
but also commonplace solution strategies based on relaxation,
form A(h, q)μ(q), appears in (1). Following a similar
i.e., allowing binary variables to take values anywhere in [0, 1].
approach, we introduce a set of new variables, ψ(h, q), mim-
Even if we relaxed the A(h, q) variables, we would still be
icking the ratio between the A(h, q)μ(q) product and the host
faced with a non-convex formulation, for which no algorithm
capacity κh . We then impose:
is guaranteed to find a global optimum.
One approach to overcome such an issue could be simpli- ψ(h, q) ≤ Ã(h, q), ∀h ∈ H, q ∈ Q; (14)
fying the model, e.g., by assuming that any host has sufficient ψ(h, q) ≤ 1, ∀h ∈ H , (15)
computing capability to run simultaneously all VNFs, there- q∈Q
fore dispensing with the A(h, q) variables. However, by doing
so we would detach ourselves from real-world 5G systems, which mimic (1). By replacing all products between
thus jeopardizing the validity of the conclusions we draw from Ã-variables with a Φ-variable and all products between Ã-
and μ-variables with a ψ-variable, we obtain a convex problem,
1 So that minimizing the cost is the same as minimizing the service time. which can efficiently be solved through commercial solvers.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 7
2) Steps 3–4: Z-Score and Decisions: Let us assume that B. CPU Allocation
no VNF has been placed yet. We then solve an instance of the Once the MaxZ heuristic introduced in Sec. VI-A provides
convex problem described in Sec. VI-A.1, and use the values us with deployment decisions, we need to decide the CPU
of the variables Ã(h, q) and ψ(h, q) to decide which VNF to allocation, i.e., the values of the μ(q) variables in the original
place at which host. problem described in Sec. V. This can be achieved simply by
Recall that Ã(h, q) is the relaxed version of our place- solving the problem in (10) but keeping the deployment deci-
ment variable A(h, q), so we would be inclined to use that sion fixed, i.e., replacing the A(h, q) variables with parameters
to make our decision. However, we also need to account whose values come from the MaxZ heuristic. Indeed, we can
for how much computational capacity VNFs would get, prove the following property.
as expressed by ψ(h, q). If such a value falls below the Property 2: If the deployment decisions are fixed, then the
threshold Tψ (h, q) = Λ(q)κh , then VNF q may not be able
problem of optimizing (10) subject to (1)–(9) is convex.
to process the incoming requests, i.e., constraint (4) may be Proof: Several constraints of the original problem only
violated. involve A(h, q) variables, and thus simply become conditions
To prevent this, we define our Z-score, i.e., how confident on the input parameters: this is the case of (2), (3), and (9).
we are about placing VNF q at host h, as follows: Also, constraints (1) and (4) are linear in the variables μ(q).
With regard to the objective function, (8) is now linear with
respect to μ(q), while (5) is convex, even if it does not look so
Z(h, q) = Ã(h, q) + 1 [ψ(h,q)≥Tψ (h,q)] , (16) 1
μ(q)−λ(q) =
d
prima facie. Indeed, its second derivative is d2 μ(q)
2
(μ(q)−Λ(q))3
, which is positive for any μ(q) > Λ(q). That
where 1 is the indicator function. Recalling that Ã-values are condition is required for system stability; therefore, we can
constrained between 0 and 1, favoring high values of (16) conclude that constraint (5) is convex over the all region of
means that we prefer a deployment that results in ψ-values interest. Finally, the objective function in (10) is in min-max
greater than the threshold, if such a deployment exists. Other- form, which preserves convexity.
wise, we make the placement decision based on the Ã-values Property 2 guarantees that we can make our CPU allocation
only. decisions, i.e., decide on the μ(q) values, in polynomial time.
Specifically, we select the host h and VNF q associated We can further enhance the solution efficiency by reducing the
with the maximum Z, i.e., h , q ← arg maxh∈H,q∈Q Z(h, q), optimization problem to the resolution of a system of equa-
and place VNF q in host h . We fix this decision and repeat tions, through the Karush-Kuhn-Tucker (KKT) conditions.
the procedure till all VNFs are placed (i.e., we perform exactly 1) KKT Conditions: In several nonlinear problems, includ-
|Q| iterations). ing convex ones, optimal solutions are guaranteed to have
We now present two example runs of MaxZ, for two certain properties, known as KKT conditions [32]. This greatly
scenarios with different inter-host latencies. simplifies and speeds up the search for the optimum, as such
Example 2: Consider a simple case with two hosts H = a search can be restricted to solutions satisfying the KKT
{h1 , h2 } with the same CPU capacity κh = 5requests/s, two conditions.
VNFs Q = {q1 , q2 }, and only one request class k with λk = The KKT conditions are:
1requests/s. Requests need to subsequently traverse q1 1) stationarity;
and q2 . The inter-host latency δ(h1 , h2 ) is set to 5 ms, while 2) primal feasibility;
DQoS = 50 ms. Then, intuitively, the optimal solution is to 3) dual feasibility;
deploy one VNF per host. 4) complementary slackness.
We solve the problem in Sec. VI-A.1. After the first Stating them requires associating (i) re-writing the objective
iteration, we obtain à = [ 0.5 0.5 0.5 0.5
0.5 0.5 ], ψ = [ 0.5 0.5 ], and Z = and constraints in normal form, and (ii) associating a KKT
[ 1.5 1.5 ].2 In such a case, using a tie-breaking rule, we place
1.5 1.5 multiplier with each of the constraints.
VNF q1 at host h1 . In the second iteration, we have à = Therefore, we introduce an auxiliary variable ρ representing
[ 10 0.38 0.8 0.19 2 1.38
0.62 ], ψ = [ 0 0.61 ], and Z = [ 0 1.62 ]. We ignore the the maximum DDQoS k
ratio, and imposing that for each service
entries pertaining to VNF q1 that has been already placed and, k
class k ∈ K:
since Z(h2 , q2 ) > Z(h1 , q2 ), we deploy VNF q2 at host h2 ,
Dk
which corresponds to the intuition that, given the small value ρ ≥ QoS
of δ, VNFs should be spread across the hosts. Dk
⎛
Example 3: Let us now consider the same scenario as in
Example 2, but assume a much longer latency δ(h1 , h2 ) = 1 ⎝ γk (q)
= QoS + γk (q)P(r|q, k)
100 ms. The best solution will now be to place both VNFs at Dk μ(q) − Λ(q)
q∈Q q,r∈Q
the same host. ⎞
After the first iteration, we obtain à = [ 0.7 0.7
0.3 0.3 ], ψ =
0.5 0.5 1.7 1.7
[ 0.3 0.3 ], and Z = [ 1.3 1.3 ]. Again using a tie-breaking rule, A(h1 , q)A(h2 , r)δ(h1 , h2 )⎠ . (17)
we place VNF q1 at host h1 . In the second iteration, we have h1 ,h2 ∈H
à = [ 10 0.7 0.6 0.4 2 1.8
0.2 ], ψ = [ 0 0.2 ], and Z = [ 0 1.2 ]. We again At this point, the objective is simply to minimize ρ.
ignore the entries in the first column and, since Z(h1 , q2 ) > We also need to re-write constraints (1), (4) and (17) in
Z(h2 , q2 ), we place VNF q2 at host h1 , making optimal normal form, and associate to them the multipliers Mq , Mh
decisions. and Mk respectively. The resulting Lagrangian function is:
L=ρ+ M q Xq + Mh Yh + Mk Wk , (18)
2 In all matrices, rows correspond to hosts and columns to VNFs. q∈Q h∈H k∈K
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 9
Fig. 5. The graph G generated for the system in Fig. 3, which is not strongly
Fig. 3. A simple system where two classes of clients traverse the same connected (it is impossible to reach k1 from k2 ).
queue. Host h will always be strained; additionally, depending on the values
of Dkmax , either one or both the classes will be critical.
graph from the corresponding edge and mark all vertices we
can reach as critical (if corresponding to classes) or strained
(if corresponding to hosts). Through edges added according to
rule (ii) and (iii), we will be able to reach all hosts traversed by
clients of the critical class, and those hosts will be strained as
per Property 4. Edges outgoing from the host vertices, created
according to rule (ii), will make us reach all queues deployed
at these hosts. By Property 5, each of these queues serves at
least one critical class. If this class is unique, i.e., if we have
an edge created according to rule (iv), then those classes are
critical as well, and we can repeat the process.
The strong connectivity property implies that we can reach
all vertices (including all classes and all hosts) from any
vertex of G, including the one critical class whose existence
is guaranteed by Lemma 1.
Fig. 4 presents the graph G resulting from the system in
Fig. 2, which is strongly connected. Fig. 5 presents the graph
for the system in Fig. 3, which is not strongly connected
and, thus, it does not meet the sufficient condition stated in
Fig. 4. The graph G generated for the system depicted in Fig. 2. Left, center Theorem 2. Recall that, because that condition is sufficient but
and right edges correspond to hosts, VNFs and classes respectively. Green not necessary, k1 and k2 could still be both critical, depending
edges are created according to rule (ii), blue edges according to rule (iii), and on their Dkmax values.
yellow edges according to rule (iv).
In scenarios like the one in Example 4, where all classes
are critical and all hosts are strained, we have:
However, this is not true in general. A simple counterexample γk (q) 1
is represented in Fig. 3, where two classes share the same
q∈Q
DkQoS μ(q) − Λ(q)
queue. By Lemma 1, one of the two classes will be critical,
and, hence, by Property 4, host h will be strained. Property 5 + A(h1 , q)A(h2 , r)γk (q)
tells us what we already know, i.e., that one of the two classes h1 ,h2 ∈H q,r∈Q
will be critical, but it does not imply that both will be. Indeed,
δ(h1 , h2 )
that depends on the values of Dkmax : if Dkmax 1
= Dkmax2
, P(r|q, k) = ρ ∀k ∈ K,
then both classes are critical; otherwise, the class with the DkQoS
lowest Dkmax value will be critical and the other will not. A(h, q)μ(q) = κh , ∀h ∈ H.
However, we can state a sufficient condition for all classes to q∈Q
be critical (and, hence, all hosts to be strained), regardless the
Dkmax values. It is based on (i) building a graph G representing The above equations can be combined with the KKT condi-
the hosts, VNFs and classes in our system (as shown in Fig. 4), tions stated in Sec. VI-B.1, thus forcing Yh = 0 ∀h ∈ H and
and (ii) verifying a simple property over it. Wk = 0 ∀k ∈ K. This greatly simplifies and speeds up the
Theorem 2: Let G = (V, E) be a directed graph where: process of finding the optimal CPU allocation values μ(q).
(i) there is a vertex for every host, queue, and class,
i.e., V = H ∪ Q ∪ K; VIII. M ULTIPLE VNF I NSTANCES
(ii) for every host h and queue q s.t. A(h, q) = 1, add to E So far, we presented our system model and solution strategy
a pair of edges (q, h) and (h, q); in the case where exactly one instance of each VNF has to be
(iii) for every queue q and class k s.t. γk (h) > 0, add to E deployed. This is not true in general; some VNFs may need
an edge (k, q); to be replicated owing to their complexity and/or load.
(iv) for every queue q and class k s.t. γk (h) > 0 and k is If the number Nq of instances of VNF q to be deployed
the only class using q, i.e., γj = 0, ∀j = k, add to E an is known, then we can replace VNF q in the VNF graph
edge (q, k). with Nq replicas thereof, labeled q 1 , q 2 , . . . q Nq , each with
If graph G is strongly connected, then all classes in K are the same incoming and outgoing edges. With regard to the
critical and all hosts in H are strained. Λ(q) requests/s that have to be processed by any instance
Proof: Lemma 1 guarantees us that there is at least of VNF q, they are split among the instances. If f (q, i)
one critical class k ; let us then start walking through the is the fraction of requests for VNF q that is processed by
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
Nq
instance q i (and thus i=1 f (q, i) = 1), then instance q
i
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 11
Fig. 7. Normalized service delay as a function of the physical link latency, for the chain (left), light mesh (center), heavy mesh (right) VNF graphs. Note
that the y-axis scale varies across the plots.
Fig. 9. Normalized service delay (log scale) as a function of arrival rate λ for the chain (left), light mesh (center), heavy mesh (right) VNF graphs.
Fig. 10. Multi-class scenario, heavy mesh graph: normalized service delay vs. arrival rate λ for the low-delay (left), medium-delay (center), high-delay (right)
service classes. Note that the y-axis scale varies across the plots.
Fig. 11. Multi-instance scenario: normalized service delay vs. the physical link latency for the chain (left), light mesh (center), heavy mesh (right) VNF
graphs. Note that the y-axis scale varies across the plots.
AGARWAL et al.: VNF PLACEMENT AND RESOURCE ALLOCATION FOR THE SUPPORT OF VERTICAL SERVICES 13
R EFERENCES
[1] 3GPP. (2014). Specification: 37.868; RAN Improvements for Machine-
type Communications. [Online]. Available: http://www.3gpp.org/ftp//
Specs/archive/37series/37.868/
[2] Amazon. AWS Greengrass. Accessed: Dec. 2018. [Online]. Available:
https://aws.amazon.com/greengrass/
[3] ETSI. (2017). Network Functions Virtualisation (NFV); Management
and Orchestration. [Online]. Available: http://www.etsi.org/
deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_NFV-
MAN001v010101p.pdf
[4] X. Foukas, G. Patounas, A. Elmokashfi, and M. K. Marina, “Network
slicing in 5G: Survey and challenges,” IEEE Commun. Mag., vol. 55,
high transition probability between them are placed on the no. 5, pp. 94–100, May 2017.
same host. Thereafter, optimal resources are allocated by solv- [5] H. Zhang et al., “Network slicing based 5G and future mobile networks:
ing the convex optimization problem. There are |Q| number of Mobility, resource management, and challenges,” IEEE Commun. Mag.,
VNFs to be placed in |H| hosts. The affinity based VNF-host vol. 55, no. 8, pp. 138–145, Aug. 2017.
[6] P. Rost et al., “Network slicing to enable scalability and flexibility in
placement algorithm has a complexity of O(|Q||H|). As seen 5G mobile networks,” IEEE Commun. Mag., vol. 55, no. 5, pp. 72–79,
before, the computational complexity of resource allocation May 2017.
algorithm is O((|Q| + 1)3 ). Thus, the overall computational [7] K. Samdanis et al., “5G network slicing—Part 2: Algorithms and
complexity of this scheme is O(|H||Q| + (|Q| + 1)3 ). practice,” IEEE Commun. Mag., vol. 55, no. 8, pp. 110–111, Aug. 2017.
Execution Times: All the above computations refer to the [8] S. Vassilaras et al., “The algorithmic aspects of network slicing,” IEEE
Commun. Mag., vol. 55, no. 8, pp. 112–119, Aug. 2017.
order of magnitude of the worst-case computational com- [9] X. Li et al., “Service orchestration and federation for verticals,” in Proc.
plexity. However, it is also interesting to assess how such IEEE (WCNC) Workshops, Apr. 2018, pp. 260–265.
complexity translates into actual execution times. To this end, [10] M. A. S. Santos et al., “Security requirements for multi-operator virtu-
Tab. II reports the execution times of MaxZ and its coun- alized network and service orchestration for 5G,” in Guide to Security
in SDN and NFV. Berlin, Germany: Springer, 2017.
terparts for different topologies and VNF graphs, measured [11] A. Hirwe and K. Kataoka, “LightChain: A lightweight optimisation of
on a server equipped with a Xeon E5-2600 processor and VNF placement for service chaining in NFV,” in Proc. IEEE NetSoft
48 GByte of RAM. We can clearly observe that, while MaxZ Conf. Workshops, Jun. 2016, pp. 33–37.
takes longer than the affinity-based and greedy heuristics to [12] T.-W. Kuo, B.-H. Liou, K. C.-J. Lin, and M.-J. Tsai, “Deploying
chains of virtual network functions: On the relation between link and
run, their execution times are comparable in the base scenario. server usage,” in Proc. IEEE 35th Annu. IEEE Int. Conf. Comput.
Furthermore, MaxZ runs over two orders of magnitude faster Commun. (INFOCOM), Apr. 2016, pp. 1–9.
than the brute-force procedure. It is also interesting to notice [13] A. Baumgartner, V. S. Reddy, and T. Bauschert, “Mobile core network
that the execution times in the large scenario are still limited, virtualization: A model for combined virtual core network function
placement and topology optimization,” in Proc. 1st IEEE Conf. Netw.
while the brute-force procedure is utterly unable to tackle Softwarization (NetSoft), Apr. 2015, pp. 1–9.
that case. [14] F. B. Jemaa, G. Pujolle, and M. Pariente, “Analytical models for qos-
driven vnf placement and provisioning in wireless carrier cloud,” in Proc.
19th ACM Int. Conf. Modeling, Anal. Simulation Wireless Mobile Syst.,
X. C ONCLUSION 2016, pp. 148–155.
We targeted the problem of orchestration in 5G networks, [15] B. Addis, D. Belabed, M. Bouet, and S. Secci, “Virtual network
that requires to make decisions about VNF placement, CPU functions placement and routing optimization,” in Proc. IEEE 4th Int.
Conf. Cloud Netw. (CloudNet), Oct. 2015, pp. 171–177.
assignment, and traffic routing. We presented a queuing-based [16] A. Marotta and A. Kassler, “A power efficient and robust virtual network
model accounting for all the main features of 5G networks, functions placement problem,” in Proc. 28th Int. Teletraffic Congr. (ITC),
including (i) arbitrarily complex service graphs; (ii) flexible 2016, pp. 331–339.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
[17] N. El Khoury, S. Ayoubi, and C. Assi, “Energy-aware placement and Francesco Malandrino received the M.S. and
scheduling of network traffic flows with deadlines on virtual network Ph.D. degrees from the Politecnico di Torino, Italy,
functions,” in Proc. 5th IEEE Int. Conf. Cloud Netw. (CloudNet), in 2008 and 2012, respectively. Prior to his current
Oct. 2016, pp. 89–94. appointment, he was an Assistant Professor and
[18] M. Mechtri, C. Ghribi, and D. Zeghlache, “A scalable algorithm for a Research Fellow of the Politecnico di Torino,
the placement of service function chains,” IEEE Trans. Netw. Service a Fibonacci Fellow of the Hebrew University of
Manag., vol. 13, no. 3, pp. 533–546, Sep. 2016. Jerusalem, and a Research Fellow of Trinity College,
[19] L. Gu, S. Tao, D. Zeng, and H. Jin, “Communication cost efficient Dublin. He is currently a tenured Researcher with the
virtualized network function placement for big data processing,” in Proc. Institute of Electronics, Computer and Communica-
IEEE Conf. Comput. Commun. Workshops (INFOCOM), Apr. 2016, tion Engineering, National Research Council, Italy,
pp. 604–609. headquartered in Turin, Italy. His research interests
[20] J. Cao et al., “VNF-FG design and VNF placement for 5G mobile include the architecture and management of wireless, cellular, and vehicular
networks,” Sci. China Inf. Sci., vol. 60, no. 4, 2017, Art. no. 040302. networks. He is a member of CNIT, Parma, Italy.
[21] R. Cohen, L. Lewin-Eytan, J. S. Naor, and D. Raz, “Near optimal
placement of virtual network functions,” in Proc. IEEE Conf. Comput.
Commun. (INFOCOM), Apr./May 2015, pp. 1346–1354.
[22] B. Martini et al., “Latency-aware composition of virtual functions in
5G,” in Proc. 1st IEEE Conf. Netw. Softwarization (NetSoft), Apr. 2015,
pp. 1–6.
[23] D. Bhamare et al., “Optimal virtual network function placement in
multi-cloud service function chaining architecture,” Comput. Commun.,
vol. 102, pp. 1–16, Apr. 2017.
[24] G. Hasegawa and M. Murata, “Joint bearer aggregation and control-data Carla Fabiana Chiasserini (M’98–SM’09–F’18)
plane separation in LTE EPC for increasing M2M communication capac- received the degree from the University of Florence
ity,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2015, in 1996 and the Ph.D. degree from the Politecnico di
pp. 1–6. Torino, Italy, in 2000. She was a Visiting Researcher
[25] A. Ksentini, M. Bagaa, and T. Taleb, “On using SDN in 5G: with UCSD from 1998 to 2003 and a Visiting
The controller placement problem,” in Proc. IEEE Global Commun. Professor with Monash University in 2012 and 2016.
Conf. (GLOBECOM), Dec. 2016, pp. 1–6. She is currently an Associate Professor with the
[26] D. Dietrich, C. Papagianni, P. Papadimitriou, and J. S. Baras, “Network Department of Electronic Engineering and Telecom-
function placement on virtualized cellular cores,” in Proc. 9th Int. Conf. munications, Politecnico di Torino. Her research
Commun. Syst. Netw. (COMSNETS), 2017, pp. 259–266. interests include architectures, protocols, and the
[27] J. Prados-Garzon et al., “Modeling and dimensioning of a virtualized performance analysis of wireless networks. She has
mme for 5G mobile networks,” IEEE Trans. Veh. Technol., vol. 66, no. 5, published over 300 papers in prestigious journals and in leading international
pp. 4383–4395, May 2017. conferences. She is a member of CNIT, Parma, Italy.
[28] S. Agarwal, F. Malandrino, C.-F. Chiasserini, and S. De, “Joint VNF
placement and CPU allocation in 5G,” in Proc. IEEE IEEE Conf.
Comput. Commun. (INFOCOM), Apr. 2018, pp. 1943–1951.
[29] Intel. Power Management States: P-States, C-States, and
Package C-States. Accessed: Dec. 2018. [Online]. Available:
https://software.intel.com/en-us/articles/power-management-states-p-
states-c-states-and-package-c-states
[30] D. G. Cattrysse and L. N. Van Wassenhove, “A survey of algorithms for
the generalized assignment problem,” Eur. J. Oper. Res., vol. 60, no. 3,
pp. 260–272, 1992.
[31] R. Barrett et al., Templates for the Solution of Linear Systems: Building
Blocks for Iterative Methods. Philadelphia, PA, USA: SIAM, 1994, Swades De (S’02–M’04–SM’14) received the
vol. 43. B.Tech. degree in radiophysics and electronics from
[32] H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” in Proc. the University of Calcutta, Kolkata, India, in 1993,
Berkeley Symp. Math. Statist. Probab., 1951, pp. 481–492. the M.Tech. degree in optoelectronics and optical
[33] R. M. Lewis and V. Torczon, “Pattern search methods for linearly communication from IIT Delhi, New Delhi, India,
constrained minimization,” SIAM J. Optim., vol. 10, no. 3, pp. 917–941, in 1998, and the Ph.D. degree in electrical engi-
2000. neering from The State University of New York at
[34] C. Casetti et al., “Arbitration among vertical services,” in Proc. IEEE Buffalo, Buffalo, NY, USA, in 2004. Before joining
29th Annu. Int. Symp. Pers., Indoor Mobile Radio Commun. (PIMRC), IIT Delhi in 2007, he was a tenure-track Assis-
Sep. 2018, pp. 153–157. tant Professor with the Department of Electrical
and Computer Engineering, New Jersey Institute of
Satyam Agarwal received the Ph.D. degree in elec- Technology, Newark, NJ, USA, from 2004 to 2007. He was an ERCIM
trical engineering from IIT Delhi in 2016. He is Post-Doctoral Researcher with ISTI, CNR, Pisa, Italy, in 2004. He has
currently an Assistant Professor with the Depart- nearly five years of industry experience on communications hardware and
ment of Electrical Engineering, IIT Ropar, India. software development in India, from 1993 to 1997 and in 1999. He is
Prior to this, he was an Assistant Professor with currently a Professor with the Department of Electrical Engineering, IIT Delhi.
IIT Guwahati. In 2017, he was a Post-Doctoral His research interests are in communication networks, with emphasis on
Researcher with Politecnico di Torino, Turin, Italy. performance modeling and analysis. He currently serves as a Senior Editor
His research interests are in the wide areas of for the IEEE C OMMUNICATION L ETTERS , and an Associate Editor for the
wireless communication networks, including next- IEEE T RANSACTIONS ON V EHICULAR T ECHNOLOGY, the IEEE W IRELESS
generation networks, 5G networks and architecture, C OMMUNICATION L ETTERS , the IEEE N ETWORKING L ETTERS , and IETE
and air-borne networks. Technical Review journal.