Ramón Agüero · Thomas Zinner · Rossitza Goleva · Andreas Timm-Giel · Phuoc Tran-Gia (Eds.)
Mobile Networks and Management
6th International Conference, MONAMI 2014
Würzburg, Germany, September 22–24, 2014
Revised Selected Papers
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 141
Editorial Board
Ozgur Akan
Middle East Technical University, Ankara, Turkey
Paolo Bellavista
University of Bologna, Bologna, Italy
Jiannong Cao
Hong Kong Polytechnic University, Hong Kong, Hong Kong
Falko Dressler
University of Erlangen, Erlangen, Germany
Domenico Ferrari
Università Cattolica Piacenza, Piacenza, Italy
Mario Gerla
UCLA, Los Angeles, USA
Hisashi Kobayashi
Princeton University, Princeton, USA
Sergio Palazzo
University of Catania, Catania, Italy
Sartaj Sahni
University of Florida, Florida, USA
Xuemin (Sherman) Shen
University of Waterloo, Waterloo, Canada
Mircea Stan
University of Virginia, Charlottesville, USA
Jia Xiaohua
City University of Hong Kong, Kowloon, Hong Kong
Albert Zomaya
University of Sydney, Sydney, Australia
Geoffrey Coulson
Lancaster University, Lancaster, UK
More information about this series at http://www.springer.com/series/8197
Editors

Ramón Agüero
University of Cantabria
Santander, Spain

Thomas Zinner
University of Würzburg
Würzburg, Germany

Rossitza Goleva
Technical University of Sofia, Faculty of Telecommunications
Sofia, Bulgaria

Andreas Timm-Giel
Hamburg University of Technology
Hamburg, Germany

Phuoc Tran-Gia
University of Würzburg
Würzburg, Germany
Preface

This volume is the result of the Sixth International ICST Conference on Mobile Networks and Management (MONAMI), which was held in Würzburg, Germany, during September 22–24, 2014, hosted by the University of Würzburg.
The MONAMI conference series aims at closing the gap between research areas hitherto considered separate and isolated, namely multi-access and resource management, mobility and network management, and network virtualization. Although these have emerged as core aspects in the design, deployment, and operation of current and future networks, there is still little to no interaction between the experts in these fields. MONAMI enables cross-pollination between these areas by bringing together top researchers, academics, and practitioners specializing in the area of mobile network and service management.
In 2014, after a thorough peer-review process, 20 papers were selected for inclusion
in the main track of the technical program. In addition, MONAMI 2014 hosted a well-
received workshop on Enhanced Living Environments, which featured 10 papers. All
in all, 30 papers were orally presented at the conference. The Technical Program
Committee members made sure that each submitted paper was reviewed by at least
three competent researchers, including at least one TPC member.
The conference opened with one half-day tutorial: “SDN Experimentation Facilities
and Tools,” addressing one of the main leitmotifs of the MONAMI conference,
organized and presented by Dr. Kostas Pentikousis and Dr. Umar Toseef (EICT GmbH,
Germany) and Philip Wette and Martin Dräxler (University of Paderborn, Germany).
Prof. Klaus Moessner, from the Centre for Communication Systems Research at the
University of Surrey, UK, officially opened the conference day with his vision on
“Networks in Times of Virtualisation.” In addition, the conference featured a panel
session on “Cloudification of Mobile Networks - Expectations, Challenges and
Opportunities,” organized by Dr. Andreas Mäder and with the participation of the following reputable researchers: Prof. Klaus Moessner, Dr. Dirk Kutscher, Prof. Wolfgang Kellerer, Dr. Wolfgang Kiess, and Prof. Alberto Leon-Garcia. The second day of the
conference started with a keynote by Dr. Dirk Kutscher from NEC Europe Ltd.,
Germany, who gave a speech entitled “From Virtualization to Network and Service
Programmability – A Research Agenda for 5G Networks.” This was followed by a
special session on Software Defined Networking and Network Function Virtualization,
organized by Prof. Wolfgang Kellerer (Technische Universität München, Germany)
and Dr. Marco Hoffmann (Nokia, Germany) featuring the talk “Commodity Hardware
as Common Denominator of SDN and NFV,” given by Dr. Hagen Woesner (BISDN
GmbH, Germany).
The First Workshop on Enhanced Living Environments, organized by the IC1303
AAPELE COST Action, featured a keynote by Prof. Phuoc Tran-Gia (University of
Würzburg, Germany) and a talk by the AAPELE Science Officer, Dr. Giuseppe Lugano.
It is worth highlighting that the attendance increased at MONAMI 2014 and all
newcomers acknowledged the collegial atmosphere which characterizes the conference,
making it an excellent venue, not only to present novel research work, but also to foster
stimulating discussions between the attendees.
The papers included in this volume are organized thematically into nine parts, starting with LTE Networks in Part I. Virtualization and Software Defined Networking aspects are discussed in Part II. Part III presents new approaches related to Self-Organizing Networks, while Part IV addresses Energy Awareness in Wireless Networks. Part V includes papers presenting avant-garde Algorithms and Techniques for Wireless Networks, and Part VI comprises papers related to Applications and Context Awareness. The next three parts of the volume deal with Ambient Assisted Living systems: Part VII focuses on architectural issues, Part VIII discusses Human Interaction Technologies and, finally, Part IX closes the volume with three papers on Devices and Mobile Cloud for AAL.
We close this short preface to the volume by acknowledging the vital role that the
Technical Program Committee members and additional referees played during the
review process. Their efforts ensured that all submitted papers received a proper
evaluation. We thank EAI and ICST for assisting with organization matters, and CREATE-NET and the University of Würzburg for hosting MONAMI 2014. The team that put together this year’s event was large and required the sincere commitment of many people; although they are too many to recognize by name, their effort should be highlighted. We
particularly thank Petra Jansen for her administrative support on behalf of EAI, and
Prof. Imrich Chlamtac of CREATE-NET for his continuous support of the conference.
Finally, we thank all delegates for attending MONAMI 2014 and making it such a
vibrant conference!
We hope to see you all in Santander, 2015.
Organization

General Chairs
Phuoc Tran-Gia University of Würzburg, Germany
Andreas Timm-Giel Hamburg University of Technology, Germany
TPC Chairs
Thomas Zinner University of Würzburg, Germany
Ramón Agüero University of Cantabria, Spain
Tutorials Chair
Oliver Blume Alcatel-Lucent Bell Labs, Germany
Financial Chair
Maciej Muehleisen Hamburg University of Technology, Germany
Web Chair
Jarno Pinola VTT Technical Research Centre of Finland,
Finland
Panel Chair
Andreas Mäder NEC Laboratories Europe, Germany
Contents

LTE Networks

SDN and NFV

Dynamic Operation of LTE EPC Gateways for Time-Varying Traffic Patterns
Arsany Basta, Andreas Blenk, Marco Hoffmann, Hans Jochen Morper, Klaus Hoffmann, and Wolfgang Kellerer

Self-Organizing Networks
1 Introduction
Femto-cells are small, low-power, low-cost, plug-and-play cellular base stations that can be placed inside homes and small businesses. They can be connected to the operator’s network over the Internet Protocol (IP) by means of a third-party backhaul connection such as an Asymmetric Digital Subscriber Line (ADSL) or through fiber optics. Femto-cells aim at providing better indoor coverage,
increasing the network capacity, and providing new services to the users. Accordingly, they provide higher data rates while reducing the load on the macro-cells.
However, several technical challenges must be addressed before femto-cells can coexist with macro- and pico-cells. These challenges can be categorized into: inter-cell interference, handover in areas with multiple femto-cells, self-configuration, self-healing and self-optimization, spectrum accuracy, and providing quality of service over the shared backhaul connection [1,2].
In a heterogeneous network where femto-cells and macro-cells coexist, downlink inter-cell interference can occur across the femto and macro tiers (cross-tier interference), as well as within the femto tier (co-tier interference). The main reasons for this downlink inter-cell interference are the deployment of femto-cells without proper planning, spectrum reuse or co-channel deployment by femto-cells, Closed Subscriber Group (CSG) access, and the uncoordinated operation of femto-cells and macro-cells [3].
Methods available for downlink data channel protection can be divided into two areas: power control and radio resource management. Radio resource management involves methods such as component carrier aggregation, almost blank subframes, and Physical Resource Block (PRB) level resource partitioning. To overcome the downlink cross-tier interference, several power control schemes have been proposed in the literature. Claussen et al. [4] provide a transmit power calculation method that considers the distance between the femto-cell and the most interfering macro-cell and offers a minimum coverage for the serving Home User Equipment (HUE). Yavuz et al. [5] use a power setting based on the received signal strength from the macro-cell, measured by the Home NodeB, and adjust the transmit power to achieve a minimum quality level for the macro-cell control channel. However, with these methods there is a high probability that the femto-cell decreases its power even without a macro UE (MUE) in the vicinity, resulting in an unnecessary performance degradation. Lalam et al. [6] propose a dynamic power control algorithm that uses the Channel Quality Indicator (CQI) reports from HUEs in Frequency Division Duplex (FDD) High Speed Downlink Packet Access (HSDPA), together with the excellent transmission quality associated with a femto-cell, to adjust the downlink transmit power according to a given target CQI. However, the femto-cell is restricted here to achieving a target Quality of Service (QoS) even without MUE influence. Morita et al. [7] introduce an adaptive power level setting scheme that depends on the availability of MUEs. Here, the Home eNodeB (HeNB) measures the variation of the uplink received power from the MUEs and thereby adjusts the transmit power of the femto-cell intelligently. This scheme requires the femto-cell to enable the Network Listen Mode (NLM) to sniff the environment like a UE.
Several resource partitioning schemes have been proposed in the literature to alleviate the downlink cross-tier interference. A dynamic resource partitioning method that prevents HeNBs from accessing the downlink resources assigned to macro UEs in their vicinity was introduced by Bharucha et al. [8]. Here, the interference on the most vulnerable MUEs can be effectively controlled at the expense of the HeNB’s capacity. Nonetheless, this method requires an X2 link for backhaul communication, which is delay prone. In [9] the eNodeB schedules the UEs affected by HeNBs on a special part of the spectrum, such that the HeNBs map the downlink resource blocks from uplink sensing. However, the problem lies in the uplink-to-downlink Resource Block mapping performed by the HeNB, which implies that the mapping scheme must be exchanged among the cells. Mahapatra and Strinati [10] describe a method which measures the interference of each RB at the HeNB, classifies the RBs, and allocates them to the appropriate users with suitable transmit powers. This method is computationally intensive, and the interference measurement is done at the HeNB rather than at the UE. Wang et al. [11] describe a scheme that uses time-domain muting, where the MUEs in a coverage hole are protected by scheduling them only on the muted sub-frames; however, scheduling macro users exclusively on muted sub-frames may waste resources.
This paper puts forward two novel interference mitigation schemes: FPCS and RPSS. FPCS is an adaptive power control scheme that detects affected MUEs based on their CQI feedback, utilizing the Network Listen Mode (NLM) of the femto-cell. RPSS is an efficient yet simple resource partitioning scheme which does not rely on extra signaling, measurements, or estimations.
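The excerpt does not spell out the internals of RPSS; in the evaluation it is simply labelled ‘Random’, which suggests that each femto-cell independently restricts its scheduling to a randomly chosen subset of the PRBs, without any signaling. A minimal sketch under that assumption (the function name, the 50-PRB grid, and the subset size are illustrative choices, not values from the paper):

```python
import random

def rpss_select_prbs(total_prbs, subset_size, seed=None):
    """Assumed RPSS behaviour: the femto-cell confines its HUEs to a
    random subset of PRBs, statistically limiting the overlap with any
    particular MUE's allocation -- no signaling or measurements needed."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(total_prbs), subset_size))

# Example: a femto-cell schedules its users on 10 of 50 PRBs.
print(rpss_select_prbs(total_prbs=50, subset_size=10, seed=1))
```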
$$PL_{M,F}\,[\mathrm{dB}] = P_{tx,M}\,[\mathrm{dB}] - P_{rx,M}\,[\mathrm{dB}] \qquad (1)$$

where $P_{tx,M}$ is the transmit power of the strongest interfering macro-cell, $P_{rx,M}$ is the power received by the femto-cell from this strongest interferer, and $PL_{M,F}$ is the path loss between this interferer and the femto-cell.
We define the relationship between the estimated SINR without the femto-cell’s interference ($\gamma_{woi}$), the SINR with femto-cell interference ($\gamma_{wi}$), and the SINR reduction factor ($c$) in Eq. (2). This relationship can be utilized to determine $\gamma_{woi}$, and as a result an expression for the femto-cell’s transmit power can be derived based on known and estimated parameters.

$$\gamma_{woi}\,[\mathrm{dB}] =
\begin{cases}
\frac{1}{c}\,\gamma_{wi}\,[\mathrm{dB}], & \text{if } \gamma_{wi}\,[\mathrm{dB}] > 0\\
c\,\gamma_{wi}\,[\mathrm{dB}], & \text{if } \gamma_{wi}\,[\mathrm{dB}] < 0
\end{cases}
\qquad (2)$$
The femto-cell estimates the received power from the macro-cell using the macro BS’s transmit power ($P_{tx,M}$) and the estimated path loss between the MUE and the macro-cell ($PL_{MUE,M}$):

$$P_{rx,M}\,[\mathrm{dB}] = P_{tx,M}\,[\mathrm{dB}] - PL_{MUE,M}\,[\mathrm{dB}] \qquad (5)$$

The received power from the femto-cell at the MUE is estimated using the femto BS’s transmit power ($P_{tx,F}$), the estimated path loss between the MUE and the femto-cell ($PL_{MUE,F}$), and the estimated wall penetration loss ($L_{ow}$):

$$P_{rx,F}\,[\mathrm{dB}] = P_{tx,F}\,[\mathrm{dB}] - PL_{MUE,F}\,[\mathrm{dB}] - L_{ow}\,[\mathrm{dB}] \qquad (6)$$
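Read together, Eqs. (2), (5), and (6) are simple dB-domain relations that the femto-cell evaluates from known and estimated quantities. A minimal sketch of these three equations (all values in dB; the function names are ours, and Eqs. (3)–(4), which are not part of this excerpt, are omitted):

```python
def sinr_without_femto_db(gamma_wi_db, c):
    """Eq. (2): estimated MUE SINR without femto interference, given the
    SINR with interference and the SINR reduction factor c (e.g. 0.95)."""
    return gamma_wi_db / c if gamma_wi_db > 0 else gamma_wi_db * c

def rx_power_from_macro_db(p_tx_macro_db, pl_mue_macro_db):
    """Eq. (5): received macro-cell power at the MUE."""
    return p_tx_macro_db - pl_mue_macro_db

def rx_power_from_femto_db(p_tx_femto_db, pl_mue_femto_db, l_ow_db):
    """Eq. (6): received femto-cell power at the MUE, including the
    wall penetration loss L_ow."""
    return p_tx_femto_db - pl_mue_femto_db - l_ow_db
```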
Fig. 2. Estimation of the wall penetration loss and of the macro UE’s position. The femto-cell performs uplink power control and behaves like an MUE during the period in which the MUE’s position is estimated
3 Channel Model

Three path loss models are used in this work according to [12], applied depending on the type of link between a transmitter and a receiver. Equation (9) is the path loss model for an outdoor link (useful or interfering) between a macro-cell and an MUE:

$$PL_1\,[\mathrm{dB}] = 15.3 + 37.6 \log_{10} R \qquad (9)$$

where $R$ is the distance between the UE and the macro-cell. Equation (10) is used for a HUE that is served by a HeNB in the same house:

$$PL_2\,[\mathrm{dB}] = 38.46 + 20 \log_{10} R \qquad (10)$$

Finally, Eq. (11) is used for an MUE which is situated outside a house but receives signals from a HeNB:

$$PL_3\,[\mathrm{dB}] = \max\left(15.3 + 37.6 \log_{10} R,\; 38.46 + 20 \log_{10} R\right) + L_{ow} \qquad (11)$$

where $L_{ow}$ is the wall penetration loss.
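For illustration, the model selection of Eqs. (9)–(11) can be captured in a few lines (the 10 dB default wall penetration loss is an illustrative value, not one stated in this excerpt):

```python
import math

def path_loss_db(distance, link, l_ow_db=10.0):
    """Path loss for the three link types of Eqs. (9)-(11):
    'macro-outdoor': macro-cell to an outdoor MUE        (Eq. 9)
    'femto-indoor' : HeNB to a HUE in the same house     (Eq. 10)
    'femto-outdoor': HeNB to an MUE outside the house    (Eq. 11)"""
    pl_macro = 15.3 + 37.6 * math.log10(distance)
    pl_femto = 38.46 + 20.0 * math.log10(distance)
    if link == "macro-outdoor":
        return pl_macro
    if link == "femto-indoor":
        return pl_femto
    if link == "femto-outdoor":
        return max(pl_macro, pl_femto) + l_ow_db
    raise ValueError("unknown link type: %s" % link)

print(path_loss_db(200.0, "macro-outdoor"))   # outdoor macro link at 200 m
print(path_loss_db(10.0, "femto-indoor"))     # indoor HeNB-HUE link at 10 m
```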
All links are modeled with shadow fading following a log-normal distribution with spatial correlation according to [13]. The fast fading model used in this work is a Jakes-like model [14,15]. Hence, the fast fading attenuation depends on both time and frequency, as it considers the delay spread for frequency selectivity and the Doppler spread for time selectivity. The mobility of the users gives rise to the Doppler spread. The power delay profile caused by multi-path propagation, which is the reason behind the frequency selectivity, is modeled using the ITU Pedestrian B channel specification [16], a commonly used medium-delay empirical channel model for office environments. Unlike path loss and slow fading, fast fading differs for each PRB of each user, since the channel is frequency selective.
their way. This is done to avoid the extreme interference they would have to confront inside houses with femto-cells. This implies that when an MUE from outside enters a house, it joins the Closed Subscriber Group of that house, and hence the HeNB does not act as an interference source. Macro-cells do not serve any of the HUEs placed in their respective coverage areas; HUEs are only served through the femto-cells. Figure 3b illustrates the mobility of a femto and a macro user exhibiting the above-mentioned behavior. In Fig. 3b, the red line marks the macro-cell coverage boundary, the 100 m × 100 m yellow rectangle represents the femto-cell interference area, and the light blue rectangle represents the femto-cell coverage area. The difference between the femto-cell coverage area and the interference area is that femto-cells do not serve any users beyond their coverage area, although the MUEs can still receive their power as interference.
In this section, we present the simulation results of our evaluations. The purpose of these evaluations is to study and compare the effects of the two interference mitigation schemes proposed in this paper: FPCS and RPSS. The performance of FPCS is examined using three scenarios with three different SINR reduction factors: c = 95 %, 90 %, and 85 %. These percentages reflect the amount of SINR reduction that is expected at a macro UE due to the presence of a femto-cell. The results of the three FPCS scenarios are compared with the results of RPSS. Additionally, two reference scenarios are used to benchmark the best and the worst performance: ‘No HeNB’ is the ideal scenario, in which femto-cells do not interfere with the macro UEs, and ‘Fixed’ is the worst-case scenario, with maximum interference from femto-cells having fixed transmit powers. Hence, altogether six scenarios are compared; they are listed in Table 2 along with the terms used to represent them in the results. Ten simulations are performed with ten different seeds for each scenario. The confidence intervals in all the result graphs are computed using the Student’s t-distribution.
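For reference, the confidence interval computation used for the result graphs can be sketched as follows (the ten sample values are invented; any per-seed metric, e.g., the mean SINR of one scenario, would be treated the same way):

```python
import numpy as np
from scipy import stats

def t_confidence_interval(samples, confidence=0.95):
    """Two-sided confidence interval for the mean of a small sample
    (one value per simulation seed) using the Student's t-distribution."""
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    sem = samples.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    half = stats.t.ppf((1.0 + confidence) / 2.0, df=n - 1) * sem
    return samples.mean() - half, samples.mean() + half

# e.g. mean SINR (dB) obtained from the ten seeds of one scenario:
print(t_confidence_interval([9.8, 10.1, 9.5, 10.4, 9.9, 10.0, 9.7, 10.2, 9.6, 10.3]))
```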
Table 2. Scenarios compared and the terms used to represent them in the results

Scenario                                  Term
FPCS: 95 % SINR reduction                 ‘95 %’
FPCS: 90 % SINR reduction                 ‘90 %’
FPCS: 85 % SINR reduction                 ‘85 %’
RPSS                                      ‘Random’
No interference from femto-cells          ‘No HeNB’
Femto-cells with fixed transmit power     ‘Fixed’
The types of results collected from the above-mentioned six scenarios are as follows: for FTP users, the SINRs and the download response times are compared during the periods of interference from HeNBs. For VoIP users, the Mean Opinion Scores (MOS), end-to-end delays, and SINRs are compared. Finally, for Video users, the end-to-end delays and SINRs are compared. The MOS measures the subjective quality of a voice call and returns a single scalar score expressing the quality of the call [19]. The MOS values range from 1 to 5, with 5 being the best quality and 1 the worst. The MOS values in the simulations were calculated based on the end-to-end delays and the delay jitter of the VoIP users; the experience of human listeners was not considered.
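The exact delay/jitter-to-MOS mapping used in the simulations comes from [19] and is not reproduced in this excerpt. As an illustration of the idea, the sketch below uses the well-known E-model-style mapping of ITU-T G.107, with the jitter folded into an effective delay; the constants and the jitter handling are our assumptions, not the paper’s:

```python
def mos_from_r_factor(r):
    """ITU-T G.107 mapping from the E-model R-factor to MOS."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

def voip_mos(delay_ms, jitter_ms):
    """Rough delay/jitter-driven MOS estimate (illustrative only).
    Jitter is absorbed into an effective one-way delay (assumed
    de-jitter buffer cost), then the classic delay impairment is
    subtracted from a default R-factor of 93.2."""
    d = delay_ms + 2.0 * jitter_ms           # assumed de-jitter buffering
    impairment = 0.024 * d
    if d > 177.3:                            # G.107 knee for one-way delay
        impairment += 0.11 * (d - 177.3)
    return mos_from_r_factor(93.2 - impairment)

print(round(voip_mos(delay_ms=120.0, jitter_ms=15.0), 2))  # good quality
print(round(voip_mos(delay_ms=400.0, jitter_ms=40.0), 2))  # degraded quality
```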
Figure 4a illustrates the SINR of the VoIP users; Figs. 4b and c depict the performance of their applications in terms of end-to-end delay and MOS. In the SINR comparison of ‘95 %’, ‘90 %’, and ‘85 %’ against the ‘Fixed’ scenario, the three power control scenarios of FPCS outperform ‘Fixed’ with a gain margin ranging from 30.52 % to 42.75 %. This is a clear improvement over the worst-case scenario and demonstrates the ability of FPCS to mitigate the femto-cell interference. However, RPSS shows the best performance, with the caveat that it achieves much lower data rates for the HUEs.
VoIP is an example of a Guaranteed Bit Rate (GBR) real-time application that is sensitive to delays. Usually, an end-to-end delay of more than 150 ms for a VoIP application results in bad call quality [19] and deteriorates user satisfaction. The significant fact is that the two interference mitigation schemes keep the delay below 150 ms.
Fig. 4. Results comparison of VoIP users for the ‘No HeNB’, ‘Random’, ‘95 %’, ‘90 %’, ‘85 %’ and ‘Fixed’ scenarios: (a) VoIP mean SINR, (b) VoIP mean delays, (c) VoIP mean MOS
Fig. 5. Results comparison of FTP users for the ‘No HeNB’, ‘Random’, ‘95 %’, ‘90 %’, ‘85 %’ and ‘Fixed’ scenarios
In contrast, in the ‘Fixed’ scenario the MUEs suffer very bad call quality, with delays far above 150 ms. This indicates the performance enhancement of the macro users’ VoIP application due to the interference alleviation. Since the MOS values depend on the end-to-end delays and the delay jitter of the VoIP users, the MOS is also an important metric for the performance of the VoIP application. Any improvement of the SINR at the macro UEs due to interference mitigation should ultimately be reflected in the performance of the user’s application, and the MOS values indeed indicate the performance enhancement of the VoIP application under the two interference mitigation schemes compared with the ‘Fixed’ scenario.
Figure 5a depicts the mean SINRs of the FTP users for the six scenarios. Here, RPSS achieves a gain margin of 38.92 %, whereas FPCS with sensitivities of ‘95 %’, ‘90 %’, and ‘85 %’ achieves 32 %, 23.19 %, and 17.94 %, respectively, compared with ‘Fixed’. This clearly shows that the two interference mitigation schemes perform better than the ‘Fixed’ scenario in terms of SINR for FTP users. Non-GBR bearers usually carry non-real-time or best-effort services; FTP is an example of such a service. Hence, FTP does not have strict delay requirements, in contrast to real-time or GBR services. Figure 5b depicts the mean download response times of the FTP users across the six scenarios. As expected, the ‘Fixed’ scenario has the highest download response time, at 8.31 s. The FPCS and RPSS scenarios all have better download response times, showing a clear edge over the worst-case user application performance.
Figure 6a depicts the Video users’ mean SINRs for the six compared scenarios. FPCS with sensitivities of ‘95 %’, ‘90 %’, and ‘85 %’ achieves gain margins of 28.34 %, 19.37 %, and 13.59 %, respectively, and RPSS achieves a gain margin of 43.24 % compared with ‘Fixed’. This shows that the two interference mitigation schemes perform better than ‘Fixed’ in terms of SINR for Video users. Figure 6b shows the mean end-to-end delays of the video users. The ‘Fixed’ scheme has the highest delay, of about one second, and its confidence interval is also large, suggesting a higher variation of the delays. The mean end-to-end delays for all other scenarios are below 0.2 s, with a much lower delay variation, a clear improvement over ‘Fixed’. This shows how strongly the worst-case scenario’s video performance is affected, emphasizing the importance of the interference mitigation.
Fig. 6. Results comparison of Video users for the ‘No HeNB’, ‘Random’, ‘95 %’, ‘90 %’, ‘85 %’ and ‘Fixed’ scenarios: (a) mean SINR, (b) mean delays
Fig. 7. Comparison of HUEs for the ‘95 %’, ‘90 %’, ‘85 %’ and ‘Random’ scenarios
However, in all of the above comparisons, FPCS does not perform interference mitigation optimally: RPSS outperforms FPCS in all scenarios. This can be attributed to the amount of fading prevalent at the macro UEs, which is not estimated by the femto-cell in the FPCS interference mitigation process.
Figures 7a and b show, respectively, the number of used PRBs and the throughputs of the four HUEs. It can be observed that the number of PRBs and the throughputs of RPSS are much lower than in the other three scenarios. The other significant observation is that the throughputs and the numbers of used PRBs of FPCS have similar values in all three settings. Hence, it is evident that the performance of the HUEs is limited in RPSS due to the limited number of PRBs. FPCS, on the other hand, provides a more balanced scheme that successfully mitigates the interference on the macro UEs while being able to provide a better service to the femto-cell users.
5 Conclusion

In this paper, two novel interference mitigation schemes were proposed and evaluated, namely FPCS and RPSS. The results of these two schemes were compared against an ideal case, ‘No HeNB’, where there is no interference from femto-cells, and a worst case, ‘Fixed’, where there is maximum interference from femto-cells. The macro users were configured with three types of applications, VoIP, video, and FTP, and their performance was evaluated under the two interference mitigation schemes. The results have shown that as the SINR of the macro users improves, the performance of the user applications also improves; both schemes thus perform efficiently compared with the worst-case situation.
Although RPSS performs better than FPCS with regard to the MUE SINR and the users’ application performance, the femto-cell users suffer because only a subset of PRBs is allocated to them. In addition, in real-life situations the cells can become increasingly loaded with MUEs; as a result, when choosing a subset of PRBs, there might still be a high probability that this subset interferes with certain MUEs. FPCS, on the other hand, balances the performance of the MUEs against that of the HUEs, creating a fair trade-off between the two. It efficiently alleviates the femto-cell’s interference on the MUEs while providing a good service to the HeNB users. The main issue with FPCS is that it is not able to estimate the amount of fading at the MUEs, which decreases the efficiency of the interference mitigation. This has to be studied further, and a solution for estimating these additional effects and dealing with such situations must be devised.
The two schemes have several novel features compared with the current state of the art (SoA): simplicity, low hardware demands, and non-reliance on backhaul communication. Most SoA solutions rely on in-band or out-of-band signaling with a high information exchange, whereas our schemes do not rely on any, which is a major advantage.
References
1. Lopez-Perez, D., Valcarce, A., de la Roche, G., Zhang, J.: OFDMA femtocells: a
roadmap on interference avoidance. IEEE Commun. Mag. 47, 41–48 (2009)
2. Lopez-Perez, D., Guvenc, I., de la Roche, G., Kountouris, M., Quek, T., Zhang, J.:
Enhanced intercell interference coordination challenges in heterogeneous networks.
IEEE Wirel. Commun. 18, 22–30 (2011)
3. Burchardt, H., Bharucha, Z., Haas, H.: Distributed and autonomous resource allo-
cation for femto-cellular networks. In: Signals, Systems and Computers (2012)
4. Claussen, H., Ho, L.T.W., Samuel, L.: Self-optimization of coverage for femtocell
deployments. In: Wireless Telecommunications Symposium (2008)
5. Yavuz, M., Meshkati, F., Nanda, S., Pokhariyal, A., Johnson, N., Raghothaman,
B., Richardson, A.: Interference management and performance analysis of
UMTS/HSPA+ femtocells. IEEE Commun. Mag. 47, 102–109 (2009)
6. Lalam, M., Papathanasiou, I., Maqbool, M., Lestable, T.: Adaptive downlink power
control for HSDPA femtocells. In: Future Network Mobile Summit (2011)
7. Morita, M., Matsunaga, Y., Hamabe, K.: Adaptive power level setting of femtocell
base stations for mitigating interference with macrocells. In: VTC Fall (2010)
8. Bharucha, Z., Saul, A., Auer, G., Haas, H.: Dynamic resource partitioning for
downlink femto-to-macro-cell interference avoidance. EURASIP J. Wirel. Commun. Netw. (2010)
9. Guvenc, I., Jeong, M.-R., Sahin, M., Xu, H., Watanabe, F.: Interference avoidance
in 3GPP femtocell networks using resource partitioning and sensing. In: PIMRC
(2010)
10. Mahapatra, R., Strinati, E.: Radio resource management in femtocell downlink
exploiting location information. In: ANTS (2011)
Scalable and Self-sustained Algorithms for Femto-Cell 17
11. Wang, Y., Pedersen, K., Frederiksen, F.: Detection and protection of macro-users
in dominant area of co-channel CSG cells. In: VTC (2012)
12. 3GPP R4-092042. Simulation assumptions and parameters for FDD HeNB RF
requirement. 3GPP Technical report (2009)
13. Claussen, H.: Efficient modelling of channel maps with correlated shadow fading
in mobile radio systems. In: PIMRC (2005)
14. Lichte, H.S., Valentin, S.: Implementing MAC protocols for cooperative relaying:
a compiler-assisted approach. In: SIMUTools (2008)
15. Köpke, A., Swigulski, M., Wessel, K., Willkomm, D., Haneveld, P.T.K., Parker,
T.E.V., Visser, O.W., Lichte, H.S., Valentin, S.: Simulating wireless and mobile
networks in OMNeT++: the MiXiM vision. In: SIMUTools (2008)
16. ITU-R Recommendation M.1225. Guidelines for evaluation of radio transmission
technologies for IMT-2000. ITU, Technical Report (1997)
17. Zahariev, N., Zaki, Y., Li, X., Goerg, C., Weerawardane, T., Timm-Giel, A.: Opti-
mized service aware LTE MAC scheduler with comparison against other well known
schedulers. In: Koucheryavy, Y., Mamatas, L., Matta, I., Tsaoussidis, V. (eds.)
WWIC 2012. LNCS, vol. 7277, pp. 323–331. Springer, Heidelberg (2012)
18. Zaki, Y., Weerawardane, T., Görg, C., Timm-Giel, A.: Long term evolution (LTE)
model development within OPNET simulation environment. In: OPNETWORK
(2011)
19. Zaki, Y.: Future mobile communications: LTE optimization and mobile network
virtualization. Ph.D. dissertation, University of Bremen (2012)
20. Zaki, Y., Zahariev, N., Weerawardane, T., Görg, C., Timm-Giel, A.: Optimized
service aware LTE MAC scheduler: design, implementation and performance eval-
uation. In: OPNETWORK (2011)
21. Zaki, Y., Weerawardane, T., Görg, C., Timm-Giel, A.: Multi-QoS-aware fair
scheduling for LTE. In: VTC (2011)
22. Ikuno, J., Wrulich, M., Rupp, M.: System level simulation of LTE networks. In:
VTC (2010)
Enhancing Video Delivery in the LTE Wireless
Access Using Cross-Layer Mechanisms
Abstract. The current evolution of the global Internet data traffic shows an increasing demand for video transmissions, which potentially leads to the saturation of mobile networks. To cope with this issue, this paper describes techniques to handle the video traffic load in the last hop of the communication network, i.e., the wireless access. The general idea is to benefit from a cross-layer architecture for efficient video transport, where multiple wireless access technologies, represented by Wi-Fi and next-generation cellular technologies (4G and beyond), interact with the upper layers through an abstract interface. This architecture enables the introduction of enhancements in the LTE-A wireless access: evolved Multimedia Broadcast and Multicast Services (eMBMS) extended with dynamic groupcast communications, video relaying at the Packet Data Convergence Protocol (PDCP) level, and a smart video frame dropping mechanism to provide mobile users with a satisfactory level of Quality of Experience (QoE). These video-aware mechanisms leverage the abstract interface and allow mobile operators to fine-tune their networks while coping with the upcoming increase of mobile video traffic.
1 Introduction
Recent market studies [1] and future technology forecast reports [2] show that the share
of video in global Internet traffic is growing at a rapid pace. It already represents the
majority of the Internet traffic and is going to become dominant in the near future. In
parallel, due to the diffusion of smart mobile phones and tablets, users consume videos
via wireless networks, either local or cellular. Mobile network operators face the
growing challenge of providing wireless accesses tailored to the expected level of QoE
at the user side when consuming Mobile TV, Video on Demand or user-generated
content (upstreaming).
Taking this challenge into consideration, the objective of the MEDIEVAL project
[3] was to enhance the existing network architecture to efficiently deliver video
applications to the mobile users. The designed architecture is composed of four sub-systems: Video Services Control on top, to provision the network services; Transport Optimization (TO), to enhance the video quality using transport and caching mechanisms; Mobility Management (MM), to allow video flow continuation when roaming [4]; and finally, Wireless Access, to optimise the access network functions for video delivery in the last hop through heterogeneous wireless access technologies. Hence, novel mechanisms in the Wireless Access sub-system are designed, focusing on enhanced access techniques which exploit cross-layer optimisations through the interaction with the upper layers, e.g., the application and transport layers. Both contention-based techniques, such as the IEEE 802.11 standard for Wireless Local Area Networks (WLANs) [5], and coordination-based ones, e.g., the Long Term Evolution Advanced (LTE-A) of the Third Generation Partnership Project (3GPP) cellular systems, are covered.
As a main pillar of its global architecture, a wireless abstract interface guarantees a transparent interaction between the underlying wireless technologies and the video traffic-aware upper layers. This interaction is built upon the IEEE 802.21 standard, pictured in Fig. 1, which proposes three different Media Independent Handover (MIH) Services [6] and offers to the upper layer management protocols generic triggers, information acquisition, and the tools needed to perform mobility. The Media Independent Event Service (MIES) provides the framework needed to manage the classification, filtering, and triggering of network events, and to dynamically report the status of the links. The Media Independent Command Service (MICS) allows the upper layer management entities to control the behaviour of the links. The Media Independent Information Service (MIIS) distributes the topology-related information and policies from a repository located in the network. These services result in a cross-layer architecture where the Media Independent Handover Function (MIHF) operates as a relay between the media-specific Link layer entities and the media-agnostic upper layer entities, e.g., the MIH-Users. In the mobile terminal, the MIH-User is usually represented by a Connection Manager (CMGR) whose main role is to decide which path is best suited to reach the application server or the Correspondent Node (CN) located across the Internet [7].
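To make the relay role of the MIHF concrete, the toy sketch below mimics the event flow of the MIES between a media-specific link entity and an MIH-User such as the CMGR; all class, method, and event names are ours, not identifiers from IEEE 802.21:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class LinkEvent:
    name: str        # e.g. "LINK_UP", "LINK_DOWN", "LINK_PARAMETERS_REPORT"
    link_id: str     # e.g. "lte0" or "wlan0"
    params: dict

class MIHFunction:
    """Relays media-specific link events to media-agnostic MIH-Users."""
    def __init__(self):
        self._subscribers: Dict[str, List[Callable[[LinkEvent], None]]] = defaultdict(list)

    def subscribe(self, event_name, handler):
        """Event service: an MIH-User registers for a class of link events."""
        self._subscribers[event_name].append(handler)

    def notify(self, event):
        """Called by a link-layer entity; fans the event out to subscribers."""
        for handler in self._subscribers[event.name]:
            handler(event)

# A Connection Manager reacting to a degraded LTE link:
mihf = MIHFunction()
mihf.subscribe("LINK_PARAMETERS_REPORT",
               lambda ev: print("CMGR: %s reports %s" % (ev.link_id, ev.params)))
mihf.notify(LinkEvent("LINK_PARAMETERS_REPORT", "lte0", {"rsrp_dbm": -115}))
```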
Together with the monitoring and dynamic configuration functions, the wireless components have been enriched with technology-specific functionalities benefiting from the cross-layer architecture. Video applications are characterised by high throughput requirements, i.e., a large bandwidth to ensure a good visual quality, and by a strong sensitivity to jitter. Novel features and techniques should address these constraints. The focus of this paper is the work performed, from a system view, on the upper layers of the LTE-A radio interface, contained in the “LTE-A Specific Enhancement” block shown on the right of Fig. 2. The enhancement applied to the cellular system covers group communications based on the 3GPP evolved Multimedia Broadcast and Multicast Services (eMBMS) standard. It further extends the cell capabilities and coverage thanks to the introduction of a relay at the Layer 3 level between the eNodeB and the User Equipment (UE), and finally, when these methods are not sufficient, it smartly drops part of the video traffic to ensure a target quality to the users. All three techniques can be used independently or complement one another.
The objective of this paper is to describe the enhancements achieved by the project
for the upper layers of the LTE radio interface and provide directions to help the
network operators better deliver video traffic in their cellular networks. The discussion
is organised as follows. Section 2 discusses the optimization of group communications
in the cellular LTE technology, i.e., the improvements proposed for the eMBMS
multicast support. In Sect. 3, relays operating at the Packet Data Convergence Protocol
(PDCP) level, just below the networking layer, are introduced. Their impact on the quality
of the video transmitted in the cell is analysed and evaluated. In Sect. 4, we propose a
mechanism to smooth the load in the cell and avoid visual degradation of the video.
Finally, we conclude the paper by assessing these different techniques, highlighting
their benefits and suitability for future mobile networks.
The first enhancement applied to the LTE-A system addresses group communications.
Since video content uses a large amount of the available transport capacity, distributing
the same data to several users located in the same area wastes radio resources. Conversely, multicasting or broadcasting the service frees the resources that would be consumed if unicast Data Radio Bearers (DRBs) were established, so that they can be used for other users and/or purposes. Multicast communications allow sharing the resources on the wireless hop
when a geographically-close and potentially large group of mobile listeners watches the
same program. In LTE-A, the services broadcast by eMBMS are enhanced to support
dynamic multicast sessions together with user mobility.
In the cellular part of the Wireless Access (WA) architecture, multicast is optimised by supporting and extending the eMBMS bearer service specified in the 3GPP standards [9, 10]. Its objective is to enable point-to-multipoint (p-t-m) communications over the radio interface (or Access Stratum), allowing resources to be shared in the network. The MBMS support has undergone serious revisions within the 3GPP standardization, with the inclusion of new tools and procedures to improve its performance. Notably, the handling of multicast flows disappeared in the transition between the initial and evolved versions of this standard, mostly due to business reasons and to the costs and complexity of deployment. In the LTE and LTE-A systems, only broadcast sessions are proposed.
The Multicast-Broadcast Single Frequency Network (MBSFN) areas hosting the eMBMS, pictured in Fig. 3, are configured semi-statically. When the network is built, some eNodeBs are set up to support point-to-multipoint transmissions, while others, pertaining to reserved cells in the same area, do not offer that service. The MBMS configuration is beaconed over the related cells in two different messages (System Information Blocks, SIBs), independently of the number of listening mobile users in the cell. To avoid the allocation of broadcast resources (an MBMS Radio Bearer, MRB) when the number of users is low, the eNodeB implements a counting procedure, in which the connected mobile terminals (MTs) in the cell are invited to signal themselves back to the base station in the uplink. This procedure is used to perform admission control and allocation of the MRB resources. In more recent advances, mobile nodes are able to inform the network of their interest in, and their capability to receive, MBMS sessions from a certain set of frequencies of the MBSFN, allowing the network entities to further enhance the resource allocation in the cell. This information is transferred to the target eNodeB during the handover preparation phase within a specific MBMS context associated with the MT.
Table 1. Traces obtained in the eNodeB when applying the dynamic Session Start (times in ms)

Step                                                             Start        End
- Final step of MT arrival in the cell (MT connected)            0.000
- MBMS context for service 97 established for the MT             0.011        0.070
- Successful MBMS context setup in the lower layers              2.673        2.666
- First multicast packet from IP to be sent to the MT            14858.290    14858.297
- eMBMS session start triggered                                  14858.301    14858.343
- Procedure on-going, packet sent as unicast, which
  prevents it from being delayed                                 14858.344    14858.360
- Notification: successful completion of the procedure           14922.571    14922.586
- IP multicast packet forwarded on the MBMS bearer               15859.957    15859.983
- IP multicast packet forwarded on the MBMS bearer               16857.716    16857.743
An impact is also expected on the configuration of the radio access when taking the spectrum usage and the resource allocation into account. Multicast flows require a bandwidth reservation based on the dedicated eMBMS bearer parameters received from the upper layers and on the worst Channel Quality Indicator (CQI) of the multicast clients measured in the lower layers. This results in a bad spectrum usage, because users with a robust link underutilise the bandwidth resources. Our solution combines H.264/SVC (Scalable Video Coding) with cross-layer optimization to dynamically increase or decrease the video quality perceived by each user according to the different channel feedback messages, using mechanisms similar to those described in Sect. 4.
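The idea can be made concrete with a toy CQI-to-SVC-layer mapping: instead of dimensioning the whole multicast bearer for the worst reported CQI, each client decodes the base layer plus as many enhancement layers as its own channel supports. The thresholds below are illustrative assumptions, not values from the paper:

```python
def svc_layers_for_client(cqi):
    """Map a client's CQI (1..15) to the number of SVC layers it receives."""
    if cqi >= 12:
        return 3      # base layer + two enhancement layers
    if cqi >= 7:
        return 2      # base layer + one enhancement layer
    return 1          # base layer only

group_cqis = {"ue1": 14, "ue2": 9, "ue3": 4}
print({ue: svc_layers_for_client(c) for ue, c in group_cqis.items()})
# -> {'ue1': 3, 'ue2': 2, 'ue3': 1}: nobody is dragged down to ue3's quality.
```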
This is of particular interest for the Personal Broadcast Service [12] studied by the project, which is expected to gain momentum in the coming years. Here, user-generated video content is distributed to a group of mobile listeners; when they are located in the same area, an eMBMS session can be activated. A typical use case is a group of tourists receiving personalised information from their guide during a visit [13], or the dissemination of a road hazard event in a cooperative vehicular system.
The eMBMS can be coupled with another feature introduced by the project: an LTE-A relay operating on top of Layer 2, which is able to improve the coordination between the unicast and the multicast transmissions in the cell by offloading the eMBMS sessions from the regular user traffic. This is made possible by the flexibility provided by the cross-layer architecture to start the session dynamically in the LTE Point of Attachment (PoA).
The relaying scheme is introduced at the PDCP level in the LTE access network. It is worth noticing that, in parallel to this work, Layer-3 relays were also being studied within 3GPP and included in the LTE-A architecture at the stage 2 level (i.e., high-level design) [14]. The work achieved in the standard focuses on a new interface, the Un, between a dedicated eNodeB (called the Donor eNodeB) and the Relay. Moreover, as we mainly focus on video transmissions, we decided to assess the impact of the delay introduced on video streams by the relaying architecture.
Figure 5 depicts the role played by the Relay Node (RN) in the wireless access architecture. A radio configuration similar to that of 3GPP is adopted: the eNodeB and RN signals are assumed to be differentiated at the physical layer, either by operating each link on a different frequency or by time-division multiplexing. The control plane analysis we perform focuses mostly on the impact on the latency and on the radio interface procedures for network attachment, session setup and tear-down, and detachment of the mobile node or the LTE relay from the network. We consider here that the LTE relay serves as an extension of the network to increase its capacity, and thus it does not move. The analysis also involves the wireless abstract interface, which allows the upper control layers to remain agnostic of the specifics of the LTE technology.
At the initialization phase, the LTE module triggers the attachment of the RN to the LTE eNodeB, signalling that it is actually an RN. When the procedure is over, the RN starts broadcasting the system information in its cell. When an MT connects to the network, the RN informs the eNodeB that a new MT has appeared and retrieves its new cell configuration parameters, differentiating those related to the link with the eNodeB from those related to the link with the MT. A similar but reversed procedure is triggered when the connection has to be reconfigured because a new video session has started at the MT. In the data plane, the RN receives the packets from the PDCP layer on one side and forwards them on the opposite path. It can accommodate eMBMS sessions in an identical manner, potentially providing a different PoA for those MTs that are interested in receiving the multicast communications and alleviating the impact of eMBMS on other types of sessions.
The impact on the control plane translates into additional latency for establishing signalling and data radio bearers during session setup or when executing a handover. Execution traces, recorded by one of our partners in an operational network during the attachment of an MT, show that a radio reconfiguration takes only a very few milliseconds (less than 4 ms), compared with a total attachment time of 1.33 s. It can thus be expected that the impact of adding a relay at the PDCP level on the control plane will be minimal.
The theoretical analysis of the impact of the LTE relay on the data traffic can be split into two parts: first, the impact of the forwarding in the LTE Relay itself, and second, the impact of adding a second radio link before the delivery of the packets to the MT. The second radio link doubles the burden of radio transmissions on the traffic flow. It adds the effect of the Relay-to-eNodeB radio link to the QoS metrics for the delay and the jitter, but this can be compensated by an adaptation of the coding and modulation techniques and parameters used on each link. Packet loss is compensated by the fact that the relay operates at the PDCP level, so that the Layer 2 recovery mechanisms are fully operational.
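A back-of-the-envelope view of this trade-off: each radio hop costs one transmission plus, with some probability, HARQ retransmissions, and the relay doubles the number of hops. The sketch below assumes geometrically distributed retransmissions and the usual 8 ms LTE FDD HARQ round trip; the per-hop error probabilities are illustrative, not measured values:

```python
def two_hop_delay_ms(hop_tx_ms, p_retx, harq_rtt_ms=8.0):
    """Expected one-way air-interface delay over eNodeB -> RN -> MT.
    p_retx is the per-attempt failure probability of one hop; the mean
    number of HARQ retransmissions is then p / (1 - p)."""
    expected_retx = p_retx / (1.0 - p_retx)
    per_hop = hop_tx_ms + expected_retx * harq_rtt_ms
    return 2.0 * per_hop   # the relay doubles the radio hops

# A clean RN<->eNodeB link keeps the added delay small:
print(two_hop_delay_ms(hop_tx_ms=1.0, p_retx=0.1))   # ~3.8 ms
print(two_hop_delay_ms(hop_tx_ms=1.0, p_retx=0.3))   # ~8.9 ms
```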
In order to evaluate the resulting performance of such a scheme, we implemented a simple scenario in a network simulation performed with the open-source simulator ns-3 [19]. There, we show the improvement in terms of throughput achieved in a cellular network when relay nodes are enabled to help the eNodeB deliver the packets to multiple users. In this scenario, we first place 20 users in the coverage area of an eNodeB (transmission power of 30 dBm, bandwidth of 5 MHz), using the Friis propagation loss model. In a second phase, we place two relays a few km from the base station. The base station sends 500 packets of 1024 bytes every 20 ms to each node. The simulation runs do not take signalling into account, which was studied independently as aforementioned, and we assume that the channel between the eNodeB and the relays is ideal. This simplification can be justified by the fact that the RN is considered static, with an optimised radio link towards the eNodeB.
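The Friis model used in the scenario is the standard free-space formula; for reference, a small sketch (the 2.6 GHz carrier frequency and the unit antenna gains are assumptions, since the paper only states the transmit power and the bandwidth):

```python
import math

def friis_rx_power_dbm(tx_dbm, distance_m, freq_hz=2.6e9,
                       g_tx_db=0.0, g_rx_db=0.0):
    """Free-space (Friis) received power:
    P_rx = P_tx + G_t + G_r + 20*log10(lambda / (4*pi*d))."""
    wavelength = 3.0e8 / freq_hz
    path_gain_db = 20.0 * math.log10(wavelength / (4.0 * math.pi * distance_m))
    return tx_dbm + g_tx_db + g_rx_db + path_gain_db

# Received power at 1 km and 3 km from the 30 dBm eNodeB of the scenario:
for d in (1000.0, 3000.0):
    print("%5.0f m: %.1f dBm" % (d, friis_rx_power_dbm(30.0, d)))
```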
Figure 6 shows the comparison of the data reception for the different nodes according to their distance to the eNodeB. The blue points show the reception in the case without relays, whereas the magenta squares show the simulations with two relays. The figure confirms that in all cases the situation of the worst nodes, i.e., those that suffered from losses in the standard case, has improved to a large extent.
This functionality makes it possible to extend the network coverage while still benefiting from the transmission quality and error recovery present in the link layer protocols. MTs closer to the RN than to the eNodeB can access the cell while still obtaining a good communication quality. A larger number of users can be accommodated through the same eNodeB by distributing their load between several relay nodes, hence improving the scalability of the wireless access. The traffic passing through the eNodeB can be increased, compared with a standard MT-eNodeB attachment, since the transmission between the Relay and the eNodeB is expected to be of good quality and can use modulation and coding schemes with low redundancy. The results obtained show that this type of relay has a moderate impact on the general control plane procedures, while drastically improving the transmission and coverage of the LTE cell, which benefits both the network operator and the users.
Even though such relays had been under specification in 3GPP since the beginning of the project, our study has shown how they can positively impact the video traffic delivery. Besides enlarging the coverage and improving the reception quality in the related cells, we propose that such relays be used to separate the eMBMS groupcast listeners from the regular users with unicast traffic, which would remove current limitations faced by operators in deploying the eMBMS. One of the major reasons for not deploying MBMS in previous releases of 3GPP was its radio impact on other types of communications when sharing the same cell. Coupling an LTE RN to the eNodeB to specifically handle the MBMS traffic allows a dedicated node with differentiated physical and medium access parameters to serve as the MBMS PoA for video delivery. Users listening to MBMS broadcast or multicast sessions can be attached to the LTE RN, while the others remain attached to the eNodeB (or to another LTE Relay attached to it) and are unaffected.
In the previous sections, mechanisms were introduced to extend the capacity of the LTE-A cell. However, there are cases when this is not sufficient and sudden heavy traffic load conditions have to be handled. The simple, yet very unpopular, solution consists in denying access to new users or even breaking some existing communications. Accepting all data traffic, on the other hand, means that part of the data packets will not be able to go through and will be dropped in a random fashion at the link layer, which may generate a temporary degradation or even a stalling of the image on the screen [20].
The last mechanism outlined in this paper to improve the transport of video applications in the LTE-A cells instead selects specific video frames in the eNodeB to address overload in the last hop. We propose a cross-layer mechanism which tries to resolve the issue of high occupancy of the Radio Link Control (RLC) buffers by reporting it through the abstract interface to the TO. The upper layers can mark the priority of the IP packets according to their video content (e.g., the SVC video layer). The lower priority packets can then be dropped based on parameters transferred through another cross-layer interaction in the eNodeB.
A cross-layer Video Frames Selection function performs this temporary rate adaptation on the last hop while avoiding deep packet inspection. It classifies and filters the received video frames according to a dedicated mark previously introduced in the IP packet header. When a congestion is detected in the network, the data packets are marked for prioritisation by the TO. The lower priority packets can then be dropped before the video frames are actually handled by the Link layer protocols, according to the receiver capabilities. This reduces the bandwidth occupation and loosens the level of traffic load in the last hop. The process as initially designed performs all the steps inside the PoA itself: detect the congestion, decide on the filtering, and drop the packets. However, considering that a global SVC layer optimization algorithm exists in the TO, an alternative solution has been adopted that keeps the decision and the marking update of the IP packets in the TO, based on the results of its algorithms, while the decision is executed in the LTE-A specific wireless component. This last operation, restricted to the overloaded cell, is accomplished in the eNodeB, after the packets coming from the Core Network have been decapsulated from the General Packet Radio Service (GPRS) Tunnelling Protocol-User (GTP-U) tunnel and before they are encapsulated in the PDCP protocol.
Figure 7 indicates with a (*) the components of the implementation involved in this mechanism. New functions have been introduced in the RRC (Radio Resource Control) and LTE-A specific wireless components at the eNodeB, which retrieve the measurement of the buffer occupancy from the RLC layer and signal an event to the upper layers through the abstract interface when this occupancy reaches a certain threshold corresponding to heavy load conditions. In the case of the initial solution, where the whole process is performed in the eNodeB, a classifier located at the Non-Access Stratum (NAS) driver above the PDCP layer is able to silently drop the least significant video frames, based on the marking of the packets arriving from the IP protocol stack. The classifier operates by comparing the Differentiated Services Code Point (DSCP) field of the IP packet header with an active mask, thus avoiding deep packet inspection of other header or even data fields in the classifier, and of the network layer fields in the wireless access layers. In the alternative solution, on request from the TO, measurements of the planned Physical Resource Blocks (PRBs) and of the total data volume are reported from the MAC layer through the abstract interface, enabling the TO to drop the least important packets directly in the core network. The implemented process affects the eNodeB only and is split between the LTE radio interface protocols (RRC and MAC layers) and the LTE-A specific component, which retrieves and analyses the measurements and then executes the required actions.
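A minimal sketch of the two cooperating pieces, the threshold check on the RLC buffer occupancy and the DSCP-mask classifier, is given below. The names, the threshold value, and the mask semantics are our assumptions; the paper only states that the classifier compares the DSCP field against an active mask:

```python
RLC_BUFFER_THRESHOLD = 0.8          # fraction of buffer occupancy (illustrative)

def congestion_detected(rlc_buffer_fill):
    """eNodeB-side check: report a high-load event to the upper layers
    once the RLC buffer occupancy crosses the configured threshold."""
    return rlc_buffer_fill >= RLC_BUFFER_THRESHOLD

def accept_packet(dscp, active_mask, congested):
    """While the cell is congested, silently drop packets whose DSCP
    matches the active mask, i.e. packets that the TO marked as carrying
    low-priority (e.g. SVC enhancement-layer) video frames."""
    if not congested:
        return True
    return (dscp & active_mask) == 0

congested = congestion_detected(rlc_buffer_fill=0.92)
drop_mask = 0x01                    # assumed mark for droppable frames
print(accept_packet(dscp=0x01, active_mask=drop_mask, congested=congested))  # False: dropped
print(accept_packet(dscp=0x00, active_mask=drop_mask, congested=congested))  # True: kept
```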
Functional results were obtained with a local testing system. This successful test was performed on a small testbed focusing mostly on the network measurements and the congestion detection. Another part of the testbed complemented this evaluation, taking care of the packet dropping, as reported in [21]. The test performed here allowed validating the correct operation of the LTE-A specific module in cooperation with the radio interface protocol layers and the abstract interface. The traces obtained are summarised in Table 2. From a functional point of view, the correct execution of the following features has been verified: detecting the congestion situation in the eNodeB according to the specified threshold, triggering a notification about the high load event to the TO, returning link traffic parameters on request from the upper layers, and finally stopping the specific measurements when the situation has returned to normal, in order to reduce the overhead of the mechanism on the control plane.
Table 2. Traces recorded at the eNodeB during a congestion event (times in s)

Event                                                                       Time
- LTE-A module receives an event subscription for congestion notification.  0.000
- It polls the lower layers periodically to check the cell's correct
  operation.                                                               35.991
- Congestion detected (RLC buffers for MT0 above threshold); a
  notification is sent to the upper layers.                                41.296
- Upper layer (TO) requests periodic measurement retrieval.                41.297
- Link traffic parameters forwarded through the L2.5 Abstraction Layer.    44.421
- After the problem resolution by the TO, the measurements fall back to
  normal conditions.                                                       72.975
- Request received from the TO to stop forwarding the measurements.        72.976
- Request executed by the LTE-A module.                                    75.579
5 Conclusion

This paper has described several enhancements proposed by the MEDIEVAL project to mobile network operators in order to help them distribute video traffic in the wireless cells more efficiently. Our objective is to reduce the load imposed by this specific type of application, which is undergoing a huge growth. With this objective, we have focused on next-generation wireless networks for which we aim at providing video-friendly optimizations. Towards that goal, we have based our architecture on three main pillars: cross-layer abstraction, access network monitoring, and dynamic configuration of the network interfaces. They have served as a basis for the development of innovative features that should improve the current design of operator networks in the last hop. The first concept is based on group communications: we have enhanced the eMBMS to configure dynamic multicast sessions, with a better performance of the session setup procedure, benefiting from the cross-layer design which allows receiving the eMBMS parameters at the eNodeB ahead of the session start. We have evaluated the impact of introducing eNodeB relays operating at the PDCP level on the QoS and on the cell coverage extension, including their use for separating the eMBMS traffic from the legacy services. Finally, we have implemented a cross-layer mechanism to selectively drop IP packets containing lower priority video frames in order to handle heavy load conditions in a specific cell and potentially avoid congestion or access rejection. This filtering applies in the eNodeB, at the junction between the GTP-U tunnel and the PDCP protocol. With these enhancements, we have demonstrated that the abstract interface introduced between the upper layer control entities and the wireless access modules provides additional capabilities to efficiently manage the network traffic and to introduce novel network mechanisms in a video-optimised way. Moreover, the combination of enhanced link-specific mechanisms allows the wireless access to go beyond a simple wireless transmission of data.
Acknowledgments. The research leading to these results has received funding from the
European Community’s Seventh Framework Programme (FP7-ICT-2009-5) under grant agree-
ment n. 258053 (MEDIEVAL project).
References
1. Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2013–2018. http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-520862.html. Accessed May 2014
2. Celtic Plus Purple book, March 2012. http://www.celtic-initiative.org/PurpleBook+/Purplebook.asp. Accessed May 2014
3. MEDIEVAL project. http://www.ict-medieval.eu
4. Costa, R., Melia, T., Munaretto, D., Zorzi, M.: When mobile networks meet content delivery
networks: challenges and possibilities. In: ACM MobiArch, August 2012
5. IEEE Standard for Information Technology-Telecommunications and information exchange
between systems-Local and metropolitan area networks-Specific requirements - Part 11:
Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications,
IEEE Std. 802.11, 2007
6. Piri, E., Pentikousis, K.: IEEE 802.21. Internet Protoc. J. 12(2), 7–27 (2009)
7. Kassar, M., Kervella, B., Pujolle, G.: An overview of vertical handover decision strategies in
heterogeneous wireless networks. Comput. Commun. 31(10), 2607–2620 (2008)
8. Corujo, D., Bernardos, C.J., Melia, T., Wetterwald, M., Badia, L., Aguiar, R.L.: Key
function interfacing for the MEDIEVAL project video-enhancing architecture. In:
Pentikousis, K., Aguiar, R., Sargento, S., Agüero, R. (eds.) MONAMI 2011. LNICST,
vol. 97, pp. 230–243. Springer, Heidelberg (2012)
9. Lecompte, D., Gabin, F.: Evolved multimedia broadcast/multicast service (eMBMS) in
LTE-advanced: overview and Rel-11 enhancements. IEEE Comm. Mag. 50, 68–74 (2012)
10. 3GPP TS 23.246: Multimedia Broadcast/Multicast Service (MBMS); Architecture and
functional description, Release 12
11. Figueiredo, S., Wetterwald, M., Nguyen, T., Eznarriaga, L., Amram, N., Aguiar, R.L.: SVC
multicast video mobility support in MEDIEVAL project. In: Proceedings of Future Network
and Mobile Summit 2012, Berlin, Germany, 4–6 July 2012
12. 3GPP TR 22.947: Study on Personal Broadcast Service (PBS), Release 10
13. Badia, L., Bui, N., Miozzo, M., Rossi, M., Zorzi, M.: Improved resource management
through user aggregation in heterogeneous multiple access wireless networks. IEEE Trans.
Wireless Commun. 7(9), 3329–3334 (2008)
14. 3GPP TS 36.300: Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved
Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2,
Release 10
15. Quer, G., Librino, F., Canzian, L., Badia, L., Zorzi, M.: Inter-network cooperation exploiting
game theory and Bayesian networks. IEEE Trans. Commun. 61(10), 4310–4321 (2013)
16. Wirth, T., Venkatkumar, V., Haustein, T., Schulz, E., Halfmann, R.: LTE-advanced relaying
for outdoor range extension. In: Proceedings of VTC Fall, September 2009
17. Beniero, T., Redana, S., Hämäläinen, J., Raaf, B.: Effect of relaying on coverage in 3GPP
LTE-advanced. In: Proceedings of VTC Spring, April 2009
18. Huang, X., Ulupinar, F., Agashe, P., Ho, D., Bao, G.: LTE relay architecture and its upper
layer solutions. In: Proceedings of IEEE GLOBECOM, December 2010
19. NS-3 simulator. http://www.nsnam.org/. Accessed May 2014
20. Quality of Experience for Mobile Data Networks: White Paper, Citrix, 2013
21. Fu, B., Kunzmann, G., Wetterwald, M., Corujo, D., Costa, R.: QoE-aware traffic
management for mobile video delivery. In: Workshop on Immersive and Interactive
Multimedia Communications over the Future Internet, IEEE ICC 2013, Budapest, Hungary,
9–13 June 2013
Novel Schemes for Component Carrier
Selection and Radio Resource Allocation
in LTE-Advanced Uplink
Abstract. LTE (Long Term Evolution) provides mobile users with high throughput and low latency. In order to meet the requirements of future mobile data traffic, the 3rd Generation Partnership Project (3GPP) has introduced advanced features into the LTE system, including Carrier Aggregation (CA), enhanced MIMO (Multiple Input Multiple Output), and Coordinated Multipoint (CoMP). The enhanced system is called the LTE-Advanced (LTE-A) system. This paper investigates Component Carrier Selection (CCS) and radio resource scheduling for the uplink in LTE-A. In this work, a CCS algorithm depending on the pathloss and the slow fading of the radio signals is developed. Based on the channel conditions and the Quality of Service (QoS) requirements of the users, a Channel and QoS Aware (CQA) uplink scheduler is also designed, which is decoupled in the time and frequency domains. The simulation results demonstrate that the proposed schemes provide good QoS and throughput performance compared to other reference schemes.
1 Introduction
The 3rd Generation Partnership Project (3GPP) has designed the Long Term Evolution (LTE) standard to support high data rates of up to 100 Mbps in downlink and 50 Mbps in uplink [1]. The 3GPP Release 8 series specifies the LTE standards, with enhancements in Release 9. However, these specifications do not meet the 4G requirements set by the ITU-R (International Telecommunication Union Radiocommunication Sector), e.g. a data rate of up to 1 Gbps. To achieve such requirements, the LTE system has been extended with several new features: the 3GPP Release 10 documents specify new technologies designed to improve the performance of LTE. The improved system is termed LTE-Advanced (LTE-A). The main features of LTE-A include Carrier Aggregation (CA), enhanced Multiple Input Multiple Output (MIMO), as well
as Coordinated Multipoint (CoMP). With these new features, LTE-A can support transmission over a bandwidth of up to 100 MHz, compared to only 20 MHz in LTE.
The primary focus of this paper is to design a Component Carrier Selection (CCS) algorithm to perform CA, together with a radio resource allocation scheme for the LTE-A uplink. The function of CA is to aggregate several bands to achieve a wider bandwidth for data transmission. However, a wider bandwidth does not necessarily ensure better performance in uplink; terminals lacking sufficient power, for example, may not benefit from it. Therefore, it is essential to determine whether the frequency bands should be aggregated or not, which is why an efficient CCS scheme is required. The scheme presented in this work is based on a time-variant radio channel model with real-time channel conditions of the users, and takes into account the pathloss and slow fading of the users under mobility. The radio resource allocation scheme presented in this work makes scheduling decisions with awareness of the CCS and of the Quality of Service (QoS) requirements of the mobile terminals.
2 Literature Review
Despite the relative novelty of the topic, a considerable amount of literature is available for downlink CA. The main difference between uplink and downlink transmission is the power constraint of the user terminal. The contiguity constraint on frequency resource allocation to a single user is relinquished in LTE-A. The performance gain of CA in terms of throughput and fairness is investigated in [2] over independently deployed carriers for downlink with system-level simulations. The LTE-A uplink CA is investigated in [3] with the help of a simple CCS algorithm; the results show a performance improvement compared to LTE users. In [4], an advanced CCS algorithm is proposed to optimize the system performance with CA in uplink. The results show an improvement in performance over the algorithm in [3]; however, the authors do not address user mobility. Chunyan et al. [5] investigate a CCS scheme for diverse-coverage CA deployment, in which the Component Carriers (CCs) of an eNodeB have antennas with beams in different directions.
Recent work on radio resource scheduling mainly focuses on LTE. For the LTE downlink, [6] proposes a QoS-aware scheduler, and in [7] its performance is compared with other traditional schedulers. For the LTE uplink, [8] suggests a QoS and channel aware scheduler which takes Power Control (PC) and the contiguity constraint into consideration. For the LTE-A downlink, the authors of [9] employ a user-grouping-based resource allocation algorithm with CA; their results show that the proposed algorithm is fairer to users than the proportional fair algorithm. Various load balancing methods over multiple CCs are analyzed in [10]. For the LTE-A uplink, [11] proposes a cross-carrier scheduling method along with a per-carrier scheduling method, enabling scheduling on several carriers. Liu and Liu [12] propose a subcarrier allocation method which assumes equal power allocation among all subcarriers; the proposed method achieves better sector throughput and cell-edge user throughput. In [13], the work in [12] is further elaborated with new results indicating performance improvement in terms of sector throughput and cell-edge user throughput.
Our motivation is to design schemes for both the CA and the radio resource scheduling functionalities in an LTE-A environment, with the CCS scheme operating under mobility. The scheduler is designed in such a way that the CA functionality is taken into account while making radio resource allocation decisions.
(Figure: carrier aggregation example showing inter-band, non-contiguous aggregation of Band 1 and Band 2.)
In the distance-based reference scheme, users beyond a threshold distance from the eNodeB are assigned to the primary CC only, otherwise both CCs. This algorithm is easy to implement and requires simple calculations; however, different threshold distances give different performances. Achieving the best performance requires a great amount of testing to determine the distance threshold. Moreover, the network environment changes over time, while the distance threshold, once set, does not adapt accordingly.
An effective pathloss threshold based CC selection algorithm was proposed in [4]
to distinguish between power limited and non-power limited LTE-A users.
L_{threshold} = L_{95\%} - \frac{10 \log_{10} K + P_{backoff}}{\alpha} \qquad (1)
where L_{95\%} is the estimated 95th-percentile user pathloss, K is the total number of CCs and α is the pathloss compensation factor used in the PC scheme. P_{backoff} is the estimated power backoff modelling the effects of increased PAPR and CM (Cubic Metric) when a user transmits over multiple CCs simultaneously. If a user is scheduled for transmission on only one CC, there is no power backoff; otherwise, it is set to a fixed value, for example 4 dB or 6 dB. With a higher power backoff, fewer LTE-A users will be assigned multiple CCs due to the limitation of user transmission power. In [4], the users are assumed to be stationary in the cell. When the pathloss of an LTE-A user is higher than L_{threshold}, the user is considered power limited and assigned a single CC; otherwise the user is considered non-power limited and can use multiple CCs for data transmission. Hence, the cell-edge users would not experience performance loss
from being scheduled over multiple CCs, while the non-power limited users can benefit
from the advantages of a wider bandwidth.
This scheme is further extended in this work to achieve better system performance. Instead of assuming that the users are stationary, a time-variant radio channel model is used to obtain the real-time channel conditions of the users. Furthermore, not only the pathloss is considered when determining the threshold and the number of CCs; the slow fading is also taken into account. Therefore, instead of labelling the users as power limited and non-power limited, the CCS is based on the real-time channel conditions of the users. The proposed algorithm can be expressed as:
L_{threshold,mob} = (L + SF)_{95\%} - \frac{10 \log_{10} K + P_{backoff}}{\alpha} \qquad (2)
where SF is the slow fading of the user. When the sum of the user's pathloss and slow fading is higher than the threshold, one CC is assigned; otherwise, the user can use multiple CCs for data transmission.
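To make the rule concrete, the following sketch applies Eq. (2) to per-user samples of pathloss plus slow fading in dB. The sample values and parameters (K = 2 CCs, 4 dB backoff, α = 0.8) are assumptions chosen only for illustration.

```python
# Sketch of the proposed mobility-aware CCS rule of Eq. (2); illustrative only.
import math

def ccs_threshold(samples_db, K, p_backoff_db, alpha):
    """Threshold: 95th percentile of (L + SF) minus (10 log10 K + P_backoff)/alpha."""
    s = sorted(samples_db)
    p95 = s[min(len(s) - 1, round(0.95 * (len(s) - 1)))]
    return p95 - (10 * math.log10(K) + p_backoff_db) / alpha

def assign_ccs(l_plus_sf_db, threshold_db, K):
    """A user with a poor real-time channel gets one CC, otherwise all K CCs."""
    return 1 if l_plus_sf_db > threshold_db else K

samples = [95.0, 102.5, 110.0, 118.0, 125.0, 131.0]   # assumed (L + SF) samples in dB
thr = ccs_threshold(samples, K=2, p_backoff_db=4.0, alpha=0.8)
print(assign_ccs(128.0, thr, K=2))                    # poor channel -> 1 CC
```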
Each bearer is associated with a Quality Class Identifier (QCI), an index associated with predefined values for the priority, delay budget and packet loss rate. Nine QCI classes are defined by 3GPP: four for the GBR (Guaranteed Bit Rate) bearers and five for the non-GBR bearers. Network operators are allowed to define additional classes based on their specific needs. Since each traffic type has different QoS requirements, each bearer (associated with a traffic type) is assigned a single QCI class. In the MAC scheduler, the bearers are distributed into five MAC QoS classes according to [6]: two classes, MAC QoS Class 1 and Class 2 (not considered in this work), for the GBR bearers, and three classes, MAC QoS Class 3, 4 and 5, for the non-GBR bearers. Table 1 shows how MAC QoS classes are mapped onto QCI classes.
The TDPS priority metric value for the bearer k, similar to [8], is expressed as:
\lambda_k(t) = W_k(t) \, \frac{R_{inst,i}(t)}{R_{avg,k}(t)} \qquad (3)
where R_{inst,i}(t) is the instantaneously achievable throughput at time t of user i to which bearer k belongs, R_{avg,k}(t) is the average throughput of bearer k at time t, obtained using an Exponential Moving Average (EMA) time window of 1 s, and W_k(t) is the dynamic QoS weight of bearer k at time t, explained below. The bearers are placed in GBR and non-GBR bearer lists and sorted according to the TDPS metric.
W_k(t) is the TDPS weight of bearer k and is calculated according to the following formula, which takes the QoS requirements of bearer k into consideration:
W_k(t) = \frac{R_{min,k}}{R_{avg,k}(t)} \cdot \frac{\tau_k(t)}{\tau_{max,k}} \cdot q_k(t) \qquad (4)
where R_{min,k} is the bit rate budget, τ_{max,k} is the end-to-end delay budget, R_{avg,k}(t) is the average throughput, τ_k(t) is the packet delay of bearer k, and q_k(t) is a variable set to 10 if τ_k(t) is above the threshold value of bearer k at time t, and to 1 otherwise. The value 10 of q_k(t) raises the metric value of bearer k tenfold and ensures immediate scheduling; this value works well for the traffic load scenarios investigated in this paper. The variable serves to avoid large packet delays for delay-sensitive traffic (e.g. video). The bit rate budget, packet delay budget and delay threshold values [8] used in this work for the QoS classes are given in Table 2. The bit rate budget for a QoS class is defined according to its traffic model in this work; however, network operators can modify the behavior of the scheduler by tuning these values.
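The TDPS computation of Eqs. (3) and (4) can be rendered compactly as below. The bearer fields mirror the symbols defined above; the numeric example values are invented.

```python
# Sketch of the TDPS metric (Eqs. (3) and (4)); example values are invented.
from dataclasses import dataclass

@dataclass
class Bearer:
    r_inst: float          # R_inst,i(t): instantaneously achievable throughput (bit/s)
    r_avg: float           # R_avg,k(t): EMA throughput over a 1 s window (bit/s)
    r_min: float           # R_min,k: bit rate budget (bit/s)
    tau: float             # tau_k(t): current packet delay (s)
    tau_max: float         # tau_max,k: end-to-end delay budget (s)
    tau_threshold: float   # delay threshold of the bearer's QoS class (s)

def tdps_weight(b: Bearer) -> float:
    q = 10.0 if b.tau > b.tau_threshold else 1.0   # q_k(t): boost late bearers tenfold
    return (b.r_min / b.r_avg) * (b.tau / b.tau_max) * q

def tdps_metric(b: Bearer) -> float:
    return tdps_weight(b) * (b.r_inst / b.r_avg)   # lambda_k(t) of Eq. (3)

bearers = [Bearer(2e6, 1e6, 5e5, 0.04, 0.15, 0.05),
           Bearer(1e6, 8e5, 5e5, 0.06, 0.15, 0.05)]    # second exceeds its threshold
ranked = sorted(bearers, key=tdps_metric, reverse=True)  # sorted bearer list
```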
a. If the TBS is greater than the buffer size, the bearer can be completely served in this TTI and the PC constraint of the UE is checked.
(i) If the number of PRBs reserved for the user of this bearer exceeds the PC limit, the reserved PRB is not allocated; the UE is served with the remaining reserved PRBs, if any, and discarded from the candidate list for this TTI.
(ii) Otherwise the reserved PRB is allocated to the bearer and the bearer is scheduled.
b. If the TBS is smaller than the buffer size, the bearer waits until the remaining bearers from the subset candidate list have gone through the above procedure.
4. Once all the bearers in the candidate list have a PRB reserved or have been discarded after being scheduled, the above process is repeated for the remaining bearers, and the effective SINR of more than one PRB is calculated using link-to-system mapping. This procedure continues until all the PRBs are allocated or all the bearers in the subset candidate list are served.
5. Once the subset list is completely served and there are still bearers to be served in the non-GBR candidate list, then, if PRBs are available, the (N+1)-th bearer in the candidate list is moved to the subset candidate list and provided PRBs according to the above procedure (see the sketch after this list). This continues until there are no more PRBs available or no more bearers to serve in the candidate list.
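A highly simplified sketch of this allocation loop follows, under stated assumptions: a fixed transport block size per PRB replaces the SINR-based TBS lookup and link-to-system mapping, and the PC constraint is reduced to a plain per-bearer PRB cap. All names are ours.

```python
# Simplified sketch of the FDPS allocation loop described above; the real
# scheduler derives the TBS from the effective SINR via link-to-system mapping.

def fdps_allocate(bearers, n_prbs, tbs_per_prb, buffer_size, pc_limit):
    """bearers: ids in descending TDPS order; buffer_size: id -> buffered bits;
    tbs_per_prb: assumed fixed TBS per PRB; pc_limit: assumed max PRBs per bearer."""
    reserved = {b: 0 for b in bearers}
    free, active = n_prbs, list(bearers)
    while free > 0 and active:
        for b in list(active):
            if free == 0:
                break
            reserved[b] += 1                                   # reserve next-best PRB
            free -= 1
            if reserved[b] > pc_limit:                         # step a(i): PC exceeded
                reserved[b] -= 1                               # drop the extra PRB and
                free += 1                                      # serve with the rest
                active.remove(b)
            elif reserved[b] * tbs_per_prb >= buffer_size[b]:  # step a(ii): fully served
                active.remove(b)
            # otherwise (step b): wait for the next pass over the candidate list
    return reserved
```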
5 Simulation Results
The OPNET Modeler is used as the simulation and analysis tool in this work. The LTE-A model implementation is based on the LTE simulation model designed in [15]. The simulations are performed under the parameter settings and traffic models given in Table 3, in accordance with [16].
Figure 6 shows the average voice delay. Since the voice users are defined as GBR users and served with the highest priority, the voice packet end-to-end delay does not vary much across the different CCS algorithms. The video and the file transfer users are non-GBR, so their performance varies with the CCS scheme. Figures 7 and 8 present the average video end-to-end delay and the average file upload time. The case with pathloss plus slow fading as the threshold gives lower video delay and file upload time than the one with the pathloss alone as the threshold. The distance thresholds also differ among themselves: 200 m gives good results, whereas 100 m performs poorly.
On the other hand, the BET scheduler is designed to provide equal throughput to all users, so the delay requirements of video users are not taken into account. The MaxT scheduler also neglects the requirements of the video bearer, and the PF scheduler likewise does not consider the video QoS. In summary, the CQA scheduler is QoS-aware and able to treat traffic types according to their respective requirements.
6 Conclusion
In this paper, we investigated several schemes for CCS and MAC scheduling. The CCS scheme proposed in this work improves on the one proposed in [4]. We also analysed a reference scheme based on the distance from the eNodeB. Simulation results illustrated that the proposed scheme can enhance the user QoS and cell throughput performance. We also designed a QoS-based scheduler (CQA) with channel awareness, and analysed the performance of the TDPS metric algorithm of the CQA scheduler with its dynamic QoS weight by comparing it with other contemporary TDPS metric algorithms. The simulation results show a promising performance of the designed approach. Our future goal is to extend our model to multiple contiguous and non-contiguous CCs.
References
1. 3GPP, 3GPP TSG RAN Future Evolution Workshop, Toronto, Canada, Technical Paper, November 2004
2. Chen, L., Chen, W., Zhang, X., Yang, D.: Analysis and simulation for spectrum aggregation
in LTE-Advanced system. In: IEEE 70th Vehicular Technology Conference, Anchorage,
AK, USA, 20–23 September 2009
3. Wang, H., Rosa, C., Pedersen, K.I.: Performance of uplink carrier aggregation in LTE-
Advanced systems. In: IEEE 72nd Vehicular Technology Conference, Ottawa, ON, Canada,
6–9 September 2010
4. Wang, H., Rosa, C., Pedersen, K.I.: Uplink component carrier selection for LTE-Advanced
systems with carrier aggregation. In: IEEE International Conference on Communications,
Kyoto, Japan, 5–9 June 2011
5. Chunyan, L., Wang, B., Wang, W., Zhang, Y., Xinyue, C.: Component carrier selection for
LTE-A systems in diverse coverage carrier aggregation scenario. In: IEEE 23rd International
Symposium on Personal Indoor and Mobile Radio Communications, pp. 1004–1008.
Sydney, NSW, Australia, 9–12 September 2012
6. Zaki, Y., Weerawardane, T., Goerg, C., Timm-Giel, A.: Multi-QoS-aware fair scheduling for
LTE. In: IEEE 73rd Vehicular Technology Conference, Yokohama, Japan, 15–18 May 2011
7. Zahariev, N., Zaki, Y., Li, X., Goerg, C., Weerawardane, T., Timm-Giel, A.: Optimized
Service Aware LTE MAC Scheduler with Comparison against Other Well Known
Schedulers. In: Koucheryavy, Y., Mamatas, L., Matta, I., Tsaoussidis, V. (eds.) Wired/
Wireless Internet Communication. Lecture Notes in Computer Science, vol. 7277, pp. 323–
331. Springer, Heidelberg (2012)
8. Marwat, S.N.K., Weerawardane, T., Zaki, Y., Goerg, C., Timm-Giel, A.: Performance
evaluation of bandwidth and QoS aware LTE uplink scheduler. In: Koucheryavy, Y.,
Mamatas, L., Matta, I., Tsaoussidis, V. (eds.) Wired/Wireless Internet Communication.
Lecture Notes in Computer Science, vol. 7277, pp. 298–306. Springer, Heidelberg (2012)
9. Songsong, S., Chunyan, F., Caili, G.: A resource scheduling algorithm based on user
grouping for LTE-Advanced system with carrier aggregation. In: International Symposium
on Computer Network and Multimedia Technology, Wuhan, China, 18–20 January 2009
10. Wang, Y., Pedersen, K.I., Mogensen, P.E., Sorensen, T.B.: Carrier load balancing methods
with bursty traffic for LTE-Advanced systems. In: IEEE 20th International Symposium on
Personal, Indoor and Mobile Radio Communications, pp. 22–26. Tokyo, Japan, 13–16
September 2009
11. Wang, Y., Pedersen, K.I., Sorensen, T.B., Mogensen, P.E.: Carrier load balancing and
packet scheduling for multi-carrier systems. IEEE Trans. Wirel. Commun. 9(5), 1780–1789
(2010)
12. Liu, F., Liu, Y.: Uplink scheduling for LTE-Advanced system. In: IEEE International
Conference on Communication Systems, pp. 316–320. Singapore, 17–19 November 2010
13. Liu, F., Liu, Y.: Uplink channel-aware scheduling algorithm for LTE-Advanced system. In:
7th International Conference on Wireless Communications, Networking and Mobile
Computing, Wuhan, China, 23–25 September 2011
14. Boussif, M., Quintero, N., Calabrese, F.D., Rosa, C., Wigard, J.: Interference based power
control performance in LTE uplink. In: IEEE International Symposium on Wireless
Communication Systems, pp. 698–702, 21–24 October 2008
15. Zaki, Y., Weerawardane, T., Goerg, C., Timm-Giel, A.: Long Term Evolution (LTE) model
development within OPNET simulation environment. In: OPNET Workshop 2011,
Washington, DC, USA, 29 August–1 September 2011
16. Marwat, S.N.K., Weerawardane, T., Zaki, Y., Goerg, C., Timm-Giel, A.: Design and
performance analysis of bandwidth and QoS aware LTE uplink scheduler in heterogeneous
traffic environment. In: 8th International Wireless Communications and Mobile Computing
Conference, Limassol, Cyprus, 27–31 August 2012
17. Cavers, J.K.: Mobile Channel Characteristics. Kluwer Academic Publishers, Boston (2002)
Optimising LTE Uplink Scheduling by Solving
the Multidimensional Assignment Problem
1 Introduction
In recent years the demand for higher data rates in mobile networks has significantly increased, making higher data rates a major goal in the optimisation of mobile networks. The efficiency of such communication systems can be described by the Cell Spectral Efficiency (CSE), which is defined as the achieved data rate per unit of radio spectrum and cell.
LTE [1] is a mobile communication system applying OFDM with a frequency reuse factor of one. Therefore, the complete frequency spectrum used by the system is available in every cell. This allows high flexibility in resource assignment, but causes severe inter-cell interference, especially between neighbouring cells.
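Written out in our own notation (not taken from the paper), the CSE definition reads

\mathrm{CSE} = \frac{\sum_{u} R_u}{B \cdot N_{cells}} \qquad \left[\text{bit/s/Hz/cell}\right]

where R_u is the achieved data rate of user u, B the radio spectrum used by the system and N_{cells} the number of cells.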
A different assignment in this simple example, with user terminals UT1,1 and UT2,2 on the same resources and UT1,2 and UT2,1 on the same resources, leads to other sum data rates: due to the different interference situation, the sum data rates r1,2 and r2,1 result. Thus, the different possible resource assignments cause different interference situations between the UTs and therefore lead to a different overall system throughput. The aim is to optimise this overall system throughput, and thereby the cell spectral efficiency.
Fig. 1. Example of a possible resource assignment for two cells and two user terminals
per cell.
For scenarios with more than two cells the sum data rates are written in a tensor with as many indices as there are cells. This leads to a multidimensional assignment problem [16], which is defined for three dimensions by [17]:
\max \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k=1}^{N} c_{ijk} \, r_{ijk} \qquad (5)

\text{s.t.} \quad \sum_{j=1}^{N} \sum_{k=1}^{N} c_{ijk} = 1, \quad \text{for } i = 1, 2, \ldots, N, \qquad (6)

\sum_{i=1}^{N} \sum_{k=1}^{N} c_{ijk} = 1, \quad \text{for } j = 1, 2, \ldots, N, \qquad (7)

\sum_{i=1}^{N} \sum_{j=1}^{N} c_{ijk} = 1, \quad \text{for } k = 1, 2, \ldots, N, \qquad (8)

c_{ijk} \in \{0, 1\} \qquad (9)
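Since each index set is matched exactly once, the axial three-index problem amounts to selecting two permutations. A brute-force sketch, in the spirit of the exhaustive search used later for the small scenarios, could look as follows; the toy rate tensor is invented.

```python
# Brute-force sketch for the three-dimensional assignment of Eqs. (5)-(9):
# terminal i of cell 1 shares resources with terminal sigma(i) of cell 2 and
# terminal pi(i) of cell 3; r[i][j][k] is the pre-computed sum-data-rate tensor.
from itertools import permutations

def solve_3d_assignment(r):
    n = len(r)
    best_val, best = float("-inf"), None
    for sigma in permutations(range(n)):
        for pi in permutations(range(n)):
            val = sum(r[i][sigma[i]][pi[i]] for i in range(n))
            if val > best_val:
                best_val, best = val, (sigma, pi)
    return best_val, best   # O((n!)^2 * n): tractable only for small scenarios

r = [[[1.0, 0.4], [0.3, 0.9]],
     [[0.2, 0.8], [0.7, 0.1]]]        # invented 2x2x2 sum data rates
print(solve_3d_assignment(r))
```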
2 Simulation Scenarios
Different scenarios were taken into account to evaluate the impact of cooperative resource management in groups. First, small scenarios representing simple indoor femto-cell deployments with a closed subscriber group were analysed. The scenario is adapted from the ITU IMT-Advanced indoor scenario described in [14]. All links are set to have non-line-of-sight (NLoS) channel conditions. The user terminals are randomly distributed within cell areas of 100 m radius. As described in the previous section, all coordinated cells must serve the same number of user terminals in order to formulate the problem as a multidimensional assignment problem. A simple approach assuring an equal number of terminals scheduled in each TTI is chosen: the number of users served by each base station is equal, so no time-domain scheduling is necessary.
The layout of the scenario setup is shown in Fig. 2. Results have been obtained for different numbers and locations of cooperatively managed base stations; the analysed groupings are depicted in Fig. 3. The assignment problem in these scenarios has been solved using a brute-force algorithm, so that the optimal user assignment for the cooperatively managed base stations is found.
The more cells are cooperatively managed by the central scheduling entity in the cloud, the more potential for optimisation exists: the interference can be reduced and the system throughput increased. It is therefore expected that the cooperative resource management of all base stations will lead to a higher system throughput than the cooperative management of two cells. Furthermore, the coordination should at least not give a worse throughput than the uncoordinated scenario.
In a next step the coordination of different groups has been applied to a
larger scenario. The urban macro-cell scenario has been described by the ITU
in [14]. For this evaluation 21 sectors at 7 base station sites are simulated. The
coordination in groups enables us to analyse different coordination setups of this
Fig. 3. Analysed groupings for the small scenarios: (a) neighbouring base stations cooperatively managed, (b) neighbouring base stations cooperatively managed, (c) opposing base stations cooperatively managed.
Scenarios with the same random number seed but different groupings are compared. This ensures that differences in performance are caused by the different groupings and not by more or less favourable user terminal deployments.
Fig. 4. Urban macro-cell scenario groupings, including (c) six base stations cooperatively managed and (d) two base stations cooperatively managed. Base stations are indicated at their sites. Arrows indicate the main radiation directions of the sector antennas. Ungrouped cells are left blank. The centre cells are marked by a blue border.
3 Results
The results in Fig. 5 show the average cell throughput for the different groupings of Fig. 3 and for two different cell centre distances. The black lines in the figure (as in all following figures) indicate the 95 % confidence interval. The results prove the assumption that it is better to coordinate all base stations than only a few; this holds independently of the distance. It can also be seen that little or no improvement is achieved by coordinating only two base stations, no matter which ones are coordinated. For a more specific analysis, the ratio with which a certain grouping achieves the best throughput is examined.
Fig. 5. Average cell throughput for different groupings as in Fig. 3 for the small
scenarios
The ratio of how often a particular grouping led to the highest overall CSE was evaluated; this grouping is referred to as the "winner" for the particular user terminal deployment. In Fig. 6 the winning coordination setups for the small scenarios are shown. As expected, coordinating all base stations leads to the highest throughput. Furthermore, the results show that it matters which base stations are coordinated. As coordinating all base stations leads to the highest throughput in most of the cases, this grouping is taken out of the evaluation in this figure. Then, in almost 90 % of the simulation runs, the coordination of one of the two possible neighbouring groupings led to the best system throughput.
On the other hand, in a few simulation runs the neighbouring groupings lead to the worst throughput. By not coordinating all base stations it is thus even possible to obtain a worse solution than without any coordination. Nevertheless, coordination gives a better performance in the large majority of simulation runs.
Fig. 6. Ratio of different groupings (see Fig. 3) for the small scenarios for achieving
the best/worst throughput over 100 simulation runs with a cell distance of 200 m using
the brute force algorithm for optimising the user assignment
In Fig. 7 the ratio of achieving the best throughput for the urban macro-cell scenarios is shown. As expected, in most simulation runs the best performance was achieved by coordinating six base stations. The grouping which led to the best throughput in the fewest simulation runs is the grouping of only two base stations. The coordination of three cells has been investigated with different configurations, as explained in Sect. 2. If the six-cell grouping is taken out of the evaluation, the best throughput is achieved by the grouping of three cells of one site in most of the runs (see Fig. 8); only in 28 % of the simulations does the three-base-station grouping spanning different sites lead to the best throughput.
The average CSE gain over an uncoordinated scenario is shown in Fig. 9. The coordination of six base stations leads to a gain in CSE of more than 10 %. This confirms the expectation that the more base stations are coordinated, the higher the gain in CSE. Furthermore, it shows again that the two groupings with three base stations perform differently: the coordination of cells belonging to one base station site leads to a better performance than the coordination of cells served by base stations of different sites, with an average CSE improvement of 6 % for the former compared to only 2.6 % for the latter. This is beneficial with regard to the information that must be exchanged in order to enable coordination.
Fig. 7. Ratio of different groupings (see Fig. 4) for the urban macro-cell scenario achieving the best throughput over 150 simulation runs.
The reason is that the coordinated user terminals of the one-site grouping often lie in an area where the antenna attenuation towards user terminals of the neighbouring sector is not very high, and therefore a high interference power is received. Interference from other sites experiences a higher attenuation due to the higher path loss: the maximum antenna attenuation among sectors of the same site is 17 dB [14], while the attenuation difference resulting from path loss between different sites can be much higher.
Fig. 8. Ratio of different groupings (see Fig. 4) for the urban macro-cell scenario achieving the best throughput over 150 simulation runs, without the grouping of six cells.
Therefore, coordinating sectors of the same site can avoid very low SINR values through the reduced interference power.
Fig. 9. Average gain in CSE for the different groupings (see Fig. 4) for the urban
macro-cell scenario
References
1. 3GPP, Evolved Universal Terrestrial Radio Access (E-UTRA); LTE physical layer; General description, Technical report 3GPP 36.201 (2009)
2. Rost, P., Bernardos, C.J., De Domenico, A., Di Girolamo, M., Lalam, M., Maeder,
A., Sabella, D., Wübben, D.: Cloud technologies for flexible 5G radio access net-
works. IEEE Commun. Mag. 52(5), 68–76 (2014)
3. Mühleisen, M., Henzel, K., Timm-Giel, A.: Design and evaluation of scheduling
algorithms for LTE femtocells. In: ITG Fachbericht Mobilkommunikation 2013,
Osnabrück (2013)
4. Girici, T., Zhu, C., Agre, J.R., Ephremides, A.: Proportional fair scheduling algo-
rithm in OFDMA-based wireless systems with QoS constraints. J. Commun. Netw.
12(1), 30–42 (2010)
5. Hamza, A.S., Khalifa, S.S., Hamza, H.S., Elsayed, K.: A survey on inter-cell inter-
ference coordination techniques in OFDMA-based cellular networks. IEEE Com-
mun. Surv. Tutorials 15(4), 1642–1670 (2013)
6. Boudreau, G., Panicker, J., Guo, N., Chang, R., Wang, N., Vrzic, S.: Interference
coordination and cancellation for 4G networks. IEEE Commun. Mag. 47(4), 74–81
(2009)
7. Kwan, R., Leung, C.: A survey of scheduling and interference mitigation in LTE.
J. Electr. Comput. Eng. 2010, 10 (2010). Hindawi Publishing Corporation, New
York
8. Shi, Z., Luo, Y., Huang, L., Gu, D.: User fairness-empowered power coordination in
OFDMA downlink. In: 2011 IEEE Vehicular Technology Conference (VTC Fall),
San Francisco, pp. 1–5 (2011)
9. Guo, W., Wang, X., Li, J., Wang, L.: Dynamic fair scheduling for inter-cell interfer-
ence coordination in 4G cellular networks. In: 2nd IEEE/CIC International Con-
ference on Communications in China (ICCC), Xi’an, pp. 84–88 (2013)
10. Liang, Y.-S., Chung, W.-H., Yu, C.-M., Zhang, H., Chung, C.-H., Ho, C.-H., Kuo,
S.-Y.: Resource block assignment for interference avoidance in femtocell networks.
In: 2012 IEEE Vehicular Technology Conference (VTC Fall), Quebec City, pp. 1–5
(2012)
11. Zulhasnine, M., Changcheng H., Srinivasan, A.: Efficient resource allocation for
device-to-device communication underlaying LTE network. In: IEEE 6th Interna-
tional Conference on Wireless and Mobile Computing, Networking and Communi-
cations (WiMob), Niagara Falls, pp. 368–375 (2010)
12. Tabassum, H., Dawy, Z., Alouini, M.-S.: Sum rate maximization in the uplink of
multi-cell OFDMA networks. In: 7th International Wireless Communications and
Mobile Computing Conference (IWCMC), Istanbul, pp. 1152–1157 (2011)
13. Garcia Luna, J. A., Mühleisen, M., Henzel, K.: Performance of a heuristic uplink
radio resource assignment algorithm for LTE-advanced. In: ITG Fachbericht
Mobilkommunikation 2011, Osnabrück (2011)
14. ITU, ITU-R M.2135: Guidelines for evaluation of radio interface technologies for IMT-Advanced, Technical report (2009)
15. 3GPP, Evolved Universal Terrestrial Radio Access (E-UTRA); LTE physical layer;
Physical channels and modulation, Technical Report 3GPP 36.211 (2009)
16. Pierskalla, W.P.: Letter to the editor - the multidimensional assignment problem.
Oper. Res. 16(2), 422–431 (1968)
17. Balas, E., Saltzman, M.J.: An algorithm for the three-index assignment problem.
Oper. Res. 39(1), 150–161 (1991)
Virtualization and Software
Defined Networking
SDN and NFV Dynamic Operation of LTE EPC
Gateways for Time-Varying Traffic Patterns
1 Introduction
Today's network operators are faced with steadily increasing network traffic and the limited flexibility of currently deployed architectures. This lack of flexibility leads to an inefficient use of the available resources, which in turn leads to decreasing revenues for operators [1] when the changing dynamics of user demands cannot be fully considered: network operators let their networks regularly undergo periods of over- and under-utilization. New concepts, namely Network Function Virtualization (NFV) and Software Defined Networking (SDN), have emerged over the last few years that may allow network operators to operate their network resources in a more fine-granular way [2] and to apply dynamic network changes over time.
The flexibility in resource allocation for network components is obtained by NFV, where network components are hosted as virtual components on virtualized commodity hardware and can be migrated between different virtualized environments, i.e., datacenters. Flexibility in control is provided by SDN,
where the network traffic can be shaped dynamically at run time with centralized control. Both concepts allow operators to exploit more gains from dimensioning and operating the mobile network considering time-varying traffic patterns, to achieve more efficient load balancing or energy savings.
The adoption of both concepts to increase the flexibility of existing networks has been discussed in related work. Google [3] set up an SDN-based architecture for the WAN interconnecting their datacenters, where SDN allows them to plan network operation and resource allocation according to their applications in advance, driving their network utilization to nearly 100 %. A similar approach is introduced in [4], where the network is dynamically adapted to the demands of big-data applications. The consideration of traffic patterns in the embedding of virtual networks was shown in [5]. Regarding the application of SDN and NFV in mobile networks, some studies have discussed the architecture of a virtualized mobile network, as in [6], and its different advantages and use-cases. First migration steps have been presented in [7], which utilize virtualized resources hosted in datacenters only to offload the network traffic within the mobile core. Additionally, conceptual architectures and use-cases of an SDN-based mobile core have been discussed in [8]. In our previous work, a qualitative analysis of the benefits of SDN and NFV for the mobile core has been presented in [9]. Additionally, we have studied the influence of SDN and NFV on the gateway and datacenter placement within the mobile core in [10], however considering only uniform traffic demands. Hence, existing work still lacks a quantitative evaluation of the impact of virtualization and SDN on mobile networks, and of their flexibility gain considering the time-varying property of the traffic.
In this paper, we consider a mobile network architecture with virtual components and SDN control, which offers fine-granular control of the available resources. Our main attention is drawn to the application of the NFV and SDN concepts to the high-volume data-plane within the mobile core network. Hence, as a first investigation, we focus on the LTE core network elements which handle both control-plane and data-plane, namely the Serving Gateway (SGW) and the PDN Gateway (PGW).
Our objective is to find the optimal datacenter location(s), hosting the virtual gateway components, which achieves a minimum transport network load while considering time-varying traffic patterns and a data-plane delay budget. We provide further approaches to maximize power savings by adapting the datacenter operation according to the traffic patterns and datacenter resources.
The remainder of the paper is structured as follows. In Sect. 2, we introduce the architecture that enables a traffic-pattern-aware orchestration of datacenters and network traffic. In Sect. 3, we give a short overview of measurement papers that back up our assumption of predictable traffic. In Sect. 4, we introduce our new models that optimize the datacenter placement and resource allocation for the proposed architecture. In Sect. 5, we show the results of our analysis, and we finally conclude our work in Sect. 6.
2 Architecture
This section describes the architecture that applies virtualization and SDN to the mobile core network gateways in order to achieve dynamic resource allocation with respect to traffic demands. In particular, we discuss the required components and the interaction between them.
The considered architecture applies the concept of NFV, where the current mobile core network gateways are transformed into virtual instances hosted on a datacenter platform, with SDN Network Elements (NEs) at the transport network, as shown in Fig. 1. Hence, the operator gets the advantage of deploying and operating the core gateways in a datacenter environment, where datacenters provide a flexible and dynamic allocation of the intra-datacenter network as well as of the computational resources for each virtualized gateway.
Within the transport network, each gateway is replaced by an SDN NE, which is responsible for transporting and steering the traffic that comes from the access network or external data networks and is destined for the virtual gateway instances at the datacenters. These replacement SDN NEs offer the capability of changing the traffic routes and forwarding the traffic to different datacenters when needed, which requires that all datacenters be reachable from any SDN NE at the transport network.
Fig. 1. Architecture of virtual mobile core gateways and SDN transport NEs
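As an abstract illustration of the steering decision these SDN NEs enable, the sketch below picks, per demand, a reachable datacenter within a delay budget that minimises the transport load. The 15 ms budget matches the evaluation later in the paper; the data structures are assumptions, and no real controller API is modelled.

```python
# Illustrative steering decision at the SDN NEs; data structures are assumed.

def steer(demands, dcs, delay_ms, load, budget_ms=15.0):
    """demands, dcs: ids; delay_ms[d][c]: data-plane delay of demand d via DC c;
    load[d][c]: transport network load if demand d is served by DC c."""
    choice = {}
    for d in demands:
        feasible = [c for c in dcs if delay_ms[d][c] <= budget_ms]
        # pick the reachable DC minimising the transport load, if any is feasible
        choice[d] = min(feasible, key=lambda c: load[d][c]) if feasible else None
    return choice
```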
2.2 Orchestration
In the introduced architecture, multiple orchestration and control elements are needed, as illustrated in Fig. 1. In the following, we explain the Datacenters Orchestrator (DC-O), the SDN Controller (SDN-C), and the Operator Central Controller (OCC).
for different vantage points, for up to four major U.S. carriers including UMTS networks, and for mega events.
In [11], an event-based analysis of cellular traffic for the Super Bowl is provided, which shows that there are traffic peaks related to events, e.g., the half-time show or a power outage. In [12], the authors investigate the traffic proportion of companies, e.g. Google, Facebook, Akamai, and Limelight; the traffic shows a time-dependent behavior, with Google having the highest proportion. Reference [13] analyzed user activity at different locations in order to investigate users' mobility. Reference [14] shows that traffic patterns for wired and wireless networks affect each other, with more users utilizing the mobile network during commuting times. Further, different points in space show different but temporally correlated traffic intensities. From [15], a correlation can be concluded even across different applications.
calculates the aggregated traffic at time t, where time_{c,sgw}(t) is the local time of city c calculated depending on the local time of sgw, and b_{c,sgw} is an indicator of whether city c is connected to sgw or not. Note again that cities and gateways may belong to different time zones; thus, the traffic intensity of a city c always depends on c's local time, which has to be calculated depending on the current time, i.e., the time zone of the sgw.
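A small sketch of this aggregation, in the spirit of Eq. (3), is given below. The intensity values are those of Table 1 further down; the city data and the reduction of time zones to whole-hour offsets are illustrative assumptions.

```python
# Sketch of the per-gateway traffic aggregation described here (cf. Eq. (3)).

def gateway_traffic(cities, sgw, t, intensity, sgw_tz):
    """cities: (name, population, connected_sgw, tz_offset) tuples;
    intensity: 24-entry list as in Table 1; t: hour in the sgw's time zone."""
    total = 0.0
    for name, pop, connected_sgw, tz in cities:
        b = 1 if connected_sgw == sgw else 0    # indicator b_{c,sgw}
        local_t = (t + tz - sgw_tz) % 24        # time_{c,sgw}(t): city-local hour
        total += b * pop * intensity[local_t]
    return total

intensity = [0.65, 0.55, 0.35, 0.25, 0.2, 0.16, 0.16, 0.2, 0.4, 0.55, 0.65, 0.75,
             0.79, 0.79, 0.85, 0.86, 0.85, 0.83, 0.8, 0.79, 0.76, 0.76, 0.69, 0.6]
cities = [("New York", 8.4e6, "SGW2", 0),       # illustrative populations
          ("Los Angeles", 3.9e6, "SGW16", -3)]
print(gateway_traffic(cities, "SGW2", 20, intensity, sgw_tz=0))
```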
where d is a demand between an SGW and a PGW; demands are considered to be time-varying, bi-directional and non-splittable. Tr_{d,t} is the traffic volume of demand d at time slot t, while lengthPath_{d,t} is the length of the path taken for demand d at time slot t. A demand path within the core network is defined between an SGW NE, a datacenter and a PGW NE; the chosen paths thus determine the location of the datacenters and the demand assigned to each DC. The data-plane delay is defined as the propagation delay on the path of each demand.
Note that datacenters are assumed to be placed in locations where the operator already has an existing site, i.e., where the operator has gateways, in order to reduce the floor space cost. We also keep the geographical locations of the gateways unchanged, i.e., each conventional gateway is replaced with an SDN NE. All models are formulated as path-flow models.
where the set C includes all possible locations of the K datacenters. The binary variable δ_{c,d,t} denotes that datacenter c is chosen for demand d at time slot t. The parameter N_{c,d,t} is the pre-calculated load resulting from a combination of datacenter location, demand and time slot. The constraints are given by:
\sum_{c \in C} \delta_c = K \qquad (6)

\delta_{c,d,t} \leq \delta_c \quad \forall d \in D, \; c \in C, \; t \in T \qquad (7)

\sum_{c \in C} \delta_{c,d,t} = 1 \quad \forall d \in D, \; t \in T \qquad (8)
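For reference, the DCP-ATS model can be sketched in gurobipy as below. The authors implemented their framework in Java with GUROBI [17], so this Python rendering and the helper signature are ours; N[c][d][t] is assumed to be pre-filtered to delay-feasible combinations.

```python
# Sketch of DCP-ATS (constraints (6)-(8)) in gurobipy; our rendering, not the
# authors' Java framework. N[c][d][t]: pre-calculated load of serving demand d
# from DC location c at time slot t, restricted to delay-feasible combinations.
import gurobipy as gp
from gurobipy import GRB

def dcp_ats(C, D, T, N, K):
    m = gp.Model("DCP-ATS")
    dc = m.addVars(C, vtype=GRB.BINARY, name="dc")          # delta_c: DC at c
    x = m.addVars(C, D, T, vtype=GRB.BINARY, name="x")      # delta_{c,d,t}
    m.setObjective(gp.quicksum(N[c][d][t] * x[c, d, t]
                               for c in C for d in D for t in T), GRB.MINIMIZE)
    m.addConstr(dc.sum() == K)                              # (6): exactly K DCs
    m.addConstrs(x[c, d, t] <= dc[c]
                 for c in C for d in D for t in T)          # (7): only chosen DCs
    m.addConstrs(x.sum("*", d, t) == 1
                 for d in D for t in T)                     # (8): one DC per demand
    m.optimize()
    return [c for c in C if dc[c].X > 0.5]                  # chosen DC locations
```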
This model acts as a next step after solving DCP-ATS in Sect. 4.1, as it takes the chosen DCs and their available resources as input from the solution of that model. The objective is again to minimize the total transport network load, while having the degree of freedom of operating fewer than K DCs at each time slot, from the set of previously chosen DCs and within the available DC resources. This implies that, among the deployed DCs and considering the traffic characteristics, operators would be able to minimize the power consumption of unutilized DCs, resulting in power and cost savings. In this model, the objective is updated to find the optimal solution for each time slot as follows:
\min \sum_{c \in C_s} \sum_{d \in D} \delta_{c,d,t} N_{c,d,t} \quad \forall t \in T \qquad (10)
Additionally, constraint (6) is replaced by constraint (11), which allows operating fewer DCs, and constraint (12), which ensures that at least one DC is in operation:

\sum_{c \in C_s} \delta_c \leq K \qquad (11)

\sum_{c \in C_s} \delta_c \geq 1 \qquad (12)
Fig. 2. Presumed core gateways topology based on LTE coverage map in [16]
This model is an extension of the previous PS-ETS model which provides room for additional resources at each DC, up to the available resources multiplied by a factor P. It also takes the set of DCs as input from the solution of the DCP-ATS model, and applies constraints (11) and (12) as in the previous model. However, constraint (13) is substituted by (14) to allow additional resources within the bound given by R_c * P:
\sum_{d \in D} \sum_{t \in T} \delta_{c,d,t} R_{d,t} \leq R_c \cdot P \quad \forall c \in C_s \qquad (14)
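The per-slot power-saving step can then be sketched in the same style: each time slot is re-solved over only the DCs chosen by DCP-ATS (the set C_s), with fewer than K DCs allowed to be active. The per-DC resource constraint below stands in for the (13)/(14) family; all helper names are ours.

```python
# Sketch of a per-time-slot power-saving solve (PS-ETS style); our rendering.
import gurobipy as gp
from gurobipy import GRB

def ps_ets_slot(Cs, D, t, N, R_demand, R_dc, K):
    m = gp.Model("PS-ETS-slot")
    on = m.addVars(Cs, vtype=GRB.BINARY)                    # DC active in slot t
    x = m.addVars(Cs, D, vtype=GRB.BINARY)
    m.setObjective(gp.quicksum(N[c][d][t] * x[c, d]
                               for c in Cs for d in D), GRB.MINIMIZE)   # (10)
    m.addConstr(on.sum() <= K)                              # (11): fewer DCs allowed
    m.addConstr(on.sum() >= 1)                              # (12): at least one DC on
    m.addConstrs(x[c, d] <= on[c] for c in Cs for d in D)
    m.addConstrs(x.sum("*", d) == 1 for d in D)             # every demand served
    m.addConstrs(gp.quicksum(R_demand[d][t] * x[c, d] for d in D) <= R_dc[c]
                 for c in Cs)                               # per-DC resource limit
    m.optimize()
    return {d: next(c for c in Cs if x[c, d].X > 0.5) for d in D}
```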
For the simulation, a Java framework has been implemented with GUROBI [17] as the optimization solver. We created a US mobile core gateways network, shown in Fig. 2, based on the LTE coverage map in [16], which also correlates with the US population distribution [18]. The core gateways network consists of 4 PGWs and 18 SGWs, where a traffic demand exists between each SGW and the corresponding PGW in its cluster, resulting in a total of 18 demands. The core network is assumed to be fully meshed, which implies that any gateway location can be chosen to deploy a DC with an available link to all other locations.
The network load N_{c,d,t} and path latency L_{c,d,t} parameters are pre-calculated for all combinations of DC locations, demands and time slots, which decreases the complexity and run time of solving the optimization.
Based on Eq. (3), a traffic pattern for each gateway is determined. Here, the indicator b_{c,sgw} is set to one if gateway sgw is the closest to city c.
Table 1. Daytime and corresponding traffic intensity (according to [14])

Daytime:   0:00  1:00  2:00  3:00  4:00  5:00  6:00  7:00  8:00  9:00  10:00 11:00
Intensity: 0.65  0.55  0.35  0.25  0.2   0.16  0.16  0.2   0.4   0.55  0.65  0.75

Daytime:   12:00 13:00 14:00 15:00 16:00 17:00 18:00 19:00 20:00 21:00 22:00 23:00
Intensity: 0.79  0.79  0.85  0.86  0.85  0.83  0.8   0.79  0.76  0.76  0.69  0.6
(Fig. 3: traffic patterns of the 18 gateways over the day; Fig. 4: averaged traffic per time slot, Slots 1-4. Y-axis: traffic [population · intensity], scale ×10^7.)
The population and geo-location information of all U.S. cities is taken from [19,20]. Table 1 contains the daytime and the corresponding traffic intensity according to [14].
Figure 3 shows the traffic patterns of the 18 gateways, which consider the population and the time-of-day intensity. The time zone is set to EDT (New York's local time). As SGW 2 is located on the east coast and thus connected to cities such as New York, it has the highest traffic demand. SGW 16 is located on the west coast and serves cities such as Los Angeles, and therefore has the second-highest demand. Between both gateways, we can also see the time shift according to the different time zones of the cities the two gateways are serving. As SGW 12 is the most northern gateway and connects only cities with a small population, it has the lowest demand over time.
The daily time frame was split into 4 time slots according to the traffic distribution, where hours with little variation in the traffic patterns were grouped into the same time slot. The evaluation is done based on the averaged traffic patterns shown in Fig. 4.
First, the problem has been solved for the DCP-ATS model, which considers all 4 time slots in a day, with 3 DCs. The chosen DC locations and their resources have then been taken as input to solve the DC power-saving models, namely PS-ETS and PS-ETS-AR, which exploit the operation of fewer than 3 datacenters in each time slot to achieve operational power savings. The PS-ETS model is solved with the available DC resources resulting from the optimization of the datacenter placement over all 4 time slots (DCP-ATS). The PS-ETS-AR model allows additional resources at each DC up to the factor P, which has been set to 2, i.e., the additionally acquired resources at each DC should not exceed the existing resources; in other words, the total resources may at most be doubled. The case with K − 1 datacenters has also been considered, to compare the gains and drawbacks of the two proposed power-saving approaches.
(Fig. 7: DC utilization in percent at each of the four time slots, one panel each for DC 1, DC 2 and DC 3.)
The utilization percentage at each time slot is shown in Fig. 7, which also shows the active operation periods of each DC at each time slot. Model PS-ETS is able to divert the allocated demands from DC1 to DC2 at time slot 2, which means that the operator would operate only two DCs at time slot 2 instead of three, and hence save power for one time slot daily. The limit of this diversion is the constraint on the already existing resources at DC2. The dynamic allocation is supported by the cloud orchestration, which synchronizes the state of DC1 and DC2, while SDN provides the dynamic steering of the traffic to DC2 for time slot 2 and back to DC1 for the other time slots. It is not possible to re-allocate the demands of DC3 at any time slot, due to its remoteness and the data-plane delay budget of 15 ms.
Moreover, model PS-ETS-AR is able to divert the traffic demands of DC1 to DC2 for time slots 1, 2 and 3, due to the additional resources acquired at DC2, which leads to more daily power savings. However, note that the utilization at DC2 is lower compared to the other models for time slots 2 and 4, due to the increased amount of available resources relative to the traffic demands at these time slots.
(Fig. 8: DC utilization in percent during active periods per DC ID, for DCP-ATS 3 DCs, PS-ETS 3 DCs, PS-ETS-AR 3 DCs and DCP-ATS 2 DCs. Fig. 9: daily overhead in percent for the same four cases.)
Figure 9 shows the resulting total transport network load compared to the DC power consumption for all 4 cases. The power consumption is defined as the number of active DCs multiplied by the daily active time. The comparison is shown as a daily overhead percentage: for the transport load, the reference is DCP-ATS with 3 DCs, as it achieves the minimum transport load; for the power consumption, the reference is DCP-ATS with 2 DCs, as it intuitively results in the minimum power consumption according to the aforementioned definition.
The trade-off between the resulting network load overhead and the power consumption overhead can be noted: model PS-ETS with 3 DCs results in a 6 % load overhead while offering power savings of 10 %, compared to DCP-ATS with 3 DCs. Model PS-ETS-AR with 3 DCs shows an increase in the transport load of 37 % while offering in return power savings of 25 %, again compared to DCP-ATS with 3 DCs. Hence, an operator would adopt one of the 4 cases depending on the cost resulting from the increased transport network load compared to the costs incurred by power consumption.
6 Conclusion
In this paper, we introduce an architecture that supports the virtualization of the mobile core network gateways, where the gateways are realized by software instances hosted in datacenters and by SDN-based network elements at the transport network. We have formulated a model for the time-varying traffic patterns that can be observed within the mobile network core, according to the user population and the traffic intensity changing over time.
For the introduced virtualized architecture, a model has been presented to find the optimal datacenter placement which minimizes the transport network load for a given number of DCs under a data-plane delay budget, namely datacenter placement over all time slots (DCP-ATS). To exploit the dynamic flexibility offered by virtualization and SDN and the variation of the traffic over time, the time frame is split into time slots. Two further models have been formulated which minimize the transport load while allowing fewer DCs to operate at each time slot for power-saving purposes, namely power saving at each time slot (PS-ETS) and power saving at each time slot with additional DC resources (PS-ETS-AR).
The three models have been solved for an exemplary US core gateways network under a delay budget of 15 ms. DCP-ATS with 3 DCs results in the minimum transport network load and the highest power consumption, while DCP-ATS with 2 DCs shows the least power consumption but the maximum transport network load. The power-saving models with 3 DCs exhibit the trade-off between transport network load and power consumption, which shows the advantage of considering the time-varying property of the traffic for network dimensioning. Additionally, it shows the quantitative gains obtained from the flexibility of virtualization and SDN in mobile core networks.
For future work, further mobile core network components such as the MME could be considered in the placement model, with a control-plane delay budget taken into consideration as well.
References
1. Nokia Solutions and Networks, Enabling Mobile Broadband Growth, white paper, December 2013. https://nsn.com/system/files/document/epc_white_paper.pdf
2. Feamster, N., Rexford, J., Zegura, E.: The road to SDN. Queue - Large-Scale
Implementations 11, 20–40 (2013)
3. Jain, S., Zhu, M., Zolla, J., Hölzle, U., Stuart, S., Vahdat, A., Kumar, A., Mandal,
S., Ong, J., Poutievski, L., Singh, A., Venkata, S., Wanderer, J., Zhou, J.: B4:
experience with a globally-deployed software defined WAN. In: Proceedings of the
ACM SIGCOMM 2013 Conference, New York, USA, pp. 3–14 (2013)
4. Wang, G., Ng, T.E., Shaikh, A.: Programming your network at run-time for big
data applications. In: Proceedings of the First Workshop on Hot Topics in Software
Defined Networks - HotSDN 2012, New York, USA, pp. 103–108 (2012)
5. Blenk, A., Kellerer, W.: Traffic pattern based virtual network embedding. In: Pro-
ceedings of CoNEXT Student Workhop (2013)
6. Nokia Solutions and Networks, Technology Vision for the Gigabit Experience, white paper, June 2013. https://nsn.com/file/26156/nsn-technology-vision-2020-white-paper?download
7. Banerjee, A., Chen, X., Erman, J., Gopalakrishnan, V., Lee, S., Van Der Merwe,
J.: MOCA: a lightweight mobile cloud offloading architecture. In: Proceedings of
the Eighth ACM International Workshop on Mobility in the Evolving Internet
Architecture - MobiArch 2013, New York, New York, USA, p. 11 (2013)
76 A. Basta et al.
8. Hampel, G., Steiner, M., Bu, T.: Applying software-defined networking to the
telecom domain. In: INFOCOM, WKSHPS (2013)
9. Basta, A., Kellerer, W., Hoffmann, M., Hoffmann, K., Schmidt, E.-D.: A virtual
SDN-enabled LTE EPC Architecture: a case study for S-/P-Gateways functions.
In: IEEE SDN for Future Networks and Services (SDN4FNS) (2013)
10. Basta, A., Kellerer, W., Hoffmann, M., Morper, H.-J., Hoffmann, K.: Applying
NFV and SDN to LTE mobile core gateways; the functions placement problem. In:
4th Workshop on All Things Cellular, SIGCOMM (2014)
11. Erman, J., Ramakrishnan, K.: Understanding the super-sized traffic of the super
bowl. In: Proceedings of the 2013 Conference on Internet Measurement Confer-
ence - IMC 2013, pp. 353–360 (2013)
12. Gehlen, V., Finamore, A., Mellia, M., Munafò, M.M.: Uncovering the big players
of the web. In: Pescapè, A., Salgarelli, L., Dimitropoulos, X. (eds.) TMA 2012.
LNCS, vol. 7189, pp. 15–28. Springer, Heidelberg (2012)
13. Qian, L., Wu, B., Zhang, R., Zhang, W., Luo, M.: Characterization of 3G data-
plane traffic and application towards centralized control and management for soft-
ware defined networking. In: 2013 IEEE International Congress on Big Data,
pp. 278–285, June 2013
14. Rossi, C., Vallina-Rodriguez, N., Erramilli, V., Grunenberger, Y., Gyarmati, L.,
Laoutaris, N., Stanojevic, R., Papagiannaki, K., Rodriguez, P.: 3GOL: power-
boosting ADSL using 3G onloading. In: Proceedings of the ninth ACM Confer-
ence on Emerging Networking Experiments and Technologies - CoNEXT 2013,
New York, NY, USA, pp. 187–198 (2013)
15. Zhang, Y., Arvidsson, A.: Understanding the characteristics of cellular data traffic.
ACM SIGCOMM Comput. Commun. Rev. 42, 461 (2012)
16. LTE Coverage Map. http://www.mosaik.com/marketing/cellmaps/, http://plat
form.cellmaps.com/
17. Gurobi Optimizer. http://www.gurobi.com/products/gurobi-optimizer/
18. UMTS Forum Report 44 Mobile traffic forecasts 2010–2020, pp. 63, January
2011. http://www.umts-forum.org/component/option,com docman/task,doc down
load/gid,2537/Itemid,213/
19. Annual estimates of the resident population: April 1, 2010 to july 1, 2013 -
united states - metropolitan and micropolitan statistical area; and for puerto rico
(2014). http://factfinder2.census.gov/faces/tableservices/jsf/pages/productview.
xhtml?src=bkmk. Accessed 05/09/2014
20. MaxMind, MaxMind GeoLocations, 2014. http://dev.maxmind.com/geoip/
legacy/geolite/. Accessed 05/09/2014
Towards a High Performance DNSaaS Deployment
1 Introduction
With the current trend towards Cloud Computing, more and more applications
are moving to the cloud environment as Services or as Virtualized Network
Functions (VNFs). The possibility of having fully elastic services, capable of
scaling on demand both horizontally and vertically, is very attractive for today's
Internet services, and the Domain Name System (DNS) is no exception.
Since DNS is an extensible and hierarchically distributed naming system, used
for resources connected to the Internet or to private networks, its deployment in
the cloud would provide a cost- and time-efficient solution for the translation of
well-formed domain names into IP addresses. Indeed, such translation is
fundamental in many Internet services.
Nowadays, solutions such as OpenStack [1] allow the deployment of different
services in the cloud. Services such as the IP Multimedia Subsystem (IMS) or
Content Distribution Networks (CDNs) are currently being deployed as services
in the cloud [2], leading to the coining of the term anything or everything as a
Service (XaaS).
DNS is typically used alongside other services and has also evolved into this
paradigm. As previously mentioned, employing DNS as a Service (DNSaaS) in
the cloud allows more efficient management of the resources required to support
DNS across different services and data centres: the service can scale
appropriately, on demand and according to its current needs, while becoming
more resilient and fault tolerant. However, multi-tenancy support, as well as
seamless integration with the cloud infrastructure, for instance with an adequate
monitoring system suitable for the cloud, is still a challenge. Common DNS
solutions widely used in traditional systems, such as BIND [3] or PowerDNS [4],
do not foresee these possibilities and lack support for the cloud paradigm.
DNSaaS solutions are already in place, such as Designate [5]. Indeed, Desig-
nate is a DNSaaS frontend for integration with OpenStack which, by maintaining
a dedicated database, allows the creation of on-demand DNS servers capable of
being configured appropriately through a well-defined Application Programming
Interface (API). Moreover, Designate resorts to reliable DNS systems, such as
BIND and PowerDNS, as backends for the whole operation.
The contribution of this paper includes an objective evaluation of DNSaaS
which, besides characterising the performance of the frontend and backend
DNSaaS components, highlights the impact that the virtual machines hosting
DNSaaS have on the overall DNS performance.
Despite the ongoing efforts towards a cloud-based DNS, its performance is still
unknown and there is a lack of proper knowledge about the actual advantages
of operating DNS as a Service in the cloud. Section 2 presents an analysis of
existing assessments of DNS and of the cloud paradigm specifically. Section 3
focuses on the details of the architecture of DNSaaS. A proper methodology and
a thorough evaluation of DNSaaS, using a real testbed with different emulated
scenarios, are presented in Sect. 4, followed by the discussion of results in
Sect. 5; the paper concludes with Sect. 6.
2 Related Work
While the movement towards the cloud is extremely significant, there is still not
a full understanding of all the possible advantages and disadvantages it brings.
Bearing this in mind, several works aim at assessing the benefits of cloud
computing and provide comprehensive analyses of existing frameworks and
services. For instance, OpenStack is evaluated in different works. von Laszewski
et al. [6] introduce a framework to compare different cloud frameworks, which
also performs interfacing with all the relevant components of the cloud and with
other services.
3 DNSaaS Architecture

For this purpose, the presented DNSaaS is inspired by Designate, a DNSaaS
implementation for the OpenStack framework [5]. Its architecture consists of
three main components: the Frontend, the Central Agent and the DNS Backend.
Each of these components is fundamental for the configuration, adaptation and
execution of the service. Figure 1 depicts the main modules of DNSaaS.
Fig. 1. Main modules of DNSaaS: applications interact with the frontend API, which communicates through a message queue with the central agent, backed by the database and the DNS backend that answers DNS queries
3.1 Frontend
The frontend component includes all the interfacing elements for applications
to perform Create, Read, Update or Delete (CRUD) operations related to
DNSaaS (e.g., new DNS records). Through the Application Programming
Interface (API) in the frontend, applications are able to configure the operation
of DNSaaS and maintain DNS information (e.g., creation of records). The use of
the different functionalities provided by the API may require validation, to
ensure that the information to create or update a certain record is correctly
filled in. For instance, when creating an A record, besides the name, the
respective IPv4 address must also be provided.
All the valid requests received by the API are sent to the central agent for
further processing. Message queues are used to establish the communication
between the API and the central agent. This approach enables the support of
several APIs or different versions, allowing an extensible approach and added
flexibility. Moreover, the API relies on Representational State Transfer (REST),
due to its performance and scalability, and supports JavaScript Object Notation
(JSON) for data interchange between clients and the DNSaaS API.
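As an illustration of the frontend workflow just described, the sketch below creates a domain and an A record through a REST/JSON API. The endpoint path, port and field names are assumptions loosely modelled on Designate's v1 API conventions, not a verbatim specification.

    import requests

    API = "http://dnsaas.example.org:9001/v1"     # hypothetical endpoint
    TOKEN = {"X-Auth-Token": "<keystone-token>"}  # issued by Keystone

    # Create a domain; the trailing dot denotes a fully qualified name.
    domain = requests.post(f"{API}/domains", headers=TOKEN,
                           json={"name": "example.org.",
                                 "email": "admin@example.org",
                                 "ttl": 3600}).json()

    # Create an A record; validation requires an IPv4 address besides the name.
    record = requests.post(f"{API}/domains/{domain['id']}/records",
                           headers=TOKEN,
                           json={"name": "www.example.org.",
                                 "type": "A", "data": "192.0.2.10"})
    print(record.status_code, record.json())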
3.2 Central Agent
The central agent is responsible for handling the communication between the
frontend and the backend. To this end, all the requests coming from the frontend
API are parsed from the message queue for further validation. In this sense, the
central agent has an interface with Keystone [17], the identity management
software in OpenStack, to ensure that requests are authenticated, guaranteeing
that a certain user/tenant does not perform operations on data that it does not
hold or to which it has no access authorization.
The central agent also coordinates the persistent storage (e.g., the id of tenants)
and the DNS backend information. Besides, the central agent integrates with
other cloud services, such as monitoring, to feed such services with monitored
data (e.g., the number of records created by a tenant).
3.3 DNS Backend

The DNS backend includes the traditional DNS servers, such as BIND [3],
PowerDNS [4] and the Name Server Daemon (NSD) [18]. These DNS servers
have their own specific data storage mechanisms; for instance, they can rely on
flat files or on database storage. The central agent, in its persistent storage
function, ensures that the DNS data store is populated according to the chosen
DNS backend server.
PowerDNS is employed by default as the DNS backend, due to its performance
and its efficient support for database storage mechanisms. Moreover, PowerDNS
supports both authoritative server functionality (i.e. it controls name resolution
inside a domain) and recursor operation (i.e. when a domain is unknown, other
authoritative servers are used to resolve the request). In contrast, NSD only
supports the authoritative server functionality. BIND natively supports both
functionalities, but only the more recent versions added support for storing and
retrieving zone data in a database.
4 Evaluation Methodology
4.1 Scenario
The evaluation scenario is depicted in Fig. 2 and includes multiple clients con-
nected via Gigabit links to the Havana OpenStack cloud platform [1].
The clients are configured to play two distinct roles, referred to as
n_cli_designate and n_dns_clients. The first role aims at determining the
impact of having n clients using the Designate API to Create (C), Get (G),
Update (U), and Delete (D) DNS records. The second role
Fig. 2. Evaluation scenario: DNSaaS clients connected via Gigabit links to the OpenStack cloud and DNS servers
considers typical clients performing DNS queries directly to DNS backends (e.g.,
PowerDNS). This second approach aims at assessing how DNSaaS performs with
standard DNS-related operations, for instance translating the host name into IP
information. The number of clients is configured according to the findings in [8],
as summarised in Table 1.
CPU and RAM usage metrics are obtained with the collectl tool [19], which
allows monitoring the performance of the DNSaaS servers regarding resource
consumption in terms of memory and CPU usage, as well as others such as disk
usage and network interface receive/transmit packet rates.
A client application has been developed using the Designate client API [20] to
perform requests on the frontend components of DNSaaS; the Designate client
API abstracts away the details of the DNSaaS API. The operations serve
multiple purposes: Update is used to modify the Time To Live (TTL) field of
domains and records, while Get operations retrieve all the information of
domains/records, such as creation and modification dates, TTL values for
domains and records, as well as record data, record type, record names and
record priorities (only for MX and SRV records).
The implemented client application supports the operations of the CRUD
model. The Create (C) operation introduces information for n_domains domains
(creation of the domain and the respective A, AAAA, TXT, MX, NS, SRV and
PTR records) according to the number of records n_records. The Update (U)
operation modifies information for the n_domains domains with the respective
n_records records. The Get (G) and Delete (D) operations follow the same
logic, to print out information and to delete all the records associated with a
previously introduced domain, respectively. The number of clients performing
the same operation simultaneously is controlled through the parameter
n_cli_designate. No mixture of operations between several clients is considered
(for instance, some performing Create (C) and others performing Update (U)),
to retain more control over the DNSaaS performance assessment. Moreover,
each client is assigned specific record names to avoid overlapping information in
the DNS backend.
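A minimal sketch of such a CRUD client is given below, assuming the v1 interface of python-designateclient [20]; the endpoint, credentials and domain naming scheme are hypothetical.

    from designateclient.v1 import Client
    from designateclient.v1.domains import Domain
    from designateclient.v1.records import Record

    # Hypothetical endpoint/token; in the testbed these come from Keystone.
    client = Client(endpoint="http://dnsaas.example.org:9001/v1",
                    token="<keystone-token>")

    # Create (C): one domain plus an A record; the full client also creates
    # AAAA, TXT, MX, NS, SRV and PTR records per domain.
    domain = client.domains.create(Domain(name="tenant1-0001.example.org.",
                                          email="admin@example.org"))
    client.records.create(domain.id,
                          Record(name="www.tenant1-0001.example.org.",
                                 type="A", data="192.0.2.10"))

    # Update (U): refresh the TTL, as in the evaluation.
    domain.ttl = 300
    client.domains.update(domain)

    # Get (G) and Delete (D) close the CRUD cycle.
    print([r.name for r in client.records.list(domain.id)])
    client.domains.delete(domain.id)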
In addition to the assessment of the service-related functionalities of DNSaaS,
DNSPerf [21] was employed to evaluate the performance of PowerDNS and
thereby determine its feasibility as a backend for DNSaaS. The evaluation with
the DNSPerf tool determines the query throughput, query loss ratio and query
processing time given a certain number of DNS records, n_records. Moreover,
the number of concurrent clients performing query operations is varied
according to n_dns_clients.
Different experiments were performed according to the values depicted in
Table 1 for the configuration parameters. The values of some parameters were
related to each other; for instance, the 500 k n_records require 1000 n_domains.
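For illustration, the sketch below generates a DNSPerf input file matching the (hypothetical) record naming scheme used above and launches the tool against the backend; the flag names follow the dnsperf manual, while the server address and parameter values are assumptions.

    import random
    import subprocess

    # Build a dnsperf input file: one "name type" query per line, drawn from
    # the record names that the CRUD client created.
    n_records = 5000
    with open("queries.txt", "w") as f:
        for _ in range(20000):
            i = random.randrange(n_records)
            f.write(f"www.tenant1-{i:04d}.example.org. A\n")

    # Run dnsperf against the PowerDNS backend; -c sets concurrent clients
    # and -l the test duration in seconds.
    subprocess.run(["dnsperf", "-s", "192.0.2.53", "-d", "queries.txt",
                    "-c", "50", "-l", "60"], check=True)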
5 Results
The presented results are discussed considering 95 % confidence intervals
computed from the 10 runs executed for each test case. The Processing Time,
Queries Latency and Queries Throughput metrics are presented in box plots to
depict the variation in performance. Other metrics, such as Queries Lost and
CPU and RAM usage, are presented as ratios with bar plots and error plots.
The variation in performance is depicted in the error plots by considering the
minimum and maximum ratio values achieved in each test, within the
confidence interval.
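For reference, the following sketch shows one way such confidence intervals can be computed from the 10 runs of a test case; the sample values are hypothetical.

    import statistics

    def mean_ci95(samples):
        """Mean and 95% confidence half-width from repeated runs.

        Uses the two-sided Student-t quantile for n-1 degrees of freedom;
        2.262 is the 95% value for n = 10 runs, as used in the evaluation.
        """
        n = len(samples)
        mean = statistics.mean(samples)
        half = 2.262 * statistics.stdev(samples) / n ** 0.5
        return mean, half

    runs = [129.3, 127.8, 131.0, 128.4, 130.2,
            129.9, 128.8, 130.7, 129.1, 130.5]   # hypothetical per-run times
    m, h = mean_ci95(runs)
    print(f"{m:.1f} ms +/- {h:.1f} ms")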
Fig. 3. Response time (ms) of Designate per operation (C/D/G/U), for 500, 5 k, 50 k and 500 k records, in enhanced (e_*) and normal (n_*) server configurations
The response time with the enhanced server is lower in all the test cases in
comparison to the normal server. For instance, with 500 records the processing
time of Create is ≈90 ms and ≈129 ms in enhanced and normal server configura-
tions, respectively. This is an expected behaviour as the memory and CPU con-
figuration differs between servers. In fact, such difference is more evident when
the number of clients and the overall number of records increases. With 500 k
records the processing time has the worst performance due to the high number
of records and high number of simultaneous clients (i.e., 50). For instance, the
Fig. 4. CPU usage ratio (%) in Designate per operation, for 500, 5 k, 50 k and 500 k records
Update operation achieves times of ≈7142 ms and ≈10465 ms for the enhanced
and normal servers, respectively. The Create operation in this case takes
≈5083 ms and ≈7564 ms for the enhanced and normal servers, respectively,
which is ≈60 times worse than with 500 records. This is justified by the
overhead introduced by the high number of clients and the record-set size.
The type of requested operation also impacts the performance of DNSaaS, as
depicted in Fig. 3 for the processing time, Fig. 4 for the CPU usage ratio and
Fig. 5 for memory usage. The Update operation is the one that introduces the
most overhead in terms of memory and CPU and, consequently, has the highest
response time. This operation involves getting the record (locating it in the
database) and a modification of data (in this case refreshing the TTL of the
several records).
Fig. 5. Memory usage ratio (%) in Designate per operation, for 500, 5 k, 50 k and 500 k records
The overhead in terms of CPU and memory usage ratios is quite pronounced in
the normal server, as Figs. 4 and 5 depict. The Update operation can lead to a
CPU usage ratio of ≈93 % and a memory usage ratio of ≈83 % for 500 k records.
With such a number of records and with more than 50 simultaneous clients, the
replies of the normal server exceed the acceptable levels [8], as they were in the
order of minutes.
The results of the DNS backend assessment are based on the evaluation of
PowerDNS and include the Queries Throughput, Queries Latency and Queries
Lost metrics for the different test cases. Query throughput, representing the
number of queries per second, is depicted in Fig. 6 for the variable number of
records and simultaneous clients (i.e., 001, 050, 100, 200, 500) in normal (n_X)
and enhanced (e_X) server configurations. Moreover, query throughput is only
considered for clients having a completion rate ≥97.5 %, which means that
almost all the requested DNS queries receive a reply. In other words, high query
losses are disregarded, as they imply that the server is not able to cope with the
corresponding load.
Fig. 6. Queries throughput (qps) in the DNS backend per number of simultaneous clients (001–500), for 500, 5 k, 50 k and 500 k records
As with the DNSaaS response time, the total number of records has an impact
on the DNS server performance. In fact, the number of queries per second (qps)
for 500 k records is the lowest when compared with the 500, 5 k and 50 k cases.
For instance, with a single client the query throughput drops from ≈36500 to
≈3200 qps in the enhanced server. The combination of records and simultaneous
clients also negatively impacts the query throughput, as fewer queries per
second are supported and query losses increase. Moreover, the response time of
queries also
Fig. 7. Query latency (in ms) per number of clients
Fig. 8. CPU usage ratio (%) in the DNS backend per number of simultaneous clients, for 500, 5 k, 50 k and 500 k records
Fig. 9. Memory usage ratio (%) in the DNS backend per number of simultaneous clients, for 500, 5 k, 50 k and 500 k records
clients. Nonetheless, in the 50 k and 500 k record cases the query throughput
drops to unacceptable levels of performance, as the response time is quite high
[8] and the overhead in both servers is also more evident in terms of CPU and
memory usage.
The CPU usage ratio is higher in the normal server due to the existence of a
single processor. While the CPU usage increases linearly with the number of
records in the enhanced server, in the normal server the CPU is always “busy”
processing requests, even with a low number of records, as depicted in Fig. 8.
Memory usage follows the same trend as CPU usage. A higher number of clients
leads to a higher usage of memory in the backend server, as depicted in Fig. 9.
The difference between the server configurations is also noticeable, as the
normal server requires more memory, in terms of ratios, to process requests. For
instance, even for single clients, memory usage reaches ratios of ≈50 %, which
reveals that the configuration of the normal server is not satisfactory and can
lead to performance gaps.
We claim that this paper establishes the first steps towards a high performance
DNSaaS deployment through a concrete and objective performance evaluation
that fully characterises the baseline performance of DNSaaS in the cloud.
Our next steps include analysing the impact of distributing the DNSaaS across
several virtual machines and better understanding how automatically triggered
scaling operations can improve the overall performance of DNSaaS, regarding
the number of queries per second and the number of simultaneous clients that
can be accommodated.
Acknowledgments. This work was carried out with the support of the MobileCloud
Networking project (FP7-ICT-318109) funded by the European Commission through
the 7th ICT Framework Program.
References
1. OpenStack: OpenStack cloud software. https://www.openstack.org. Last Visit 08 August 2014
2. Lu, F., Pan, H., Lei, X., Liao, X., Jin, H.: A virtualization-based cloud infrastructure for IMS core network. In: Cloud Computing Technology and Science (CloudCom), 2013 IEEE, vol. 1, pp. 25–32, December 2013
3. Internet Systems Consortium: BIND - The most widely used Name Server Software. https://www.isc.org/downloads/bind/. Last Visit 08 August 2014
4. PowerDNS BV: PowerDNS. https://www.powerdns.com. Last Visit 08 August 2014
5. OpenStack: Designate, a DNSaaS component for OpenStack. http://designate.readthedocs.org/en/latest/. Last Visit 08 August 2014
6. von Laszewski, G., Diaz, J., Wang, F., Fox, G.C.: Comparison of multiple cloud frameworks. In: 2012 IEEE 5th International Conference on Cloud Computing (CLOUD), pp. 734–741. IEEE (2012)
7. Ju, X., Soares, L., Shin, K.G., Ryu, K.D., Da Silva, D.: On fault resilience of OpenStack. In: Proceedings of the 4th Annual Symposium on Cloud Computing, p. 2. ACM (2013)
8. da Mata, S.H., Magalhaes, J.M., Cardoso, A., Guardieiro, P.R., Carvalho, H.A.: Performance comparison of ENUM name servers. In: Computer Communications and Networks (ICCCN), IEEE 2013, pp. 1–5 (2013)
9. Yu, Y., Wessels, D., Larson, M., Zhang, L.: Authority server selection in DNS caching resolvers. ACM SIGCOMM Comput. Commun. Rev. 42(2), 80–86 (2012)
10. Migault, D., Girard, C., Laurent, M.: A performance view on DNSSEC migration. In: Network and Service Management (CNSM), IEEE 2010, pp. 469–474 (2010)
11. Rudinsky, J.: Private ENUM based number portability administrative system evaluation. In: International Conference on Ultra Modern Telecommunications & Workshops, ICUMT 2009, IEEE 2009, pp. 1–7 (2009)
12. Celesti, A., Villari, M., Puliafito, A.: A naming system applied to a RESERVOIR cloud. In: 2010 Sixth International Conference on Information Assurance and Security (IAS), pp. 247–252, August 2010
13. Berger, A., Gansterer, W.: Modeling DNS agility with DNSMap. In: 2013 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 387–392, April 2013
14. Huang, C., Maltz, D., Li, J., Greenberg, A.: Public DNS system and global traffic management. In: INFOCOM, 2011 Proceedings IEEE, pp. 2615–2623, April 2011
15. Lee, B.-S., Tan, Y.S., Sekiya, Y., Narishige, A., Date, S.: Availability and effectiveness of root DNS servers: a long term study. In: Network Operations and Management Symposium (NOMS), 2010 IEEE, pp. 862–865, April 2010
16. Casalicchio, E., Caselli, M., Coletta, A.: Measuring the global domain name system. IEEE Netw. 27(1), 25–31 (2013)
17. OpenStack: Keystone, the OpenStack Identity Service. http://keystone.openstack.org. Last Visit 08 August 2014
18. NLnet Labs: NSD: Name Server Daemon. http://www.nlnetlabs.nl/projects/nsd/. Last Visit 08 August 2014
19. Seger, M.: collectl. http://collectl.sourceforge.net/. Last Visit 08 August 2014
20. OpenStack: python-designateclient. http://python-designateclient.readthedocs.org/en/latest/index.html. Last Visit 08 August 2014
21. Nominum: Network measurement tools. http://nominum.com/support/measurement-tools. Last Visit 08 August 2014
Network Configuration in OpenFlow Networks
1 Introduction
Packet-switched computer networks are based on network elements which run
distributed control software that is complex to configure. While network oper-
ators ought to maintain a complete view of the actual network state, in prac-
tice, they have only coarse-grained tools at their disposal. For instance, network
device configuration can often require human intervention based on command
line interface (CLI) interaction. CLIs are cumbersome to use, error-prone, and
may vary widely across different vendors, so management complexity increases
even more. Network administration might thus lead to configuration errors,
which are difficult to detect. More to the interest of this work, however, the
current network configuration paradigm is not really programmable.
The emergence of software-defined networking (SDN) [1] introduced new
opportunities in network research [2]. SDN advocates a logically centralized con-
trol plane with advanced programming capabilities based on a control-data plane
separation. By breaking the tight coupling between the control and data plane
both can evolve independently. Programmability fosters the development of soft-
ware that can dynamically alter network-wide behavior, thus enabling testing of
research ideas in a speedier manner without having to always resort to simula-
tion tools. In particular, a programmable control plane based on OpenFlow [3] is
expected to accelerate network innovation and the rollout of new services. Open-
Flow per se, however, is not well-suited for the management plane. To address this
2 Related Work
OpenFlow deployments require management just like traditional networks. But,
given their edge in programmability, and the fact they would be controlled by
software, management should also follow along, thus enabling programmability
both in the control plane and the management plane. The ONF-standardized
OF-CONFIG protocol takes advantage of NETCONF as a configuration protocol
and YANG [9] for OpenFlow switch data modeling.
Prior to its adoption by ONF, NETCONF was studied and compared against
other popular configuration management protocols. Hedstrom et al. [10] com-
pare the performance of NETCONF with SNMP in a testbed, considering pro-
tocol bandwidth use, number of packets, number of transactions, operations
time, and so on. They conclude that NETCONF is much more efficient for
configuration management than SNMP (e.g., it requires far fewer transactions
over managed objects). Another empirical study [11] compares NETCONF and
Arguably, the Internet has become extremely difficult to evolve both in terms of
its physical infrastructure and network protocols. As the Internet was designed
for tasks such as sending and receiving data with best effort guarantees only,
there has been continuous interest to evolve the current IP packet-switched net-
works to address the new challenges including Quality of Service (QoS), multi-
casting, and network security. One of the first steps towards making networks
more programmable was the introduction of Active Networking (AN) [20]. The
main idea behind AN was to enable network devices to perform custom
computations on packets. To do so, two different models were introduced [1].
First, in the “packet capsules” model, network programs were attached to
(possibly every) packet and sent across the network to target devices, as in
ANTS [21]. Second, in the “programmable network devices” model [22], network
devices were pre-configured with several service-logic modules; when a packet
arrives, its headers are matched and the packet is sent to the appropriate
module. Largely, the AN vision did not come to pass, for various reasons [1]. For
instance, a clear migration path was not evident. Further, at that point in time,
no pressing operator need was practically addressed by AN. That said, at its
core, AN aimed for programmability in the data plane. This concept evolved
over the years and took hold in the network devices now known as middleboxes.
The concept of middlebox programmability has received quite some attention
recently. For example, xOMB [23], an eXtensible Open MiddleBox software
architecture, allows for building flexible, programmable, and incrementally
scalable middleboxes based on commodity servers and operating systems.
Similarly, SIMPLE [24], a programmable policy enforcement layer for
middlebox-specific traffic steering, allows network operators to specify
middlebox routing policies which take into account the physical topology,
switch capacities and middlebox resource constraints.
Another important step that has been taken towards network programma-
bility is the separation of the control and data planes. This decoupling enables
the two planes to evolve separately, and allows new paradigms where logically
centralized control can have a network-wide view making it easier to infer and
direct network behavior. Examples of such separation have been proposed earlier
in the Routing Control Point (RCP) [25] and, of course, ForCES [26]. In
contrast to AN, the OpenFlow approach to control and data plane separation
has so far focused on control plane rather than data plane programmability.
Despite the intellectual contributions that resulted from AN, its deployment
strategy was not pragmatic, e.g., it required network-wide deployment of new
hardware. OpenFlow defines a standard interface between the control and the
data plane that goes hand-in-hand with the concept of a Network Operating
System (NOS) [27], with the goal of providing an abstraction layer between
network state awareness and control logic.
Haleplidis et al. [8] provide a detailed description of the SDN layers architec-
ture, drawing a clearer picture of the emerging paradigm, which is often cluttered
with marketing terms. By dividing the SDN architecture into distinct planes,
abstraction layers and interfaces, [8] clarifies SDN terminology and establishes
some commonly accepted ground across the SDN community.
As shown in Fig. 1, the Forwarding Plane represents parts of the network
device which are responsible for forwarding traffic. The Operational Plane is the
part of the network device responsible for managing device operation. The Con-
trol Plane instructs the network device(s) forwarding plane on how to forward
traffic. The Management Plane is responsible for configuring and maintaining
one or more network devices. The draft also defines three layers, as follows.
The Device and resource Abstraction Layer (DAL) provides a point of reference
for the device’s forwarding and operational resources. The Control Abstraction
Layer (CAL) provides access to the control plane southbound interface. Finally,
the Management Abstraction Layer (MAL) provides access to the management
plane southbound interface. Figure 1 illustrates all functional components of the
SDN architecture and provides a high-level overview of the SDN architecture
abstractions including control and management plane abstractions. The archi-
tecture visibly decouples management, control and forwarding functions includ-
ing their interfaces. Of course, this is an abstract model. In practice, the entities
providing these functions/planes could be collocated. In this paper, we focus on
the management and control southbound interfaces, as we explain next.
We map the ALIEN HAL to the SDN layers in Fig. 3. Essentially, DAL in the
SDN architecture is realized by the HAL in ALIEN, which handles the transla-
tion of OpenFlow messages to device-specific messages. Generic network devices
are mapped to ALIEN devices, which do not support OpenFlow natively. The
control plane is realized using OpenFlow controllers and the OpenFlow protocol.
The management plane is implemented using NETCONF. The application plane
is mostly outside the scope of the ALIEN architecture.
Using OpenFlow, the control plane can communicate with the data plane to
perform several functions, such as adding or removing flow rules and collecting
per-flow and per-table statistics. However, this assumes that the OpenFlow
switches are already configured with various parameters, such as the IP
address(es) of the controller(s). Here it is important to distinguish the
time-sensitive control functionalities for which OpenFlow was designed (e.g.,
modifying forwarding tables, matching flows) from the non-time-sensitive
management and configuration functionalities which are essential for the
operation of an OpenFlow-enabled device (e.g., controller IP assignment,
changing the administrative status of switch ports, configuring datapath-ids,
etc.).
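To make the distinction concrete, the sketch below shows a time-sensitive control task, installing a forwarding rule via OpenFlow, using the Ryu controller as one possible implementation (Ryu is an assumption here; any OpenFlow 1.3 controller would do).

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3


    class FlowInstaller(app_manager.RyuApp):
        """Install a simple forwarding rule when a switch connects."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_features(self, ev):
            dp = ev.msg.datapath
            parser = dp.ofproto_parser
            # Match traffic arriving on port 1 and forward it out of port 2.
            match = parser.OFPMatch(in_port=1)
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(
                dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                          match=match, instructions=inst))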
In principle, SNMP could be used for such configuration tasks. However, as per
[9], SNMP has several drawbacks, including unreliable transport of management
data (e.g., UDP); no clear separation between operational and configuration
data; no support for roll-backs in case of errors/disasters; lack of support for
concurrency in configuration (i.e., N:1 device configuration); and no distinction
between transaction models (e.g., running, startup, and candidate). To address
such shortcomings, the NETCONF protocol was developed and defined in RFC
6241 [5]. NETCONF provides several key features, such as the ability to retrieve
configuration as well as operational data; rich configuration management
semantics including validation, rollbacks and transactions; and configuration
extensibility based on the capabilities exchange that occurs during session
initiation. Furthermore, NETCONF's transactional model comprises candidate,
running and startup datastores.
6 Software-Defined Configuration
client; and, finally, Application & Services represented as the logical entities that
make use of the underlying management and control planes as per [8].
First consider the case where a network application employs NETCONF to
configure the sFlow agent's monitoring parameters, such as the sampling rate,
the sFlow collector IP, etc. This workflow is shown in Fig. 6. First, the
application specifies the configuration parameters and sends them to the
NETCONF client. In turn, the NETCONF client forms an edit-config message
with the specified parameters and sends it to the NETCONF server running on
the target OF switch(es). Once the NETCONF server receives the message, it
updates the switch configuration and sends a reply back to the NETCONF
client reporting the success or failure of the operation. Subsequently, upon the
instruction of the application, the OF controller interacts with the underlying
switches using OpenFlow to control the flow of traffic.
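A minimal sketch of this workflow, using the ncclient Python library [12] as the NETCONF client, is shown below; the sFlow YANG namespace and element names are hypothetical, since they depend on the switch's data model.

    from ncclient import manager

    # Hypothetical sFlow configuration snippet; the actual model and
    # namespace depend on the switch implementation.
    SFLOW_CONFIG = """
    <config>
      <sflow xmlns="http://example.org/yang/sflow">
        <sampling-rate>512</sampling-rate>
        <collector-ip>192.0.2.100</collector-ip>
      </sflow>
    </config>
    """

    # Connect to the NETCONF server on the OF switch (port 830 is the
    # IANA-assigned NETCONF-over-SSH port).
    with manager.connect(host="192.0.2.1", port=830, username="admin",
                         password="admin", hostkey_verify=False) as m:
        # Edit the candidate datastore, validate, then commit: this
        # exercises the transactional model described above.
        m.edit_config(target="candidate", config=SFLOW_CONFIG)
        m.validate(source="candidate")
        m.commit()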
is formed and sent to the NETCONF server embedded in the HAL implementa-
tion of the ALIEN device. In turn, the NETCONF server will apply the command
and send the reply back to the client. Furthermore, the NETCONF client will
follow the same steps when instructed to disable a switch port. In addition to
configuring the underlying ALIEN devices using NETCONF, OpenFlow will be
used to control the forwarding behavior for the ALIEN devices by installing the
appropriate flow rules according to the logic running on-top of the controller.
In addition to using OpenFlow for network control, NETCONF automates
configuration management functions, thus making network management tasks
much simpler.
Acknowledgment. This work was conducted within the framework of the FP7 ALIEN
project, which is partially funded by the Commission of the European Union.
References
1. Feamster, N., Rexford, J., Zegura, E.: The road to SDN. Queue 11(12), 20:20–20:40
(2013)
2. John, W., Pentikousis, K., et al.: Research directions in network service chaining.
In: 2013 IEEE SDN for Future Networks and Services, pp. 1–7, November 2013
3. McKeown, N., Anderson, T., et al.: OpenFlow: enabling innovation in campus networks. ACM SIGCOMM Comput. Commun. Rev. 38(2), 69–74 (2008)
4. ONF: OF-CONFIG 1.2. OpenFlow Management and Configuration Protocol ver-
sion 1.2 (2014)
5. Enns, R., Bjorklund, M., Schoenwaelder, J., Bierman, A.: Network Configuration Protocol (NETCONF). RFC 6241, June 2011
6. Parniewicz, D., Doriguzzi Corin, R., et al.: Design and implementation of an open-
flow hardware abstraction layer. SIGCOMM DCC 2014, 1–6 (2014)
7. Ogrodowczyk, L., Belter, B., et al.: Hardware abstraction layer for non-OpenFlow
capable devices. In: TERENA Networking Conference, pp. 1–15 (2014)
8. Haleplidis, E., Pentikousis, K., et al.: SDN layers and architecture terminology.
Internet Draft: draft-haleplidis-sdnrg-layer-terminology (work in progress) (2014)
9. Schönwälder, J., Björklund, M., Shafer, P.: Network configuration management
using NETCONF and YANG. IEEE Commun. Mag. 48(9), 166–173 (2010)
10. Hedstrom, B., Watwe, A., Sakthidharan, S.: Protocol Efficiencies of NETCONF
versus SNMP for Configuration Management Functions. p. 13 (2011)
11. Yu, J., Al Ajarmeh, I.: An empirical study of the NETCONF protocol. In: 2010
Sixth International Conference on Networking and Services (ICNS), pp. 253–258.
IEEE (2010)
12. Bhushan, S., Tran, H.M., Schönwälder, J.: NCClient: a python library for NET-
CONF client applications. In: Nunzi, G., Scoglio, C., Li, X. (eds.) IPOM 2009.
LNCS, vol. 5843, pp. 143–154. Springer, Heidelberg (2009)
13. Krejci, R.: Building NETCONF-enabled network management systems with libnet-
conf. In: 2013 IFIP/IEEE International Symposium on Integrated Network Man-
agement (IM 2013), pp. 756–759. IEEE (2013)
14. Tran, H.M., Tumar, I., Schönwälder, J.: NETCONF interoperability testing. In:
Sadre, R., Pras, A. (eds.) AIMS 2009 Enschede. LNCS, vol. 5637, pp. 83–94.
Springer, Heidelberg (2009)
15. Munz, G., Antony, A., et al.: Using NETCONF for configuring monitoring probes.
In: 10th IEEE/IFIP Network Operations and Management Symposium, NOMS
2006, pp. 1–4. IEEE (2006)
16. Xu, H., Wang, C., et al.: NETCONF-based integrated management for internet of
things using RESTful web services. Int. J. Future Gener. Commun. Netw. 5(3),
73–82 (2012)
17. Zhu, W., Liu, N., et al.: Design of the next generation military network management
system based on NETCONF. In: Fifth International Conference on Information
Technology: New Generations, ITNG 2008, pp. 1216–1219, April 2008
18. Lu, H., Arora, N., et al.: Hybnet: network manager for a hybrid network infrastruc-
ture. In: Proceedings of the Industrial Track of the 13th ACM/IFIP/USENIX
International Middleware Conference, p. 6. ACM (2013)
19. Sonkoly, B., Gulyás, A., et al.: Openflow virtualization framework with advanced
capabilities. In: 2012 European Workshop on Software Defined Networking
(EWSDN), pp. 18–23. IEEE (2012)
20. Smith, J.M., Nettles, S.M.: Active networking: one view of the past, present, and
future. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 34(1), 4–18 (2004)
21. Wetherall, D.J., Guttag, J.V., Tennenhouse, D.L.: ANTS: a toolkit for building
and dynamically deploying network protocols. In: 1998 IEEE Open Architectures
and Network Programming, pp. 117–129. IEEE (1998)
22. Bhattacharjee, S., Calvert, K.L., Zegura, E.W.: An architecture for active networking. In:
IEEE Communications Magazine, pp. 72–78 (1997)
23. Anderson, J.W., Braud, R., et al.: xOMB: extensible open middleboxes with com-
modity servers. In: Proceedings of the Eighth ACM/IEEE Symposium on Architec-
tures for Networking and Communications Systems, ANCS ’12, pp. 49–60. ACM,
New York (2012)
24. Qazi, Z.A., Tu, C., et al.: SIMPLE-fying middlebox policy enforcement using SDN. In: Proceedings of the ACM SIGCOMM 2013 Conference on SIGCOMM,
SIGCOMM ’13, pp. 27–38. ACM, New York (2013)
25. Feamster, N., Balakrishnan, H., et al.: The case for separating routing from routers.
In: Proceedings of the ACM SIGCOMM Workshop on Future Directions in Net-
work Architecture, FDNA ’04, pp. 5–12. ACM, New York (2004)
26. Yang, L., Dantu, R., et al.: Forwarding and control element separation (ForCES)
framework, RFC 3746, April 2004
27. Gude, N., Koponen, T., et al.: NOX: towards an operating system for networks.
SIGCOMM Comput. Commun. Rev. 38(3), 105–110 (2008)
28. Wang, M., Li, B., Li, Z.: sFlow: towards resource-efficient and agile service federa-
tion in service overlay networks. In: Proceedings of the 24th International Confer-
ence on Distributed Computing Systems, pp. 628–635 (2004)
29. Zaalouk, A., Khondoker, R., et al.: OrchSec: an orchestrator-based architecture for
enhancing network-security using network monitoring and SDN control functions.
In: IEEE SDNMO, Krakow, Poland, pp. 1–8 (2014)
A Novel Model for WiMAX Frequency Spectrum Virtualization and Network Federation
1 Introduction
The emergence of new generations of wireless and mobile technologies has
increased the demand for advanced telecommunication infrastructure, coupled
with the need for radio frequency spectrum with the capacity to support
high-speed transmissions of very large data volumes. Radio spectrum is a finite
resource and its demand, especially from mobile network operators, is constantly
increasing, which constitutes a global challenge [1]. To illustrate this increasing
demand, at the ITU World Radiocommunication Conference 2007 [2], members
considered the expansion of radio spectrum for 4G networks, also known as
IMT-Advanced. The expansion was made to cater for the spectrum needs of the
newly emerging 4G technologies. Further expansions were also made at the 2012
edition of the aforementioned conference for IMT-Advanced and other
telecommunication technologies.
The limited nature of radio spectrum accounts for it being very expensive,
especially the licensed portions. Factors that affect the high cost of radio
spectrum include: (a) propagation range; (b) in-building penetration; and
(c) capacity, i.e. the bandwidth available in the band. In the implementation of
mobile technologies, low
frequencies (below 1 GHz) have a longer propagation range and are more suited
to deployments with few base stations, which means lower cost and is more
viable for low-density areas. Higher frequencies (between 2 and 3 GHz and
above) have shorter propagation ranges and higher capacities (bandwidth), and
are very effective for high-density regions with closely packed cells and a high
density of traffic [3]. Because of the varied uses of these portions of the radio
spectrum, there is very high competition amongst mobile network operators to
secure them, which accounts for their very expensive nature. The number of
base stations in a cellular or mobile network is a major factor affecting its cost,
which ultimately affects the capital expenditure (CAPEX) and operating cost
(OPEX) of the network provider. These underlying issues of radio spectrum
have stimulated the need to develop methods and systems for optimizing radio
spectrum, and a considerable amount of research in this area is being conducted.
One of the new and exciting approaches to future network design is network
virtualization. Network virtualization is a networking concept that enables the
deployment of customized services and resource management solutions in isolated
slices (groups or portions) on a shared physical network [4]. The idea of virtualization
is not so new. It was first introduced by International Business Machines (IBM) in the
1970s and the technology provided a way of separating computer physical hardware
and software (operating systems & applications) by emulating hardware using a
software program. Essentially, it involves installing a software program (known
as a hypervisor) on a physical computer. This hypervisor, in turn, installs files
that define a new virtual computer, otherwise known as a Virtual Machine
(VM) [5].
There are several approaches to virtualization currently used on computer systems
and they include: Bare Metal Virtualization, Hosted Virtualization etc. [4, 6]. These
approaches are being closely studied by researchers and attempts are being made to
mirror their application in network virtualization. One major stride in
developing an architecture for network virtualization was made by a European
initiative known as the 4WARD Project, whose main objective was to develop
an architectural framework for network virtualization in a commercial setting.
The project ended in 2010, but ongoing feasibility tests are being performed on
its architecture [7]. To fully harness the benefits of network virtualization, the
inclusion of the concept known as Network Federation into its overall design
and implementation is beginning to capture the interest of many researchers.
WiMAX, a leading 4G mobile telecommunication technology, has been chosen
as the case study in this paper, and a novel model for generating virtual
WiMAX radio spectrum is developed for WiMAX network virtualization and
federation.
2 Related Work
Currently, quite a number of research works have gone into developing architectures
for wireless network virtualization with emphasis on WiMAX. Others have also looked
at the possibility of developing more flexible ways of spectrum access through
few that practically described how radio spectrum could be virtualized and dynamically
accessed.
Zaki et al. [13] implemented their ideas of network virtualization using Long
Term Evolution (LTE) as their case study. They narrowed their perspective of
network virtualization into two parts: (1) virtualization of the physical nodes of
the network and (2) virtualization of the air interface (spectrum), with the latter
being the main focus of their research. Their approach to LTE air-interface
virtualization involved a hypervisor which they termed the LTE Hypervisor.
The LTE Hypervisor is responsible for virtualizing the enhanced Node B
(eNodeB), i.e. the LTE basestation, into a number of virtual eNodeBs, and for
scheduling the air-interface resources between the virtual eNodeBs. They stated
that there are already solutions for building virtualized base stations, identifying
the VANI MultiRAN solution, which supports multiple virtual base stations all
running on a single physical infrastructure.
Similar to [13], this research paper centres on the air-interface virtualization of
WiMAX, especially for WiMAX networks existing in a federated arrangement.
Wireless broadband technologies are currently growing at a very high rate and
have a major influence on how people communicate. This has resulted in an
ever-growing need for radio spectrum, and ultimately bandwidth, by network
operators, and WiMAX is not left out of this struggle. According to a recent
WiMAX market report analysis, the global WiMAX equipment market is
expected to grow from $1.92 billion in 2011 to $9.21 billion in 2016, while the
service market is expected to grow from $4.65 billion in 2011 to $33.65 billion in
2016 [14]. Reports like this further stress the need and importance of efficiently
optimizing and utilizing scarce wireless resources such as spectrum, which can
be adequately achieved through spectrum virtualization.
Spectrum virtualization for WiMAX will greatly reduce the number of base
stations needed for deployment and the overall energy usage. With WiMAX
virtual networks operating in a federated arrangement, it will further enhance
the sharing of network resources and greatly improve the efficient use of
spectrum. This will in the long run attract smaller network providers to come
into the market to provide better services, hence creating a richer experience for
end-users.
The concept of network federation is a new and exciting idea that is currently
being proposed by many researchers as one of the building blocks of the future
Internet. Federation in the network domain is a model for establishing a very
large scale and diverse infrastructure for the purpose of interconnecting
independent network domains, in order to create a rich environment with
increased benefits for the users of the independent domains [15]. This concept
can be expressed as shown in Fig. 1, where independent Network Providers
(NPs) exist in an interconnected framework, each having its own Network
Operators (NOs). The NOs depend on the NPs for the use of their network
infrastructure, and the federated network setting allows resource sharing
amongst all the various NPs.
WiMAX is defined under the IEEE 802 family as IEEE 802.16. For IEEE
802.16, the physical layer was defined for a wide range of frequencies, from 2 to
66 GHz. The 10–66 GHz sub-range is essentially for line-of-sight (LoS)
propagation, whereas the 2–11 GHz bands cover licensed and unlicensed
spectrum and are also used for non-line-of-sight (NLoS) communication.
WiMAX uses a number of legacy technologies amongst which are: Orthogonal
Frequency Division Multiplexing (OFDM), Time Division Duplexing (TDD) and
Frequency Division Duplexing (FDD). OFDM is a multiplexing technique that sub-
divides the bandwidth of a signal into multiple frequency sub-carriers. These multiple
frequency subcarriers are modulated with data sub-streams and then transmitted.
OFDM modulation is realized with an efficient Inverse Fast Fourier Transform
(IFFT) that generates a large number of subcarriers (up to 2048) with minimal
complexity.
In an OFDM system, resources (e.g. spectrum) are available in the time domain
by means of OFDM symbols and in the frequency domain by means of
subcarriers. These resources in the time and frequency domain can be
rearranged into sub-channels for allocation to individual users. An offshoot of
OFDM is Orthogonal Frequency Division Multiple Access (OFDMA), a
multiple-access/multiplexing scheme that multiplexes data streams from
multiple users onto both the downlink and uplink sub-channels. The OFDMA
subcarriers are shown in Fig. 2.
WiMAX IEEE 802.16e-2005, otherwise known as mobile WiMAX, is based on
scalable OFDMA (S-OFDMA), which supports a wide range of bandwidths,
enabling flexible spectrum allocations. The scalability is achieved by adjusting
the Fast Fourier Transform (FFT) size while fixing the sub-carrier spacing at
10.94 kHz. S-OFDMA supports bandwidths ranging from 1.25 MHz to 20 MHz
[18].
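The following sketch illustrates the scalability rule: with the sub-carrier spacing fixed at 10.94 kHz, the FFT size follows from the channel bandwidth, and an OFDM symbol is then generated with an IFFT. The QPSK mapping is an illustrative choice.

    import numpy as np

    SUBCARRIER_SPACING_HZ = 10.94e3  # fixed in S-OFDMA

    def fft_size(bandwidth_hz):
        """Smallest power-of-two FFT spanning the channel at fixed spacing."""
        n = bandwidth_hz / SUBCARRIER_SPACING_HZ
        return 1 << int(np.ceil(np.log2(n)))

    # Mobile WiMAX channel bandwidths and the resulting FFT sizes
    for bw in (1.25e6, 5e6, 10e6, 20e6):
        print(f"{bw/1e6:>5.2f} MHz -> {fft_size(bw)}-point FFT")

    # OFDM modulation: map QPSK symbols onto subcarriers, then apply the IFFT
    N = fft_size(10e6)                       # 1024 subcarriers for 10 MHz
    bits = np.random.randint(0, 2, 2 * N)
    qpsk = (1 - 2*bits[0::2] + 1j*(1 - 2*bits[1::2])) / np.sqrt(2)
    ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(N)  # time-domain OFDM symbol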
The process of virtualizing the WiMAX radio spectrum or air interface requires
that the basestation hardware components also be virtualized. Similarly to the
LTE eNodeB virtualization proposed in [13], which follows the principle of node
virtualization already established in the field of computer systems, we propose a
comparable architecture for the virtualization of the WiMAX basestation, with
emphasis on spectrum virtualization. As previously discussed, the general
approach to computer system virtualization involves the use of a hypervisor. In
similar fashion, our proposed model for WiMAX basestation virtualization is
shown in Fig. 3. Considering that our emphasis is solely on spectrum, the
generic hypervisor has been renamed the Virtual Spectrum Hypervisor
(VS-Hypervisor).
6.1 VS-Hypervisor
The VS-Hypervisor is the entity responsible for virtualizing the air interface and
ensuring proper management of spectrum allocation. Its primary job is the
scheduling of spectrum resources amongst the virtual networks to meet their
individual bandwidth requirements. WiMAX resources in the frequency domain
are represented as S-OFDMA subcarriers, and the number of subcarriers is
directly proportional to the bandwidth size. The VS-Hypervisor works by
receiving bandwidth estimates from the individual virtual WiMAX networks,
computed by a bandwidth estimation unit. The estimated bandwidth values are
then mapped onto the appropriate number of S-OFDMA subcarriers or
sub-channels (grouped subcarriers), and a scheduling algorithm schedules these
subcarrier/spectrum resources to the appropriate virtual networks. The amount
of spectrum allocated will be based on contracts or strictly on need and request.
Figure 4 depicts the VS-Hypervisor having access to the entire spectrum channel
bandwidth of a physical WiMAX basestation while it schedules the spectrum to
the virtual networks.
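The sketch below illustrates one possible realisation of this mapping step: bandwidth estimates are converted into sub-channel counts and, when demand exceeds capacity, grants are scaled back proportionally. The sub-channel width and the proportional policy are assumptions; the paper leaves the concrete algorithm to contracts or need.

    import math

    SUBCHANNEL_BW_HZ = 10.94e3 * 28   # assumed: 28 subcarriers per sub-channel

    def schedule(requests_hz, total_subchannels):
        """Map per-VNO bandwidth estimates onto S-OFDMA sub-channels.

        Each request is converted into a sub-channel count; if aggregate
        demand exceeds capacity, grants are scaled back proportionally.
        """
        wanted = {vno: math.ceil(bw / SUBCHANNEL_BW_HZ)
                  for vno, bw in requests_hz.items()}
        total = sum(wanted.values())
        if total <= total_subchannels:
            return wanted
        scale = total_subchannels / total
        return {vno: int(n * scale) for vno, n in wanted.items()}

    # Three VNOs sharing a 10 MHz channel (hypothetical figures)
    print(schedule({"VNO1": 3e6, "VNO2": 2e6, "VNO3": 6e6},
                   total_subchannels=30))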
In this paper we focus more on the basic description of the VS-Hypervisor and its
functionality. We also look at this functionality in terms of handling requests from
VNOs using metrics such as time delay and request rejection rate.
Here $t_i^D$ is the time stamp at which the request $D_i$ is issued, $BW_i^D$ is the amount of bandwidth requested (demanded) by VNO$_i$ at time $t_i$, $P_i^D$ is the current priority/rank of VNO$_i$ in terms of bandwidth allocation, and $A_i^D$ is the antecedent priority/rank of VNO$_i$ with regard to BW allocation in the previous allocation cycle of the VS-Hypervisor.
Signal $Y_i(t)$ has the structure expressed in Fig. 7 below, where $t_i^A$ is the time stamp at which the allocation $Y_i$ is issued, $BW_i^A$ is the amount of bandwidth allocated by the VS-Hypervisor to VNO$_i$, $P_i^A$ is the priority/rank of VNO$_i$ in the next cycle of spectrum allocation by the VS-Hypervisor, $F_i^A$ is the frequency plan (that is, the indexes of frequency channels to be used in the next allocation cycle to implement $BW_i^A$), $T_i^A$ is the time plan (that is, the indexes of time slots to be used in the next allocation cycle to implement $BW_i^A$), $S_i^A$ is the start time of the next allocation cycle of BW (spectrum) addressing the VNO$_i$ demand currently processed, and $E_i^A$ is the end time of the next allocation cycle of BW addressing the current VNO$_i$ demand. Formally, the VS-Hypervisor time flow can be expressed as shown in Fig. 8.
Here $H_i[\cdot]$ is the VS-Hypervisor mapping function corresponding to VNO$_i$, which can be represented as shown in Fig. 9 below. The specific definitions of $t_i^A$, $BW_i^A$, $P_i^A$, …, $E_i^A$ are part of the VS-Hypervisor design process to be implemented on the spectrum virtualization; they will form part of the future work of this paper.
The available bandwidth $BW_i^A$ for the VS-Hypervisor to allocate can be expressed as:

$BW_i^A = BW_i^D - BW_{predict}(t_i^D + S_i^A)$   (2)

where $BW_{predict}$ is the anticipated BW needed at the moment of the request, i.e. at $t_i^D + S_i^A$. Fundamentally, the mapping function $H_i[\cdot]$ will be controlled by a state machine driven by external events $D(t)$, with an internal state evolution subject to system capacity constraints, which can be expressed as:

$\sum_i BW_i^D \le C$

(that is, the sum of the BW demanded by the VNOs cannot exceed the system capacity C).
$$P_l(\rho, M, N) = \frac{(M-N)\lambda P_N}{\sum_{i=0}^{N}(M-i)\lambda P_i} \qquad (5)$$
Where $M$ and $N$ are the input requests and the granted requests of the VS-Hypervisor,
respectively, and $\rho$ is the ratio between the rate $\lambda$ at which requests are made by a VNO
and the rate $\mu$ at which requests are granted, i.e., $\rho = \lambda/\mu$. The probability is expressed
as the ratio of the stream of requests lost at the VS-Hypervisor to the requests that were
granted. State $i$ is a state in which the VS-Hypervisor has the capacity to allocate
spectrum, and state $N$ is the state in which the VS-Hypervisor has reached its maximum
capacity. $P_N$ is the probability that a request will be granted, and $P_i$ is the probability of
a state in which the VS-Hypervisor can grant requests.
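As an illustration, the loss probability of Eq. (5) can be evaluated numerically. The sketch below assumes classic Engset-style state probabilities $P_i \propto \binom{M}{i}\rho^i$, which the paper does not spell out, so this is an assumption rather than the authors' definition:

```python
from math import comb

def engset_state_probs(rho: float, M: int, N: int) -> list[float]:
    """Assumed Engset-style state probabilities P_i ~ C(M, i) * rho^i,
    normalized over the states 0..N."""
    weights = [comb(M, i) * rho**i for i in range(N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def loss_probability(rho: float, M: int, N: int, lam: float = 1.0) -> float:
    """Eq. (5): ratio of the lost request stream to the offered streams."""
    P = engset_state_probs(rho, M, N)
    num = (M - N) * lam * P[N]                            # requests arriving in the full state
    den = sum((M - i) * lam * P[i] for i in range(N + 1)) # total offered request stream
    return num / den

print(round(loss_probability(rho=0.2, M=10, N=4), 4))
```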
The establishment of a virtualized and federated mobile network essentially consists
of the following key participants:
Virtual Network Provider (VNP). The VNP is the owner of the physical network
infrastructure and is able to provision virtualized mobile networks on top of it.
Virtual Network Operator (VNO). The VNO is a mobile operator that operates a
leased virtual network infrastructure and in turn hosts its network services on the
virtual mobile infrastructure provided by a VNP.
Diagrammatically, the federation of a virtualized WiMAX network can be modelled
as in Fig. 10. The figure shows three virtual network operators (VNO1, VNO2,
and VNO3) that are separately hosted by four virtual network providers (VNP1, VNP2,
VNP3, and VNP4). The allocation and management of resources (spectrum) in this
federated setup is coordinated by the VS-Hypervisor.
Fig. 10. Virtualized Federated Mobile Network Implemented Using the VS-Hypervisor
The aggregation of all demands of the individual VNOs can be expressed as:
$$D = [D_1, D_2, D_3]^T \qquad (6)$$
Where the demand $D$ is the transpose of the vector of $D_1$, $D_2$, and $D_3$. The capacity $C$ is
the aggregate capacity in terms of spectrum of all the network providers. It can be
expressed as:
$$C = [C_1, C_2, C_3, C_4]^T \qquad (7)$$
The total capacity $C$ of the entire federated network is represented as the transpose of
the vector of $C_1$, $C_2$, $C_3$, and $C_4$. The VS-Hypervisor's cumulative magnitude, in terms
of the amount of spectrum requests/demand it can handle, can be represented by the
vector $\mathcal{Z}$, which is defined as:
$$\mathcal{Z} = a_1 \mathcal{Z}_1 + a_2 \mathcal{Z}_2 + a_3 \mathcal{Z}_3 + a_4 \mathcal{Z}_4 \qquad (8)$$
Where $\mathcal{Z}_1$, $\mathcal{Z}_2$, $\mathcal{Z}_3$, and $\mathcal{Z}_4$ are the basis vectors, i.e., linearly independent vectors
representing the VS-Hypervisor for the individual VNOs as shown in Fig. 10, and the real
numbers $a_1, a_2, a_3,$ and $a_4$ are the coefficients of $\mathcal{Z}$, with
$$0 \le a_1 + a_2 + a_3 + a_4 \le 1 \qquad (9)$$
Irrespective of how the federated networks are designed, the total demand made by the
VNOs must never exceed the available bandwidth $B$. This is represented as a dot
product between $\mathcal{Z}$ and the demand $D$, as shown in Eq. (10) below:
$$\mathcal{Z} \cdot D \le B \qquad (10)$$
To assess the efficiency of the VS-Hypervisor within the federated network, a mathematical
expression giving the ratio between the total bandwidth and the capacity of the
federated network can be written as shown in Eq. (11) below.
$$\eta_{VSH} = \frac{B}{C}, \quad \eta_{VSH} \in [0, 1] \qquad (11)$$
In summary, these equations express the basic behavior of the VS-Hypervisor in a
federated network setting, describing how the VS-Hypervisor should be able to
reliably allocate virtual spectrum to VNOs within a VNP and enable spectrum
sharing between VNPs.
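To make this behavior concrete, the sketch below checks the aggregate-demand constraint and computes the efficiency of Eq. (11) for illustrative numbers. The figures, and the reduction of Eq. (10) to a scalar comparison, are assumptions for illustration only; the paper leaves the concrete definitions of $\mathcal{Z}$, $B$, and $C$ to future work:

```python
import numpy as np

# Illustrative spectrum figures in MHz (assumptions, not from the paper).
D = np.array([20.0, 15.0, 10.0])        # VNO demands D1..D3, Eq. (6)
C = np.array([25.0, 20.0, 15.0, 10.0])  # VNP capacities C1..C4, Eq. (7)
B = 60.0                                 # bandwidth the VS-Hypervisor can offer

def admissible(demands: np.ndarray, bandwidth: float) -> bool:
    """Eq. (10)-style check: the aggregate VNO demand must not exceed B."""
    return bool(demands.sum() <= bandwidth)

eta_vsh = B / C.sum()                    # Eq. (11): efficiency, in [0, 1]
print(admissible(D, B), round(eta_vsh, 3))  # True 0.857
```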
In our research so far, we have provided a basic view of the concept of network
virtualization within a federated arrangement, using WiMAX as our case study. Our
emphasis on WiMAX spectrum virtualization was to develop a system and framework
in which WiMAX spectrum can be fully utilized and better harnessed by current
network operators, and to encourage the entry of new players into the WiMAX market.
We described the workings of the VS-Hypervisor, an innovative system tailored for the
virtualization of the WiMAX air interface.
This work is still in its infancy and a number of issues remain to be addressed: the
bandwidth estimation algorithm needed for a proper, fair allocation of the virtual
spectrum; a scheduling algorithm specifying how the VS-Hypervisor allocates spectrum
resources to the virtual networks, either based on contracts or on service level
agreements (SLAs); and test simulations to evaluate the overall performance of the
entire federated virtualized WiMAX network.
Mobile Network Architecture Evolution Options
Wolfgang Hahn
Abstract. This paper discusses long-term architectural evolution options for the
3GPP Evolved Packet Core (EPC). It focuses on the question of which functions
of the existing EPC are addressed or impacted when introducing Software
Defined Networking (SDN). Several aspects are discussed, showing the benefits
of a strict separation of mobile-specific gateway functions from SDN functions.
In this respect, the suggestions made in this paper differ from existing SDN-based
concepts. The paper analyses how the SGW/PGWs of the EPC can be
decomposed in different ways. A reduction of mobile-specific network elements
and a separation into network-independent administrative domains may help to
reduce the cost of network operation in the future.
1 Introduction
Operators are faced with a continuously increasing demand for mobile data traffic at
moderate cost. Managing the total cost of ownership (TCO) is one challenge; the other is
to become more flexible in reacting to future network needs. In this context, new
technologies have received attention in the telecommunication sector: cloud computing
and Software Defined Networking (SDN); see the NSN view on the Technology Vision
for 2020 [1]. SDN originated in the data centre networking business. The technology
virtualizes the switching and forwarding resources in the data plane and makes them
accessible to higher-layer software through the introduction of a central controller [2].
For telecommunication networks it promises better utilization of transport resources
and a higher degree of automation in network management.
The principles introduced by SDN are considered to be useful also in areas beyond
pure routing and switching in transport networks. It is the "centralization of network
control" that allows for a number of benefits: centralized software can run in a cloud
environment on standard computer hardware, and it makes the network more flexible
and easily "programmable", in consequence allowing for simplified service introduction
and management and an increased level of service automation.
An important question is how those SDN principles and technologies will impact
the mobile network architecture, especially the evolved packet core (EPC) defined by
3GPP. Operators are already asking 3GPP to study requirements for virtualization and
programmability and to rethink the current architecture [3]. This will become even
more relevant for the next generation of mobile networks, 5G. The EU commission
has initiated the 5G Infrastructure Public Private Partnership [4] to drive the development
of 5G standards.
In this context, the paper sketches potential evolution options of the 3GPP architecture
at a high level.
Two approaches that differ in how SDN technology is applied are outlined in Sects. 2
and 3. The first approach aims to enhance the current EPC architecture with SDN and
is already the subject of particular research and standardization activities. The second
approach presented here introduces bigger changes and might be the subject of further
research, especially in the 5G context. After an evaluation, the second, new approach is
investigated in more detail in the following sections.
The starting point is the 3GPP EPC architecture for LTE according to [5]; see the
simplified depiction in Fig. 1. Mobile devices connect via the air interface to base
stations (eNB). The MME, as the central Mobility Management Entity, authenticates
the users, establishes the connectivity in the network, and tracks the mobile devices'
locations. The SGW and PGW are gateways in the user data plane path that provide
more centralised processing for the user traffic, such as the provisioning of a mobility
anchor point, QoS authorization, and charging functions; this is further detailed in Sect. 4.1.
Fig. 1. Simplified EPC architecture: MME and PCRF in the control plane; eNB, SGW, PGW, and network edge service functions in the user plane; switches in the transport plane; interfaces S1-C, S11, S5, and Gx; domains: mobile access, access-independent service, and transport.
Figure 1 also includes the Policy and Charging Rules Function (PCRF), which is
omitted in the other figures for simplicity. Furthermore, a number of domains (Mobile
Access, Access Independent Service Functions, and Transport Network) are introduced
for elaborating operation and management aspects later, and the network functions are
allocated to a layered plane structure for control (CP), mobile network user data (UP),
and transport.
Basic principles of SDN are the separation of the control and data planes of a network,
an open interface between both layers (often referred to as the OpenFlow protocol),
the centralization of the network control functions, programmable by networking
applications through APIs, and a simplification of the nodes in the user plane; in
addition, universal applicability in fixed, enterprise, and mobile networks is expected.
What had been proposed for the transport network was then also discussed for the
mobile network user plane (UP), mainly for the gateway (GW) functions, which are the
Serving GW (SGW) and the Packet Data Network GW (PGW) in the EPC. In the Open
Networking Foundation (ONF) white paper [6], OF-enabled GWs are suggested.
Recently, ONF has chartered a Wireless and Mobile Working Group [8], which aims at
the development of an SDN-enhanced distributed S/P-GW as discussed in this section.
A prototype implementation is described in [7]. In a wider context, relationships
between SDN, network virtualization, and cloud computing are shown in [10].
The basic assumption is that considerable control functions of the GWs can be
separated from the UP and centralized. Aspects of this centralization strategy were
studied, e.g., in [9].
The remaining 3GPP-specific user plane (UP) functions are then merged with
SDN-based packet forwarding nodes as a "fast path" that potentially allows
hardware-optimized wire-speed processing. The suggested architecture is depicted in
Fig. 2 below. As its main aspect is the decomposition of the 3GPP GWs into CP and
UP parts, vertical to the layers, the architecture is termed the Vertical GW split
architecture (V-GW split).
Fig. 2. Vertical GW split architecture: MME and the control parts of SGW and PGW in the control plane, an SDN controller in the transport domain, and the SGW/PGW user-plane parts merged with transport-plane switches; eNB and network edge service functions in the user plane; interfaces S1-C and S11.
The SDN switches in the transport plane representing the forwarding nodes pro-
viding basic switching and routing capability. The suggested merging of GW-UP
functions and SDN switches as preferred deployment is indicated within the above
figure by dotted boxes. It neglects to some extend that the mobile network GW pro-
vides complex UP functions especially related to policy control and charging. As a
consequence the transport nodes are required to provide those functions and the SDN
control must control them.
A comprehensive study of different implementation options for network elements
(called enhanced network elements, NE+) and of the different function split options
between CP and UP is elaborated in [10]. This also includes an investigation of the
implementation of UP functions related to QoS and charging.
Claimed benefits of the architecture are that the CP and UP functions can scale
independently of each other (CP computing resources adapt based on signaling load,
UP resources based on traffic throughput). Furthermore, control plane functions may
converge, for example the MME with GW control and with SDN control.
Some benefits materialize in case GWs based on dedicated physical nodes are
distributed within a certain network topology. (The platform flexibility allows CP and
UP entities to be deployed independently.) The distribution of GWs (UP) is simplified:
only one control interface needs to be managed and secured at the different GW
locations. For the current, non-decomposed GWs, a number of interfaces (S11, Gx, Rx,
Ro) would need to be maintained for many distributed GWs.
But the vertical GW decomposition also has some drawbacks: for example, it requires
a new interface between the decomposed GW functions, which demands additional
resources and increased processing. On top of that, communication via external
interfaces is usually less efficient than communication within integrated nodes. More
messages (and delays) are also required in call flows involving a central control unit
compared with, e.g., direct SGW-PGW communication. Finally, it introduces additional
node or Virtual Network Function (VNF) types that need to be managed and
orchestrated.
Looking from a higher abstraction level, further issues of the architecture can be
identified that result from the application of the SDN principle (separation of GW CP
and UP) in connection with an SDN-based implementation (introducing an SDN
controller with two interfaces and merging GW UP and SDN switching functions):
The vertical GW decomposition introduces several dependencies between the mobile
access and transport network domains. For the control plane this can be seen in Fig. 2:
the SDN controller needs enhancements on both interfaces (northbound and
southbound) to process mobile-GW-specific actions.
It also mixes the layered plane concept of the UP and the transport plane: e.g.,
3GPP-specific bearers need to be handled in the same node as packet forwarding and
are controlled by the same protocol. 3GPP-specific bearer handling does not relate to
GTP encapsulation only; it includes policy enforcement and charging functions as well.
From the operator's point of view, it is advantageous to limit the interactions and
dependencies between domains as much as possible (to well-defined/standardized
interfaces) to allow independent vendor selection, deployment, and management of
the domains.
On top of the cost of introducing this new architecture, the new functions themselves
require new investments in products and interoperability testing effort. This will not
lead to new EPC functionality, because it provides only a redesign of existing
functions; this means the costs need to be paid back by better resource utilization alone.
The analysis of the vertical GW split given in Sect. 2 highlights some drawbacks of
that architecture. In this section, an architecture is suggested that aims to avoid some of
the negative impacts, especially the domain dependencies.
The starting point for developing a novel architecture is the local IP access introduced
in 3GPP Release 10 for Femto BSs (LIPA, [5]), i.e., the concept of co-locating a local
GW (LGW) with a Femto BS. To extend and generalize this concept, the restriction of
LIPA, namely the lack of support for mobility, needs to be overcome. It should be
noted that future developments may simplify the deployment of core network functions
even more by introducing those functions into base stations: the so-called cloud RAN
concept provides baseband processing in local data centers (DCs) as well; or, the other
way around, a base station is enhanced with cloud servers, which allows localized
functions, e.g., for content storage for very low-latency applications and for saving
backhaul transport in content delivery networks (CDNs).
Figure 3 shows the architecture introduced in this section that enhances the LIPA
solution.
Fig. 3. Horizontal GW split architecture enhancing LIPA: MME and SGW (control) in the control plane, an SDN controller in the transport domain, the LGW co-located with the eNB, a mobility anchor among the transport-plane switches, and network edge service functions in the user plane; interfaces S1-C, S11, and S5.
From the LIPA architecture it is inherited that the SGW works as a controller only
and can be placed within the control plane. The LGW can be controlled, like the PGW,
via an S5 interface, which avoids the need to add mobile-network-specific functions
to the SDN controller and the OF protocol, as outlined for the vertical GW split in
Sect. 2. For a horizontal distribution of GW functions, also termed the H-GW split, it is
further investigated how functions of the PGW and the UP part of the SGW can be
distributed within the network.
There are three candidate allocations or network nodes, marked with circles in Fig. 3
(particular locations in the architecture might be preferred for certain functions):
– In a network edge
Parts of the GW functions can be implemented as centralized UP processing and
performed in a mobile-access-network-independent fashion. This may allow the
inclusion of functions such as lawful interception, charging, and services that are
currently under investigation for so-called "Service Function Chaining" (e.g., NAT,
firewall, deep packet inspection, content optimization, etc.).
Functions allocated this way benefit from convergence with fixed networks and
economies of scale. From the operator's management point of view, this allows
separate domains for network and IT services.
– In an SDN-controlled transport network
Especially for mobility-related functions, SDN-based switches or forwarding elements
can be used. As described in Sect. 4, they may serve as mobility anchors and may
trigger functions for the activation of traffic paths for IDLE users. This can be achieved
with functions available in all transport switches (it should be noted that
mobile-network-specific enhancements are avoided). See also the OF-related activities
mentioned later in this text that allow tunneling, which can be applied for a mobility
anchor function.
– In the BS of the mobile access network
The local GW (LGW) function performs the remaining mobile-network-specific
UP-related functions. This allows QoS functions and the allocation of traffic classes to
radio bearers.
Before going into the details of the architecture, a short evaluation of the advantages of
the horizontal GW split is provided in the following (especially in relation to the
vertical GW split) for different aspects:
– Number of mobile-network-specific nodes
Integrating the mobile-network-specific UP functions in the BS removes a
mobile-network-specific node/function from the architecture. This may contribute to
TCO reduction. (The other candidate locations are not used in a mobile-network-specific
way.)
– High-level comparison of the number of managed network nodes/functions
Just counting network functions is a quite simple approach to evaluating the
management effort of a certain network architecture. Those functions might be
implemented by various products coming from different vendors, each of which may
have its own management environment, require inter-operation tests, etc. For the
network function virtualization scenario, this might be the number of virtual network
functions and types of virtual machines an operator needs to cover with life cycle
management and other functions. Reducing the number of entities might thus directly
contribute to TCO reduction.
It can be seen that the V-GW split increases the number of functions compared to the
EPC. (Note that this might not only result in increased management effort but also
introduce longer messaging flows when communicating via central controllers
compared to direct SGW-PGW communication.)
– Separation of administrative domains
The differences between the V-GW split and H-GW split implementations already
became visible in the comparison of Figs. 2 and 3. The H-GW split performs better at
keeping the different administrative domains (mobile access, transport, and service/IT
functions) as independent as possible, and it uses SDN without modifications for
mobile-specific handling (e.g., it avoids carrying mobile-specific information in OF for
policy enforcement, charging, etc.).
– Network flatness
The V-GW split keeps the number of EPS/EPC nodes in the user plane path high,
whereas the H-GW split reduces it, resulting in a flatter architecture. This is mainly
achieved by the 3GPP-specific GW functions running in the BS. SDN switches used,
e.g., for mobility anchor functions are assumed to add only little delay, as they run at
wire speed.
– Network migration and roaming support
The GW split into GW control and GW user plane (switch) requires a redesign of the
current 3GPP SGW and PGW. This introduces a barrier to introducing the concept in
real products. The H-GW split starts from existing components that can be evolved
step by step depending on network requirements. E.g., with limited PCC functions, no
enhancements of the S5 interface are needed, and the SGW (control) is nearly
unchanged. For the allocation of traffic to radio bearers in the LGW, it can use the
functions introduced for the PMIP alternative of the EPC (such as the interface to the
PCRF). On the other hand, a prerequisite is the assumption of deployed SDN
technology that can take over some GW functions.
The centralized SGW can also serve home-routed traffic for roaming subscribers
operating in the "old mode". Nevertheless, it can be assumed that in future
high-bandwidth networks the reasons for home routing will disappear and the home
operator will allow local breakout, which would enable the newly proposed architecture
to be used. Hence, in the future only a small portion of traffic might use the SGW for
traffic routing and interfacing with other networks, which allows the SGW functions to
be centralized.
The comparison above yields a promising number of advantages for the H-GW split.
For this reason, details of the architecture are elaborated in the next section.
(Continued)

PGW functionality (allocated to SGW-C / LGW / mobility anchor):
– DL rate enforcement based on the accumulated maximum bit rates (MBRs) of the aggregate of SDFs with the same GBR QCI: LGW
– Packet screening, usage threshold enforcement policy rules: LGW
– UL and DL bearer binding and verification, e.g., as in [11]: LGW
– IPv4 address assignment signalling via the mobility manager function (e.g., MME) to the UE: SGW-C, LGW
– IPv4 address allocation, IPv6 prefix allocation for the UE, and DHCPv4 (server and client) and DHCPv6 (client and server) functions. Option 1: the LGW acts as first-hop router to assign an IP address: LGW
– IPv4 address allocation, IPv6 prefix allocation for the UE, and DHCPv4 (server and client) and DHCPv6 (client and server) functions. Option 2: the mobility anchor or network edge acts as first-hop router, especially for a tunnel-based mobility solution: mobility anchor
Fig. 5. Handover in the H-GW split architecture: handover preparation with transfer of the local GW context, handover execution, forwarding of data, end marker (5), Path Switch Request Ack (6), and Release Resource (7); the UE hands over (UE-HO) between eNB/LGW nodes attached via transport switches to mobility anchor 1, mobility anchor 2, and the network edge towards the Internet, under OF control with S1-C and S5 interfaces.
Also related to traffic routing is the concept of Access Point Names (APNs).
Conventionally, the APN describes the external network which the user wants to reach
and is used to determine the PGW at the edge of the network. The APN might be a
criterion for the selection of the service functions located at the network edge.
One solution is to terminate the tunnels and locate the mobility anchors at the network
edge. But this may result in the need to establish a large number of (virtual)
connections between that point and all base stations. If the mobility anchors are located
distributed in the network topology, as shown in Fig. 5, there is a need to route the UL
traffic to the right network edge. This could be achieved by establishing a path for each
end-user connection.
A solution to reduce such dynamic signaling effort could be the use of predefined
routes: i.e., for uplink routing, the LGW may set header information according to the
APN indicating the external network. The header information fields used may be the
DSCP, an IPv6 flow label, or an Ethernet VLAN option. Figure 6 below shows a
solution along these lines. There are two external networks, the Internet via IP router 1
and the IMS via IP router 2. The mobility anchor, or a node between the mobility
anchor and the edge, routes the traffic according to the header information towards IP
router 1 or IP router 2. QoS requirements and markings may be used in addition; e.g.,
the IMS traffic might get a dedicated DiffServ Code Point.
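A minimal sketch of this marking step is given below; the APN names and DSCP values are illustrative assumptions, not values from the paper:

```python
# Sketch: the LGW marks uplink packets according to the APN so that
# predefined routes can steer them to the right external network
# without per-connection signaling.
from enum import Enum

class Marking(Enum):
    DSCP_BE = 0    # best effort, e.g. Internet traffic towards IP router 1
    DSCP_EF = 46   # expedited forwarding, e.g. IMS traffic towards IP router 2

APN_TO_MARKING = {
    "internet": Marking.DSCP_BE,   # "APN 1" in Fig. 6
    "ims": Marking.DSCP_EF,        # "APN 2" in Fig. 6
}

def mark_uplink(apn: str) -> int:
    """Return the DSCP value the LGW would set for this APN
    (an IPv6 flow label or a VLAN tag could be used instead)."""
    return APN_TO_MARKING[apn].value

print(mark_uplink("ims"))  # -> 46
```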
Fig. 6. Predefined uplink routes based on header marking: central control with MME, SGW (control), and SDN control (S11, S1-C, S5, OF control); the eNB/LGW optionally marks traffic according to "APN 1" or "APN 2"; the mobility anchor selects the path according to "APN 1" or "APN 2" towards IP router 1 (Internet) or IP router 2 (IMS).
The last concept described here is the support of UE idle mode. If UE idle mode is
supported by the network, the user plane path to the eNB will be interrupted (in the
EPC, at the SGW). In case downlink data arrives at the SGW, the MME pages the UE
and a new UP path is established (possibly towards a new eNB). An implementation
option within an SDN-controlled domain could be as follows: when a terminal moves
into idle mode, the SGW control and/or SDN control may decide to select a node
within the network to terminate the DL packet routing. For this, the SDN control is
informed about the IP address of the terminal. Based on its knowledge of the network
topology, the last known position of the terminal (i.e., the eNB address), and the target
network (APN) of this specific terminal's traffic, the SDN control calculates the node
which shall perform the downlink routing of UP packets. This could be the IP edge or a
mobility anchor. The SDN control updates the routing table in that node so that packets
destined for the terminal have no routing entry. The first arriving packets will then be
sent to the SDN control, which initiates the downlink data notification procedure
towards the SGW control and the MME in order to start paging the UE.
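The following sketch outlines this idle-mode option; all names and structures are hypothetical, intended only to make the control sequence concrete:

```python
# Sketch: a node without a routing entry punts the first downlink packet
# to the SDN control, which triggers paging via SGW-C and the MME.
from dataclasses import dataclass, field

@dataclass
class IdleModeController:
    # per-UE downlink routing entries installed at the terminating node
    routes: dict[str, str] = field(default_factory=dict)

    def ue_enters_idle(self, ue_ip: str) -> None:
        # remove the routing entry so the first DL packet is punted here
        self.routes.pop(ue_ip, None)

    def on_unmatched_packet(self, ue_ip: str) -> None:
        # no entry -> start the downlink data notification / paging procedure
        print(f"page UE {ue_ip}; establish a new UP path after the response")

    def on_paging_response(self, ue_ip: str, enb: str) -> None:
        self.routes[ue_ip] = enb   # install the new downlink path

ctrl = IdleModeController()
ctrl.ue_enters_idle("10.0.0.7")
ctrl.on_unmatched_packet("10.0.0.7")
ctrl.on_paging_response("10.0.0.7", "eNB-42")
```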
An evolved mobile network architecture has been outlined. Its characteristic is that
functions of the mobile-specific SGW and PGW are allocated to the base stations, the
SDN-controlled transport network, and the network edge service functions. This
reduces complexity by separating the network into independent domains for access,
transport, and services, reducing the mobile-network-specific functions, and building
on fixed-mobile convergence. All this contributes directly to TCO reduction.
The proposals can be mapped in part to the vertical GW split architecture and
materialize benefits claimed for that architecture as well: the SGW enhanced with
PGW/LGW control functions maps to a combined SGW-C and PGW-C controller and
can contribute to a control plane consolidation. The SDN switches used in the
horizontal function distribution to provide mobility-related functions map to some
extent to the SGW-U of the vertical GW split and can implement the efficient traffic
routing topologies needed for latency requirements, etc.
The horizontal distribution of GW functions has been chosen as the baseline with the
intent of keeping the different network layers as independent as possible. This does not
exclude a further vertical shift of LGW/PGW control functions to the control plane,
such as the described termination of the charging and policy control interfaces, while
still allowing the mobile-network-specific functions to be kept fully in the mobile
access network domain.
Related to this is the positional optimization of the LGW control, an obvious area for
further research activities. Since the LGW is co-located with the eNB, a single
signaling interface might be sufficient; an enhancement of the S1-C interface could
serve all needs for LGW control.
The vertical GW decomposition scheme outlined in Sect. 2 aims to redesign the
existing 3GPP functions using SDN principles. The newly proposed horizontal
function distribution introduces a bigger architectural change, as it replaces the 3GPP
bearer concepts in the core network by SDN-based flow handling.
In this way, a considerable amount of the current 3GPP UP functions is shifted to the
SDN-controlled transport plane. Hence, the future deployment of the presented
concepts may depend on the adoption of SDN principles in telecommunication
networks and on their capability to scale according to mobile network requirements.
Not all issues of the architecture developed here can be discussed and evaluated within
the scope of this paper, nor have they all been investigated yet. Under the Horizon 2020
program, the EU commission has called for 5G research projects [4]. Under the topic
"Advanced 5G Network Infrastructure for the Future Internet" [14], areas like radio
network architecture, convergence beyond the last mile, network management, network
virtualization, and software networks are in scope. It can be anticipated that these
projects, once established, will look deeper into details of the architecture partly
outlined here and proposed as a potential network evolution.
References
1. Technology Vision for the Gigabit Experience, NSN White paper June 2013. http://nsn.com/
futureworks-publications
2. Software-Defined Networking: The New Norm for Networks, white paper, ONF, April 2012
3. China Telecom: New Study on Requirements for Virtualization and Programmability of
Mobile Networks. 3GPP SA1 document S1-135118. http://www.3gpp.org/ftp/tsg_sa/WG1_
Serv/TSGS1_64_SanFrancisco/docs/
4. 5G Infrastructure Public Private Partnership, Vision and Mission. http://5g-ppp.eu/our-
vision/
5. 3GPP TS 23.401: GPRS Enhancements for E-UTRAN Access
6. OpenFlow™-Enabled Mobile and Wireless Networks, ONF Solution Brief, 30 September
2013. https://www.opennetworking.org/images/stories/downloads/sdn-resources/solution-
briefs/sb-wireless-mobile.pdf
7. Mueller, J., Chen, Y., Reiche, B., Vlad, V., Magedanz, T.: Design and implementation of a
carrier grade software defined telecommunication switch and controller. In: 1st IEEE/IFIP
International Workshop on SDN Management and Orchestration, Krakow, Poland, 9 May
2014. http://clayfour.ee.ucl.ac.uk/sdnmo2014/
8. ONF Wireless&Mobile Working Group Charter. https://www.opennetworking.org/working-
groups/wireless-mobile
9. Hahn, W., Sanneck, H.: Centralized GW control and IP address management for 3GPP
networks. In: Timm-Giel, A., Strassner, J., Agüero, R., Sargento, S., Pentikousis, K. (eds.)
MONAMI 2012. LNICST, vol. 58, pp. 13–27. Springer, Heidelberg (2013)
10. Basta, A., Kellerer, W., Hoffmann, M., Hoffmann, K., Schmidt, E.-D.: A virtual SDN-
enabled LTE EPC architecture: a case study for S-/P-Gateways functions. In: IEEE
SDN4FNS Workshop (2013)
11. 3GPP TS 23.203: Policy and charging control architecture
12. Liu, D., Deng, H.: China Mobile, Internet-Draft SDN Mobility, 08 July 2013. http://tools.
ietf.org/html/draft-liu-sdn-mobility-00
13. Hampel, G., Steiner, M., Bu, T.: Applying software-defined networking to the telecom
domain. In: Proceedings of the 16th IEEE Global Internet Symposium in Conjunction with
IEEE Infocom (2013)
14. EU commission: Advanced 5G Network Infrastructure for the Future Internet in ICT 2014
(2013). http://ec.europa.eu/research/participants/portal/desktop/en/opportunities/h2020/
topics/77-ict-14-2014.html
Self-Organizing Networks
A Post-Action Verification Approach
for Automatic Configuration Parameter Changes
in Self-Organizing Networks
1 Introduction
SONs are seen today as a key enabler for automated network management
in next generation mobile communication networks such as Long Term Evo-
lution (LTE) and LTE-Advanced. SON areas include self-configuration, self-
optimization and self-healing [1]. The first area typically focuses on the initial
Fig. 1. Interaction of the operator, the SON coordinator, and a SON function: coordination requests and responses, function configuration, and PM/CM/FM data.
learns the faultless behavior of the network. The gained knowledge is used at
a later point in time as a basis of comparison to identify significant deviations
from the usual behavior. There are several ways in which the required performance
data can be collected. For instance, each NE can monitor its own operation by
measuring several types of performance indicators and upload the results to the
Operations Support System (OSS) database. This stored data can then be fed
into the anomaly detector. In order to provide the corrective action, the diagnosis
part may learn the impact of different faults on the performance indicators. For
example, it may employ a scoring system that rewards a given action if it has
had a positive effect on the network.
Inspired by the ideas of anomaly detection and diagnosis, we have developed
the SON verification function. The purpose of our function is to assess the impact
of SON-induced CM changes and to provide a corrective action in case they have
caused an undesired or unusual network behavior. When action requests get
acknowledged, the SON coordinator delegates the task of observing the performance
impact of those changes to our verification function. For this purpose, the
coordinator sends a verification request message identifying the cells that have been
reconfigured by a SON function instance. Furthermore, the coordinator informs
our function about the area influenced by that reconfiguration as well as the time
during which the change has an effect on other running function instances. Based
on this information, our verification function determines where to look for an anomaly
and finds the changes responsible for an undesired behavior. In this document, we
classify the workflow of our function as post-action verification.
The rest of the paper is organized as follows. In Sect. 2 we give a general
overview of coordination and verification in SON. In Sect. 3 we present our SON
verification function, including all its main building blocks. In Sect. 4 we outline
the results of our experimental case study and describe the simulation system
used. Our paper concludes with the related work and a summary.
Since SON functions do not exchange context information, there will always be
uncertainty when a function performs a CM undo on its own. Typically, functions
are considered black boxes, which means that no one except the vendor
is able to make changes to the function itself (e.g., adding an interface for
such context information exchange).
3 Concept Overview
Our post-action verification approach involves a tight integration with SON coor-
dination. The SON verification function we propose analyzes the network perfor-
mance for acknowledged action requests of SON function instances. In case the
activity of a given instance causes an undesired network behavior, our function
requests a CM undo from the SON coordinator for the affected area. To achieve
its task, though, the SON verification function makes use of four helper func-
tions: (1) an anomaly level, (2) a cell level, (3) an area resolver, and (4) an area
analyzer function. The anomaly level function allows us to differentiate between
normal and abnormal cell KPI values. The cell level function creates an overall
performance metric of individual cells. The area resolver function defines the
spatial scope we are going to observe for anomalies. The area analyzer function
determines whether the cells within that scope are showing abnormal behavior,
identifies the CM changes responsible for it, and sends a request
to the SON coordinator to undo these changes. In the following, we describe
each function in detail and provide information on how they interact
with each other.
Just monitoring a given cell KPI and observing whether it is above or below
a threshold is not sufficient to determine whether it is showing anomalous values.
Usually, a supervised anomaly detection technique requires the training
and computation of a classifier that allows us to differentiate between a normal
and an abnormal state. In the KPI terminology used in this paper, such a
method allows us to compute a reference state from which a given cell
KPI may deviate. In this way, we can analyze whether a given KPI data set
conforms to an expected pattern or not.
The anomaly level function presented in this section is responsible for calculating
this difference. Its output is a KPI anomaly level, which depicts the deviation of a
KPI from its expectation. To do so, we standardize a KPI data set by taking the
z-score of each point. A z-score is a measure of how many standard deviations
a data point is away from the mean of a given KPI data set. Any data point
that has a z-score lower than, for example, −2 or higher than 2 is an outlier,
and likely to be an anomaly. The actual process of computing an anomaly level
consists of two steps.
First, we collect samples X1 , . . . , Xt during the verification training phase for
each KPI. Here, we use t to mark a training period. Depending on the granularity
at which we are able to get PM data from the network, a training period may
correspond to an hour, a day, a week, and so on. Second, we compute the z-score
for each KPI sample X1 , . . . , Xt , Xt+1 . Note that Xt+1 corresponds to the KPI
sample from the current (non-training) sampling period. Let us give an example
of how this may look when we observe the Handover Success Rate (HOSR)
of a given cell. Suppose that the cell has reported success rates of 98.1 %, 97.6 %,
and 98.5 % during the training phase. Furthermore, let us assume 95.2 % is the
result from the current sampling period. The normalized results of all four samples
would be 0.46, 0.21, 0.78, and −1.46. The HOSR anomaly level equals −1.46,
the z-score of the current sampling period.
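A minimal sketch of this computation is given below. Whether the authors use the population or the sample standard deviation is not stated; the sketch uses the sample standard deviation, which approximately reproduces the example's current-period z-score:

```python
import statistics

def anomaly_level(training: list[float], current: float) -> float:
    """Return the z-score of the current KPI sample over training + current."""
    samples = training + [current]
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)   # sample standard deviation
    return (current - mean) / stdev

hosr_training = [98.1, 97.6, 98.5]      # HOSR during the training phase (%)
# A strongly negative level marks the current period as a likely anomaly.
print(round(anomaly_level(hosr_training, 95.2), 2))
```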
Furthermore, it should be noted that during the verification training phase
our verification function only collects KPI data and does not verify any CM changes.
It is of high importance that the verification function is supplied with faultless
data during this phase, i.e., the network must show the expected behavior.
Research has shown that there are several other ways of designing an anomaly
level function. For instance, we may use a two-sample Kolmogorov-Smirnov test
to compare the distributions of two sets of KPI samples [8]. Another example is
the approach followed in [9] where an ensemble method is suggested to calculate
KPI degradation levels.
CM change(s). The output of this function is a target tuple (Σ, Ω). It consists of
a set Σ that includes the cells that have been reconfigured by a SON function
instance, and a set of cells Ω that have possibly been influenced by that
reconfiguration process. In this paper, we call Σ the CM change base and Ω the CM
change extension area. Together (i.e., Σ ∪ Ω) they compose the verification
area V, which is the spatial scope we observe for anomalies.
Our area resolver function performs the computation based on the impact
area of the SON function instance whose activity has triggered the verification
process. As mentioned in [3], the impact area of a SON function instance provides
us with information about which cells are affected after its execution. In a similar
manner as described in [10], we compute the CM change base by taking the
function area. We see the cells that have been reconfigured by a SON function
instance as the most prone to experiencing anomalies. Furthermore, we compute the
CM change extension area by taking the effect area and the safety margin. The
main reason why we consider the effect area is that it includes all cells that
may experience side effects after the execution of a SON function instance. For
instance, the load of a cell may change if the transmission power of
a neighboring cell has been adjusted. However, the effect area can differ from its
original definition: for example, due to an increased network density, the effect
area can be much larger than assumed. This is why we take the safety margin
into account as well.
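The sketch below illustrates the area resolver's computation; the cell identifiers are hypothetical, and treating Ω as disjoint from Σ is an assumption the paper does not state explicitly:

```python
# Sketch: the CM change base (sigma) is the function area; the extension
# area (omega) is the effect area plus the safety margin; their union is
# the verification area V observed for anomalies.
def resolve_area(function_area: set[str], effect_area: set[str],
                 safety_margin: set[str]) -> tuple[set[str], set[str]]:
    sigma = set(function_area)                     # CM change base
    omega = (effect_area | safety_margin) - sigma  # CM change extension area
    return sigma, omega

sigma, omega = resolve_area({"cell1"}, {"cell1", "cell2"}, {"cell3"})
verification_area = sigma | omega                  # V = sigma ∪ omega
print(sorted(verification_area))                   # ['cell1', 'cell2', 'cell3']
```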
Fig. 2. S3 overview: the MRO, TXP, and RET functions send execution requests to the SON coordinator and receive ACK/NACK responses; the SON verification function exchanges verification requests and responses with the coordinator and accesses CM and PM data.
have had a positive impact, i.e., it is not responsible for the degradation of that
cell. In our concept we call such a situation a verification collision. Our proposal
is to stepwise revert the changes made by the SON function instances. We first
undo the CM changes triggered by the instance whose impact time has been
completed most recently. Then, we observe the impact of the undo operation.
Should the result lead to an improvement but still indicate that the target
tuple is showing anomalous behavior, we continue by undoing the changes made
by the function instance that was executed next most recently. The undo
process terminates as soon as we fall back into the acceptable range, as defined by
cmin and cmax.
4 Evaluation
In this section we present the results of our experimental case study with the
presented SON verification approach. We also give an overview of the simulation
system we use for evaluation.
SON Function Engine. The SFE is a runtime environment for SON functions
which handles their communication and configuration. Every time the LTE network
simulator completes a round, i.e., exports new PM data, the SFE triggers
the monitoring phase of all SON functions. Should a CM change request be
generated, it is immediately forwarded to the SON coordinator. Based on the
coordinator's decision, the SFE deploys the requested CM parameter changes to
the simulator. For all simulation test runs, we employ our verification function
as well as three optimization functions: the MRO, Remote Electrical Tilt (RET),
and Transmission Power (TXP) functions. Note that the latter two are special
CCO types, as defined in [1]. The RET function adapts only the antenna tilt,
whereas TXP adjusts solely the transmission power within a cell.
Furthermore, an instance of MRO, RET, and TXP is running on each cell in
the network. The function and input area of every function instance is set to a
single cell. The impact time of every instance is set to one simulation round.
Each scenario consists of 5 test runs, each of which lasts 18 simulation rounds.
The 70 training rounds have been recorded beforehand. Each test run starts
with a standard setup, as defined by the network planning phase. Furthermore,
we define the effect area of each function instance to include only the cell on which
the instance is running, i.e., the effect area equals the function area. In addition,
we allow only one CM parameter per cell to be changed at the same time, i.e.,
only one function instance is allowed to adjust the configuration of a cell during
a simulation round. The initial function coordination priority P is set as follows:
$P_{RET} > P_{TXP} > P_{MRO}$.
Fig. 3. Handover success rate (HOSR) over 18 simulation rounds, with and without the SON verification function.
Fig. 4. Channel quality indicator (CQI) over 18 simulation rounds, with and without the SON verification function.
Fig. 5. Cell level over 18 simulation rounds, with and without the SON verification function.
mechanism. At some point in time, the coordinator starts to reject the requests of
a function instance if it has frequently been executed.
Fig. 6. Number of radio link failures (RLFs) over 18 simulation rounds, with and without the SON verification function.
Fig. 7. Cell level over 18 simulation rounds, with and without the verification collision resolver.
we force RET to change the tilt of one cell (denoted as A) and TXP to adjust
the transmission power of another one (denoted as B). Note that we make the latter
change in a way that negatively impacts the performance of cell C, which is a
common neighbor of cells A and B. The tilt change is done in round 4 on cell A,
whereas the transmission power change is triggered in round 5 on cell B.
Figure 7 shows the cell level at cell C, the cell that has triggered the creation
of the two target tuples. If we simply disable the verification collision resolving
capability, i.e., our function does not consider verification collisions, we undo
all changes made after simulation round 3. This means that we revert
a CM change that has had a positive impact (the tilt adjustment on cell A) as well as
one that has had a negative influence (the power change on cell B) on cell C.
As a result, cell C returns to the state before the tilt change was triggered, which
leads to a much lower cell level compared to the results from the test runs in which
we have enabled the verification collision resolver. In the latter case, our function
reverts only the changes made by the TXP function instance running on cell B.
5 Related Work
Within the SOCRATES project [4] ideas have been developed about how
undesired behavior can be detected and resolved in a SON-enabled network.
The authors introduce a so-called Guard function whose purpose is to detect
6 Conclusion
impact time, we are able to prevent positive CM changes (i.e., changes having a
positive effect on the network) from being undone.
Our future work will be devoted to further evaluation, including more KPIs
and more complex fault cases. We also plan to study alternative anomaly detection
and diagnosis techniques. The link between several CM undo operations
and performing a corrective action if they start repeating will also be one of
our future research topics.
References
1. Hämäläinen, S., Sanneck, H., Sartori, C. (eds.): LTE Self-Organising Networks
(SON): Network Management Automation for Operational Efficiency. Wiley,
Chichester (2011)
2. 3GPP: Telecommunication management; Self-Organizing Networks (SON) Policy
Network Resource Model (NRM) Integration Reference Point (IRP); Information
Service (IS). Technical specification 32.522 v11.7.0, 3rd Generation Partnership
Project (3GPP), September 2013
3. Bandh, T.: Coordination of autonomic function execution in Self-Organizing Net-
works. Ph.D. thesis, Technische Universität München, April 2013
4. Kürner, T., Amirijoo, M., Balan, I., van den Berg, H., Eisenblätter, A., et al.: Final
Report on Self-Organisation and its Implications in Wireless Access Networks.
Deliverable d5.9, Self-Optimisation and self-ConfiguRATion in wirelEss networkS
(SOCRATES), January 2010
5. Tsagkaris, K., Galani, A., Koutsouris, N., Demestichas, P., Bantouna, A., et al.:
Unified Management Framework (UMF) Specifications Release 3. Deliverable d2.4,
UniverSelf, November 2013
6. Romeikat, R., Sanneck, H., Bandh, T.: Efficient, dynamic coordination of request
batches in C-SON systems. In: IEEE Vehicular Technology Conference (VTC
Spring 2013), Dresden, Germany, June 2013
7. Szilágyi, P., Nováczki, S.: An automatic detection and diagnosis framework for
mobile communication systems. IEEE Trans. Netw. Serv. Manag. 9(2), 184–197
(2012)
8. Nováczki, S.: An improved anomaly detection and diagnosis framework for mobile
network operators. In: 9th International Conference on Design of Reliable Com-
munication Networks (DRCN 2013), March 2013
9. Ciocarlie, G., Lindqvist, U., Nitz, K., Nováczki, S., Sanneck, H.: On the feasibility
of deploying cell anomaly detection in operational cellular networks. In: IEEE/IFIP
Network Operations and Management Symposium (NOMS 2014), May 2014
10. Tsvetkov, T., Nováczki, S., Sanneck, H., Carle, G.: A configuration management
assessment method for SON verification. In: International Workshop on Self-
Organizing Networks (IWSON 2014), Barcelona, Spain, August 2014
11. 3GPP: Evolved Universal Terrestrial Radio Access (E-UTRA); Physical layer pro-
cedures. Technical specification 36.213 v12.1.0, 3rd Generation Partnership Project
(3GPP), March 2014
12. Frenzel, C., Tsvetkov, T., Sanneck, H., Bauer, B., Carle, G.: Detection and reso-
lution of ineffective function behavior in self-organizing networks. In: IEEE Inter-
national Symposium on a World of Wireless Mobile and Multimedia Networks
(WoWMoM 2014), Sydney, Australia, June 2014
Operational Troubleshooting-Enabled
Coordination in Self-Organizing Networks
1 Introduction
The Self-Organizing Network (SON) paradigm is an automated network opera-
tions approach which provides self-configuration, self-optimization, and self-
healing capabilities for next generation mobile communication networks including
Long Term Evolution (LTE) [6]. This is achieved by a collection of autonomous
SON functions, each observing Performance Management (PM), Fault Manage-
ment (FM), and Configuration Management (CM) data and changing network
parameters in order to achieve a specific operator given objective or target [2],
e.g., the reduction of the number of Handover (HO) failures between two network
cells. However, the SON function objectives are connected and, so, the SON func-
tions can interfere with each other at run-time, e.g., by contrary adjustments of
the same network parameters. Such conflicts hamper seamless SON operations
and can lead to inferior performance; hence, they are prevented before, or resolved
after, they happen by SON coordination [3]. Among the numerous approaches to
SON coordination, run-time action coordination [11] is very common. It requires
all SON functions to request permission to change a network parameter from
a SON coordination function. This function then determines conflicting requests,
e.g., contrary changes of the same network configuration parameter, computes a
set of non-conflicting SON functions that can be executed at the same time in the
same area, and triggers their execution. Thereby, the conflict resolution may be
based on operator priorities [3].
There can be network conditions and situations in which a SON function
is not able to achieve its targets [5]. Although the reasons for this are
often outside the objective scope of the particular SON function, the problems
that cause the function failure can often be resolved by another SON function.
For instance, on the one hand, a Mobility Robustness Optimization (MRO)
function running in an area with a coverage hole might not be able to reduce
HO failures; on the other hand, the coverage problem can be handled
by a Coverage and Capacity Optimization (CCO) function. The main problem
of an ineffective SON function is that it may affect other SON functions due
to the coordinated execution. An example of such a negative effect on other
SON functions is network monopolization: a high-priority function is constantly
running because it encountered an unresolvable problem and thereby blocks the
execution of other functions, leading to a deadlock. In such problem situations, a
SON function might need assistance from another SON function or the operator.
Currently, the operation of SON functions is usually not monitored, thus leaving
such problems unnoticed and making the affected SON functions valueless.
In order to overcome these problems, we have sketched a preliminary concept
in [5] that is able to detect conditions in which a SON function cannot achieve
its objectives and that mitigates this problem. Thereby, a SON function, namely
the SON Operational Troubleshooting (SONOT) function, is proposed that can
analyze the problem using a network-wide view and determine possible remedy
actions, e.g., blocking functions that cannot achieve their objectives as well as
triggering other functions that might resolve the problem.
In this paper, we present an approach for troubleshooting a SON that is based
on the concept presented in [5]. In contrast to the previous work, we discuss the
approach and its design aspects, especially the detection of ineffective functions,
in much more depth, including a detailed evaluation of the approach using an
LTE network simulator. Additionally, we have extended the initial concept with
the ability to exploit SON functions as probes, thereby increasing the accuracy
and decreasing the delay of problem detection. We present simulation results
which show the positive impact of the approach on network performance.
Fig. 1. The SONOT function and its interaction with other SON functions: alarm generation feeds alarm analysis and remedy; the SONOT function comprises a monitoring component, an alarm resolver, and a context data DB, is governed by knowledge and an operator policy, consumes PM/FM/CM data, and issues resolution, escalation, or network parameter actions.
function has run into some problematic state which it cannot handle by itself, and it
raises an alarm. Second, this alarm is evaluated and a corrective action is taken
in the alarm analysis and remedy step.
Alarms may be generated by two different sources: the normal SON functions
and the SONOT function. In the first case, each function is extended with an
alarming component which raises an alarm if it encounters a problem during
execution. In the second case, the SONOT function, more specifically its monitoring
component, continuously analyses the network and generates an alarm if
it encounters undesired behavior; this is controlled by the operator through a policy.
The alarms trigger the analysis and remedy of the problem, which produces a
corrective action, e.g., a coordination action, a management action, or an escalation
action. This process is based on a policy which captures the operational
experience of the operator and the SON vendor. During execution, the SONOT
function can access comprehensive contextual information about the operational
status of the network, e.g., the current date and time, CM data like the network
topology, PM data like Key Performance Indicators (KPIs), and FM data like
technical alarms from the Network Elements (NEs).
network states and allows time-series-based analyses of the system behavior.
Hence, the latter can monitor the impact of network parameter changes, analyze
performance trends in the network, and make predictions about the ability
of a SON function to satisfy its objective. For instance, a history-based method
can detect SON functions that are continuously modifying parameters without
performance improvements, i.e., configuration oscillations.
Alarming Component. The primary idea behind the alarm generation within
the SON functions is to reuse their existing, sophisticated monitoring and analysis
capabilities, i.e., to exploit them as probes. On the one hand, this avoids the
collection and transfer of PM, FM, and CM data that is used by the SON
function anyway, thus leading to less management overhead in the network and
less complex processing of the data. On the other hand, SON functions are
usually self-contained entities which are provided without any specification of
their internal algorithms, i.e., they are black boxes. Hence, the function itself may
be the only entity which can directly detect a problem during the execution of
the algorithm.
Usually, a SON function is designed for the detection of a specific problem
based on KPIs from the network and the determination and execution of cor-
rective actions in the form of changes to network parameters. Nevertheless, it is
often able to also detect problematic situations that it cannot correct by itself.
Thereby, state-based detection approaches should be preferred because they are less complex. For instance, a SON function might detect a problem from anomalous PM data from an NE that is not foreseen by the algorithm, so that it runs into an exception. An example of such an error state is when the MRO function observes a large number of too late and too early HOs, or when it attempts to set the value of an HO parameter outside a limit defined by the operator.
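A state-based check of this kind can be very small. The sketch below mimics such an MRO-style self-check; the threshold names, the operator limit, and the alarm record format are hypothetical.

    def mro_self_check(too_late, too_early, total_hos, offset_candidate,
                       max_problem_rate=0.05, offset_limits=(-6.0, 6.0)):
        """State-based self-check of an MRO-like function: return an alarm
        record when the HO problem rate is anomalously high or when the
        algorithm would push the HO offset outside the operator limit;
        return None otherwise so that the function simply keeps operating."""
        rate = (too_late + too_early) / max(total_hos, 1)
        if rate > max_problem_rate:
            return {"type": "MROAlarm", "cause": "excessive too late/early HOs",
                    "rate": rate}
        lo, hi = offset_limits
        if not (lo <= offset_candidate <= hi):
            return {"type": "MROAlarm",
                    "cause": "HO offset outside operator limit",
                    "value": offset_candidate}
        return None

    print(mro_self_check(too_late=40, too_early=30, total_hos=1000,
                         offset_candidate=7.5))
    # -> alarm: excessive too late/early HOs (rate 0.07)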
However, in some cases history-based detection methods may also be possible
to use. A SON function can learn the effects of network parameter changes on
network performance and, if the reaction differs significantly from the learned
behavior, it raises an alarm. The reasons for a significant deviation from the normal network behavior can be numerous, e.g., a severe hardware fault.
The alarming component extends a SON function by allowing it to inform
the SONOT function about an issue through an alarm. However, if an alarm is
raised, the SON function should try to continue its operation. This is because a
SON function typically has a limited view on the network and, therefore, is not
able to make informed decisions about the remedy of the problem.
Monitoring Component. The advantage of external monitoring through the SONOT function is its network-wide view regarding PM, FM, and CM data. SON functions often focus on single NEs and
solely monitor data that is necessary for their task due to time and memory con-
straints. In contrast, the SONOT function can collect and accumulate a broad
range of data that is not accessible to regular SON functions, e.g., the perfor-
mance of a group of NEs in a specific area, system-level KPIs or Minimization
of Drive Test (MDT) [6] data.
The disadvantage of external monitoring is that the SONOT usually has no
information about the algorithm or the internal status of the SON function.
Hence, its analysis must always be based on assumptions about the logic of the
functions. If these are not correct then this can lead to false inferences and,
consequently, false alarms or unnoticed problems. For instance, a continuously running CCO function might indicate a coverage hole produced by a broken NE. Conversely, if a Mobility Load Balancing (MLB) function is executed often, this does not necessarily indicate a problem because MLB is heavily dependent on user behavior, which might change often. Therefore, the monitoring component needs to be configured for a concrete SON. As depicted in Fig. 1, this configuration is given in the form of a policy.
The SONOT function should employ complex, history-based approaches since
the indirect identification of problems requires sophisticated, knowledge-based
inference mechanisms. In this way, the monitoring component can detect oscilla-
tions produced by an ineffective SON function through a statistical, time-series
analysis. However, notice that even complex detection approaches need to be
configured for a specific SON. For example, oscillations can also be caused by a
SON function that attempts to find an optimal value for a network parameter
using some hill climbing algorithm.
Alarm Resolver. The alarm resolver component performs an analysis of the alarms and determines suitable countermeasures. For example, the analysis of a CCO function alarm that indicates an unrecoverable coverage hole by the alarm resolver can produce
an equipment failure as the root cause. As a result, the SONOT function blocks
the execution of the CCO and triggers a self-healing Cell Outage Compensa-
tion (COC) function. For a comprehensive analysis and well-informed decision
making, the SONOT function can draw on contextual information about the
current status of the network. Thereby, it is possible to employ simple reason-
ing approaches like production or fuzzy rule systems, or sophisticated Artificial
Intelligence (AI) systems like influence diagrams or planners [10].
Remedy Actions. The actions that the alarm resolver might execute can be classified into three categories (a small dispatch sketch follows the list):
– SON coordination actions are directly interfering with the execution of SON
functions. Examples are the blocking or preempting of the execution of a SON
function, or the active requesting of the execution of a SON function. Thereby,
it is also imaginable to not just request the execution of a single function but
to carry out a complex workflow with several functions.
– SON management actions are changing the configuration of the SON system
itself, e.g., by changing the configuration of the SON functions such that their
objectives change.
– Escalation actions are triggered if the SON troubleshooting function cannot
or should not perform an action. For instance, an alarm can be escalated to a
human operator as a trouble ticket for further inspection. Then, the operator
can decide for a remedy. By utilizing machine learning techniques, it is possible
to extend the expert knowledge of the SON troubleshooting function based
on the operator response.
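As a rough illustration of how these three categories could be dispatched by an alarm resolver, consider the following Python sketch; the policy table, root-cause names, and remedy strings are hypothetical and merely mirror the CCO/COC example given above.

    from enum import Enum

    class Action(Enum):
        COORDINATION = 1   # block/preempt/request SON function executions
        MANAGEMENT = 2     # reconfigure the SON system itself
        ESCALATION = 3     # hand over to the human operator (trouble ticket)

    # Hypothetical resolution policy mapping diagnosed root causes to actions.
    POLICY = {
        "equipment_failure":    (Action.COORDINATION, "block CCO, trigger COC"),
        "ineffective_function": (Action.MANAGEMENT,   "relax function objective"),
        "unknown":              (Action.ESCALATION,   "open trouble ticket"),
    }

    def resolve(root_cause):
        category, remedy = POLICY.get(root_cause, POLICY["unknown"])
        return category, remedy

    print(resolve("equipment_failure"))
    # -> (Action.COORDINATION, 'block CCO, trigger COC')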
3 Evaluation
– MRO function: Its objective is to minimize HO problems, e.g., too late and
too early HOs [6], by altering the HO offset parameter between a pair of cells.
– CCO(RET) function: As a CCO function, it aims to maximize a cell’s coverage
and capacity. It adapts the antenna tilt in order to minimize interference.
– CCO(TXP) function: It is also a CCO function which adapts the transmission
power.
    rule "RETAlarm+TXPAlarm->Reset"
    when
        $alarm : RETAlarm()
        $txp : SonFuncInst() from new SonFuncInst(TXP(), $alarm.targetCell)
        eval(context.isAlarmInCurrentRound($txp))
        $neighbor : Cell() from context.getNeighborsOfCell($alarm.targetCell)
        $reset : SonFuncInst() from new SonFuncInst(SonFunction.RESET(), $neighbor)
    then
        sfe.requestSonFunction($reset);
        sonco.addAlarm($alarm, $reset);
    end

List. 1. Resolution policy rule triggering a reset function when CCO(RET) and CCO(TXP) alarms occur together.
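In words, the rule fires when a CCO(RET) alarm is accompanied by a CCO(TXP) alarm on the same target cell in the current round; it then requests a reset function for the cells neighboring the alarm's target cell and records the association between the alarm and the requested action (cf. the scenario "Sleeping Cells" in Sect. 3).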
Fig. 2. Comparison between SONOT function and SON coordination with dynamic priorities for scenario "Inability of CCO(RET)" (plots of antenna tilt, TX power, number of Radio Link Failures, and throughput in GBit/s over simulation rounds 1-20, each with and without SONOT).
if it is not doing any good, leading to the case that the function actually decreases performance. This trend can also be seen in the performance measurements from all cells affected by the coverage hole, as shown in Fig. 3(a).
Fig. 3. Overall performance with the SONOT function and SON coordination with dynamic priorities (panels (a)-(c): number of Radio Link Failures and throughput in GBit/s over simulation rounds 1-20, with and without SONOT).
constant and do not improve although the MRO function is always changing the
HO offsets. In the SONOT case, however, the SONOT function detects in round
4 that the performance drop is not related to mobility and generates an alarm.
This triggers the CCO(TXP) function which changes the transmission power
similar to the previous scenario, leading to a significant decrease of the RLFs
and an increase of the throughput. In contrast to that, the coordination with dynamic priorities behaves very similarly to the previous scenario and produces a non-optimal situation.
Scenario “Sleeping Cells”. The third scenario investigates the case where
both the CCO(RET) function and the CCO(TXP) function are not able to close
a coverage hole caused by a sleeping cell and, therefore, an alarm is generated
which triggers a reset function. Sleeping cells are a serious problem in mobile
networks since they are performing poorly without generating any failure alarms
[11]. Hence, they often remain undetected for hours or even days. Sleeping cells
can be caused by software failures, in which case the remedy can be the reset
of the cell’s software configuration. In this scenario, two sleeping cells, cause
by a software upgrade, produce a coverage hole. Moreover, the SON function
priorities P are set as follows: PRET > PTXP > PMRO . The resolution policy
triggers a reset function, i.e., the restoration of a previous software version, if an
CCO(RET) alarm and a CCO(TXP) alarm occur together (cf. List. 1).
The results of this experiment are shown in Fig. 3(c). In case the SONOT
function is employed, the same positive impact on the performance as shown
in the previous two scenarios can be observed. Since the CCO(RET) and the
CCO(TXP) functions are not able to achieve their objective, an alarm is gener-
ated in round 3 which leads to the execution of the reset function that restores the
sleeping cells. In the coordination with dynamic priorities case, the CCO(RET)
and CCO(TXP) functions are continuously adapting the tilts and transmission
powers of the neighbors of the two sleeping cells which does not lead to the
desired effect.
4 Related Work
5 Conclusion
This paper presented the details of a new operational troubleshooting approach
for a Self-Organizing Network (SON). It allows the detection of situations in
which a SON function is not able to achieve its objectives. This monitoring is
performed by the SON functions themselves as well as the monitoring component of the SON Operational Troubleshooting (SONOT) function. This approach allows, on the
one hand, the exploitation of the sophisticated detection methods employed by
the SON functions and, on the other hand, the utilization of complex algorithms
and network-wide data in the SONOT function. Based on the detected problems,
the alarm resolver can determine possible countermeasures like the preemption
and triggering of SON functions. In simulations it has been shown that the
presented approach remarkably improves the network performance in terms of
Key Performance Indicators (KPIs) like Radio Link Failures (RLFs) and cell
throughput, and outperforms traditional coordination approaches like a batch-
based coordination scheme with dynamic priorities.
References
1. 3GPP: Telecommunication management; Fault Management; Part 1: 3G fault man-
agement requirements. Technical specification 32.111-1 v12.0.0, 3rd Generation
Partnership Project (3GPP), June 2013
2. 3GPP: Telecommunication management; Self-Organizing Networks (SON) Policy
Network Resource Model (NRM) Integration Reference Point (IRP); Information
Service (IS). Technical specification 32.522 v11.7.0, 3rd Generation Partnership
Project (3GPP), September 2013
3. Bandh, T.: Coordination of autonomic function execution in Self-Organizing Net-
works. Ph.D. Thesis, Technische Universität München, April 2013
4. Ben Jemaa, S., Frenzel, C., Dario, G., et al.: Integrated SON Management -
Requirements and Basic Concepts. Deliverable d5.1, SEMAFOUR Project, Decem-
ber 2013
5. Frenzel, C., Tsvetkov, T., Sanneck, H., Bauer, B., Carle, G.: Detection and reso-
lution of ineffective function behavior in self-organizing networks. In: Proceedings
of 15th IEEE International Symposium on a World of Wireless, Mobile and Mul-
timedia Networks (WoWMoM 2014), Sydney, Australia, June 2014
6. Hämäläinen, S., Sanneck, H., Sartori, C. (eds.): LTE Self-Organising Networks
(SON): Network Management Automation for Operational Efficiency. Wiley,
Chichester (2011)
7. JBoss Community: Drools Expert. http://www.jboss.org/drools/drools-expert
8. Kousaridas, A., Nguengang, G.: Final Report on Self-Management Artefacts.
Deliverable d2.3, Self-NET Project, April 2010
9. Romeikat, R., Sanneck, H., Bandh, T.: Efficient, dynamic coordination of request
batches in C-SON systems. In: Proceedings of IEEE 77th Vehicular Technology
Conference (VTC Spring 2013), Dresden, Germany, pp. 1–6, June 2013
10. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn.
Prentice Hall, Upper Saddle River (2003)
11. Schmelz, L.C., Amirijoo, M., Eisenblaetter, A., Litjens, R., Neuland, M., Turk, J.:
A coordination framework for self-organisation in LTE networks. In: Proceedings
of 12th IFIP/IEEE International Symposium on Integrated Network Management
(IM 2011) and Workshops, Dublin, Ireland, pp. 193–200, May 2011
12. Szilágyi, P., Nováczki, S.: An automatic detection and diagnosis framework for
mobile communication systems. IEEE Trans. Netw. Serv. Manage. 9(2), 184–197
(2012)
13. Tsagkaris, K., Galani, A., Koutsouris, N., et al.: Unified Management Framework
(UMF) Specifications Release 3. Deliverable d2.4, UniverSelf Project, November
2013
14. Tsvetkov, T., Nováczki, S., Sanneck, H., Carle, G.: A post-action verification app-
roach for automatic configuration parameter changes in self-organizing networks.
In: 6th International Conference on Mobile Networks and Management (MONAMI
2014), Würzburg, Germany, September 2014
Anomaly Detection and Diagnosis for Automatic
Radio Network Verification
1 Introduction
Modern radio networks for mobile broadband (voice and data) are complex and
dynamic, not only in terms of behavior and mobility of users and their devices,
but also in terms of the many elements that make up the network infrastructure.
Network degradations that cause users to experience reduced or lost service
could have serious short- and long-term impact on the operator’s business, and
must therefore quickly be resolved as part of network management. Effective
management of complex and dynamic networks requires some form of automated
1.2 Contributions
This paper proposes a novel SON verification framework using anomaly detection
and diagnosis techniques that operate within a spatial scope larger than an indi-
vidual cell (e.g., a small group of cells, a geographical region like a town section, an
existing administrative network domain, etc.). The aim is to detect any anomaly
which may point to degradations and eventually faults caused by an external or
system-internal condition or event. CM changes (which reflect actions, e.g., by
human operators or SON functions or optimization tools), and KPIs (which reflect
the impact of the SON actions on the considered part of the network) are analyzed
together to characterize the state of the network and determine whether it was
negatively impacted by CM changes. Main contributions include:
Topic modeling is applied to the training KPI data from all the cells in scope,
leading to the construction of topic modeling clusters (given that we apply topic modeling to KPI data, for clarity, we refer to topics as clusters), which serve as indicators of the state of the network and are further classified by semantic interpretation as either
normal or abnormal. Using the labeled clusters, the KPI data under test (i.e.,
subject to detection) leads to the mixture of weights for the different clusters
indicating the overall state of the network. For real deployment, testing data
can span any time or geographic scope (as a subset of the training geographic
scope), and may or may not overlap with the training data. This component is
well suited for this application domain, as we do not have any a priori notion
of what the different network states could be or how many types of states there
could be. Here, we generalize the concept of topic modeling to identify the states
(topics) of a system (i.e., the radio network). Furthermore, using semantic infor-
mation, states can be interpreted as either normal or abnormal, enabling the
detection of degradation in the system's state (see Sect. 2.3). Moreover, incremental
topic modeling can be used to capture new states over time [8].
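A minimal sketch of this aggregation step: assuming per-cluster mixture weights and normal/abnormal labels are available, the abnormal portion of the network (as plotted later in Fig. 5) can be read off as the total weight of abnormal clusters. All names here are illustrative.

    def abnormal_portion(mixture_weights, cluster_labels):
        """Given per-cluster mixture weights for one timestamp and each
        cluster's normal/abnormal label, return the fraction of the network
        estimated to be in an abnormal state (illustrative aggregation)."""
        total = sum(mixture_weights.values())
        bad = sum(w for c, w in mixture_weights.items()
                  if cluster_labels.get(c) == "abnormal")
        return bad / total if total else 0.0

    # Example: three states discovered by topic modeling, one labeled abnormal.
    weights = {"state0": 0.7, "state1": 0.25, "state2": 0.05}
    labels = {"state0": "normal", "state1": "normal", "state2": "abnormal"}
    print(abnormal_portion(weights, labels))  # 0.05 -> 5% of cells abnormal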
In case of abnormal behavior, the diagnosis component is triggered. The
MLN inference is achieved by using CM change history information or any other
external information in the form of an event sequence, along with the MLN rules and
their associated weights. The MLN rules are specific to this application domain
and are typically generated based on human expert knowledge. Rule weights can
be estimated or learned during operation, as new cases arise.
Figure 1 presents the detailed verification approach:
– Initially, for a given period of time, the KPI measurements of the group of
cells/network in scope are selected as the training dataset (D1) for generating
the topic modeling.
– The topic modeling clustering (M1) is applied to the training dataset (D1).
– The result of (M1) is a set of clusters representing the different states in which
the network can exist (D2). Each cluster has an associated weight correspond-
ing to the percentage of the network in the state represented by that cluster.
– Given the set of clusters (D2), the semantics of the KPIs are used to further
interpret the semantics of each cluster (M2).
– The result of (M2) is a set of labeled clusters that indicate if the network state
is either normal or abnormal (D3).
– The labeled clusters (D3) and the KPI measurements under test for the group of
cells in scope are used in a testing phase against the clusters (M3) to generate
the weight mixture indicating how normal or abnormal the network is.
– The result of (M3) is the weight mixture (D4) indicating the current state of
the network.
– The diagnosis component is triggered only if the cells in scope are abnormal.
Principal Components Analysis (PCA) (M4) is applied to the training dataset
(D1) to generate similar groups of cells. The result of (M4) contains groups of
cells (D5) that behave similarly; MLN inference is applied primarily on these
groups. Cell grouping is used to reduce MLN complexity.
– The groups of cells along with the CM change information (D5), external
events (e.g., weather event feeds) and MLN rules (generated either manually
based on human expert knowledge or automatically from other sources) are
used to generate the diagnosis information based on the MLN inference (M5).
– The result of (M5) is the diagnosis information for the abnormal cells within
the scope (D6).
Fig. 1. Overall approach of the SON verification method applied to the group of cells in scope (D1: KPI data for cells in scope under training; M1: generate topic modeling clusters using HDP; D2: clusters representing different network states; M2: interpret the semantics of the different clusters; D3: labeled clusters representing network states; M4: generate groups of similar cells using PCA; D5: groups of cells for MLN inference, together with CM changes and weather events; M5: generate diagnosis information using MLNs and MLN rules; D6: diagnosis information for anomalous cells). Data is depicted in blue and methods in pink. The dashed lines indicate that an event is triggered in the presence of new evidence/data.
solutions exist for approximating the inference, including variational inference [4]
and Gibbs sampling [10]. We considered Gibbs sampling.
The inputs to LDA models are a collection of vectors containing feature values
derived from KPI data of all cells. The outputs of LDA models are a codebook
for K clusters, and a set of cluster mixture weights θ for each timestamp m.
By default, LDA can only be applied to a single KPI feature (i.e., it considers
only one KPI feature value from cells in the network, and does clustering based on
it). However, our framework needs to consider multiple KPIs as a whole to deter-
mine network status. We extend the model to accommodate multiple features by
replacing the single feature w with multiple features wi , and associate each feature
to a codebook βi . We denote this model as multi-variate LDA (m-LDA). Note that
each cluster contains a histogram for every feature (KPI). The histogram repre-
sents the expected histogram of that feature value for the given scope (network
or group of cells) under one cluster. The major difference between LDA and HDP
is that for LDA both the number of topics at each timestamp and the number of
profiles are fixed, while for HDP they are automatically determined. The inputs
and outputs for HDP are the same as for LDA.
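The following sketch only illustrates the shape of the m-LDA outputs described above (one codebook beta_i per KPI feature, each cluster holding one expected histogram per feature, and a mixture-weight vector theta per timestamp); the dimensions and the random initialization are placeholders, not the model's learned values.

    import numpy as np

    K, n_kpis, n_bins, n_timestamps = 4, 3, 10, 24

    # One codebook per KPI: K clusters x n_bins expected-histogram entries.
    codebooks = [np.random.dirichlet(np.ones(n_bins), size=K)
                 for _ in range(n_kpis)]
    # One mixture-weight vector theta over the K clusters per timestamp m.
    theta = np.random.dirichlet(np.ones(K), size=n_timestamps)

    # Cluster 2's expected histogram for KPI feature 1 (a proper distribution):
    expected_hist = codebooks[1][2]
    assert np.isclose(expected_hist.sum(), 1.0)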
where τ1 , τ2 ∈ [0, 1] are the thresholds that determine the classification and are
empirically determined such that the quality of the (VERY) GOOD clusters is
high (i.e., τ1 = 0.05 and τ2 = 0.15). A cluster that has at least one BAD (any
type) histogram is considered an abnormal centroid; otherwise it is normal.
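A small sketch of this labeling step: we assume here that each per-KPI histogram of a cluster has already been summarized by a degradation score in [0, 1] (the exact score definition accompanies the thresholds in the paper), while the tau values and the at-least-one-BAD rule follow the text above.

    def label_histogram(bad_mass, tau1=0.05, tau2=0.15):
        """Threshold-based quality label for one KPI histogram, where
        bad_mass is an assumed degradation score in [0, 1]."""
        if bad_mass <= tau1:
            return "VERY GOOD"
        if bad_mass <= tau2:
            return "GOOD"
        return "BAD"

    def label_cluster(histogram_bad_masses):
        """A cluster (centroid) is abnormal iff at least one of its per-KPI
        histograms is labeled BAD; otherwise it is normal."""
        labels = [label_histogram(m) for m in histogram_bad_masses]
        return "abnormal" if "BAD" in labels else "normal"

    print(label_cluster([0.02, 0.10, 0.30]))  # -> abnormal (third KPI is BAD)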
Each clause is an expression in first-order logic. MLNs search for the most
likely explanation for the knowledge base in terms of the assignments of variables
to predicate arguments. MLNs accumulate the probabilities that each clause is
true given a particular variable assignment. One can also query the knowledge
base and ask for the probability that a specific predicate is true under a specific
variable assignment or ask how often a predicate is true in general.
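To make these log-linear semantics concrete, the following self-contained Python toy enumerates possible worlds and computes P(world) proportional to exp(sum of w_i * n_i(world)), the standard MLN definition [13]. The rule, weights, and group names are invented for illustration only and do not reflect the PCE input actually used here.

    import itertools, math

    groups = ["G39", "G46"]
    evidence = {("anomaly", g): True for g in groups}   # observed anomalies
    evidence[("cm_change", "G39")] = True               # CM change only in G39
    evidence[("cm_change", "G46")] = False

    def world_score(assignment):
        """exp(sum of weights of satisfied ground clauses) for one world.
        Clause 1 (w=2.0): cm_change(g) & anomaly(g) => degraded_by_cm(g).
        Clause 2 (w=0.5): prior against degradation, !degraded_by_cm(g)."""
        score = 0.0
        for g in groups:
            cm = evidence[("cm_change", g)]
            an = evidence[("anomaly", g)]
            deg = assignment[g]
            if (not (cm and an)) or deg:    # implication satisfied
                score += 2.0
            if not deg:
                score += 0.5
        return math.exp(score)

    # Query P(degraded_by_cm(g)) by enumerating the hidden predicate's worlds.
    worlds = [dict(zip(groups, vals))
              for vals in itertools.product([False, True], repeat=len(groups))]
    Z = sum(world_score(w) for w in worlds)
    for g in groups:
        p = sum(world_score(w) for w in worlds if w[g]) / Z
        print(f"P(degraded_by_cm({g})) = {p:.3f}")

Running this gives a high probability for G39 (where a CM change coincides with the anomaly) and a low one for G46, which is exactly the kind of discrimination the diagnosis component needs.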
Figure 2 presents an example of a PCE input specification, which includes three types (sorts), called Time_t, Group_t and Mag_t. In particular, the Group_t sort refers to PCA-derived groups of cells. The const declaration defines 486 cell
groups that we can reason over. We also have anomaly conditions derived from
the network-level anomaly detection component described above. In the MLN
excerpt here, we see two out of several hundred anomaly conditions. These two
declare that anomalies were observed in groups G39 and G46 at time interval T2. Finally, we see three add statements. These are rules that link weather,
anomaly, and configuration information with hypotheses about the reasons for
network degradation. The final statements in the PCE input are ask statements
that query the state of the network for the probabilities of different hypotheses.
When applying MLNs to temporal data, decomposition of the timeline into
intervals or atomic units (individual timestamps or samples) is generally useful.
In some cases, rules might be needed to define temporal order, especially for
attempts to represent causality and delayed response to disruptive events. Time
(or sample number) can be applied as an extra argument in certain predicates.
In general, MLN solution times depend polynomially on the number of observa-
tions in the data, but will grow exponentially with the number of arguments to
predicates. Therefore, the argument count should be kept low for all predicates
and, if necessary, the problem should be decomposed to maintain a low argument
count for all predicates used by the knowledge base.
When specifying an MLN, we normally start with rules and weight estimates
that represent a subject matter expert’s (SME’s) understanding of causes and
effects that determine network behavior. Moreover, the weights associated with
the MLN rules can be learned over time to provide a more accurate probabilistic
model for the observed situations. Several weight learning schemes exist, most of
which take the form of maximum-likelihood estimation with respect to a training
set. As more relevant training data is available, the MLN weights can be modified
to tune the probabilistic knowledge base to generate the best answers for a given
situation. Realistically, a SME might not be able to account for all possibilities
in advance. As unexpected cases arise, there will be a need to add rules during
the training phase to accommodate these cases. The new rules’ weights will also
need to be trained. Weight adjustment can be an ongoing process, but weight
stability will be the primary indicator that the MLN is working as expected.
3 Performance Evaluation
This section analyzes the performance of our framework applied to a real network
dataset. The experimental corpus consisted of KPI and CM data for approxi-
mately 4,000 cells, collected from 01/2013 to 03/2013. The KPIs have differ-
ent characteristics; some of them are measurements of user traffic utilization
(e.g. downlink or uplink data volume or throughput), while others are measure-
ments of call control parameters (e.g. drop-call rate and successful call-setup
rate).
time is shown in Fig. 4, which is generated by running the HDP model of Fig. 3
on all cells from the whole time span.
Fig. 5. Portion of the network in abnormal state for February 2013 (timestamps from 02/01/2013 to 02/22/2013).
The diagnosis results are shown in Fig. 7. The MLN used input regarding changes in the wcel angle (antenna tilt angle) parameter for approximately 4,000 cells for each day in February, along with weather reports that covered the whole area of interest for February 2013. The network found some
anomalies for 10 February 2013. In Fig. 7, we can observe that not all changes
in wcel angle served as anomaly triggers, since only a smaller portion of the cells
were affected. In our investigation, we also noticed a very interesting trend in
the change, which can indicate an automated action (Fig. 6).
MLNs are generated in a semi-automated fashion for each timestamp. Figure 8 presents the input and output of the PCE tool for February 10th 2013. We can observe that the MLN receives input from topic modeling regarding the groups of cells that were deemed anomalous, as well as input from the CM data in the form of wcel angle changes. Moreover, the MLN can also accommodate a visibility delay of up to n hours for which CM changes can propagate and affect cells (n = 48 in our experiments); hence the time window in which cells are labeled as anomalous in Fig. 7. The output of the PCE tool consists of groups of cells affected by CM changes and normal groups of cells (no cell group was affected by weather events for that day).
Fig. 6. Deltas between the after and before wcel angle values for all the cells affected (deltas up to ~400).
Fig. 7. Percentage of cells diagnosed as anomalous due to wcel angle changes (left, annotated "change in wcel_angle for ~2,000 cells") and weather events (right). The dotted vertical line indicates when changes in wcel angle started to occur.
Fig. 8. Input and output of PCE tool for February 10th 2013
4 Related Work
To the best of our knowledge, there are several methods available for CM and PM analysis; however, none of them fully addresses the SON verification use case. Some of the existing work is only partially automated [2], with tool support for regular network reporting and troubleshooting, while other work uses disjoint analyses for PM and CM, requiring an additional linking mechanism that is normally applied manually. In terms of PM analysis, most work relates to the detection of degradations in cell-service performance. While earlier research addressed the cell-outage detection [11] and cell-outage compensation [3] concepts, as well as network stability and performance degradation [5,9], without relying on PM data, more recently the detection of general anomalies based on PM data has been addressed [6,7,12,17]. For CM analysis, Song et al. [15] propose formal verification
techniques that can verify the correctness of self-configuration, without address-
ing the need for runtime verification.
5 Conclusions
This paper proposed a framework for SON verification that combines anomaly
detection and diagnosis techniques in a novel way. The design was implemented
and applied to a dataset consisting of KPI data collected from a real operational
cell network. The experimental results indicate that our system can automat-
ically determine the state of the network in the presence of CM changes and
whether the CM changes negatively impacted the performance of the network.
We are currently planning to extend our framework to more SON use cases, such as troubleshooting, and we are exploring other types of data that can be used in
the diagnosis process. We envision that additional work is needed to integrate
our framework with human operator input.
References
1. Probabilistic Consistency Engine. https://pal.sri.com/Plone/framework/
Components/learning-applications/probabilistic-consistency-engine-jw
2. Transparent network performance verification for LTE rollouts, Ericsson whitepa-
per (2012). http://www.ericsson.com/res/docs/whitepapers/wp-lte-acceptance.
pdf
3. Amirijoo, M., Jorguseski, L., Litjens, R., Schmelz, L.C.: Cell outage compensation
in LTE networks: algorithms and performance assessment. In: 2011 IEEE 73rd
Vehicular Technology Conference (VTC Spring), 15–18 May 2011
4. Blei, D., Ng, A., Jordan, M.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3,
993–1022 (2003)
5. Bouillard, A., Junier, A., Ronot, B.: Hidden anomaly detection in telecommunica-
tion networks. In: International Conference on Network and Service Management
(CNSM), Las Vegas, NV, October 2012
6. Ciocarlie, G.F., Lindqvist, U., Novaczki, S., Sanneck, H.: Detecting anomalies in
cellular networks using an ensemble method. In: 9th International Conference on
Network and Service Management (CNSM) (2013)
7. Ciocarlie, G.F., Lindqvist, U., Nitz, K., Nováczki, S., Sanneck, H.: On the feasibility
of deploying cell anomaly detection in operational cellular networks. In: IEEE/IFIP
Network Operations and Management Symposium (NOMS), Experience Session
(2014)
8. Ciocarlie, G.F., Cheng, C.-C., Connolly, C., Lindqvist, U., Nováczki, S.,
Sanneck, H., Naseer-ul-Islam, M.: Managing scope changes for cellular network-
level anomaly detection. In: International Workshop on Self-Organized Networks
(IWSON) (2014)
9. D’Alconzo, A., Coluccia, A., Ricciato, F., Romirer-Maierhofer, P.: A distribution-
based approach to anomaly detection and application to 3G mobile traffic. In:
Global Telecommunications Conference (GLOBECOM) (2009)
10. Griffiths, T., Steyvers, M.: Finding scientific topics. Proc. Natl. Acad. Sci.
101(suppl. 1), 5228–5235 (2004)
11. Mueller, C.M., Kaschub, M., Blankenhorn, C., Wanke, S.: A cell outage detection
algorithm using neighbor cell list reports. In: Hummel, K.A., Sterbenz, J.P.G.
(eds.) IWSOS 2008. LNCS, vol. 5343, pp. 218–229. Springer, Heidelberg (2008)
12. Nováczki, S.: An improved anomaly detection and diagnosis framework for mobile
network operators. In: 9th International Conference on Design of Reliable Com-
munication Networks (DRCN 2013), Budapest, March 2013
13. Richardson, M., Domingos, P.: Markov logic networks. Mach. Learn. 62(1–2),
107–136 (2006)
14. Hämäläinen, S., Sanneck, H., Sartori, C. (eds.): LTE Self-Organising Networks
(SON) - Network Management Automation for Operational Efficiency. Wiley,
Chichester (2011)
15. Song, J., Ma, T., Pietzuch, P.: Towards automated verification of autonomous
networks: A case study in self-configuration. In: IEEE International Conference on
Pervasive Computing and Communications Workshops (2010)
16. Steyvers, M., Griffiths, T.: Probabilistic topic models. In: Landauer, T.,
McNamara, D.S., Dennis, S., Kintsch, W. (eds.) Handbook of Latent Semantic
Analysis, pp. 427–448. Erlbaum, Hillsdale (2007)
17. Szilágyi, P., Nováczki, S.: An automatic detection and diagnosis framework for
mobile communication systems. IEEE Trans. Netw. Serv. Manage. 9, 184–197
(2012)
18. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes.
J. Am. Stat. Assoc. 101(476), 1566–1581 (2006)
Energy Awareness in Wireless Networks
On the Performance Evaluation of a Novel
Offloading-Based Energy Conservation
Mechanism for Wireless Devices
1 Introduction
The recent popularity of smartphones and tablets created a fertile ground of new
application paradigms for mobile wireless communications. This growth has also been
fueled by the numerous applications that run per wireless device, creating the need for a
reliable and high performance mobile computing application environment. All wireless
devices are prone to energy constraints that often impair the reliable execution of the applications running on the device. In this context, this paper proposes a mechanism that takes into consideration the energy consumption of wireless devices while running an application. The mechanism exploits an offloading technique for cases where resources need to be partially or totally outsourced. It is utilized by end terminals (server racks) and/or runs on mobile devices where resources are redundant. The offloading methodology is applied as part of the application initiation (run start-up), in order to minimize the GPU/CPU effort and the energy consumption of a mobile device that is running out of resources.
In contrast to utility computing, mobile cloud services should only be offered in a synchronous mode [1]. To this end, different parameterized metrics of both the wireless devices and the availability of offloading by other terminals and/or servers [1] should be taken into consideration. Traditional cloud computing models are considered low-throughput models [2,3] that are more expensive and offer significantly lower Quality of Service (QoS) or Quality of Experience (QoE) to the end recipients (i.e., wireless end users). The limited processing power and bounded capacity availability of wireless devices aggravate the execution and negatively affect the reliability offered to the user's mobile device, by causing capacity-oriented failures and intermittent execution. When there is a lack of available resources (processing and/or
memory-oriented), the wireless device may refer to a mobile cloud infrastructure as in
[1] to enable precise execution through the resource/task migration mechanism. Within
this context, a mechanism for ensuring that there is adequate processing power for
executing the application of a wireless device and at the same time allowing the
evaluation of the power consumption through the consideration of an energy-efficient
application offloading, has not yet been extensively examined [4].
In this direction, a dynamic scheduling scheme for offloading resources from one wireless device to another mobile device is investigated in this work, to enable the improvement of the response quality and the system's throughput, according to the partial or in-total execution as well as the request deadline. The ultimate target of this work is to prevent long application execution requests, which would result in greater energy consumption for each wireless device, and to enable efficient manipulation of local (device) as well as cloud resources. Within this context, the proposed scheme minimizes the utilization of local (device) resources (GPU, CPU, RAM, battery consumption) and at the same time extends the wireless devices' lifetimes. To this end, this work presents an attempt to reduce the computational load of each wireless device so as to extend the lifetime of the battery. In addition, this study considers a partitionable parallel processing wireless system, where resources are partitioned and handled by a subsystem [1] that estimates and handles the resource offloading process. A certain algorithm is proposed for the offloading process.
It is undoubtedly true that over the past few decades, several research efforts have been
devoted to device-to-device or Machine-to-Machine communication networks, ranging
from physical layer communications to communication-level networking challenges.
Wireless devices can exchange resources on the move and can become data "prosumers": they produce a great amount of content [2] while, at the same time, consuming content as content providers. Research efforts for achieving energy efficiency on-the-move for wireless devices trade off the offered QoS [3], significantly reducing performance for energy-hungry applications such as video, interactive gaming, etc. While energy-hungry applications are widely utilized by wireless devices, the lifetime of the devices should be extended so that they can host and run the application throughout their entire lifetime. In order to achieve resource management in wireless devices within the context of the cloud paradigm, efficient allocation of processor power, memory capacity, and network bandwidth should be considered. To this end, resource management should allocate the resources of the users and their respective applications on a cloud-based infrastructure, in order to migrate some of their resources to the cloud [5]. Wireless devices are expected to operate under the predefined QoS requirements set by the users and/or the applications. Resource management at cloud scale requires a rich set of resource and task management schemes that are capable of efficiently managing the provision of QoS requirements whilst maintaining total system efficiency. However, energy efficiency is the greatest challenge for this optimization problem [3], along with the offered scalability in the context of performance evaluation and measurement.
Different dynamic resource allocation policies targeting the improvement of the
application execution performance and the efficient utilization of resources have been
explored in [6]. Other research approaches related to the performance of dynamic
resource allocation policies have led to the development of a computing framework [7] which considers the countable and measurable parameters that affect task allocation. Authors in [8] address this problem by using the CloneCloud approach [9], a smart and efficient architecture for the seamless use of ambient computation to augment mobile device applications, offloading the right portion of their execution onto device clones operating in a computational cloud. Authors in [9] statically partition service
tasks and resources between client and server portions, whereas in a later stage the
service is reassembled on the mobile device. The spine of the proposal in [9] is based
on a cloud-augmented execution, using a cloned VM image as a powerful virtual
device. This approach has many vulnerabilities as it has to take into consideration the
resources of each cloud rack, depending on the expected workload and execution
conditions (CPU speed, network performance). In addition, a computation offloading scheme is proposed in [10] to be used in cloud computing environments, towards minimizing the energy consumption of a mobile device so that it is able to run certain specified applications under constraints. Energy consumption has also been studied in [11] in order to enable computation offloading using a combination of 3G and Wi-Fi infrastructures. However, these evaluations do not maximize the benefits of offloading, as they involve high-latency offloading processes and require only a low amount of information to be offloaded. Cloud computing is currently impaired by
the latency experienced during the data offloading through a Wide Area Network
(WAN). Authors in [1] and [11], elaborate on issues, where the devices carried by
human beings are always considered as delay sensitive. The variability of this delay in
turn impairs the QoS/QoE of the end-users.
Authors in [12] address the processing-resource poverty of 'hungry' applications that require processing resources in order to run on a handheld device, while authors in [13] provide a resource manipulation scheme as a solution based on the failure rates of cloud servers in large-scale datacenters. However, these criteria do not consider the servers' communication diversity in the communication process with mobile users' requests, nor the available processing resources, the utilization of the device's memory, the remaining energy, and the available capacity of each device's communication with the closest (in terms of latency) cloud terminal. Research
approaches in [14] and [15] have proposed different analytical models to address
offloading computation and elaborate on offloading to offer energy conservation.
Within this context, this paper makes progress beyond the current state of the art by proposing a resource offloading mechanism that is used in collaboration with the proposed energy-efficient model. The scheme uses an offloading methodology in order to guarantee that no intermittent execution will occur on mobile devices, whereas the application's explicit runtime will meet the required deadlines to fulfil the
QoS requirements. This paper also elaborates on the development of an offloading
scenario, in which the scheduling policy for guaranteeing the efficiency in the execu-
tion of mobile users’ tasks/applications can be achieved in an energy-efficient manner.
The proposed framework is thoroughly evaluated through event driven simulation
experiments in order to define the efficiency of the proposed offloading policy, in
contrast to the energy consumption of the wireless devices, as well as for the reliability
degree offered.
Fig. 1. Cloud configuration when a mobile device has no remaining resources to run an application, offloading resources on the cloud to achieve the best effort processing on-device power (tasks annotated with their $[T_{n_s}, T_{n_d}]$ service durations and connection holding times $T_t$).
More specifically, this work considers the network-oriented parameters for bandwidth provisioning to achieve an acceptable resource offloading downtime (e.g., $d \le 1.6$ s, as the experimental process in [1] validates). To this end, from the network perspective, the modelled parameters can be expressed, for an offloading process $O$ of an executable resource task $O_{a_j}$, as a 5-tuple given by:

$$O_{a_j}(MN) = \langle n_s, T_{n_s}, T_{n_d}, BW, T_t \rangle$$

where $n_s$ denotes the devices or cloud terminals to which the $a_j$ of device $MN$ will be offloaded, $T_{n_s}$ is the source location best effort access time, $T_{n_d}$ is the destination device's or cloud location's best effort access time (time to access the resource) from the source, $BW$ is the required connection bandwidth, and $T_t$ is the connection holding duration for the $a_j$ executable resource task. In essence, the work done in [1] considers the resource transfer time by taking into account the volume of traffic that should be transferred to the destination node. The total data volume that will be transferred, if the request meets the $BW$ criteria, is given by $BW \times T_t$. In this work, the typical value ranges utilized in our experimental processes were $1\,\mathrm{MB} \le BW \le 15\,\mathrm{MB}$ and $T_t = 2s + t_x$, with $s = T_{n_s} + T_{n_d}$, where $t_x$ is the time to process the $x$ partitionable parts that are processed during the offloading process. Every executable resource task may have $x$ partitions, which in this work are considered as $t_x$ partitioning parts/tasks where $1 \le t_x \le z \cdot P$, and $z$ is the number of different devices to which the resource can be offloaded. Therefore, the number of tasks per executable resource task is limited to the number of terminals in the system. An executable resource can be shared and partitioned into $x_1, x_2, \dots, x_n$ and can be simultaneously processed with $r$ sequential partitions, where $0 \le r < z \cdot P$, if and only if the following relation holds:

$$r + \sum_{i=1}^{n} p(x_i) \le z \cdot P \qquad (1)$$

where $p(x)$ represents the number of cloud terminals (mobile and statically located) that are needed to host the $a_j$, and $P$ is the number of terminals on the cloud that will be hosting the offloaded resource. The scheduling strategy that was used is based on Largest First Served / Shortest Sequential Resource First Served and the service notations of [1], with a-priori knowledge of the $[T_{n_s}, T_{n_d}]$ service durations, as shown in Fig. 1.
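These constraints translate directly into code. The following Python sketch (the function names are ours) computes the transferred volume $BW \times T_t$ and checks the partitioning feasibility of Eq. (1), with example values chosen inside the stated experimental ranges.

    def transfer_volume_mb(bw_mb_per_s, t_t_s):
        """Total data volume moved during offloading when the BW criterion
        is met: BW * T_t (units follow the ranges quoted in the text)."""
        return bw_mb_per_s * t_t_s

    def partitioning_feasible(r, partition_hosts, z, P):
        """Eq. (1): r sequential partitions plus the cloud terminals p(x_i)
        needed by the shared partitions must fit into the z*P terminals
        available for hosting the offloaded resource."""
        return r + sum(partition_hosts) <= z * P

    print(transfer_volume_mb(10, 4))                        # 40 (MB)
    print(partitioning_feasible(2, [1, 2, 1], z=3, P=5))    # True: 2+4 <= 15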
$$E_r(a_j) = \frac{C}{S_{a_j}} \, E_c(a_j), \quad \forall C \in O_{a_j},\ T_{n_s} < T_t,\ T_{n_d} < T_t \qquad (2)$$

$$E_c(r_i) = W_c \cdot \frac{Cost_c(r_i)}{S_c(r_i)}, \quad \forall C \in O_{a_j} \qquad (3)$$

where $S_c$ is the server's processing instruction speed for the computation resources, $Cost_c$ is the resources' processing instruction cost for the computation resources, and $W_c$ is the energy consumption of the device or server in mW.
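Eq. (3) translates directly into code; the sketch below uses illustrative numbers only, to show how a faster terminal lowers the energy cost of hosting a resource.

    def resource_energy_cost(w_c_mw, cost_c, s_c):
        """Eq. (3): E_c(r_i) = W_c * Cost_c(r_i) / S_c(r_i), i.e. the energy a
        terminal with power draw W_c (mW) spends to execute resource r_i,
        given the resource's instruction cost and the terminal's speed."""
        return w_c_mw * cost_c / s_c

    # Illustrative numbers: a 5x faster terminal cuts the energy cost 5x.
    print(resource_energy_cost(900, 5e9, 1e9))  # 4500.0
    print(resource_energy_cost(900, 5e9, 5e9))  # 900.0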
Each mobile device examines whether all neighboring 2-hop devices (via a lookup table) can provide information about their offloading capabilities without affecting their energy status (thus without draining their energy to run other devices' resources). In addition, the closest cloud rack is considered if the relations exposed in (4) and (5) are not satisfied. Hence, for the neighboring devices within 2-hop vicinity coverage (based on the maximum signal strength and data rate model [1]), the following should stand:

$$W_c \cdot \frac{Cost_c(r_i)}{S_c(r_i)} \bigg|_{r_i} > W_c \cdot \frac{Cost_c(r_i)}{S_c(r_i)} \bigg|_{1,2,\dots,N} \qquad (4)$$
The energy consumption of each device should satisfy Eqs. (4)-(5) for each of the resources (executable processes) running on the device $MN_{m-1}$ hosting the $r_i$, where $m-1$ represents the remaining devices out of the total $m$ devices. Otherwise, the $r_i$ with the maximum energy consumption is run in a partitionable manner to minimize the energy consumed by other peer devices. These actions are shown in the steps of the proposed algorithm in Table 1.
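A compact sketch of this selection step under the above relations: a 2-hop neighbor is chosen only if its energy cost for hosting $r_i$ (Eq. (3)) beats the local cost in the sense of Eq. (4); otherwise the closest cloud rack is considered. The function and tie-breaking below are illustrative and not the paper's exact Table 1 algorithm.

    def pick_offload_target(local_cost, neighbor_costs, rack_cost):
        """Prefer the cheapest 2-hop neighbor whose energy cost (Eq. (3))
        is below the local cost; fall back to the cloud rack, or run
        locally if even the rack is not cheaper."""
        viable = {n: c for n, c in neighbor_costs.items() if c < local_cost}
        if viable:
            return min(viable, key=viable.get)   # cheapest viable neighbor
        return "cloud_rack" if rack_cost < local_cost else "run_locally"

    print(pick_offload_target(4500.0,
                              {"MN_2": 3000.0, "MN_7": 5200.0},
                              rack_cost=2500.0))
    # -> 'MN_2'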
The resource allocation will take place towards responding to the performance requirements as in [1]. A significant measure in the system is the availability of memory and the processing power of the mobile-cloud devices, as well as of the server-based terminals. The processing power metric is designed and used to measure the processing losses for the terminals to which the $r_i$ will be offloaded, as in Eq. (6), where $a_j$ is an application, $T_{k_j}$ is the number of terminals forming the cloud (mobile and static) rack that are hosting the application, and $T_{a_j}(r)$ is the number of mobile terminals hosting processes of the application across all different cloud terminals (racks).

$$C_{a_j} = \frac{T_{k_j}}{\sum_k T_{a_j}(r)}, \quad \forall \min(E_c(r_i)) \in \{1, 2, \dots, N\} \qquad (6)$$

Equation (6) shows that if there is minimal loss in the capacity utilization, i.e., $C_{a_j} \approx 1$,
then the sequence of racks $T_{a_j}(r)$ is optimally utilized. The latter is shown through the conducted simulation experiments in the next section. The dynamic resource migration algorithm is shown in Table 1 with the basic steps for obtaining an efficient execution of a partitionable resource that cannot be handled by the existing cloud rack; the migration policy is therefore used to ensure that execution will continue. The continuation is based on the migration policy for the partitionable processes, which are split in order to be handled by other cloud rack terminals and thus avoid any potential failures. The entire scheme is shown in Table 1, with all the primary steps for offloading the resources onto either $MN_{m-1}$ neighbouring nodes or server racks (as in [1] and [15]), based on the delay and resource temporal criteria.
Performance evaluation results encompass comparisons with other existing schemes in terms of offered throughput and reliability degree, in contrast to the energy conservation efficiency. The mobility model used in this work is based on the probabilistic Fractional Brownian Motion (FBM) adopted in [17], where nodes move according to certain probabilities, locations, and times. Towards implementing such a scenario, a common look-up application service for resource execution offloading is set onto each one of the mobile nodes $MN_m$. The topology of a 'grid'-based network [1] was modeled, where each node can directly communicate with other nodes if it is situated in the same (3 × 3 center) rectangular area of the node. For the simulation of the proposed scenario,
the varying parameters described in the previous section were used, employing a two-dimensional network consisting of between 10 and 180 nodes (i.e., terminal mobile nodes) located in the measured area, as well as five cloud terminals statically located on a rack. All measurements were performed using WLANs with different 802.11X specifications. During the simulation, the transfer durations are pre-estimated or estimated according to the relay path between the source (node to offload resources) and the destination (node to host the executable resources).
The dynamic offloading scheme, and the instant at which offloading takes place, is an important measure to estimate, as are the effectiveness of the proposed framework and its impact on the system. To this end, the total failed requests among nodes, with the number of requests and with the number of mobile devices participating in the evaluated area, are shown in Fig. 3.

Fig. 3. Number of requests with the number of mobile devices participating in the evaluated area, and throughput response with the mean number of executable resources that are partitioned per mobile device.

Fig. 4. Packet drop ratio of the proposed scheme for different mobility variations and the no-mobility model over time, and average lifetime for both active and idle time with the number of mobile devices.

Towards examining the impact of the different capacities, several sets of experiments were conducted using the presented resource offloading scheme. Large memory resources are executable resources/processes that are
between 500 MB–1 GB, whereas small memory resources are executable processes
that are hosting capacities between the range of 10–400 MB. The throughput response
in contrast to the mean number of executable resources that are partitioned per mobile
device is also shown in Fig. 3. The throughput response offered by the proposed
scenario is greater for large files that are offloaded in partitionable parts onto other
terminals on the cloud. Moreover, when utilizing the proposed framework, requests with small memory capacity requirements achieve a throughput response greater than 90 %. The packet drop ratio of the proposed scheme for different mobility
variations and without mobility over time is shown in Fig. 4. It is important to
emphasize that the proposed scheme scales well in the presence of FBM and even
better when the FBM with distance broadcasting is applied. In addition, Fig. 4 presents
the average lifetime for both active and idle time with the number of mobile devices.
We assume that game-playing users participate through their mobile devices, performing game-playing actions that require GPU/CPU-level resources. These resource constraints can be used as a measure to evaluate the efficiency offered by the proposed scheme under 'heavy limitations' and 'strict latencies'. Hence, the lifetime of each mobile device is an important metric for the evaluation of the overall performance of the scheme and its impact on node lifetimes.
Measurements in Fig. 5 were extracted for the total number of 180 mobile terminals
that are configured to host interactive gaming applications, using Wi-Fi/WLAN access
technology. The overall energy consumption for each mobile device for three different
schemes in the evaluated area (for the interactive game playing draining resources) is
shown in Fig. 5. The proposed scheme outperforms the scheme proposed in [1], as well as the scheme in [8], for the Wi-Fi/WLAN connectivity configuration.
Fig. 5. Overall energy consumption for each mobile device for three different schemes in the
evaluated area and execution time during simulation for nodes with different mobility patterns for
three different schemes.
Fig. 6. Energy Consumption (EC) with the number of mobile users participating during an
interactive game and Energy Consumption (EC) with the number of mobile users utilizing greater
than 80 % of their resources.
When resources are offloaded, a critical parameter is the execution time, while nodes are
moving from one location to another. In Fig. 5 the execution time during simulation for
mobile nodes with different mobility patterns is also evaluated, for GSM/GPRS, Wi-Fi/
WLAN and for communication within a certain Wi-Fi/WLAN to another Wi-Fi/WLAN
remotely hosted. The latter scenario (from one Wi-Fi/WLAN to another Wi-Fi/WLAN) exhibits a significant reduction in terms of the execution time, and it achieves the minimum execution time under the FBM with distance broadcast mobility pattern.
The Energy Consumption (EC) with the number of mobile users participating
during an interactive game (requirements in GPU/CPU) is shown in Fig. 6. During the
interactive game-playing process, the processing requirements of each device dramatically increase. This results in the need for some devices to offload processing power to cloud terminals. In this regard, Fig. 6 presents the evaluation of the energy
consumed (EC) for three schemes including a non-assisted cloud. Measurements were
extracted for 150 mobile terminals that are configured to host interactive gaming
applications. The proposed scheme outperforms the other compared schemes, with the associated EC kept at relatively low levels. In turn, Fig. 6 shows the EC with the number of mobile users utilizing greater than 80 % of their available memory resources for three different schemes. It is important to note that the proposed scheme with the Wi-Fi/WLAN configuration enables lower EC than the other evaluated schemes, including the absence of any assistance through the cloud. Devices that are utilizing greater than 80 % of their computational resources are the best candidates to offload. Figure 6 shows that the EC is significantly minimized by using the proposed scheme with the Wi-Fi/WLAN configuration, and that it behaves almost the same with a small number of mobile nodes as with a greater number of mobile nodes lacking resources in the described scenario.
5 Conclusions
In this work, a novel task outsourcing mechanism using the Mobile Cloud paradigm is presented in contrast to the energy consumption of wireless devices. The proposed scheme encompasses a cooperative partial process offloading execution scheme aiming at energy conservation. In order to allow energy conservation, partitionable resources can be offloaded using a latency-based scheduling scheme, as well as by utilizing the state characteristics of each device (i.e., allowed execution duration). The
offloading mechanism provides efficient cloud-oriented resources’ exploitation and
reliable task execution offered to the mobile end-recipients. The proposed offloading
scheme is thoroughly evaluated through simulation experiments, in order to validate
the efficiency of the offloading policy in contrast to the energy consumption of wireless
devices, as well as for the reliability degree offered by the scheme. Future streams in
our on-going research include the enhancement of an opportunistically formed feder-
ated mobile cloud, which will allow interactive game playing and exchanging of
resources with strict resource constraints and streaming characteristics (delay-sensitive
resource sharing) in a MP2P manner, on-the-move.
Acknowledgment. The work presented in this paper is co-funded by the European Union,
Eurostars Programme, under the project 8111, DELTA “Network-Aware Delivery Clouds for
User Centric Media Events”.
A Traffic Aware Energy Saving Scheme
for Multicarrier HSPA+
1 Introduction
Due to the increasing demand for mobile broadband services, mobile network vendors are preparing for a 1000× increase in mobile data traffic between 2010 and 2020 [1]. This is basically nothing new: according to [2], wireless capacity has already increased more than 10⁶× since 1957. Whereas a 5× factor comes from improvements in modulation and coding schemes (MCS), a 1600× increase is due to the reduction in cell sizes. It is widely accepted that the new 1000× objective comes hand in hand with a further reduction in distances between transmission points. Network densification allows higher spatial reuse and thus higher area
spectral efficiency (bits/s/Hz/km²). On the other hand, considering that base stations (BSs) contribute the most to the energy consumption [3–5], future hyper-dense network deployments may negatively impact operational costs and carbon emissions.

Reducing the energy consumption of mobile networks over the coming years has become an important goal for industry and academia. Energy efficiency is one of the key challenges in the evolution towards beyond fourth generation (4G) mobile communication systems. Yet, focusing on future systems is not enough, since High Speed Packet Access (HSPA) and Long Term Evolution (LTE) will serve and coexist in the next decade, probably with a tighter integration in future releases of the standards [6]. In particular, HSPA is currently deployed in over 500 networks and is expected to cover 90 % of the world's population by 2019 [7]. It will therefore serve the majority of subscribers during this decade, while LTE continues its expansion in parallel and constantly gains a larger share of users.
Among the advances in the latest releases of HSPA (HSPA+), multicarrier utilization is considered an important performance booster [8], but it has not been extensively studied from the energy efficiency perspective so far. Given this, the focus of our study is the reduction of energy consumption through the dynamic usage of multiple carriers combined with BS (node-B) switch-off.
Various BS turn-off strategies have been extensively studied as means for energy saving. Since cellular networks are dimensioned to correctly serve the traffic at the busy hour, the idea behind these strategies is to manage the activity of BSs in an energy-efficient manner while being able to respond to traffic needs dynamically. Thus, the focus is on strategies where underutilized BSs are switched off during low traffic periods [3,9,10]. In order to guarantee coverage, switch-off is usually combined with a certain power increase in the remaining cells, still providing a net gain in global energy saving. However, this is not a straightforward solution from a practical perspective: common control channels also require a power increase, and electromagnetic exposure limits must be fulfilled [11]. Remote electrical downtilt avoids these problems; it positively impacts the noise rise and received powers, and thus the coverage of common control channels can be expanded without increasing their power [12]. More recently, BS cooperation has also been proposed to cover the coverage holes newly introduced when switch-off is applied [13].
Algorithms that minimize the energy consumption also have an impact on the system capacity. The work in [14] studies these conflicting objectives and investigates cell switch-off as a multiobjective optimization problem. This tradeoff should be carefully addressed; otherwise, the applicability of a particular mechanism would be questionable. Yet, not many works consider the capacity issue in detail, and many contributions just introduce a minimum signal-to-noise plus interference ratio (SINR) threshold, which allows computing a minimum throughput or outage probability to be guaranteed. Consequently, capacity does not remain constant before and after the switch-off. Indeed, [15] strongly questions the applicability of cell switch-off combined with power increases as a feasible solution for many scenarios.
Very few works evaluate the energy saving gains obtained by advantageous use of the multi-carrier option. The works [16] and [17] respectively deal with HSPA and LTE when two carriers are aggregated, and evaluate whether the additional carrier can be de-activated when load decreases and BSs are not powered off.

The current work deals with the reduction of energy consumption in HSPA+ by means of a strategy that combines partial and complete node-B switch-off with antenna downtilt. The utilization of multiple carriers is evaluated as an additional degree of freedom that allows more energy-effective network layouts. The number of available carriers is dynamically managed in combination with full BS turn-off, which provides the highest energy saving. For this reason, instead of progressively de-activating carriers until the eventual node-B turn-off, we evaluate the combination (inter-site distance, number of carriers) that gives the best energy saving. The solution exploits the fact that activating previously shut-off carriers may permit turning off BSs earlier, at relatively higher load, than existing policies. The new scheme promises significant energy savings when compared with existing policies, not only for low traffic hours but also for medium load scenarios.
The paper is organized as follows. Section 2 discusses the advantages and possibilities of multicarrier HSPA+. Section 3 describes the system model. In Sect. 4 we discuss the BS shut-off scheme, and Sect. 5 is devoted to results and discussion. Conclusions are drawn in Sect. 6.
2 Multicarrier HSPA+
The latest releases of HSPA offer numerous upgrade options with features such as higher order modulation, multi-carrier operation, and multiple input multiple output (MIMO). Evolution from the initial releases is smooth, since the MCS update and multicarrier operation are inexpensive features [18]. These advantages have motivated 65 % of HSPA operators to deploy HSPA+, as of December 2013 [7]. HSPA has evolved from a single-carrier system to up to 8-carrier aggregation (8C-HSDPA). Multicarrier operation can thus be supported in a variety of scenarios depending on the release, as indicated in Table 1 for the downlink (HSDPA). Note that the uplink just allows dual cell since Release 9. Multicarrier capability brings important advantages that affect the system performance [8,19]:
– It scales the user throughput with the number of carriers, reaching a top theoretical speed of 672 Mbps on the downlink when combining 8C-HSDPA with 4 × 4 MIMO.
– It also improves spectrum utilization and the system capacity because of the load balancing between carriers.
– Multicarrier operation improves the user throughput for a given load at any location in the cell, even at the cell edge, where channel conditions are not good. Note that other techniques such as high-order modulation combined with high-rate coding, or the transmission of parallel streams with MIMO, require high SINRs. Furthermore, it is well known that every order of MIMO doubles the rate only for users with good channel strength and no line of sight, while at the cell edge MIMO just provides diversity or beamforming gain.
3 System Model
Let us assume the downlink of an HSPA+ system. At the link level, 30 modulation and coding schemes (MCS) are adaptively assigned by the scheduler based on the channel quality indicator (CQI) reported by the user equipments (UEs). Given the channel condition and the power available for the high speed physical downlink shared channel (HS-PDSCH), P_HS-PDSCH, the scheduler selects the MCS that guarantees a 10 % block error rate (BLER) for each user per transmission time interval (TTI).

The CQI reported on the uplink can be approximated from the SINR (γ) at the UE for the required BLER as [23]:
$$ \mathrm{CQI} = \begin{cases} 0 & \text{if } \gamma \le -16\ \mathrm{dB} \\ \dfrac{\gamma}{1.02} + 16.62 & \text{if } -16\ \mathrm{dB} < \gamma < 14\ \mathrm{dB} \\ 30 & \text{if } 14\ \mathrm{dB} \le \gamma \end{cases} \qquad (1) $$
where:
– For the sake of clarity, the index referring to carrier f has been omitted.
– Lj,i is the net loss in the link budget between cell j and UE i for carrier f .
Note that index s refers to the serving cell.
– Ptot is the carrier transmission power. Without loss of generality, it is assumed
equal in all cells of the scenario.
– Intercell interference is scaled by neighbouring cell load ρ̄ at f (carrier activity
factor).
– PN is the noise power.
– Pcode is the power allocated per HS-PDSCH code. Note that all codes intended for a certain UE shall be transmitted with equal power [24]. So, considering an allocation of Ncode codes and a power PCCH for the control channels present in f, then $P_{code} = (P_{tot} - P_{CCH})/N_{code}$.
– The orthogonality factor α models the percentage of interference from other codes in the same orthogonal variable spreading factor (OVSF) tree. Our model assumes classic Rake receivers; in the case of advanced devices (Type 2 and Type 3/3i) [25], their ability to partially suppress self-interference and interference from other users would be modelled by properly scaling the interfering power [26].
At the radio planning phase, a cell edge throughput is chosen and the link budget is adjusted so that the corresponding SINR (CQI) is guaranteed with a certain target probability pt. Given that both useful and interfering powers are log-normally distributed, the total interference is computed following the method in [27] for the summation of log-normal distributions. Coverage can be computed for any CQI, and so the boundary within which MCS k would be used with probability pt can be estimated. This allows finding the area Ak in which k is allocated with probability ≥ pt. Figure 1 shows an example for a tri-sectorial layout with node-Bs regularly distributed using an inter-site distance (ISD) of 250 m. The shape of the final CQI rings largely depends on the downtilt and antenna pattern at each carrier. The example considers a multiband commercial antenna and a downtilt optimized to maximize capacity.
The ring distribution expands or shrinks following the load in other cells. Figure 2 shows the pdf of CQIs 15 to 30 for 2, 4, or 8 carriers and the same cell load, and thus different load per carrier ρ̄(f). Interference is spread among the different carriers, so the probability of allocating higher CQIs increases with the number of carriers. This has an impact on cell capacity, whose model is described in the next subsection.
Fig. 2. CQI pdf for 2, 4 and 8 carriers and same cell load.
Let λ be the flow arrival rate per unit area and E(σ) the mean flow size. The cell load can then be written as

$$ \bar{\rho} = \lambda\, A_{cell}\, E(\sigma) \sum_{k=1}^{30} \frac{p_k}{c_k}, \qquad (3) $$

where Acell is the cell area, ck is the code rate associated with MCS k, and pk is the probability of using MCS k, pk = Ak/Acell. Since the cell load is bounded by one, the maximum throughput that can be served (ρ̄ = 1), i.e., the cell capacity, is

$$ \bar{C} = \left( \sum_{k=1}^{30} \frac{p_k}{c_k} \right)^{-1}. \qquad (4) $$
At any given load, the observed (served) throughput is given by ρ̄ × C̄.
The load contribution of the users served with MCS k is

$$ \bar{\rho}_k = \frac{\lambda_{A_k}\, E(\sigma)}{c_k}, \qquad (5) $$

where λ_{A_k} = λ·Ak is the flow arrival rate within area Ak.
It is immediate that $\bar{\rho} = \sum_k \bar{\rho}_k$. Given that all users in the cell share the same scheduler, by Little's law the mean flow duration tk for a user in Ak can be computed as tk = Nk/λ_{A_k}, where Nk is the average number of users in Ak. Then the flow throughput τk for users being served with MCS k is

$$ \tau_k = \frac{E(\sigma)}{t_k} = \frac{\lambda_{A_k}\, E(\sigma)}{N_k}. \qquad (6) $$

Considering the underlying Markov process [28], the stationary distribution of the number of active users in each Ak can be found, along with its average value, $N_k = \frac{\bar{\rho}_k}{1-\bar{\rho}}$, which yields

$$ \tau_k = c_k (1 - \bar{\rho}), \qquad (7) $$

$$ \bar{\tau} = \sum_{k=1}^{30} p_k c_k (1 - \bar{\rho}), \qquad (8) $$
where ρ̄ captures the cell's own load and pk is affected by the load in neighbouring cells, which modifies the SINR values, the CQI rings, and thus the Ak values ∀k.
After a switch-off, the new cell load includes the user traffic of the switched-off node-Bs, which has to be accommodated by the remaining active ones. Considering υ as the ratio ISD_new/ISD_initial, and that the cell area Acell is proportional to the squared cell radius, from (3) the relation between the cell load with the new ISD, ρ̄_new, and with the initial ISD, ρ̄_initial, is

$$ \bar{\rho}_{new} = \upsilon^2 \times \bar{\rho}_{initial}. \qquad (9) $$
Although the use of more carriers accounts for a certain increase in energy consumption, the saving from switching off some BSs is much higher.

The metric used for the analysis of energy consumption is the energy consumed per unit area (E/A). Assuming a fully parallel system at the node-B to handle each carrier, the energy consumed per unit area (kWh/km²) is computed as in [4,16,29].
5 Results
In order to quantify the gains that can be achieved by an intelligent joint management of carriers and node-Bs, the system performance is evaluated in terms of the average flow throughput (τ̄). Three cases have been evaluated: 2, 4, and 8 carriers are initially used to serve an aggregated cell load of 1. This load is evenly distributed among the carriers, i.e., ρ̄(f) = 0.5, 0.25, and 0.125, respectively. Given this, three scenarios are defined considering the τ̄ value to be respected (Table 2).

The reference network has node-Bs regularly deployed with an ISD = 250 m. Therefore, after a first shut-off the new ISD would be 500 m, and a second one implies ISD = 750 m. Other network parameters are provided in Table 3.
Figure 3 represents the power consumption per unit area for decreasing cell load values, showing the transition points that should be used to guarantee the target user flow throughput after each network update. In each subplot, four cases are represented.
Table 3. Network parameters.

Parameter                               Value
Operating bands                         2100 MHz, 900 MHz
Inter-site distances                    250 m, 500 m, 750 m
Number of sites, for each ISD           108, 27, 12
Optimum downtilt angles, for each ISD   19.5°, 12.5°, 11.5°
Macro BS transmission power             20 W
Transmission power per user             17 W
Control overhead                        15 %
BS antenna gain                         18 dB
Body loss                               2 dB
Cable and connection loss               4 dB
Noise power                             −100.13 dBm
Propagation model                       Okumura-Hata
Shadow fading std. deviation            8 dB
Cell edge coverage probability          0.99
Each tag in the plots shows the transition points in terms of (ISD, number of active carriers). Since the load is progressively reduced, the figures should be read from right to left. For example, for the BSO case in Scenario 1, the transition points evolve as (250, 2) → (500, 2) → (750, 2); note how the last configuration can only be implemented for cell loads of 10 %, meaning 5 % of load per carrier.

The joint management (JM) allows earlier BS shut-off, and its transition points fall below the other options, thus yielding clearly lower power consumption without performance degradation. It can be seen that JM allows using ISD = 750 m as soon as the cell load falls below 0.8. For Scenario 2, the ISD can be increased from 250 to 500 m for high loads, and 750 m can be used once the load falls below 0.5. Scenario 3 is the most restrictive, since it starts with the maximum number of carriers possible in the current HSPA+ standard; there is thus less flexibility than in the other cases, and the savings are only slightly better. For illustrative purposes, the off-standard case in which up to 10 carriers are used has also been included, and it can be seen that the energy savings are again important. This way, multiaccess energy saving mechanisms that manage the pool of resources among several systems would make the most of each system's load variations.

Fig. 3. Power consumption per unit area for decreasing cell load values, for (a) Scenario 1, (b) Scenario 2, and (c) Scenario 3. Transition points indicate the pair (ISD, number of carriers) to be used.
Fig. 4. Transition of cell configuration from the initial network setup (Scenario 1) to new setups at specific load values, maintaining the QoS requirement (5.75 Mbps).
It is important to note that the horizontal axis represents the equivalent cell load that would be obtained if the network remained unchanged. Obviously, after a carrier and/or node-B switch-off, the actual cell load changes. For example, initially the load is 1 (0.5 per carrier), and it is not until it is reduced to 0.92 that important energy savings become possible, so we transition from (250, 2)@0.92 to (500, 5)@3.7; recall that, since the load per carrier is bounded by 1, the final aggregated cell value can be > 1. Besides, it is clear that the cell load increases due to the cell expansion and the new users to be served, but the QoS is respected, since both (250, 2)@1 and (500, 5)@3.7 provide the same flow throughput.
In order to illustrate how the load evolves with every change, Fig. 4 represents the average flow throughput as a function of the aggregated cell load for each configuration proposed by JM (solid symbols). Note the logarithmic scale on the horizontal axis, used to improve readability. Following Fig. 3a, the evolution is as follows: (250, 2)@1 → (500, 5)@3.7 → (750, 8)@6.72 → (750, 7)@5.74, and so on. If no energy saving mechanisms are implemented, in other words if we remain with the dense node-B deployment, an excess in capacity is obtained as the load decreases. These situations are represented by empty symbols.
Fig. 6. Comparison between the energy savings (%) of BSO, CSO and JM: (a) Scenario 1, (b) Scenario 2.
6 Conclusions
In this paper we investigated the potential energy savings obtained by shutting off BSs through the dynamic use of multiple carriers in HSDPA. We have proposed an energy saving scheme in which fewer or additional carriers are used depending on the network traffic variations. This is combined with remote electrical downtilts to partially cope with the use of a higher number of lower MCSs. Instead of just guaranteeing a power threshold at the cell edge, or an outage probability threshold for data traffic, it is more interesting to ensure that the QoS remains unchanged whenever a node-B and/or carrier is shut off; for this reason, the study considers the user flow throughput as the performance metric to be respected, which is closely affected by load variations due to cell expansions. A comparison against schemes that progressively shut off network elements (BSO and CSO) has been carried out, showing clear energy savings with the JM approach.

The main challenge in making the adaptation efficient and flexible is that load fluctuations should be correctly followed. Recurrent traffic patterns can be assessed over time, but abnormal temporal or spatial variations could be incorporated by means of a pattern recognition system, e.g., a fuzzy logic based system or a neural network. Further efforts are required in this direction. Moreover, the time for carrier and BS reactivation has not been taken into account in this case study; it will be considered in future work.
References
1. Andrew, R.: 2020: The Ubiquitous Heterogeneous Network - Beyond 4G. ITU Kaleidoscope, NSN, Cape Town (2011). http://www.itu.int/dms_pub/itu-t/oth/29/05/T29050000130001PDFE.pdf
2. Chandrasekhar, V., Andrews, J., Gatherer, A.: Femtocell networks: a survey. IEEE Commun. Mag. 46(9), 59–67 (2008)
3. Marsan, M., Chiaraviglio, L., Ciullo, D., Meo, M.: Optimal energy savings in cellu-
lar access networks. In: IEEE International Conference on Communication Work-
shops (ICC Workshops), Dresden, pp. 1–5 (2009)
4. Jada, M., Hossain, M.M.A., Hämäläinen, J., Jäntti, R.: Impact of femtocells to
the WCDMA network energy efficiency. In: 3rd IEEE Broadband Network and
Multimedia Technology (IC-BNMT), Beijing, pp. 305–310 (2010)
5. Jada, M., Hossain, M.M.A., Hämäläinen, J., Jäntti, R.: Power efficiency model for
mobile access network. In: 21st IEEE Personal, Indoor and Mobile Radio Commu-
nications Workshops (PIMRC Workshops), Istanbul, pp. 317–322 (2010)
6. Yang, R., Chang, Y., et al.: Hybrid multi-radio transmission diversity scheme to
improve wireless TCP performance in an integrated LTE and HSDPA networks.
In: 77th IEEE Vehicular Technology Conference (VTC Spring), Dresden, pp. 1–5
(2013)
7. 4G Americas: White Paper on 4G Mobile Broadband Evolution: 3GPP Release 11
& Release 12 and Beyond. Technical report (2014)
8. Johansson, K., Bergman, J., et al.: Multi-carrier HSPA evolution. In: 69th IEEE
Vehicular Technology Conference (VTC Spring), Barcelona (2009)
9. Gong, J., Zhou, S., Niu, Z., Yang, P.: Traffic-aware base station sleeping in dense
cellular networks. In: 18th International Workshop on Quality of Service (IWQoS),
Beijing, pp. 1–2 (2010)
10. Niu, Z.: TANGO: traffic-aware network planning and green operation. IEEE Wirel.
Commun. 18(5), 25–29 (2011)
11. Chiaraviglio, L., Ciullo, D., et al.: Energy-efficient management of UMTS access
networks. In: 21st International Teletraffic Congress (ITC), Paris, pp. 1–8 (2009)
12. Garcia-Lozano, M., Ruiz, S.: Effects of downtilting on RRM parameters. In: 15th
IEEE International Symposium on Personal, Indoor and Mobile Radio Communi-
cations (PIMRC), Barcelona, vol. 3, pp. 2166–2170 (2004)
13. Han, F., et al.: Energy-efficient cellular network operation via base station cooper-
ation. In: IEEE International Conference on Communications (ICC), Ottawa, pp.
4374–4378 (2012)
14. González G.D., Yanikomeroglu, H., Garcia-Lozano, M., Ruiz, S.: A novel multiob-
jective framework for cell switch-off in dense cellular networks. In: IEEE Interna-
tional Conference on Communications (ICC), Sydney, pp. 2647–2653 (2014)
15. Wang, X., Krishnamurthy, P., Tipper, D.: Cell sleeping for energy efficiency in
cellular networks: is it viable? In: IEEE Wireless Communications and Networking
Conference (WCNC), Paris, pp. 2509–2514 (2012)
16. Micallef, G., Mogensen, P., et al.: Dual-cell HSDPA for network energy saving. In:
71st IEEE Vehicular Technology Conference (VTC Spring), Taipei, pp. 1–5 (2010)
17. Chung, Y.-L.: Novel energy-efficient transmissions in 4G downlink networks. In:
3rd International Conference on Innovative Computing Technology (INTECH),
London, pp. 296–300 (2013)
18. Borkowski, J., Husikyan, L., Husikyan, H.: HSPA evolution with CAPEX consid-
erations. In: 8th International Symposium on Communication Systems, Networks
& Digital Signal Processing (CSNDSP), Poznan (2012)
19. Bonald, T., Elayoubi, S.E., et al.: Radio capacity improvement with HSPA+ dual-
cell. In: IEEE International Conference on Communications (ICC), Kyoto (2011)
20. 3GPP: RP-140092 - Revised Work Item: L-band for Supplemental Downlink in
E-UTRA and UTRA. Technical report (2014). http://www.3gpp.org/
21. 3GPP: TR 25.701 v12.1.0 (Release 12) - Study on scalable UMTS Frequency Divi-
sion Duplex (FDD) Bandwidth. Technical report (2014). http://www.3gpp.org/
22. NSN: Answering the Network Energy Challenge (whitepaper). Technical report
(2014)
23. Brouwer, F., de Bruin, I., et al.: Usage of link-level performance indicators for
HSDPA network-level simulations in E-UMTS. In: International Symposium on
Spread Spectrum Techniques and Applications (ISSSTA), Sydney, pp. 844–848
(2004)
24. 3GPP: TR 25.214 v11.8.0 (Release 11) - Physical layer procedures (FDD). Tech-
nical Specification (2014). http://www.3gpp.org/
25. 3GPP: TR 25.101 v12.3.0 (Release 12) - User Equipment (UE) Radio Transmission
and Reception (FDD). Technical Report (2014). http://www.3gpp.org/
26. Rupp, M., Caban, S., et al.: Evaluation of HSDPA and LTE: From Testbed Mea-
surements to System Level Performance. Wiley, New York (2011)
27. Beeke, K.: Spectrum Planning - Analysis of Methods for the Summation of Log-
Normal Distributions. EBU Technical Review, no. 9 (2007)
28. Bonald, T., Proutière, A.: Wireless downlink data channels: user performance and
cell dimensioning. In: Annual International Conference on Mobile Computing and
Networking (MOBICOM), San Diego, CA (2003)
29. Arnold, O., Richter, F., Fettweis, G., Blume, O.: Power consumption modeling of
different base station types in heterogeneous cellular networks. In: Future Network
and Mobile Summit, Florence, pp. 1–8 (2010)
Enabling Low Electromagnetic Exposure
Multimedia Sessions on an LTE Network
with an IP Multimedia Subsystem Control Plane
1 Introduction
Public concerns about the potential health risk of being exposed to radio communication devices are on the rise, given that radio waves produced by radio telecommunication networks are increasingly ubiquitous in people's daily environment. Even though international standards [1] have established thresholds and guidelines on exposure limits, the debate over the harmfulness of electromagnetic exposure is far from over; for instance, a recent study [2] highlighted that several health risk assessments, carried out by various scientific groups, came to divergent conclusions regarding the harmfulness of radio telecommunication waves. To respond to the public concerns
about the potential health risk due to electromagnetic exposure, the European Commission launched the LEXNET project [3], whose two main objectives are: (1) to define a metric for accurately measuring the exposure, and (2) to study strategies for reducing such exposure without jeopardizing the quality of experience (QoE) perceived by the end-users.

The electromagnetic exposure can be decomposed into two contributions: the downlink and uplink exposures. The downlink exposure (far-field) is the exposure to radio access elements (base stations, access points) and is linked to the power density (in W/m²) received by the terminals. The uplink exposure (near-field) comes from the exposure to the terminal itself, and increases with the emitted power. Consequently, there exist two main types of metrics to evaluate the exposure of a whole population. The first type is related to near-field (uplink) exposure; it focuses on measuring the exposure induced by telecommunication devices, like laptops, tablets or smartphones, and is expressed in terms of the specific absorption rate (SAR). The SAR is defined as the power absorbed by the human body per unit mass (in W/kg) and is calculated either over the whole human body or over ten grams of tissue. Such metrics are usually context-dependent, since they depend on the power emitted by the terminals, the application that is being used, the power received from the radio access points, the morphology of the users, and the position of the terminals relative to the user's body. The second group of metrics considers the exposure induced by far-field (downlink) sources, mostly base stations, and measures the intensity of the electric field (in V/m).
The exposure index proposed within the LEXNET project considers the correlation between the two aforementioned types of metrics, reflecting the duality between downlink and uplink. Indeed, there is a clear relation-
ship between the power emitted from personal devices and the power received
by them: the closer a terminal is to its base station, the lower the emitted power
and the higher the received power.
In this paper, our objective is to tackle the second main objective of the LEXNET project, i.e., reducing the exposure induced by wireless networks while keeping an acceptable QoE, by proposing a novel IP architecture for multimedia services over long term evolution (LTE) systems. In Sect. 2, existing works on the reduction of electromagnetic exposure in wireless systems are first outlined. Section 3 introduces our scenario of interest, i.e., multimedia service (video transmission) over LTE, whereas Sect. 4 discusses the issues and opportunities of modifying the existing IP architecture for reducing uplink electromagnetic exposure. Then, in Sect. 5 we propose a novel cross-layer solution to reduce the exposure in LTE networks, by categorizing the RLC frames (critical vs. non-critical) and decreasing the number of retransmissions for non-critical ones. We also propose an enhanced 3GPP-compliant architecture that can be used to implement our reduced-exposure solution. Section 6 draws the main conclusions of this work and outlines future work, in particular how the proposed scheme will be evaluated.
As mentioned earlier, LEXNET proposes a metric to assess the total exposure induced by telecommunication networks [4]. The metric, named Exposure Index (in J·kg⁻¹·h), gathers both the average power emitted by the personal devices and the average power density received by these devices. It can therefore be seen as the combination of the exposures induced by the access points and the personal devices of the telecommunication networks located within the same geographical area. As previously mentioned, the assessment, done at different periods of the day, takes into account the characteristics of the applications used by the users, the particularities of the telecommunication networks, the morphology of the users, and the position of the terminals relative to the users.
As far as exposure reduction is concerned, it has been shown in [5] that using lower frequency bands in universal mobile telecommunication systems (UMTS) can reduce the EM radiation density of a BS by about 13 dB. In 2010, Kelif et al. [6] showed that it is possible to halve the downlink exposure (power density) without jeopardizing the quality of service of an LTE network, by increasing the number of base stations in an area. Consequently, exposure to electromagnetic fields can be reduced by deploying small cells. More recently, in 2014, Habib et al. [7] demonstrated that it is possible to reduce the Exposure Index in a heterogeneous LTE network, comprising both macro- and small-cells, by extending the coverage of a number of small-cells located at the edges of a macro-cell. The rationale of their solution is to off-load the users located at the edge of the macro-cell towards the small-cells, in order to decrease their uplink transmissions. Other techniques based on SAR shielding have also been proposed in the literature [8] for reducing the uplink exposure by using special materials inside communication devices. To date, most of the work on exposure reduction has focused on physical layer techniques and solutions to lower the power received and emitted by the user terminals. In contrast, we propose here a scheme for reducing the electromagnetic exposure by combining techniques belonging to the link and transport layers.
3 Scenario of Interest
We consider the typical architecture of an LTE network (see Fig. 1). The control plane relies on the IP Multimedia Subsystem [9], which uses the session initiation protocol (SIP) to control the different multimedia sessions [10]. In the control plane, the service centralization and continuity application server [11] ensures the continuity of multimedia sessions at the wireless devices.

In the scope of this work, we focus on a multimedia service in which a wireless device sends a video stream to a remote device. The video is encoded with H.264/AVC, the codec recommended by the 3GPP. The corresponding video slices are transported over UDP/IP datagrams.
Fig. 1. Typical LTE architecture with an IMS control plane: remote device, packet data network, HSS, MME, E-UTRAN (eNodeB), S-GW, P-GW, and the Service Centralization and Continuity Application Server.

Figs. 2 and 3. UDP datagram carried within PDCP SDUs: bytes covered and not covered by the checksum, and lost bytes.
the Quality of Experience (QoE) of the video application, as defined by the International Telecommunication Union in [13], could be improved. Here, instead of increasing the QoE, our objective is to maintain the QoE at a satisfying level while decreasing the number of RLC retransmissions, which would in turn bring about an exposure reduction. In order to achieve this, the following issues must be tackled:

– At the RLC layer of the eNodeB, the loss of particular RLC frames belonging to a UDP/IP datagram could cause the loss of critical information, such as the IP addresses and UDP ports of the connection. To avoid this situation, the RLC frames carrying critical pieces of information, for instance those identifying the connection, shall be protected;
– In its regular operation, the UDP receiving entity would discard a datagram when its Cyclic Redundancy Check (CRC) is wrong and would therefore not pass it to the decoder. This, for instance, would be the case for incomplete UDP datagrams. This behavior could be overcome if the erroneous chunks of the incomplete datagram were not included in the CRC computation, as shown in Fig. 3. In this sense, the bytes not covered by the checksum (within the UDP datagram) must not be critical for the decoder; they might be useful just to enhance the perceived QoE. To establish whether a frame is critical or not, the properties of the application must be used (for instance, the I-frames for some streaming services);
– At the architectural level, we need to assess the QoE and ensure that it is kept at an appropriate level, despite the increase in block error ratio (BLER) due to fewer RLC frame retransmissions.
Fig. 4. New UDP header indicating the bytes covered by the checksum
First of all, we propose the following modification to the 3GPP legacy specification of the RLC layer acknowledged mode. When the eNodeB (acting as a receiver) does not receive an RLC frame, it requests the transmitter, i.e., the user terminal, to retransmit the frame. When the information carried by the RLC frame payload is critical for the decoder (for example, the IP addresses and UDP ports of the connection), the maximum number of retransmissions is set to MaxDAT − 1. On the other hand, if the information is not so relevant for the decoder (we can state that it merely provides a higher QoE), the maximum number of retransmissions is reduced to ⌊(MaxDAT − 1)/N⌋, where N can vary within [1, MaxDAT]. In this sense, the higher the value of N, the lower the electromagnetic exposure. In order to allow for this new functionality, we require some cross-layer interaction: the RLC layer of the transmitter shall be aware of how critical/relevant the information contained within each RLC frame payload is, so as to establish the maximum number of retransmissions. We propose to modify the transport protocol by adding a new field within the UDP header (see Fig. 4) to indicate the bytes that are covered by the checksum. When the RLC layer receives a UDP/IP datagram from the packet data convergence protocol (PDCP) layer [14], it checks this new field to assess whether or not the payload contains critical bytes.
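A minimal sketch of the retransmission-limit rule follows; the flooring of the fraction and the boolean criticality flag (derived from the new UDP header field) are our reading of the scheme, not part of the 3GPP specification:

```python
import math

def max_rlc_retransmissions(is_critical: bool, max_dat: int, n: int) -> int:
    """
    Maximum retransmissions for an RLC frame under the proposed rule:
    critical payloads keep MaxDAT - 1 retransmissions, while non-critical
    ones are limited to floor((MaxDAT - 1) / N), with N in [1, MaxDAT].
    """
    if is_critical:
        return max_dat - 1
    return math.floor((max_dat - 1) / n)

# With MaxDAT = 4 and N = 2: critical frames keep 3 retransmissions,
# non-critical ones are limited to 1, reducing uplink transmit activity.
print(max_rlc_retransmissions(True, 4, 2))   # -> 3
print(max_rlc_retransmissions(False, 4, 2))  # -> 1
```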
Then, at the PDCP layer, the 3GPP specifications could be modified according to the following rule. When the wireless device receives a PDCP status report coming from the eNodeB, it reads the payload of this message to learn which PDCP frames are to be retransmitted. If a PDCP frame does not carry a critical IP/UDP packet, its number of retransmissions can be decreased, as long as the QoE requirements are met. For instance, assuming that each retransmission utilizes the same amount of power P and that MaxDAT = 4, if most of the frames can be transmitted using one fewer retransmission than in the traditional approach, while keeping an acceptable QoE, then a reduction of around 25 % in power/energy can be achieved, i.e., by using 3P transmit power instead of 4P.
Last, but not least, we need to include a new monitoring method to periodically evaluate the QoE of the video application, in order to ensure that it remains at a satisfactory level. We consider here the use of a method originally proposed by the ADAMANTIUM project [15]. The mean opinion score (MOS), a numerical evaluation of the QoE, is estimated with the model of [16], which is based on the following parameters: the frame rate (FR) of the H.264/AVC codec, the sender bit rate (SBR) of the codec, the complexity of the video content (CT), the BLER, and the mean burst length (MBL).
Fig. 5. Flowchart of the algorithm: MOS measurement (triggered when the measurement timer has elapsed), comparison against MOSmin, SBR computation, candidate radio networks list computation, handover decision, and radio network reselection.
Fig. 6. Messages between the entities involved in the implementation of the algorithm
Fig. 7. Data path and eNodeB-SCC AS interface between the wireless device, the source and target eNodeBs, the P-GWs, the IP Multimedia Subsystem, and the remote device.
the frame rate (FR) of the codec. Message #2 contains the response sent by the eNodeB. After having evaluated the MOS, the SCC AS stores the value in a database (#3). The message flow also shows the information exchange with the database: #4 is sent by the access network discovery and selection function (ANDSF) [17] to request the MOS values for a particular end-user terminal, while #5 is the response sent by the database. #6 is a SIP message containing the value of the SBR of the codec, computed by the SCC AS. #7, sent by the SCC AS to the eNodeB when maintaining the QoE level is not possible, triggers the reselection process by requesting a handover (the eNodeB is the entity that initiates the reselection process [18]). #8 is sent by the eNodeB to the ANDSF, which can provide a list of candidate radio networks based on criteria such as the MOS experienced by a wireless device when connected to a particular radio network; this list is included in message #9. Finally, #10 is sent by the eNodeB to the SCC AS to indicate a handover event. Upon reception of this message, the SCC AS prepares the new SIP session towards the target radio network.
The messages are exchanged through the two new interfaces highlighted in Fig. 7. The first interface is placed between the eNodeB and the SCC AS and is used by the SCC AS for retrieving the information that characterizes the link layer and for requesting a handover. The second interface connects the eNodeB and the ANDSF and supports the network reselection process, enabling the eNodeB to retrieve information about candidate radio networks.
6 Conclusion
This paper proposes a novel solution to reduce the exposure in LTE networks by decreasing the number of retransmissions of lost RLC frames. Our cross-layer scheme has been designed for multimedia (video) services; in this sense, it is only desirable to reduce the number of retransmissions of the frames carrying information that is non-critical for the delivery of the service. In order to implement the solution, the degree of relevance of the information is included in a new field of the transport layer header. The latter must be checked by the RLC layer (cross-layer interaction) to adapt the number of retransmissions according to the significance of the information contained in the RLC frame payloads. Given that the BLER could increase as a result of the reduced number of retransmissions, a QoE measurement is periodically carried out and actions are taken if it falls below a predefined threshold.
The implementation of the proposed solution involves enhancements in the E-UTRAN as well as some additional signaling, which could be integrated within the IMS core network, if used. In our future work, we will carry out the performance analysis of the proposed solution. As this heavily depends on the chosen QoE model, several models will need to be implemented. Another aspect to solve relates to the compression of the UDP header at the PDCP layer: even if the header is compressed, the information characterizing the importance of the payload shall remain available to the RLC layer.
References
1. International Commission on Non-Ionizing Radiation Protection: Guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to 300 GHz). Health Phys. 74(4), 494–522 (1998)
2. Wiedemann, P.M., Boerner, F., Dürrenberger, G., Estenberg, J., Kandel, S., van Rongen, E., Vogel, E.: Supporting non-experts in judging the credibility of risk assessments (CORA). Sci. Total Environ. 463–464, 624–630 (2013)
3. Low Electromagnetic Field Exposure Networks (LEXNET) project. http://www.
lexnet-project.eu
4. Conil, E., et al.: LEXNET deliverable D2.4: global wireless exposure metric defin-
ition. Technical report (2013)
5. Derakhshan, F., Jugl, E., Mitschele-Thiel, A.: Reduction of radio emission in low frequency WCDMA. In: Proceedings of the Fifth IEE International Conference on 3G Mobile Communication Technologies (3G 2004), October 2004
6. Kelif, J.M., Coupechoux, M., Marache, F.: Limiting power transmission of green
cellular networks: impact on coverage and capacity. In: 2010 IEEE International
Conference on Communications (ICC), pp. 1–6, May 2010
7. Sidi, H.B.A., Altman, Z., Tall, A.: Self-optimizing mechanisms for EMF reduction in heterogeneous networks. CoRR abs/1401.3541 (2014)
8. Ragha, L.K., Bhatia, M.S.: Evaluation of SAR reduction for mobile phone using
RF shield. Int. J. Comput. Appl. 1(13), 80–85 (2010)
9. Camarillo, G., Garcia-Martin, M.A.: The 3G IP Multimedia Subsystem (IMS):
Merging the Internet and the Cellular Worlds, 3rd edn. Wiley, Hoboken (2008)
10. Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks,
R., Handley, M., Schooler, E.: SIP: Session Initiation Protocol. RFC 3261 (Pro-
posed Standard), June 2002
11. 3rd Generation Partnership Project: Technical specification group services and sys-
tem aspects: 3GPP TS 23.237: IP multimedia subsystem (IMS) service continuity
(Release 12). Technical report (2013)
12. 3rd Generation Partnership Project: Technical specification group radio access
network: 3GPP TS 25.322: radio link control (RLC) protocol specification (Release
11). Technical report (2013)
13. ITU-T: Recommendation P.10/G.100: vocabulary for performance and quality of
service. Technical report (2013)
14. 3rd Generation Partnership Project: Technical specification group radio access
network; evolved universal terrestrial radio access (E-UTRA): 3GPP TS 36.323:
packet data convergence protocol (PDCP) specification (Release 11). Technical
report (2013)
15. ADAptative Management of mediA distributioN based on saTisfaction orIented
User Modelling (ADAMANTIUM) project. http://www.ict-adamantium.eu
16. Khan, A., Sun, L., Ifeachor, E., Fajardo, J-O., Liberal, F., Koumaras, H.: Video
quality prediction models based on video content dynamics for H.264 video over
UMTS networks. Int. J. Digit. Multimedia Broadcast. 2010, Article ID 608138, 17
pp. (2010). doi:10.1155/2010/608138
17. 3rd Generation Partnership Project: Technical specification group core network
and terminals: 3GPP TS 24.312: access network discovery and selection function
(ANDSF) management object (MO) (Release 12). Technical report (2014)
18. 3rd Generation Partnership Project: Technical specification group radio access
network; evolved universal terrestrial radio access (E-UTRA) and evolved universal
terrestrial radio access network (E-UTRAN): 3GPP TS 36.300: overall description
(Release 12). Technical report (2013)
Wireless Networks Algorithms
and Techniques
A New Learning Automata-Based Algorithm
to the Priority-Based Target Coverage Problem
in Directional Sensor Networks
1 Introduction
Wireless sensor nodes are electronic devices that are able to collect, store, and process environmental information, and they can communicate with other sensor nodes through wireless communications. A wireless sensor network (WSN) consists of a large number of wireless sensor nodes distributed within a region of interest. Wireless sensor nodes are conventionally assumed to have a disk-like sensing range [16]. Nevertheless, in the real world, sensor nodes may be limited in their sensing angle, so that they can sense only a sector of a disk-like region. Such sensors are known as directional sensors (e.g., ultrasound, infrared, and video sensors) [1], and the networks composed of them are known as directional sensor networks (DSNs). Sensor nodes are powered by batteries with limited lifetime, which cannot be recharged or replaced in remote and harsh environments. For this reason, extending the network lifetime is of great importance in sensor networks.
One of the basic problems associated with any sensor network is coverage, through which different types of data are collected from the environment. The coverage problem can be classified into two main subcategories: area coverage and target coverage [8]. In area coverage, the whole area of interest should be monitored continuously, whereas in target coverage only some crucial points (targets) in the area need to be monitored [8]. The target coverage problem can be further classified into three sub-problems: simple target coverage, k-coverage, and priority-based target coverage (PTC). In simple coverage, each target is monitored by at least one sensor node; this yields low monitoring accuracy. This drawback shifts the attention towards k-coverage, wherein each target is monitored by at least k sensor nodes, enhancing the reliability and accuracy of the monitoring operation. However, in real applications, targets may have different coverage requirements, which makes k-coverage unfit for such situations and motivates the consideration of PTC, in which a target is monitored by a different number of sensor nodes based on its coverage requirement (priority). The coverage requirement refers to the minimum quality of monitoring that each target requires and is denoted by a value that can be set based on the nature of the problem. Therefore, one of the most important challenges is to solve the PTC problem while, at the same time, maximizing the network lifetime, which is addressed in the present study.
Due to the dense deployment of sensor nodes in most applications, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem, known as the scheduling technique. Many studies have used this technique as a solution to the target coverage problem in DSNs. Ai and Abouzeid [7] conducted one of the first studies on the coverage problem in DSNs; they modeled a Maximum Coverage with Minimum Sensors problem to maximize the number of covered targets while minimizing the number of activated sensors. In [4], the authors defined the multiple directional cover sets problem, proved its NP-completeness, and proposed several heuristic algorithms to solve the target coverage problem. For solving this problem, Gil and Han [5] proposed two algorithms: one based on a greedy method and the other on a genetic algorithm. In [3,13], Mohamadi et al. took advantage of learning automata in order to find near-optimal solutions to the target coverage problem.

A common assumption in the above-mentioned studies is that the targets require the same coverage quality. Consequently, the algorithms proposed in these studies cannot perform well under real scenarios in which the targets differ in their coverage quality requirements. Wang et al. [2] introduced the PTC problem in DSNs, which aims at choosing a minimum subset of directional sensors able to satisfy the prescribed priorities of all the targets. They proposed a genetic algorithm for solving this problem and used a directional sensing model in which more than one direction of a sensor can work at the same time; moreover, their algorithm generates only one cover set. Yang et al. [6] also assumed that the targets require different coverage qualities.
2 Problem Definition
In this study, we investigate the following scenario. Several targets are distributed within a two-dimensional Euclidean field. A certain coverage quality requirement is defined for each target, indicating its importance. In this field, a number of directional sensors are randomly deployed close to the targets to satisfy their coverage quality requirements. All deployed sensors are homogeneous in their initial energy, sensing range, and number of directions. Each directional sensor has several directions; however, at any given time, only one of its directions can be active (known as the working direction). Each directional sensor can monitor only one sector of the disk. Consequently, a target is monitored by a directional sensor only if it is located within both the sensing range and the working direction of the sensor. An important factor that considerably affects the coverage quality is the distance between the directional sensor and the target: as the distance increases, the coverage quality decreases, and vice versa. Note that a target may need to be monitored by more than one directional sensor simultaneously in order for its coverage quality requirement to be fully satisfied. A general assumption is that the coverage quality delivered to a target equals the sum of the coverage provided by the sensor directions that cover it. In this paper, we use the notation of [6].
3 Proposed Algorithm
In this section, we propose a centralized LA-based scheduling algorithm as a solution to the PTC problem (see [9,10] for more details about learning automata). The operation of the network under this algorithm is composed of several rounds. At each round, one cover set is generated that is able to satisfy the coverage quality requirements of all the targets. The algorithm consists of two phases, initialization and sensor direction selection, which are elaborated in the following subsections.
3.1 Initialization
The initialization phase is composed of three steps: generating a network of LA, defining the action-sets of the LA, and configuring the action probability vectors of the LA. In the first step, in order to generate a network of LA, each target is equipped with a learning automaton. The learning automaton aims to select one or more sensor directions needed to satisfy the coverage quality requirement of its target. The action probability vector of learning automaton Ai is initially configured as

$$ p_i^j(k) = \frac{CF(d_{i,j})}{\sum CF(d_{i,j})} \qquad \forall \alpha_i^j \in \alpha_i \text{ and } k = 0 \qquad (1) $$
where CF(d_{i,j}) signifies the sum of the targets' coverage quality requirements satisfied by direction d_{i,j}, and the denominator sums this quantity over all the sensor directions that monitor target t_i. Equation (1) implies that sensor directions satisfying more coverage quality requirements are more likely to be chosen as active sensor directions. Note that the action probability vector of the LA is updated based on the rewarding process.
Note that the action-sets and action probability vectors of the LA change over time in two situations: (i) when a sensor runs out of energy, and/or (ii) when the action-set of a learning automaton is to be pruned. For instance, if sensor direction d_{i,j} becomes disabled at stage k + 1, the action-set of learning automaton Ai is updated by eliminating the action corresponding to d_{i,j}. The choice probability of the removed action (α_i^j) is set to zero, and that of every other action (α_i^{j'}) is updated as follows:

$$ p_i^{j'}(k+1) = p_i^{j'}(k)\left[1 + \frac{p_i^j(k)}{1 - p_i^j(k)}\right] \qquad j' \neq j \qquad (2) $$
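The following sketch (a toy illustration with hypothetical CF values, not the authors' implementation) shows the initialization of (1) and the pruning update of (2):

```python
def initial_probabilities(cf):
    """Initialize action probabilities per Eq. (1): p_j = CF(d_j) / sum CF."""
    total = sum(cf.values())
    return {j: v / total for j, v in cf.items()}

def prune_action(p, removed):
    """
    Remove a disabled direction and rescale the remaining probabilities per
    Eq. (2): each survivor is multiplied by 1 + p_removed / (1 - p_removed),
    so the vector sums to one again.
    """
    scale = 1.0 + p[removed] / (1.0 - p[removed])
    return {j: v * scale for j, v in p.items() if j != removed}

# Three directions covering a target, with hypothetical CF values.
p = initial_probabilities({"d1": 2.0, "d2": 1.0, "d3": 1.0})  # 0.5, 0.25, 0.25
p = prune_action(p, "d3")
print(p)  # -> {'d1': 0.666..., 'd2': 0.333...}
```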
So far, the three steps of the initialization phase, namely generating a network of LA, forming the action-sets of the LA, and configuring the action probability vectors of the LA, have been elaborated. The following subsection explains the sensor direction selection phase of the algorithm.
In this phase, at each stage the learning automata select sensor directions to form
a cover set; the selected actions are rewarded if the generated cover set contains a
number of active sensor directions smaller than or equal to those of the cover sets
previously generated. With such a rewarding process, the convergence of the action
probability vectors of the learning automata to the optimal configuration can be
guaranteed. It should be noted that the action probability vector of an activated LA
is updated after all disabled actions are re-enabled. The dynamic threshold is initially
set to a large value and, at each stage, it is set to the cardinality of the last rewarded
cover set. The k-th stage of the proposed algorithm then ends.
As the proposed algorithm continues, the LA learn to choose active sensor
directions in such a way that a cover set with minimum cardinality can be
generated. The algorithm terminates once the number of constructed cover
sets exceeds a predefined threshold. Finally, the cover set with the minimum
number of active sensor directions is returned as the output of the algorithm.
Afterward, an activation time is assigned to the cover set and added to the
total network lifetime. Based on the activation time, the algorithm updates the
residual energy of the sensors that have a direction in the generated cover set
and eliminates the sensors that have no residual energy from the set of available
sensors. This marks the end of one round of the algorithm and the start of
another. The process of constructing new cover sets continues while the coverage
quality requirements of all the targets can still be satisfied.
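The round structure just described can be summarized in a compact, runnable sketch; here the per-target LA selection is reduced to a random pick, and names such as `one_round` are illustrative rather than taken from the paper.

```python
import random

# A simplified, runnable sketch of the round structure described above. The
# cover-set generation is reduced to random direction picks per target; the
# reward step and the dynamic threshold follow the text, while the actual
# algorithm additionally updates the full LA probability vectors.

def one_round(directions_per_target, stages=100, rng=random.Random(1)):
    best = None
    threshold = float('inf')            # dynamic threshold, initially large
    for _ in range(stages):
        # each target's automaton "selects" one covering direction
        cover = {rng.choice(dirs) for dirs in directions_per_target.values()}
        if len(cover) <= threshold:     # reward only non-larger cover sets
            threshold = len(cover)
            best = cover
    return best                         # minimum-cardinality cover set found

cover = one_round({'t1': ['d1', 'd2'], 't2': ['d2', 'd3'], 't3': ['d3']})
print(cover)                            # e.g. {'d2', 'd3'}
```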
4 Simulation Results
[Figures: network lifetime achieved by the proposed algorithm as a function of
the number of sensors (50–90), the number of targets (4–20), and the sensing
range (30–50).]
5 Conclusion
This paper investigated the problem of target coverage in a directional sensor
network in which the sensors were limited in their battery power and sensing
angle, and the targets had different coverage requirements (known as the
priority-based target coverage (PTC) problem). To solve this problem, we
proposed a learning automata-based scheduling algorithm capable of organizing
the directional sensors into several cover sets in such a way that each cover set
could satisfy coverage requirements of all the targets. Several experiments were
conducted to examine the effect of different parameters such as the number of
sensors and targets and the sensing range on the network lifetime. The obtained
results demonstrated the contribution of the proposed algorithm to solving the
problem. In future studies, we intend to develop an appropriate algorithm for
solving the target coverage problem in cases in which sensors have multiple
sensing ranges.
References
1. Guvensan, M.A., Yavuz, A.G.: On coverage issues in directional sensor networks:
a survey. Ad Hoc Netw. 9, 1238–1255 (2011)
2. Wang, J., Niu, C., Shen, R.: Priority-based target coverage in directional sensor
networks using a genetic algorithm. Comput. Math. Appl. 57, 1915–1922 (2009)
3. Mohamadi, H., Ismail, A.S., Salleh, S.: A learning automata-based algorithm for
solving coverage problem in directional sensor networks. Computing 95, 1–24
(2013)
4. Cai, Y., Lou, W., Li, M., Li, M.: Energy efficient target-oriented scheduling in
directional sensor networks. IEEE Trans. Comput. 58, 1259–1274 (2009)
5. Gil, J.M., Han, Y.H.: A target coverage scheduling scheme based on genetic algo-
rithms in directional sensor networks. Sensors 11, 1888–1906 (2011)
6. Yang, H., Li, D., Chen, H.: Coverage quality based target-oriented scheduling in
directional sensor networks. In: Proceedings of International Conference on
Communications, pp. 1–5 (2010)
7. Ai, J., Abouzeid, A.A.: Coverage by directional sensors in randomly deployed wire-
less sensor networks. J. Comb. Optim. 11, 21–41 (2006)
8. Wang, B.: Coverage problems in sensor networks: a survey. ACM Comput. Surv.
43, 32 (2011)
9. Najim, K., Poznyak, A.S.: Learning Automata: Theory and Applications.
Prentice-Hall, New York (1994)
10. Thathachar, M.A.L., Harita, B.R.: Learning automata with changing number of
actions. IEEE Trans. Syst. Man Cybern. 17, 1095–1100 (1987)
11. Mohamadi, H., Ismail, A.S., Salleh, S., Nodhei, A.: Learning automata-based algo-
rithms for finding cover sets in wireless sensor networks. J. Supercomput. 66,
1533–1552 (2013)
12. Mohamadi, H., Ismail, A.S., Salleh, S.: Utilizing distributed learning automata to
solve the connected target coverage problem in directional sensor networks. Sens.
Actuators A Phys. 198, 21–30 (2013)
13. Mohamadi, H., Ismail, A.S., Salleh, S., Nodhei, A.: Learning automata-based algo-
rithms for solving the target coverage problem in directional sensor networks.
Wirel. Pers. Commun. 73, 1309–1330 (2013)
14. Mohamadi, H., Ismail, A.S., Salleh, S.: Solving target coverage problem using cover
sets in wireless sensor networks based on learning automata. Wirel. Pers. Commun.
75, 447–463 (2014)
15. Salleh, S., Marouf, S.: A learning automata-based solution to the target coverage
problem in wireless sensor networks. In: Proceedings of International Conference
on Advances in Mobile Computing and Multimedia, pp. 185–191 (2013)
16. Yick, J., Mukherjee, B., Ghosal, D.: Wireless sensor network survey. Comput. Netw.
52, 2292–2330 (2008)
Anti-jamming Strategies: A Stochastic
Game Approach
1 Introduction
Due to the fact that wireless networks are built upon a shared and open medium,
they are susceptible to malicious attacks, especially those involving jamming or
interference. For this reason, wireless security has continued to
receive growing attention by the research community. A reader can find com-
prehensive surveys of such threats in [1,2]. In this paper, we focus specifically
on jamming attacks and anti-jamming strategies to cope with such malicious
interference. In such attacks, an adversary (jammer) tries to degrade the signal
quality at the intended receiver (see, for example, a recent book on jamming
principles and techniques [3], on detecting jamming attacks [4], about employ-
ing artificial noise to improve secret communication [5], about defense against
jamming attacks [6], jamming in multi-channel cognitive radio networks [7], jam-
ming of dynamic traffic [8]). These types of attacks can be accomplished by an
adversary by either bypassing the MAC (Media Access Control) layer proto-
col or by emitting radio frequency signals. Here we mention, as examples, only
three types of jamming (malicious) attacks: (a) a constant jammer continuously
emits radio frequency signals, transmitting random bits of data to the channel;
(b) a random jammer alternates between periods of continuous jamming and
inactivity; after jamming, it stops emitting radio signals and enters a sleep mode;
(c) the adversary can be even more sophisticated, as when a primary user
emulation attack is carried out by a malicious user emulating a licensed primary
user in order to obtain the resources of a given channel, to jam, or to ward off the
other users from using the channels. To deal with such problems, where users have
conflicting interests, game theory is a proper tool [9]. In [10], one can find a structured
and comprehensive survey of research contributions that analyze and solve secu-
rity and privacy problems in computer and wireless networks via game-theoretic
approaches. Here, as examples of game-theoretic approaches, we mention just a
few such works: for modeling malicious users in collaborative networks [11], for
adaptive packetized wireless communication [12], for attack-type uncertainty on
a network [13], for packet transmission under jamming [14], for fighting jamming
with jamming [15], for ad hoc networks [16], for fair allocation of resources
by a base station under uncertainty [17], and for jamming in fast-fading channels
[18]. The applications of stochastic games for modeling network security can
be found in [19–22] and for secret and reliable communication with active and
passive adversarial modes in [23].
In this paper we study anti-jamming strategies versus two types of jamming
attacks: (i) a random jammer, where the malicious user combines jamming modes
with sleep modes, and (ii) a sophisticated jammer, where the malicious user uses
the network for a two-fold purpose: law-obedient communication with other users
and to conduct a jamming attack against a specific (primary) user. Through
simple stochastic game models, we demonstrate that incorporating silent time
periods in the transmission protocol, so as to increase the probability of detecting
the jamming source, can increase the reliability of communication and support
jamming-robust operation.
The organization of this paper is as follows: in Sect. 2 and its two subsections,
we first introduce and solve non-zero-sum and zero-sum stochastic games
with a random jammer. In Sect. 3, we formulate and solve a stochastic game
for a sophisticated adversary combining malicious and law-obedient behavior. In
Sect. 4, conclusions are presented. Finally, due to restrictions on the paper's
length, Sect. 5 offers as a supplement the proof of the theorem on the non-zero-sum
stochastic game with a random jammer, to illustrate the mathematical methods
applied for solving the suggested stochastic games explicitly.
In this section we deal with the situation where the interferer is a random jammer
that can choose between jamming and sleep modes. Thus, there are two users:
the PU (or user 1) and the jammer (or user 2). The game is played in time slots
0, 1, . . .. At each time slot, user 2 chooses between two modes: (a) a jamming
mode (J), where user 2 tries to jam user 1's communication by applying the optimal
strategy for such a mode, and (b) a sleep mode (S), where user 2 does not apply
any power at all, perhaps because user 2 is employing some form of intelligent
strategy, choosing how long and when to jam and to sleep.
Beyond the difference in the payoffs in the jamming and transmission modes,
there is another important aspect to consider in the game, namely, in jamming
mode the jamming source can be detected, and perhaps user 2 can be identified
as malicious and his malicious activity can be stopped. We assume that there is
a probability 1 − γ of detecting user 2 in jamming mode by an IDS (Intrusion
Detection System). Note that there is quite an extended literature on detecting
an intruder’s signal or its source (see, for example, books [24–26], and papers on
the energy detection of unknown signals [27,28] and on game-theoretic model of
the optimal scanning bandwidth algorithm [29,30]). Thus, γ is the probability
of not detecting the source of the malicious activity. Of course, in sleep mode
user 2 cannot be detected since he is not active in that case.
At each time slot user 1 also chooses between two modes: (a) a transmission
mode (TJ ), where he transmits a signal optimal under the jamming threat, and
(b) a silent (or quiet) mode (S), where he does not apply any power at all.
When user 1 chooses silent mode, he is not transmitting signals but is trying
to increase the probability that the IDS detects the source of jamming. So, in
silent mode the detection probability 1 − γS is greater than the probability 1 − γ in transmission
mode. By using silent mode, user 1 may lose some payoff due to the delay in
transmitting signals. However, user 1 can gain due to the earlier detection of
the jamming event, and, hence, earlier resumption of the more efficient regime
of transmission.
There is a discount factor δ on the signals transmitted and the rewards
obtained for jamming. This δ can be interpreted as the urgency of communication:
δ = 0 corresponds to the highest urgency and means that transmission has to
be performed during the current time slot, not later, while increasing δ means
that losing a transmission time slot can be easily compensated in the following
time slots.
In the next two sections we will model this situation using non-zero-sum and
zero-sum scenarios. The zero-sum scenario allows us to find a maxmin transmis-
sion protocol giving the optimal transmission under the worst conditions. The
non-zero-sum scenario allows us to find the transmission protocol versus a slightly
more sophisticated malicious user, who wants to jam transmission without being
detected. Such a jammer is inclined to be less risky.
The payoff to user 1 in transmission mode is his throughput, i.e., if the users
apply powers $P^1$ and $P^2$ respectively, the payoff to user 1 is given as follows:

$$v_T^1(P^1, P^2) = \sum_{i=1}^{n} \ln\!\left(1 + \frac{h^1_i P^1_i}{\sigma^2 + h^2_i P^2_i}\right),$$

where $h^1_i$ and $h^2_i$ are the fading channel gains of the two users.
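For concreteness, this payoff is straightforward to evaluate numerically; the sketch below assumes per-channel gain and power vectors of equal length, and the input values are illustrative.

```python
import math

# Direct evaluation of user 1's throughput payoff v_T^1(P1, P2); the sample
# vectors and the noise level sigma2 are illustrative.

def v_T1(P1, P2, h1, h2, sigma2=1.0):
    return sum(math.log(1 + h1[i] * P1[i] / (sigma2 + h2[i] * P2[i]))
               for i in range(len(P1)))

print(v_T1(P1=[1.0, 2.0], P2=[0.5, 0.5], h1=[0.9, 0.7], h2=[0.3, 0.4]))
```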
We will look for a stationary equilibrium $(x^1, x^2)$, where $x^1 = (x^1_{T_J}, x^1_S)$ is the
stationary mixed strategy of user 1 assigning the probabilities $x^1_{T_J}$ and $x^1_S$ to
actions $T_J$ and $S$ (so $x^1_{T_J} + x^1_S = 1$), and $x^2 = (x^2_S, x^2_J)$ is the stationary mixed
strategy of user 2 assigning the probabilities $x^2_S$ and $x^2_J$ to actions $S$ and $J$
(so $x^2_S + x^2_J = 1$). Recall that a pair of (mixed) strategies $(x^1, x^2)$ is a stationary
equilibrium if and only if they are best response strategies to each other, i.e.,
they are solutions of the following equations:

$$x^k = \arg\max_{\mathbf{1}^T x^k = 1,\ x^k \ge 0}\, (x^1)^T A^k(u^k)\, x^2, \qquad k = 1, 2,$$
such that

$$u^k = (x^1)^T A^k(u^k)\, x^2, \qquad k = 1, 2,$$

where

$$A^1(u^1) = \begin{pmatrix} a^1_{T_J S} + \delta u^1 & a^1_{T_J J} + \delta\gamma u^1 \\ \delta u^1 & a^1_{SJ} + \delta\gamma_S u^1 \end{pmatrix} \quad\text{and}\quad A^2(u^2) = \begin{pmatrix} \delta u^2 & a^2_{T_J J} + \delta\gamma u^2 \\ \delta u^2 & \delta\gamma_S u^2 \end{pmatrix}.$$
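Before turning to the LP formulation used below, the fixed-point structure of these equations can be probed numerically. The following sketch, with purely illustrative payoff values, alternates between computing a mixed equilibrium of the 2x2 stage game (via the usual indifference conditions, assuming an interior mixed equilibrium exists) and updating the continuation values $u^1, u^2$; it is a numerical check, not the paper's solution method.

```python
import numpy as np

# Illustrative payoff values (not taken from the paper) and model parameters.
a_TJS, a_TJJ, a_SJ = 2.5, 0.5, 3.0    # user 1 stage payoffs
b_TJJ = 3.0                            # user 2 reward for successful jamming
delta, gamma, gamma_S = 0.9, 0.8, 0.4

def mixed_2x2(A, B):
    """Indifference-based mixed strategies of a 2x2 bimatrix game; assumes an
    interior mixed equilibrium (clipping guards against degenerate cases)."""
    y = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] + A[1, 1] - A[0, 1])
    x = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] + B[1, 1] - B[1, 0])
    x, y = float(np.clip(x, 0, 1)), float(np.clip(y, 0, 1))
    return np.array([x, 1 - x]), np.array([y, 1 - y])

u1 = u2 = 0.0
for _ in range(1000):                  # iterate u^k = (x^1)^T A^k(u^k) x^2
    A1 = np.array([[a_TJS + delta * u1, a_TJJ + delta * gamma * u1],
                   [delta * u1,         a_SJ + delta * gamma_S * u1]])
    A2 = np.array([[delta * u2,         b_TJJ + delta * gamma * u2],
                   [delta * u2,         delta * gamma_S * u2]])
    x1, x2 = mixed_2x2(A1, A2)
    u1, u2 = float(x1 @ A1 @ x2), float(x1 @ A2 @ x2)

print(u1, u2, x1, x2)
```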
Instead of solving these LP problems directly, it is easier to solve them using an
approach that involves examining their dual LP problems:
$$U^1(u^1) = \min\big\{\, U^1 \;:\; A^1(u^1)\,x^2 \le \mathbf{1}\,U^1,\ \mathbf{1}^T x^2 = 1,\ x^2 \ge 0 \,\big\}. \qquad (1)$$
In the domain where user 1 employs silent mode with certainty, the solution is
given as follows:

$$x^1_{T_J} = 0, \qquad x^2_J = \frac{a^1_{T_J S}}{a^1_{SJ} + a^1_{T_J S} - a^1_{T_J J} - \delta(\gamma - \gamma_S)u^1},$$

$$x^1_S = 1, \qquad x^2_S = \frac{a^1_{SJ} - a^1_{T_J J} - \delta(\gamma - \gamma_S)u^1}{a^1_{SJ} + a^1_{T_J S} - a^1_{T_J J} - \delta(\gamma - \gamma_S)u^1}, \qquad (8)$$

$$u^1 = \frac{-c^1_1 - \sqrt{(c^1_1)^2 - 4c^1_2 c^1_0}}{2c^1_2} \quad\text{and}\quad u^2 = 0, \qquad (9)$$

with

$$c^1_0 = a^1_{SJ}\, a^1_{T_J S}, \qquad c^1_1 = (a^1_{T_J J} - a^1_{SJ})(1 - \delta) - a^1_{T_J S}(1 - \delta\gamma_S), \qquad c^1_2 = (1 - \delta)\,\delta(\gamma - \gamma_S). \qquad (10)$$
Figures 1 and 2 illustrate the impact of the silent mode on the users' payoffs
and their equilibrium strategies for $v_T^1(P^{1T_J*}, P^{2J*}) = 0.5$, $v_J^2(P^{1T_J*}, P^{2J*}) = 3$,
$v_T^1(P^{1T_J*}, 0) = 2.5$, $\bar{v}^1 = 6$ and $\gamma = 0.8$. Of course, with an increasing discount
factor δ (so, with decreasing urgency in transmission) the user payoffs increase,
since user 1 intends to transmit longer, and so user 2 can jam longer. It is
interesting that user 1's equilibrium strategy has a threshold structure between
jamming and transmission modes, and it is the same as if the malicious user were
a constant jammer. The values of user 1's strategy do not depend explicitly on the
detection probability in silent mode; only the domains in which these values apply
do. The payoff to user 1 depends on this probability continuously. For user 2 this
phenomenon is observed in reverse: the payoff to user 2 has a threshold structure
in the probability and its value does not depend on the probability explicitly, while
the equilibrium strategy for user 2 depends on it continuously. Also, Fig. 1 illustrates
the domain where user 1 gains from employing silent mode, incorporating in the
transmission protocol some form of ambush mode to help the IDS detect the
jamming source. Of course, user 2 loses in this domain, and this domain essentially
depends on the relation between urgency in transmission and detection probability.
In the zero-sum scenario, the game is given by the following matrix, where rows
correspond to user 1's actions and columns to user 2's:

$$\Gamma = \begin{array}{c|cc} & S & J \\ \hline T_J & a^1_{T_J S} + \delta\Gamma & a^1_{T_J J} + \delta\gamma\Gamma \\ S & \delta\Gamma & a^1_{SJ} + \delta\gamma_S\Gamma \end{array}\,, \qquad (11)$$
Fig. 2. Equilibrium probabilities to transmit for user 1 and to jam for user 2.
where val(Γ) is the value of the game. Since the game is zero-sum,
$\max_{x^1}\min_{x^2}$ coincides with $\min_{x^2}\max_{x^1}$ in (12). The following theorem claims
that the game has a unique equilibrium and gives it explicitly.
Fig. 3. (a) The domains where users apply different equilibrium strategies, and
(b) probability to transmit by user 1.
In this section we assume that user 2 has a more sophisticated behaviour, where
he might be a secondary user (SU) in a network and intends to use the net-
work for two purposes: (i) to communicate as a law-obedient user, and (ii) to
jam as a malicious user. Thus, he can choose between two modes: (a) trans-
mission mode T , and (b) a jamming mode J. User 1 can choose between two
modes: (a) transmission mode (TT ), where he transmits by applying the opti-
mal power under the assumption that user 2 is law-obedient and uses the
network in the optimal way for transmission; and (b) a silent mode (S) to increase
the probability of detection of the jamming event. The probability of detect-
ing the jamming source is 1 − γ, when user 1 is in transmission mode, and
it is 1 − γS, when user 1 is in silent mode; so, γS < γ. As a basic example
of the payoff to user 2 in transmission mode we consider the throughput, i.e.,

$$v_T^2(P^1, P^2) = \sum_{i=1}^{n} \ln\!\left(1 + \frac{h^2_i P^2_i}{\sigma^2 + h^1_i P^1_i}\right).$$

In transmission mode the users
apply strategies composing a Nash equilibrium for such a mode [9], i.e., a pair
of strategies $(P^{1T_T*}, P^{2T*})$ such that for any $(P^1, P^2)$ the following inequalities hold:
$v_T^1(P^1, P^{2T*}) \le v_T^1(P^{1T_T*}, P^{2T*})$ and $v_T^2(P^{1T_T*}, P^2) \le v_T^2(P^{1T_T*}, P^{2T*})$.
For example, the equilibrium strategies in transmission mode (P 1T ∗ , P 2T ∗ ) can
be calculated by the results for general models using the Iterative Water Filling
Algorithm (IWFA) [34,35]. For symmetric models the solution can be obtained
explicitly [36,37]. Since user 2 knows that user 1 combines two modes (silent
mode to detect the source of possible malicious activity and the transmission
mode with the optimal power allocation versus law obedient action of the user
2), to gain greater jamming impact, in jamming mode user 2 applies the best
response strategy to $P^{1T_T*}$, i.e., $P^{2J*} = \arg\max_{P^2} v_T^2(P^{1T_T*}, P^2)$. This scenario
can be described by the following non-zero-sum stochastic game:
$$(\Gamma^1, \Gamma^2) = \begin{array}{c|cc} & T & J \\ \hline T_T & (a^1_{T_T T} + \delta\Gamma^1,\ a^2_{T_T T} + \delta\Gamma^2) & (a^1_{T_T J} + \delta\gamma\Gamma^1,\ a^2_{T_T J} + \delta\gamma\Gamma^2) \\ S & (\delta\Gamma^1,\ a^2_{ST} + \delta\Gamma^2) & (a^1_{SJ} + \delta\gamma_S\Gamma^1,\ \delta\gamma_S\Gamma^2) \end{array}\,, \qquad (13)$$
Theorem 3. The considered non-zero-sum stochastic game has a unique
stationary equilibrium.

(a) If $(1 - \gamma\delta)\,a^2_{T_T T}/(1 - \delta) > a^2_{T_T J}$, then the unique (pure) equilibrium is
$(T_T, T)$ with payoffs $u^k = a^k_{T_T T}/(1 - \delta)$, $k = 1, 2$.

(b) If $(1 - \gamma\delta)\,a^2_{T_T T}/(1 - \delta) < a^2_{T_T J}$ and $(1 - \gamma_S\delta)\,a^1_{T_T J}/(1 - \delta\gamma) > a^1_{SJ}$, then
the unique (pure) equilibrium is $(T_T, J)$ with payoffs $u^k = a^k_{T_T J}/(1 - \delta\gamma)$,
$k = 1, 2$.

(c) If $(1 - \gamma\delta)\,a^2_{T_T T}/(1 - \delta) < a^2_{T_T J}$ and $(1 - \gamma_S\delta)\,a^1_{T_T J}/(1 - \delta\gamma) < a^1_{SJ}$, then the
unique (mixed) equilibrium $(x^1, x^2) = ((x^1_{T_T}, x^1_S), (x^2_T, x^2_J))$ with payoffs $(u^1, u^2)$
is given as follows:

$$x^1_{T_T} = \frac{a^2_{ST} + \delta(1 - \gamma_S)u^2}{a^2_{ST} + a^2_{T_T J} - a^2_{T_T T} + \delta(\gamma - \gamma_S)u^2}, \qquad x^1_S = \frac{a^2_{T_T J} - a^2_{T_T T} - \delta(1 - \gamma)u^2}{a^2_{ST} + a^2_{T_T J} - a^2_{T_T T} + \delta(\gamma - \gamma_S)u^2},$$

$$x^2_T = \frac{a^1_{T_T J} - a^1_{SJ} + \delta(\gamma - \gamma_S)u^1}{a^1_{T_T J} - a^1_{SJ} - a^1_{T_T T} + \delta(\gamma - \gamma_S)u^1}, \qquad x^2_J = \frac{-a^1_{T_T T}}{a^1_{T_T J} - a^1_{SJ} - a^1_{T_T T} + \delta(\gamma - \gamma_S)u^1},$$

$$u^1 = \frac{-c^1_1 - \sqrt{(c^1_1)^2 - 4c^1_2 c^1_0}}{2c^1_2}, \qquad u^2 = \frac{-c^2_1 + \sqrt{(c^2_1)^2 - 4c^2_2 c^2_0}}{2c^2_2}.$$
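A quick numeric sanity check of the case conditions in Theorem 3 is straightforward; the argument names below mirror the $a^k$ coefficients of game (13), and the sample values are illustrative only.

```python
# Classify which case of Theorem 3 applies for given stage payoffs; the
# sample values are illustrative, not taken from the paper's experiments.

def theorem3_case(a1_TTT, a1_TTJ, a1_SJ, a2_TTT, a2_TTJ,
                  delta, gamma, gamma_S):
    if (1 - gamma * delta) * a2_TTT / (1 - delta) > a2_TTJ:
        return "(a): pure equilibrium (T_T, T)"
    if (1 - gamma_S * delta) * a1_TTJ / (1 - delta * gamma) > a1_SJ:
        return "(b): pure equilibrium (T_T, J)"
    return "(c): mixed equilibrium"

print(theorem3_case(a1_TTT=1.1, a1_TTJ=0.1, a1_SJ=0.6,
                    a2_TTT=1.1, a2_TTJ=1.6,
                    delta=0.9, gamma=0.8, gamma_S=0.4))
```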
Figures 4 and 5 illustrate the impact of the silent mode on user payoffs and their
equilibrium strategies for $v_T^1(P^{1T_T*}, P^{2T*}) = 1.1$, $v_T^2(P^{1T_T*}, P^{2T*}) = 1.1$,
$v_T^1(P^{1T_T*}, P^{2J*}) = 0.1$, $v_T^2(P^{1T_T*}, P^{2J*}) = 1.6$, $v_T^2(0, P^{2T*}) = 3$, $\bar{v}^1 = 4$ and
$\gamma = 0.8$, as functions of the discount factor δ and the probability of non-detection
in silent mode $\gamma_S$. It is interesting
that user 1 never employs silent mode with certainty: he either uses just the
transmission mode or chooses randomly between the transmission and silent modes.
Thus, user 1 employs the silent mode less often versus a sophisticated adversary
than versus a random jammer. Employing such a mode allows user 1 to increase
his payoff and to reduce the payoff to user 2. There is one more interesting
difference between the random and the sophisticated jammer: the payoff to user 1
versus the sophisticated jammer is piecewise continuous in the discount factor δ
and the probability of non-detection in silent mode, while versus the random
jammer it is continuous. Finally, note that the zero-sum version of the game (13)
has the same structure as (11), with an adaptation of the coefficients of matrix
$A^1$. Hence, Theorem 2 can also be applied to the zero-sum version of the game (13).
Fig. 5. Equilibrium probability to transmit for user 1 and to jam for user 2.
4 Conclusions
of the network, since some threshold values could increase the sensitivity of the
transmission protocol, while in other situations it produces only a minimal
impact. In our future work, we are going to investigate how different discount
factors, which reflect differences in the urgency for the users to perform their
actions, can impact the optimal strategies. Also, we are going to investigate more
sophisticated jamming and anti-jamming strategies that describe different types
of malicious activity as well as the corresponding responses to them by the
primary user, and to incorporate some learning algorithm in the users' behaviour.
The proof proceeds by examining the dual LP problems. For user 1:

minimize $U^1(u^1)$ subject to
$$L^1_{T_J}(u^1, x^2_S) := (a^1_{T_J S} + \delta u^1)\,x^2_S + (a^1_{T_J J} + \delta\gamma u^1)(1 - x^2_S) \le U^1(u^1), \qquad (17)$$
$$L^1_S(u^1, x^2_S) := \delta u^1 x^2_S + (a^1_{SJ} + \delta\gamma_S u^1)(1 - x^2_S) \le U^1(u^1);$$

and for user 2:

minimize $U^2(u^2)$ subject to
$$L^2_S(u^2, x^1_{T_J}) := \delta u^2 x^1_{T_J} + \delta u^2 (1 - x^1_{T_J}) \le U^2(u^2), \qquad (18)$$
$$L^2_J(u^2, x^1_{T_J}) := (a^2_{T_J J} + \delta\gamma u^2)\,x^1_{T_J} + \delta\gamma_S u^2 (1 - x^1_{T_J}) \le U^2(u^2).$$
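For a fixed $u^1$, the LP in (17) can also be evaluated numerically; the sketch below uses scipy's linprog over the variables $(x^2_S, U^1)$, with illustrative payoff values.

```python
from scipy.optimize import linprog

# Numerical evaluation of the LP in (17) for a fixed u1; the payoff values
# are illustrative, not taken from the paper.
a_TJS, a_TJJ, a_SJ = 2.5, 0.5, 3.0
delta, gamma, gamma_S, u1 = 0.9, 0.8, 0.4, 1.0

A = a_TJS + delta * u1              # coefficient of x2S in L1_TJ
B = a_TJJ + delta * gamma * u1      # coefficient of (1 - x2S) in L1_TJ
C = delta * u1                      # coefficient of x2S in L1_S
D = a_SJ + delta * gamma_S * u1     # coefficient of (1 - x2S) in L1_S

res = linprog(c=[0, 1],                           # minimize U1
              A_ub=[[A - B, -1], [C - D, -1]],    # both L <= U1 constraints
              b_ub=[-B, -D],
              bounds=[(0, 1), (None, None)])      # x2S a probability, U1 free
x2S, U1 = res.x
print(f"x2S = {x2S:.3f}, U1(u1) = {U1:.3f}")
```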
Then, by (19) (see Fig. 6), for any $u^1 \in [0, \bar{u}^1]$, where
$\bar{u}^1 = (a^1_{SJ} - a^1_{T_J J})/(\delta(\gamma - \gamma_S))$, $U^1(u^1)$ is the solution of the equations
$L^1_{T_J}(u^1, x^2_S) = L^1_S(u^1, x^2_S) = U^1(u^1)$. Thus,

$$U^1(u^1) = \frac{\delta^2(\gamma - \gamma_S)(u^1)^2 + \delta\big(a^1_{T_J J} - a^1_{SJ} - \gamma_S a^1_{T_J S}\big)u^1 - a^1_{SJ}\, a^1_{T_J S}}{\delta(\gamma - \gamma_S)u^1 + a^1_{T_J J} - a^1_{SJ} - a^1_{T_J S}}. \qquad (20)$$
Fig. 6. LP problem (17) for u1 < ū1 (left) and LP problem (17) for u1 = ū1 (right).
It is clear that

$$U^1(0) > 0. \qquad (21)$$
References
1. Vadlamani, S., Medal, H., Eksioglu, B.: Security in wireless networks: a tutorial.
In: Butenko, S., Pasiliao, E.L., Shylo, V. (eds.) Examining Robustness and Vul-
nerability of Networked Systems, pp. 272–289. IOS Press, Boston (2014)
2. Sharma, A., Ahuja, S., Uddin, M.: A survey on data fusion and security threats in
CR networks. Int. J. Curr. Eng. Technol. 4, 1770–1778 (2014)
3. Poisel, R.A.: Modern Communications Jamming Principles and Techniques. Artech
House Publishers, London (2006)
4. Xu, W., Trappe, W., Zhang, Y., Wood, T.: The feasibility of launching and detect-
ing jamming attacks in wireless networks. MobiHoc 2005, 46–57 (2005)
5. Negi, R., Goel, S.: Secret communication using artificial noise. In: IEEE VTC 2005,
pp. 1906–1910 (2005)
6. Xu, W.: Jamming attack defense. In: Tilborg, H., Jajodia, S. (eds.) Encyclopedia
of Cryptography and Security, pp. 655–661. Springer, New York (2011)
7. Wu, Y., Wang, B., Liu, K.J.R., Clancy, T.C.: Anti-jamming games in multi-channel
cognitive radio networks. IEEE JSAC 30, 4–15 (2012)
8. Sagduyu, Y.E., Berry, R.A., Ephremides, A.: Jamming games for power controlled
medium access with dynamic traffic. In: IEEE ISIT 2010, pp. 1818–1822 (2010)
9. Fudenberg, D., Tirole, J.: Game Theory. MIT Press, Boston (1991)
10. Manshaei, M.H., Zhu, Q., Alpcan, T., Basar, T., Hubaux, J.P.: Game theory meets
network security and privacy. ACM Comput. Surv. 45, 25:1–25:39 (2013)
11. Theodorakopoulos, G., Baras, J.S.: Game theoretic modeling of malicious users in
collaborative networks. IEEE JSAC 26, 1317–1327 (2008)
12. Firouzbakht, K., Noubir, G., Salehi, M.: On the performance of adaptive packetized
wireless communication links under jamming. IEEE Trans. Wirel. Commun. 13,
3481–3495 (2014)
13. Garnaev, A., Baykal-Gursoy, M., Poor, H.V.: Incorporating attack-type uncer-
tainty into network protection. IEEE Trans. Inf. Forensics Secur. 9, 1278–1287
(2014)
14. Garnaev, A., Hayel, Y., Altman, E., Avrachenkov, K.: Jamming game in a dynamic
slotted ALOHA network. In: Jain, R., Kannan, R. (eds.) Gamenets 2011. LNICST,
vol. 75, pp. 429–443. Springer, Heidelberg (2012)
15. Chen, L., Leneutre, J.: Fight jamming with jamming - a game theoretic analysis
of jamming attack in wireless networks and defense strategy. Comput. Netw. 55,
2259–2270 (2011)
16. Liao, X., Hao, D., Sakurai, K.: Classification on attacks in wireless ad hoc net-
works: a game theoretic view. In: 2011 7th International Conference on Networked
Computing and Advanced Information Management (NCM), pp. 144–149 (2011)
17. Altman, E., Avrachenkov, K., Garnaev, A.: Fair resource allocation in wireless
networks in the presence of a jammer. Perform. Eval. 67, 338–349 (2010)
18. Amariucai, G.T., Wei, S.: Jamming games in fast-fading wireless channels.
IJAACS 1, 411–424 (2008)
19. Nguyen, K.C., Alpcan, T., Basar, T.: Stochastic games for security in networks
with interdependent nodes. In: GameNets 2009, pp. 697–703 (2009)
20. Wang, B., Wu, Y., Liu, K.J.R., Clancy, T.C.: An anti-jamming stochastic game
for cognitive radio networks. IEEE JSAC 29, 877–889 (2011)
21. DeBruhl, B., Kroer, C., Datta, A., Sandholm, T., Tague, P.: Power napping with
loud neighbors: optimal energy-constrained jamming and anti-jamming. In: 2014
ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec
2014), pp. 117–128 (2014)
22. Calinescu, G., Kapoor, S., Qiao, K., Shin, J.: Stochastic strategic routing reduces
attack effects. In: GLOBECOM 2011, pp. 1–5 (2011)
23. Garnaev, A., Baykal-Gursoy, M., Poor, H.V.: A game theoretic analysis of secret
and reliable communication with active and passive adversarial modes. IEEE
Trans. Wirel. Commun. (2014, submitted)
24. Comaniciu, C., Mandayam, N.B., Poor, H.V.: Wireless Networks: Multiuser
Detection in Cross-Layer Design. Springer, New York (2005)
25. Verdu, S.: Multiuser Detection. Cambridge University Press, Cambridge (1998)
26. Trees, H.L.V.: Detection, Estimation, and Modulation Theory. Wiley, New York
(2001)
27. Urkowitz, H.: Energy detection of unknown deterministic signals. Proc. IEEE 55,
523–531 (1967)
28. Digham, F.F., Alouini, M.S., Simon, M.K.: On the energy detection of unknown
signals over fading channels. In: IEEE ICC 2003, pp. 3575–3579 (2003)
29. Garnaev, A., Trappe, W.: Stationary equilibrium strategies for bandwidth
scanning. In: Jonsson, M., Vinel, A., Bellalta, B., Marina, N., Dimitrova, D.,
Fiems, D. (eds.) MACOM 2013. LNCS, vol. 8310, pp. 168–183. Springer,
Heidelberg (2013)
30. Garnaev, A., Trappe, W., Kung, C.-T.: Dependence of optimal monitoring strat-
egy on the application to be protected. In: 2012 IEEE Global Communications
Conference (GLOBECOM), pp. 1054–1059 (2012)
31. Altman, E., Avrachenkov, K., Garnaev, A.: A jamming game in wireless networks
with transmission cost. In: Chahed, T., Tuffin, B. (eds.) NET-COOP 2007. LNCS,
vol. 4465, pp. 1–12. Springer, Heidelberg (2007)
32. Garnaev, A., Hayel, Y., Altman, E.: A Bayesian jamming game in an OFDM
wireless network. In: 2012 10th International Symposium on Modeling and Opti-
mization in Mobile, Ad Hoc and Wireless Networks (WIOPT), pp. 41–48 (2012)
33. Altman, E., Avrachenkov, K., Garnaev, A.: Jamming in wireless networks: the case
of several jammers. In: International Conference on Game Theory for Networks
(GameNets 2009), pp. 585–592 (2009)
34. Luo, Z.-Q., Pang, J.-S.: Analysis of iterative waterfilling algorithm for multiuser
power control in digital subscriber lines. EURASIP J. Adv. Sign. Process. 2006,
10 (2006)
35. Yu, W., Ginis, G., Cioffi, J.M.: Distributed multiuser power control for digital
subscriber lines. IEEE JSAC 20(5), 1105–1115 (2002)
36. Altman, E., Avrachenkov, K., Garnaev, A.: Closed form solutions for symmetric
water filling games. In: 27th IEEE Communications Society Conference on Com-
puter Communications (INFOCOM 2008), pp. 673–681 (2008)
37. Altman, E., Avrachenkov, K., Garnaev, A.: Closed form solutions for water-filling
problem in optimization and game frameworks. Telecommun. Syst. 47, 153–164
(2011)
OpenMobs: Mobile Broadband Internet
Connection Sharing
1 Introduction
Today, mobile handsets and devices have come to outnumber traditional PCs
several times over, and applications for mobile devices have exploded in number.
Many such applications consume or generate a lot of Internet traffic. For example, to
address problems associated with the limited amount of resources (computation,
address problems associated with the limited amount of resources (computation,
storage, power) available within the mobile device, or to provide a richer experience
to their clients, many application developers appeal to resource providers (the
'Cloud') other than the mobile device. This is why today we have various
mobile applications connected to Apple iCloud, Google's Gmail for Mobile,
or Google Goggles. Of course, for this to happen, application developers rely on
good networking connections with the Cloud.
The networking capabilities offered by mobile devices have become very
diverse lately. Internet access options range from using free Wi-Fi at a hotspot,
to having a mobile broadband (e.g., 3G) or a mobile hotspot access (the “any-
where, anytime” Internet access offered over cellular networks). Among these,
mobile broadband access is still widely used, since it allows the user to go online
anywhere there is a cellular signal.
For mobile broadband access, a mobile data plan from a cell phone provider
allows a client to access the 3G or 4G data network, to send and receive emails,
surf the Internet, use IM, and so on from his mobile device. Mobile broadband
devices such as mobile hotspots and USB mobile broadband modems also require
a data plan from a wireless provider.
Unlimited data plans for cell phones (including smartphones) were the norm
until recently (sometimes folded in with other wireless services in a one-price
subscription plan for voice, data, and texting). Still, today most providers,
following the example set by AT&T in 2010 [11], use tiered data pricing, thus
eliminating unlimited data access on cell phones. Tiered data plans charge dif-
ferent rates based on how much data the client uses each month. The benefit
is that such metered plans discourage heavy data usage that could slow down
a cellular network. Thus, it is no wonder today that most mobile broadband
plans for data access on laptops and tablets or via mobile hotspots are typically
tiered [10]. The downside is that users have to be more vigilant about how much
data they are using, and for heavy users, tiered data plans are more expensive.
When it comes to choosing a suitable tiered mobile data plan [4], clients
generally tend to go for oversized mobile data plans: they estimate their peak
monthly traffic needs, which is natural considering that mobile operators charge
for extra traffic above the data plan limits. In our work, we started by analysing
this fact, through interviews and questionnaires, and found out that today most
clients tend to pay for a lot of mobile broadband traffic, but most of the time
they never use their entire paid-for data plan traffic.
On the other hand, when users (accidentally) exceed their mobile plan rates,
they are generally charged extra by the mobile provider. Also, in roaming, the
extra costs for connectivity can occasionally be quite prohibitive. So, the research
question we are addressing in this article is: can we come up with a solution that
mediates opportunistic sharing of networking resources, when needed, between
users? For the sharing of WiFi Access Point traffic, opportunistic networking
today provides an answer [2]. For mobile broadband access, we want to let users
share parts of their unused mobile broadband traffic with others. But users pay
a monthly fee to their mobile providers, so they might be reluctant to 'give
away' traffic to others for free. Thus, we propose letting the user become a
re-seller of broadband traffic that he gets from his mobile provider. This means
that a user can take part of his under-used traffic and sell it to clients in need,
making a small profit in doing so (such that, at the end of the month, some
of his monthly mobile data plan fee gets paid by others). For a buyer, it is
attractive to have other users let him use their mobile broadband data access, if
he ends up paying less compared to the fees charged by the mobile provider.
In the present work we present OpenMobs, a system designed to support the
sharing of under-utilized resources available on mobile handsets in a distributed
and opportunistic way. When two or more handsets are in wireless proximity,
OpenMobs tries to forward part of one user's traffic through the mobile data plan
of the other. To enable payments between users, and to motivate them to share
resources in an accountable manner, a digital currency such as BitCoin [9] can
be used as the form of payment for used resources. Here, we present extensive
studies on the feasibility of such a system to minimize the costs users pay to
their mobile providers at the end of the month.
Internet connection sharing has existed as an idea for many years, and every
modern operating system has implemented its fair share of services in order to
address it [12]. For smartphones, the latest venture that comes close to what
the current work is trying to solve is Open Garden [6]. Open Garden leverages
crowdsourcing to create seamless connectivity across 3G, 4G, Wi-Fi and Blue-
tooth. It enables users to create their own ad-hoc mesh networks with other
Open Garden enabled devices (i.e., smartphones, tablets and PCs). Unlike the
centralized idea proposed by Open Garden, OpenMobs allows users to share net-
working resources with minimal interaction with a centralized entity. OpenMobs
tackles the problem of automatic sharing based on context, where the user node
automatically chooses to use the opportunistic shared connection.
Our work can also be compared with the idea of offloading cellular networks
through ad hoc vehicular wireless networks [8]. However, we do not rely only
on the existence of wireless routers (it is even better when such devices exist),
and optimize traffic consumption particularly considering the wireless mobile
broadband charging fees. To the best of our knowledge, this is the first work to
propose such a decentralized traffic sharing approach.
The rest of the paper is structured as follows. In Sect. 2 we first introduce the
theoretical optimization problem linked to the allocation of networking resources,
and propose a heuristic allocation approach for the maximization of compensa-
tion costs. Our approach is further evaluated in extensive simulation experiments
in Sect. 3. Finally, in Sect. 4 we present the conclusions.
We want to find an equilibrium price auction model for bidding traffic between
users, such that by re-routing traffic through other mobile phones and paying a
fee for the temporary use of their data plans, users gain profits and/or pay less,
compared to the case when each user acts selfishly, sticking only to the local data
plan costs negotiated with their cell phone operators.
In this problem, any user can become a traffic provider for other users (seller).
The traffic is auctioned, and any user is interested in buying traffic from other users
(buyer) directly located in his wireless communication range (WiFi, Bluetooth
or ZigBee could be employed at no extra cost), if the price is lower than the
cost associated with sending the traffic over the 3G or 4G data network. As
mentioned, such a situation appears, for example, when a buyer has already
consumed his entire monthly data plan traffic, and any extra traffic might be
charged by the mobile operator at considerably higher fees; or when a buyer is
roaming, and the cost of transferring data over the mobile operator can be
considerably higher compared to the costs negotiated with the mobile operator
by another user (i.e., local to the mobile network).
This situation is illustrated in Fig. 1. In the example, userB needs to transfer
some data (send and receive emails, surf the Internet, use IM, and so on) from his
mobile device. For this, he can send data over 3G, at the cost associated with
the data plan negotiated with the mobile provider (costB). Luckily, in his
wireless communication range, userA is offering to transfer this data, over a WiFi
connection existing between these two users, at a cost (bidAB) lower than costB
(so userB actually pays less, the difference being his 'gain'). For userA, this
situation also brings a small profit (gainA), since the offered cost bidAB is higher
than the actual cost (costA) negotiated by userA with her mobile provider for
transferring this data.
Fig. 1. Example scenario.

When applying equilibrium price auctions for the allocation of traffic, the two
roles, buyer and seller, face distinct yet linked challenges. The buyer is interested
in transferring traffic at the minimum cost possible, while the seller will commonly
pursue the objective of maximizing his profit. Thus, we need to come up with
specific equilibrium prices each time a user is interested in transferring some data,
so as to avoid situations where a seller loses money at the end of the month by
selling traffic too cheaply, compared to the cost he has to pay for his own traffic
transfer needs. In this case, the profit is given by the difference between the
revenue from the served bids and the costs associated with all transfers over the
mobile data network.
For the rest of this article, we assume that each user has a mobile data plan,
negotiated with a local mobile provider/operator. Also, each user consumes a
certain amount of traffic, each month, for his own personal needs (transfers
generated from the local mobile phone, for emails, web, and others). We further
assume that each user can participate in any auction, with any other users having
different data plans and traffic needs.
Fig. 2. Schematic overview of the optimization model, depicting the decision variables,
and most relevant entities.
Finally, we assume users are mobile, such that, with high probability over a
longer period of time, any two users meet at least once (such that
$\forall b \in B,\ s \in S:\ T_{bs} \ge 0$ holds).
$$Cost_{initial} = \sum_{u \in U} CF_u + \sum_{u \in U} CO_u\, T_u \qquad (2)$$

$$Cost_{optim}(x) = \sum_{b \in B} C_b\Big(T_b - \sum_{s \in S} x_{bs} T_{bs}\Big) + \sum_{b \in B,\, s \in S} x_{bs}\, CS_{bs}(T_{bs}) + \sum_{s \in S} C_s\Big(T_s + \sum_{b \in B} x_{bs} T_{bs}\Big) \qquad (3)$$

where

$$C_u(T) = \begin{cases} CF_u & \text{if } T < DP_u, \\ CF_u + CO_u\,(T - DP_u) & \text{otherwise}. \end{cases} \qquad (4)$$
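A minimal sketch of the per-user cost in Eq. (4), assuming all traffic quantities share one unit (e.g., MB), might read as follows; the sample plan values are illustrative.

```python
# A minimal sketch of Eq. (4): CF_u is the monthly flat fee, CO_u the price
# per unit of extra traffic, DP_u the traffic included in the plan. The
# sample values below are illustrative.

def monthly_cost(T, CF, CO, DP):
    """Flat fee while within the plan; flat fee plus overage charges once the
    included traffic DP is exceeded."""
    return CF if T < DP else CF + CO * (T - DP)

# Total cost without sharing: sum the individual costs over all users.
users = [(450, 10.0, 0.05, 500), (900, 10.0, 0.05, 500)]   # (T, CF, CO, DP)
print(sum(monthly_cost(*u) for u in users))                # 10 + 30 = 40.0
```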
For real-life application scenarios involving thousands of users, the optimal allo-
cation approach may be problematic due to its exponential growth in compu-
tational complexity. Thus, we have developed a heuristic approach that trades
reductions in computation time against potentially sub-optimal solutions. The
idea is to determine an equilibrium price auctioned between any two users.
In our approach, whenever two users, A and B, meet, each presents to the
other a price he is willing to accept for traffic forwarding. This means that A
computes a price, CSAB, that he is willing to accept from B (per data unit). If user
B needs to transfer data (for email, or others), he decides whether it is cheaper to
transfer it through the mobile network (3G) or to send it through A (over WiFi or
another 'cost-free' wireless communication protocol). In the latter case, A gains a
small fee, which is still larger than what it costs him to actually send the data
coming from B over A's mobile network. If this is true, then user A becomes the
'seller' and user B the 'buyer'.
The $CS_{bs}$ price depends on several parameters (as described in Model 1): the
mobile data plan of the seller ($DP_s$), the traffic already used from this data plan
by the seller since the beginning of the current month¹ ($P_s$), and the amount
of traffic the buyer is interested in transferring ($XP_b$). The idea is to sell cheap
when the seller has plenty of traffic left in his mobile data plan (so as to
guarantee that at least someone buys; the seller wants to maximize his profit
and use the traffic remaining in the data plan that would otherwise go to waste),
and to sell at a high rate if the seller does not have much traffic left in the mobile
data plan (such that, in the unfortunate event that he will later also want to use
his data plan for his own traffic needs, the higher fees charged by the mobile
provider for any extra traffic are still covered by the fees he collects from his
traffic buyers; the seller wants to stay 'in profit' and not lose money at the end
of the month).

Thus, the heuristic formula we propose for computing the cost is:

$$CS_{bs} = e^{\min\left\{th,\ \frac{P_s + XP_b - DP_s}{coef}\right\}} \qquad (7)$$
where th is a high upper-limit threshold (which ensures the negotiated fee does
not grow indefinitely), and coef is a coefficient that reflects the mobility environment.
For coef, we start with a predefined traffic value (the predefined preference of the
user to sell traffic). If, at the end of the month, the user loses money (because his
preference for selling made him sell cheaper than he is charged by the mobile
operator), the value of this coefficient doubles. After several iterations, as the
experiments presented next show, the system actually reaches a state of
equilibrium, and all coef values are stable and individually defined
such that we have a positive profit.
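The pricing rule and its month-to-month adaptation can be sketched directly; the threshold and coefficient values below are illustrative, not the ones used in the simulations.

```python
import math

# A sketch of the selling-price heuristic of Eq. (7) and the end-of-month
# coefficient adaptation described above; `th` and `coef` follow the text,
# while their values here are illustrative.

def selling_price(P_s, XP_b, DP_s, coef, th=5.0):
    """Price per data unit asked by seller s from buyer b: cheap while plenty
    of plan traffic remains, expensive near or beyond the plan limit."""
    return math.exp(min(th, (P_s + XP_b - DP_s) / coef))

def adapt_coef(coef, monthly_profit):
    """Double the coefficient if the seller lost money this month (raising
    future prices in the usual under-the-limit regime); keep it otherwise."""
    return 2 * coef if monthly_profit < 0 else coef

print(selling_price(P_s=400, XP_b=50, DP_s=500, coef=100))  # ~0.61, cheap
print(selling_price(P_s=480, XP_b=50, DP_s=500, coef=100))  # ~1.35, pricier
```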
3 Evaluation
For testing the proposed heuristic allocation approach, we used three publicly-
available mobility traces. UPB [2] is a trace taken in an academic environment
at the University Politehnica of Bucharest, where the participants were students
and teachers at the faculty. It includes Bluetooth and WiFi data collected for a
period of 64 days, by 66 participants. St. Andrew [1] is a real-world mobility
trace taken on the premises of the University of St. Andrews and around the
surrounding town. It lasted for 79 days and involved 27 participants that used
T-mote Invent devices with Bluetooth capabilities. Finally, MIT Reality [3] con-
tains tracing data from 100 users from the University of Helsinki. The collected
information includes call logs and Bluetooth devices in proximity, collected over
the course of an academic year. Thus, each scenario emulates different running
conditions: users meet scarcely, regularly, or frequently.
¹ Throughout the paper, a month is the time period usually charged by the mobile
operator. As such, a month can actually begin on any day of the monthly calendar.
Each data service plan is characterized by the extra cost charged once a user has
exhausted the traffic in his data plan and the amount of included data traffic.
Each data service plan also has an associated probability that is taken into
account when generating the data plan associations for the mobile devices.
After associating a data subscription plan, the next step was to designate a
level of data usage and traffic pattern per device. A traffic pattern has two associ-
ated parameters: the average amount of traffic used per month and a propensity
to use that traffic when in a social context (when the device is in contact with
other devices). The propensity parameter is represented as the probability that
a device will consume traffic when in the presence of another device.
Table 2. Traffic usage pattern parameters used in the simulation.
The data service plan parameters used in our simulation are presented in
Table 1. These plans and their distribution have been empirically determined
based on the real data plans offered in Romania by the Orange mobile operator.
The Price/MB for traffic included in the data plan has been determined by
factoring out the included data traffic as representing one fifth of the value of
the mobile subscription plan. The 1/5 factor has been selected such that the
price for traffic included in the plan is less than the price for extra traffic
for most of the service plans. Also, each service plan tends to include up to
5 components in the offer (data traffic, internal voice traffic, external
voice traffic, SMS, MMS).
The traffic usage pattern parameters used for the simulations are presented
in Table 2. These parameters have been selected based on usual data traffic
consumption patterns.
The first metric is the gain: the difference between what users would have paid
to the mobile operator without OpenMobs, and what they actually paid when using
OpenMobs. As the results in Figs. 3a, 4a, and 5a show, because costs in OpenMobs
are optimized, users selling traffic gain a profit maintained within such limits that
users buying do not lose money at the end of the month, compared to what they
would pay if buying traffic directly from the mobile operator.
Another metric is usedConnections, which shows the link between the
number of times a user buys or sells traffic and the gain OpenMobs brings at
the end of the month. As seen in Figs. 3b, 4b, and 5b, users manage to gain at
the end of the month proportionally to the number of times they are able to sell
or buy traffic; thus, OpenMobs can actually incentivize users to participate in
the traffic sharing collaboration, simply because the more clients use the
system, the more profit they manage to gain.
The popularity metric shows the relation between the homophily of a user and
his monthly gain when using OpenMobs. Two users are considered to have a
connection if they spend enough time in contact and share a number of friends
in common [2]. The results in Figs. 3c and 4c show that a more popular user
has a higher probability of making a certain profit from using OpenMobs – in
the left part, the users with relatively few friends are clustered near the bottom,
while in the right part of the plots, clients with more friends are scattered and
tend to gain more from using OpenMobs.
Finally, Figs. 3d and 4d show the relation between the average gain for each
node (user) and the actual variance of the gain during the experiments. We
recall that each experiment lasts for several months, and in the beginning users
can actually lose money (the lower values for gain). But, because OpenMobs adapts
the strategy and corrects the coefficient used in the auctioned prices for selling
traffic, in the end all users start gaining. This means that, in real life, when OpenMobs
is used for even more consecutive months, it can actually start bringing more
profit to each user – which we believe can act as another incentive for users to
use and participate in the collaboration.
4 Conclusions
OpenMobs is a system designed to optimize the economic costs of accessing
mobile broadband Internet through mobile handset devices. In this paper, we
presented our approach to sharing under-utilized networking resources among
co-located users through free wireless access. Whenever two or more users are in
the vicinity of each other, OpenMobs forms an ad hoc mesh network and redirects
traffic in the most economic and viable way. We presented extensive studies
on the feasibility of such a system to minimize the costs users pay to their
mobile providers at the end of the month, and to even financially compensate
users' willingness to participate in the collaboration. We are currently well under
way with a real-world implementation of OpenMobs on Android-operated devices.
Also, in the future, we aim to address energy consumption as another
parameter of our cost model.
References
1. Bigwood, G., Rehunathan, D., et al.: Exploiting self-reported social networks for
routing in ubiquitous computing environments. In: IEEE International Conference
on Wireless and Mobile Computing Networking and Communications, WIMOB
2008, pp. 484–489. IEEE (2008)
2. Ciobanu, R.I., Dobre, C., Cristea, V.: Sprint: social prediction-based opportunis-
tic routing. In: IEEE 14th International Symposium and Works on a World of
Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1–7 (2013)
3. Eagle, N., Pentland, A.: Reality mining: sensing complex social systems. Pers.
Ubiquit. Comput. 10(4), 255–268 (2006)
4. Hanlon, J.: Choosing the best broadband plan, August 2013. http://www.
whistleout.com.au/Broadband/Guides/choosing-the-best-bradband-plan.
Accessed 10 April 2014
5. Hillier, F.S., Lieberman, G.J.: Introduction to Operations Research. Tata McGraw-
Hill Education, New Delhi (2001)
6. Lardinois, F.: Open Garden 2.0 makes sharing your WiFi and mobile connections
easier and faster. http://techcrunch.com/
7. Lawler, E.L., Wood, D.E.: Branch-and-bound methods: a survey. Oper. Res. 14(4),
699–719 (1966)
8. Malandrino, F., Casetti, C., Chiasserini, C., Fiore, M.: Content download
in vehicular networks in presence of noisy mobility prediction. IEEE Trans. Mob.
Comput. 13(5), 1007–1021 (2014)
Abstract. The attention that the scientific community has paid to the
use of Network Coding techniques over Wireless Mesh Networks has
remarkably increased in the last years. A large group of the existing
proposals are based on the combination of packets belonging to different
flows, so as to reduce the number of real transmissions over the wireless
channels. This would eventually lead to better performance, together
with an energy-aware operation. However, there are certain aspects that
might prevent their use. In this paper we empirically study one of these
deterrents; we propose an algorithm to establish the feasibility of apply-
ing Network Coding over random topologies, by identifying a set of nodes
that might be able to act as coding entities. Besides, we discuss the
appropriateness of such selection, by comparing the lengths of the cor-
responding paths. The results show that the probability of promoting
these techniques is rather low.
1 Introduction
Network Coding was originally proposed by Ahlswede et al. in their seminal
paper [2] more than a decade ago. Since then, the scientific community has
made outstanding efforts in order to apply such techniques over different types
of networks. It is worth highlighting the tremendous research effort that is being
made towards the applicability of Network Coding over wireless networks, in
general, and over mesh topologies, in particular.
Although there might be other sensible classifications, we can roughly sepa-
rate the various proposals that have been made between those that code packets
belonging to different flows (these are in fact those which were first proposed)
from others that combine different packets (fragments) belonging to the same
data flow (in this case, these solutions show some commonalities with fountain
codes, such as LT or Raptor ).
If we focus on the first of the aforementioned groups, there are some aspects
that might deter the applicability of such solutions over real networks. The first
one is the impact of random errors, which we already studied in our previous
research [5,6], where we assessed the impact of packet erasure channels over
canonical mesh topologies (i.e. the so-called X and Butterfly ones). In this paper
we pay attention to another aspect that could limit the potential gains of these
solutions. In particular, we look at random network topologies and we empirically
evaluate the real possibilities of using Network Coding techniques over them.
Namely, our analysis focuses on the optimal choice of a coding element along the
route, seeking the best intermediate node crossed by the various data
streams, so that it (or they) could merge the previously received packets
into a single unit of information. We propose an algorithm to establish whether a
particular random topology might be considered appropriate for using such a
technique. For the sake of simplicity, in this very first approach we only consider
the presence of two data flows within the network, limiting as well the number of
coding entities to one single node. The results show that, in spite of using flows
that are chosen so as to favor the merging process, the probability of finding
a suitable coding node is rather low and, in some cases, it would require using
rather awkward routes, leading to performances that might be even lower than
those seen by the legacy approach (i.e., traditional store-and-forward routing).
Besides, we have included an additional testbed based on the ns-3 simulation
platform [1], quantifying the potential benefit brought about by combining the
information of different flows along the network, thus reducing the number of
transmissions required by legacy store-and-forward schemes. In this case, we
have observed a performance increment of about 1.5 % when the length of the
different routes is not altered by the usage of Network Coding (NC) techniques.
The rest of the document has been structured as follows: Sect. 2 outlines the
main contributions found in the literature that address the same topics covered
in this work. Section 3 presents and describes the solution that we have car-
ried out to discover the potential coding opportunities within a random wireless
topology. After that, Sect. 4 describes the empirical assessment used to evaluate
the capability of the use of Inter-flow NC techniques over Wireless Mesh Net-
works (WMNs) to improve the performance exhibited by a traditional routing
scheme. Finally, Sect. 5 concludes the document and hints at those issues
that shall be tackled in the future.
2 Related Work
Although the research on routing protocols for multi-hop networks started already
almost twenty years ago, we have seen that the relevance of such topologies has
recently increased. They were originally conceived as communication alternatives
for rather particular scenarios (for instance, natural disasters), thus limiting their
actual potential and applicability. However, during the latest years, the use of
mesh networks is proposed as a means to extend the coverage of more traditional
topologies. Some key examples are the IEEE 802.11s standard or the device-
to-device communication, which is currently under consideration by the 3GPP in
the latest LTE specifications. In addition, we should also reflect the relevance of
Previous studies also analyzed the impact that several practical aspects (e.g., the
buffering of packets at the coding routers waiting for coding opportunities, the
synchronization between the flows within the scenario, etc.) might have on the low
performance exhibited by TCP over WMNs, mainly due to the way TCP reacts upon
random losses. Although the results showed a relevant performance enhance-
ment over ideal channels, when the wireless channels started to cause random
errors, the interplay between TCP and NC was not as good as could have been
expected, showing a remarkably lower performance than the one observed by the
legacy TCP.
One of the most important characteristics of a generic WMN lies in its ran-
domness, since the position and mobility of the nodes might be unpredictable. In
such cases, the identification of a suitable node to carry out the coding function-
ality is far from being obvious (as it was, for instance, with the aforementioned
canonical scenarios). In this sense, it is deemed necessary to provide a number of
mechanisms to identify the most appropriate location of the cod-
ing entity (or entities) in real time, yet minimizing the required overhead that
needs to be transmitted, as routing/coding signaling messages. Some works, such
as [14,16] have already tackled this, posing an optimization problem to carry
out the corresponding analysis. The first paper introduces two suboptimal code
generation techniques: one uses linear programming, offering certain flexibility
in selecting the objective functions, while the other uses an optimization problem
that establishes greater restrictions, posing an integer problem. The authors of
the second paper propose a distributed optimization problem, exploiting the
information that is provided by neighboring nodes.
This work starts from the basis established by the contributions that are
described hereinafter. On the one hand, Le et al. introduced Distributed Coding-
Aware Routing (DCAR) [11] to overcome the main limitations of COPE [9],
questioning its feasibility over dynamic topologies. Two main conclusions were
derived: first, the choice of coding elements is tightly related to the pre-established
routes (i.e. static conditions); second, the code structure in COPE is limited
within a two-hop region. In order to sort these limitations out, they presented an
on-demand and link-state routing protocol bringing about the discovery of high
throughput paths by means of a novel routing metric (Coding-Aware Routing
Metric - CRM ). The protocol detects the potential network coding opportunities
(i.e. the coding nodes along the network), being able to distinguish between
“coding-feasible” and “coding impossible” paths. Additionally, they proposed a
set of conditions that must be fulfilled by a node to become a coding element.
Afterwards, Guo et al. [7] questioned whether these requirements are enough
when there are various intersecting nodes along a path, and they proposed a
new coding-aware routing metric, Free-Ride-Oriented Routing Metric (FORM),
able to exploit a larger number of coding opportunities, regardless of the number
of flows and intersecting nodes. It is worth mentioning that, contrary to these two
works, we do not focus on the protocol itself (i.e. we do not study its overhead,
discovery messages, etc.), but our main objective is to carry out a thorough
analysis of the corresponding coding conditions. To our best knowledge there
are no other works that have carried out such a study. In order to do so, we
3 Implementation
Before describing the algorithm that we have designed to identify the node(s)
acting as coding element(s), we recall that this proposal is based on the
conditions identified in [7,11]. We have broadened them and, exploiting
graph theory, developed an algorithm that provides the list of potential coding
nodes (if any) in any random wireless topology.
We list below a set of assumptions that might help the reader follow the
analysis presented afterwards.
– For the sake of simplicity, we consider the coding of two different flows.
– Exploiting the broadcast nature of the wireless channel, any node within
the coverage area of a transmitter is able to overhear the packets it sends,
even if it is not the intended destination.
– As already mentioned, we assume that there are always two active flows
between two pairs of endpoints (i.e., s1 → d1 and s2 → d2 ); if the
algorithm finds any coding opportunity, it returns a list of the potential
coding nodes cj , where j ∈ [0...N ], with N being the total number of such nodes.
– We define N (u) as the set of one-hop neighbors of node u.
– fi denotes the ith flow within the scenario.
– Finally, the notations U(cj , fi ) and D(cj , fi ) identify the upstream
and downstream nodes, respectively. The former are the nodes between the
sources (si ) and a coding element (cj ), while the latter are the nodes
between cj and the corresponding destinations (di ).
In other words, we have to ensure that the destination nodes are able to directly
overhear the native packets belonging to the other flow. It is worth highlighting
that there exists a different approach to the coding/decoding operation, based
on the exchange of periodic reports containing the essential information of each
node (i.e., current neighbors, stored packets, etc.), as described in [9]. Nonetheless,
this alternative would incur additional channel overhead and contention,
thus clashing with the foundations of our NC approach [5,6].
Assuming ideal channel conditions (i.e., no packet losses due to propagation
impairments, hidden terminals, etc.) as well as perfect MAC scheduling (i.e.,
no collisions), the above constraints are necessary and sufficient. However, the
authors of [7] argued that these statements might not be appropriate if there
exist various articulation points between the endpoints, proposing a number of
modifications to discover the best set of nodes within a generic mesh topology.
In order to align the aforementioned requirements with the NC protocol
operation we presented in [5,6], we have imposed the following constraints on
the corresponding scenarios: (1) we only consider two data flows,
s1 → d1 and s2 → d2 ; (2) we limit the number of coding nodes cj to 1;
(3) only destination nodes can take care of the decoding process. Considering
these limitations, the conditions described in Definition 1 need to be slightly
updated, as can be seen below.
– There exists u1 ∈ U(c, f1 ) such that u1 ∈ N (d2 ) and d2 is the destination node
of f2 , or u1 = d2 .
– There exists u2 ∈ U(c, f2 ) such that u2 ∈ N (d1 ) and d1 is the destination node
of f1 , or u2 = d1 .
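As an illustration, these two conditions translate almost directly into a graph predicate. The following is a minimal Python sketch (ours, not the paper's code, using the networkx library; all names are hypothetical), where U1 and U2 stand for the upstream sets U(c, f1) and U(c, f2) taken from the candidate routes.

```python
# Minimal sketch (assumptions ours) of the decoding-feasibility check above.
import networkx as nx

def decoding_feasible(G: nx.Graph, U1, U2, d1, d2) -> bool:
    """Both destinations must be able to obtain the other flow's native
    packet, either by overhearing an upstream node or by lying on that
    flow's upstream path themselves."""
    cond1 = any(u == d2 or G.has_edge(u, d2) for u in U1)
    cond2 = any(u == d1 or G.has_edge(u, d1) for u in U2)
    return cond1 and cond2
```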
(Figure: canonical coding scenario, where the flows s1 → d1 and s2 → d2 cross at coding node c1; the upstream sets U(c1, f1) and U(c1, f2) forward the native packets a and b, c1 broadcasts the coded packet a ⊕ b towards the downstream sets D(c1, fi), and overhearing links allow d1 and d2 to decode.)
Once we have established all the necessary conditions (as well as the constraints
imposed by our Inter-flow NC protocol) that any scenario must fulfill
to leverage a coding scheme within the network, Algorithm 1 shows the
procedure that we have implemented to identify the potential coding nodes
within the corresponding graph. Essentially, the algorithm receives the following
input arguments: the underlying graph G(V, E) of the particular network
topology/scenario, where V is the set of nodes of the network and E the
edges/links established between them. In addition, it also needs the two pairs of
endpoints s1 /d1 , s2 /d2 that establish the corresponding flows. Finally, we introduce
a last parameter, Δ, which limits the maximum number of additional hops
that a route may take in order to find a coding router (compared to a traditional
store-and-forward routing scheme). It is worth highlighting the key role
that this parameter plays in this approach, since the usage of longer paths
has a tremendous impact over WMNs, as will be discussed below. Hence, it is
deemed essential to limit the maximum number of "additional hops" generated
by the use of our NC solution in order to prevent nonsensical outputs. The
algorithm returns the list of potential coding nodes that satisfy the coding
conditions gathered in Definition 2, as well as the j routes between si and di (one
route per flow and coding router cj ).
In order to find the corresponding paths between the endpoints, we use two
different routing alternatives: Dijkstra's [4] and Yen's [18] algorithms. The
second, which provides a set of the K shortest paths, is used to identify
additional routes that might lead to better performance, since the former
returns only a single path.
The main strength of this type of NC scheme is its ability to combine packets
belonging to different flows, thus saving a number of transmissions (up to 25 % in
canonical topologies). However, due to the contention-based nature of the IEEE
802.11 medium access control, if these techniques led to longer routes, the
corresponding performance enhancement would be mitigated, possibly even leading
to worse behavior. For that reason, the last part of our proposed algorithm introduces
a novel constraint that establishes a sensible bound on the number of additional
hops allowed in order to find a suitable coding router, mapped as Δ. As we
will see later, all the routes provided by the NC approach are at least as long
as those of the legacy scheme; hence, when the Δ constraint is enabled, any
path exceeding the length of the legacy Dijkstra path plus the aforementioned
threshold is automatically discarded, and its resulting coding node cj is excluded
from the list of potential coding elements. In other words, even though the
coding conditions introduced in Definition 2 hold in these cases, the resulting
routes might be considered false positives.
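To make the procedure concrete, the following is a hedged sketch (ours, not the authors' implementation) of how the search just described could be realized with the networkx library. `decoding_feasible` is the predicate sketched earlier; how the K candidate paths through a node are formed (by concatenating segments) is our assumption, as are all names.

```python
# Sketch of the coding-node search: one coding node, two flows, and a
# Delta bound on the extra hops with respect to the legacy Dijkstra path.
import itertools
import networkx as nx

def paths_via(G, s, c, d, K):
    """Concatenate the K shortest s->c and c->d segments (loop-free only)."""
    for h1 in itertools.islice(nx.shortest_simple_paths(G, s, c), K):
        for h2 in itertools.islice(nx.shortest_simple_paths(G, c, d), K):
            path = h1 + h2[1:]
            if len(set(path)) == len(path):   # discard paths with cycles
                yield path

def find_coding_nodes(G, s1, d1, s2, d2, delta, K=3):
    base1 = nx.shortest_path_length(G, s1, d1)  # legacy hop counts (Dijkstra)
    base2 = nx.shortest_path_length(G, s2, d2)
    result = []
    for c in set(G) - {s1, d1, s2, d2}:
        for p1, p2 in itertools.product(paths_via(G, s1, c, d1, K),
                                        paths_via(G, s2, c, d2, K)):
            # Delta constraint: discard "false positive" routes
            if len(p1) - 1 > base1 + delta or len(p2) - 1 > base2 + delta:
                continue
            U1, U2 = p1[:p1.index(c)], p2[:p2.index(c)]  # upstream node sets
            if decoding_feasible(G, U1, U2, d1, d2):
                result.append((c, p1, p2))
    return result
```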
4 Results
In this section we outline the process we have followed to evaluate the
behavior of our solution (based on Algorithm 1). In a nutshell, we have carried
out an empirical assessment structured in the following stages: (1) randomly
deploy the nodes within the scenario; (2) execute the routing algorithms
described in Sect. 3; (3) discuss the results provided by the proposed procedure,
comparing them to those that would have been obtained with a traditional
routing scheme.
After this step, the resulting output (i.e., the scenarios that have at least
one potential coding node) feeds a second stage, where the routes generated
by the algorithm are used as the input of a simulation-driven assessment
(through the ns-3 platform, using the Inter-flow NC implementation that we
presented in [5,6]), in which we evaluate the potential benefit of combining the
use of NC techniques with TCP over random wireless networks.
Before starting with the description of the algorithm assessment, we enumerate
the characteristics of the scenarios used for the analysis.
– A total of 32 nodes is randomly deployed within a 100 × 100 m square
area, following a Poisson Point Process.
– We discard any network whose corresponding graph is not connected; that is
to say, we only analyze networks in which there is at least one path between
any pair of nodes.
– We do not consider node mobility; all nodes remain static throughout
the simulation.
– The coverage area of each node is modeled as a disk with a 20 m radius.
– Two pairs of endpoints define the two long-lived flows considered in the
scenario (i.e., s1 → d1 and s2 → d2 ). These nodes are selected so as to
increase the likelihood of finding a coding entity at one of the articulation
nodes between the two streams. Namely, we take the coordinates (20, 80),
(80, 80), (20, 20) and (80, 20) as reference points, choosing the nodes closest
to these positions as s1 , s2 , d1 and d2 , respectively.
– For the sake of simplicity, we assume that the flows f1 and f2 are
active throughout the process.
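For illustration, the sketch below (our reading of this setup; networkx-based, all names ours) generates one such scenario. Note that fixing the number of nodes at 32 corresponds to a Poisson point process conditioned on the number of points.

```python
# Illustrative scenario generation under the assumptions listed above.
import random
import networkx as nx

def generate_scenario(n=32, side=100.0, radius=20.0):
    while True:
        pos = {i: (random.uniform(0, side), random.uniform(0, side))
               for i in range(n)}
        # unit-disk connectivity: nodes closer than `radius` share a link
        G = nx.random_geometric_graph(n, radius, pos=pos)
        if nx.is_connected(G):            # discard disconnected graphs
            break
    # pick the nodes closest to the four reference points as endpoints
    refs = {'s1': (20, 80), 's2': (80, 80), 'd1': (20, 20), 'd2': (80, 20)}
    endpoints = {k: min(G, key=lambda v: (pos[v][0] - p[0]) ** 2 +
                                         (pos[v][1] - p[1]) ** 2)
                 for k, p in refs.items()}
    return G, pos, endpoints
```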
We have generated a total of 1000 scenarios complying with the above char-
acteristics, using them as the input of the routing solutions. The algorithm pro-
posed in this work returns the list of potential coding nodes (if any) as well as
the length of the K paths for each of the two flows. For comparison purposes,
a traditional routing scheme establishes the shortest paths between the endpoints
by means of Dijkstra's algorithm.
(Fig. 2: Two of the randomly generated topologies, (a) and (b), showing the endpoints s1, s2, d1 and d2 and the coding node C identified by the algorithm.)
As an illustrative example, Fig. 2 shows two of the random scenarios that were
studied. The topology on the left (Fig. 2a) corresponds to a situation in which
the coding router identified by the algorithm seems appropriate, since node C
is placed at an articulation point between the two paths, without increasing
their overall number of hops. Besides, the broadcast nature of the wireless
medium allows the destination nodes d1 and d2 to directly overhear the
transmissions from nodes 27 and 18, respectively. This information is later used
to decode the coded packets and retrieve the information originally addressed
to them. On the other hand, the topology on the right (Fig. 2b) leads to rather
poor behavior, since the position of the coding node is far from adequate; the
routes that connect C to the flow endpoints require many unnecessary hops
(i.e., 2 for the path s1 → d1 and 5 for the other one), thus significantly
increasing the overall path lengths and likely jeopardizing the global performance.
On the basis of this first analysis, we can conclude that, although the proposed
algorithm is able to find the most appropriate coding router within a random
scenario, the results might require a second analysis, since the resulting
performance could be much lower than that achieved with more traditional
routing mechanisms. This second scenario also poses two main issues: first, it
reinforces the need for a limiter to avoid awkward routes that do not provide
any benefit (tuning the aforementioned Δ parameter, which prevents the inclusion
of nonsensical routes); second, it would be interesting to enhance the algorithm
so that it can detect undesired and illogical results, such as the presence of
cycles along the routes.
(Fig. 3: Probability density function of the number of hops; (a) traditional routing (K = 1), (b) network coding (K = 1).)
The first aspect to be studied is the probability density function (pdf) of the
number of hops required by the source nodes (si ) to reach their corresponding
destinations (di ), as shown in Fig. 3. We can see that the traditional routing
scheme (Dijkstra) is always able to find paths with ≤18 hops, with a peak
centered at 9–11 hops; on the other hand, the NC scheme yields a more uniform
distribution, which does not reflect good performance, since the use of these
techniques leads to longer routes (i.e., there are cases in which a packet has to
traverse up to 23 hops before reaching its destination). This means that, in
some specific scenarios, the coding scheme proposed by our algorithm would
impose a much larger number of hops, bringing about a performance degradation
rather than an improvement.
Figure 4 compares the number of hops obtained by the two approaches for
every scenario. As can be seen, the NC route lengths are lower-bounded by
those provided by the traditional approach. Although in many cases the
increase in route length might be acceptable, there are cases in which the paths
required to foster the coding scheme are not sensible at all, and hence the
scheme should be discarded for those particular scenarios.
It is worth highlighting that, for the analysis carried out so far, the results
obtained with Yen's algorithm were similar for different K values
(K = 1, 2, 3). Hence, we only represent the results for K = 1, which are
equivalent to those achieved with Dijkstra's algorithm.
Figure 5 shows the probability of finding at least one coding router (here we
do not consider the appropriateness of the resulting configuration). The x axis
represents the difference between the number of hops of the routes provided by
the two routing algorithms. We can see that the use of Yen's algorithm with a
higher K value could bring some benefit, since the probability of finding a
coding node increases with K. In any case, although Algorithm 1 is able to
establish a coding node in ≈47 % of the scenarios (without limiting the difference
between the two path lengths), the corresponding topologies might deter the
use of this technique, since the number of hops required by the NC scheme is
much larger than the one established by the
(Fig. 4. Graphical representation of the difference between the path lengths (traditional routing vs. network coding); axes: # hops SF vs. # hops NC.)
(Fig. 5: Probability of finding at least one coding router versus the hop difference (# hops SF − # hops NC), for K = 1, 2, 3.)
(Figure: distributions of the throughput (Mbps) achieved by the NC scheme and by legacy TCP; (a) Δ = 0, (b) Δ = 1.)
about by the extra hop that the two flows have to traverse in order to reach
c1 , which the downstream coded flows, D(c1 , fi ) with i ∈ [1, 2], are not able to
compensate for. Nevertheless, there is a second zone (right side of the figure)
where the NC scheme performs better than TCP, showing a throughput
enhancement of ≈2 %.
Last, but not least, a higher value of Δ would lead to even worse behavior,
resulting in situations in which the throughput of the Inter-flow NC scheme is
remarkably lower than TCP's.
5 Conclusions
The use of NC techniques to enhance performance over WMNs has drawn the
attention of the scientific community in recent years. The corresponding
research has covered various aspects, ranging from coding/decoding issues to
the proposal of novel protocols able to promote these solutions. Some of the
existing works can be grouped as Inter-flow techniques, since they are based
on the combination of packets belonging to different data streams. In this sense,
we have used an algorithm to study the feasibility of NC over random wireless
topologies.
In this work, we have started from the set of statements proposed in [7,11]
to detect potential network coding opportunities over generic scenarios,
exploiting the broadcast nature of the wireless medium (which allows the nodes
to directly overhear transmissions within their coverage area). With the intention
of combining these conditions with the holistic NC framework we previously
introduced in [5,6], we have tailored a new algorithm adapted to its particular
requirements. For instance, decoding tasks can only be tackled by destination
nodes; hence, they should be able to overhear the information of at least one
of the "native" flows (i.e., without coding) to correctly perform the decoding
tasks without decoding failures. Like its predecessors, the main focus of this
algorithm is the identification of the potential coding nodes.
We have seen that, although the scenarios studied were synthetically
tailored to increase the probability of finding at least one coding alternative,
only 47 % of the network topologies showed the possibility of using these
techniques. Furthermore, in most of those cases, the resulting topologies would
most likely deter from using NC, since the number of hops required by the
corresponding routes is much larger than that obtained by a legacy approach.
In this sense, the probability of using NC over a reasonable topology would be
around 10 %. For that reason, we have introduced a new constraint, Δ, which
establishes an upper bound on the number of extra hops permitted by our
particular coding and decoding conditions.
In order to quantify the performance enhancement achievable with this
approach, we have carried out an extensive simulation campaign (through the
ns-3 platform), assessing the potential throughput benefit of combining the
packets along the network. From the
References
1. The ns-3 network simulator. http://www.nsnam.org/
2. Ahlswede, R., Cai, N., Li, S.Y., Yeung, R.: Network information flow. IEEE Trans.
Inf. Theory 46(4), 1204–1216 (2000)
3. Chachulski, S., Jennings, M., Katti, S., Katabi, D.: Trading structure for randomness
in wireless opportunistic routing. In: Proceedings of the 2007 Conference on
Applications, Technologies, Architectures, and Protocols for Computer Communications,
SIGCOMM 2007, pp. 169–180. ACM, New York (2007). http://doi.acm.org/10.1145/1282380.1282400
4. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numer. Math.
1, 269–271 (1959)
5. Gómez, D., Hassayoun, S., Herrero, A., Agüero, R., Ros, D.: Impact of network
coding on TCP performance in wireless mesh networks. In: Proceedings of the
23rd IEEE International Symposium on Personal, Indoor and Mobile Radio
Communications (PIMRC), September 2012
6. Gómez, D., Hassayoun, S., Herrero, A., Agüero, R., Ros, D., Garcı́a-Arranz, M.:
On the addition of a network coding layer within an open connectivity services
framework. In: Timm-Giel, A., Strassner, J., Agüero, R., Sargento, S., Pentikousis,
K. (eds.) MONAMI 2012. LNICST, vol. 58, pp. 298–312. Springer, Heidelberg
(2013)
7. Guo, B., Li, H., Zhou, C., Cheng, Y.: Analysis of general network coding conditions
and design of a free-ride-oriented routing metric. IEEE Trans. Veh. Technol. 60(4),
1714–1727 (2011)
8. Hundebøll, M., Ledet-Pedersen, J., Heide, J., Pedersen, M., Rein, S., Fitzek, F.:
CATWOMAN: implementation and performance evaluation of IEEE 802.11 based
multi-hop networks using network coding
9. Katti, S., Rahul, H., Hu, W., Katabi, D., Médard, M., Crowcroft, J.: XORs in the
air: practical wireless network coding. IEEE/ACM Trans. Netw. 16(3), 497–510
(2008)
10. Koetter, R., Médard, M.: An algebraic approach to network coding. In: Proceedings
of 2001 IEEE International Symposium on Information Theory, p. 104 (2001)
11. Le, J., Lui, J.S., Chiu, D.M.: DCAR: distributed coding-aware routing in wireless
networks. IEEE Trans. Mob. Comput. 9(4), 596–608 (2010)
12. Li, S.Y., Yeung, R., Cai, N.: Linear network coding. IEEE Trans. Inf. Theory 49(2),
371–381 (2003)
13. Neumann, A., Aichele, C., Lindner, M., Wunderlich, S.: A Better Approach To
Mobile Ad-hoc Networking (B.A.T.M.A.N.). IETF Internet Draft - work in
progress 07, individual, April 2008. http://tools.ietf.org/html/draft-wunderlich-openmesh-manet-routing-00
14. ParandehGheibi, A., Ozdaglar, A., Effros, M., Médard, M.: Optimal reverse car-
pooling over wireless networks - a distributed optimization approach. In: 2010 44th
Annual Conference on Information Sciences and Systems (CISS), pp. 1–6, March
2010
15. Sengupta, S., Rayanchu, S., Banerjee, S.: An analysis of wireless network coding
for unicast sessions: the case for coding-aware routing. In: 26th IEEE International
Conference on Computer Communications, INFOCOM 2007, pp. 1028–1036. IEEE,
May 2007
16. Traskov, D., Ratnakar, N., Lun, D.S., Koetter, R., Médard, M.: Network coding
for multiple unicasts: an approach based on linear optimization. IEEE Int. Symp.
Inf. Theory 52, 6 (2006)
17. Wu, Y.: Information exchange in wireless networks with network coding and
physical-layer broadcast (2004)
18. Yen, J.Y.: Finding the lengths of all shortest paths in n-node nonnegative-distance
complete networks using ½n³ additions and n³ comparisons. J. ACM 19(3),
423–424 (1972). http://doi.acm.org/10.1145/321707.321712
Applications and Context-awareness
A Relational Context Proximity Query Language
1 Introduction
The increase in massive immersive participation scenarios drives our move towards
a society of virtual and augmented reality, supported by the massive
deployment of global sensor infrastructures enabled by open solutions such as [1]
and [2]. Such realities include immersive games such as Google Ingress [3]. These
immersed realities serve to fuse a dynamic and multifaceted experience of people,
places and things. Solutions such as SenseWeb [4], the IP Multimedia Subsystem
(IMS) [5] and SCOPE [6] were developed in response to the need to
provision information supporting such immersive realities. This context information
enables the creation of interactive experiences that reflect the dynamic
relationships that exist among users, environments and services.
They are, however, limited with respect to expressiveness and thus are not
capable of sufficiently answering the question of who you are, who you are
with and what resources are nearby, as required by Schilit and Adams [7]. While
semantic approaches such as that described by Liu et al. in [8] offer some support
towards this problem, Adomavicius et al. in [9] suggested that these types of
approaches are limited and should be complemented by metric-type approaches
thus realizing the ability to answer the question of nearness as posited by
Schilit et al. in [7]. This would permit us to sufficiently support the
complementing metric-type similarity models which, according to Hong et al.
in [10], are critical in realizing applications and services that can discover nearby
sensors or points of information.
To achieve this, we are first required to identify and select candidate entities
and manage the subsequent volume of required context information. Petras
et al. in [5] used centralized presence systems where an entity watches a list
of other entities in its address book. While this reduces the volume of context
information required to maintain relationships, the resulting relationships
are not context-centric, which limits the ability to discover entities of interest with
which to establish common context relationships. With the average address book
estimated to be limited in size to 0.005 × population [5], this solution limits the
number of entities that can be discovered. Subscribing to all users would not
be a feasible solution, as the volume of messages per status change would be
approximately population × population. This solution would not scale well, and
simply pruning the message queue as suggested by Petras et al. in [5] would offer
little guarantee with regard to the quality of the context information.
Zimmermann et al. in [11] posit the notion of proximity as the overarching
factor in establishing context relationships. This subsumes address-book
approaches, realizing truly context-centric networks where interactions, discovery
and relationships are realized over the degrees of relationships between entities
and their associated context information. In [12] we defined an approach
to establishing context-centric relationships between entities on an Internet of
Things. This extended the approach of Zimmermann et al. in [11] to relational
proximity, thus subsuming spatial distance as the overarching indicator of
context. However, Zimmermann et al. in [11] provided no solution for discovering
the candidate entities and establishing relationships in light of the highly dynamic
nature of context information. Schmohl partially addressed this in [13], proposing
a multi-dimensional hypersphere of interest in which entering entities are
deemed candidates and are evaluated and selected according to a proposed
proximity measure. Here entities are discovered through the use of
multi-dimensional indexing structures such as R-trees, kd-trees and space-partitioning
grids. These solutions are suboptimal for multi-dimensional dynamic
context environments, as the cost of indexing increases exponentially with a
linear increase in the number of dimensions. Queries therefore risk being executed
against outdated indexes with no guarantees of information freshness. As a solution
to this problem, Schmohl suggested in [13] that dimensions could be indexed
selectively. However, applications depending on less popular dimensions would not
stand to realize any benefit from this optimization.
Yoo et al. in [14] and Santa et al. in [15] proposed the use of publish-subscribe
approaches as suitable alternatives, with Kanter et al. in [16] showing that such
approaches are scalable and can realize dissemination times on par with the UDP
signaling used in SIP implementations. Frey and Roman in [17] extended this
approach; however, it is based on events rather than the raw underpinning
2 Relational Proximity
the n-dimensional hypersphere around the entity with respect to the range, and
identifies those entities that are closer or further away.
where $a_i \in A^D_i$, $a_j \in A^D_j$.
Here, w is the weighting for each attribute. The value of r can be adjusted to
reflect the perceived distance between P and Q, as shown by Shahid et al. in [22].
The distance is normalized with respect to the maximum distance between
states in the encompassing application space. Our measure of proximity therefore
logically subsumes existing $L_p$-norm approaches.
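A minimal sketch of the distance just described, assuming the standard weighted $L_r$ (Minkowski) form normalized by the maximum distance of the application space; the exact formula and all names here are our assumption, since they are not given explicitly above.

```python
# Weighted, normalized L_r distance between two context states (sketch).
def weighted_distance(p, q, w, r=2.0, max_dist=1.0):
    """p, q: attribute vectors; w: per-attribute weights; r: order.
    Dividing by max_dist keeps the result within [0, 1]."""
    d = sum(wi * abs(ai - bi) ** r
            for wi, ai, bi in zip(w, p, q)) ** (1.0 / r)
    return d / max_dist
```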
The EMD algorithm is then applied to derive the largest possible transformation
between P and Q that minimizes the overall context transformation cost,
where:

$\mathrm{WORK}(P \to Q, F) = \sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} d_{ij}$

subject to:

1. $f_{ij} \ge 0$, $1 \le i \le m$, $1 \le j \le n$
2. $\sum_{j=1}^{n} f_{ij} \le P_i$, $1 \le i \le m$
3. $\sum_{i=1}^{m} f_{ij} \le Q_j$, $1 \le j \le n$
4. $\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} = \min\left(\sum_{i=1}^{m} P_i, \sum_{j=1}^{n} Q_j\right)$
The first constraint permits the transformation, and hence the proximity, from
P → Q and not the opposite. The second and third constraints limit the
transformation P → Q to the maximum number of context observations made for P
or Q. The final constraint forces the maximum transformation possible between
both entities. The context proximity, δ(P, Q), is the Earth Mover's distance
normalized by the total flow:
$\delta(P, Q) = \left(\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} d_{ij}\right) \left(\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij}\right)^{-1}$
given that:

$\sum_{i=1}^{m} \sum_{j=1}^{n} f_{ij} = \max\left(\sum_{i=1}^{m} P_i, \sum_{j=1}^{n} Q_j\right)$
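As a numeric illustration (ours, not the authors' implementation), the flow problem above can be solved as a small linear program, after which the proximity is the work normalized by the total flow:

```python
# Sketch: compute delta(P, Q) by solving the flow LP with scipy.
import numpy as np
from scipy.optimize import linprog

def context_proximity(P, Q, D):
    """P (m,), Q (n,): observation weights; D (m, n): ground distances."""
    m, n = D.shape
    c = D.reshape(-1)                        # minimize sum f_ij * d_ij
    A_ub, b_ub = [], []
    for i in range(m):                       # sum_j f_ij <= P_i
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
        A_ub.append(row); b_ub.append(P[i])
    for j in range(n):                       # sum_i f_ij <= Q_j
        row = np.zeros(m * n); row[j::n] = 1
        A_ub.append(row); b_ub.append(Q[j])
    A_eq = np.ones((1, m * n))               # total flow = min(sum P, sum Q)
    b_eq = [min(P.sum(), Q.sum())]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    flow = res.x
    return float(flow @ c) / float(flow.sum())  # work over total flow
```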
CPQL                        | SQL
get, sub                    | select
where                       | where
from                        | from
[people], [place], [thing]  | [table]
order                       | order by
At the core of the query language are two main types of statements: the get
statement and the sub statement. Both a CPQL get and a sub are functionally
similar to an SQL select and retrieve the current context entities that possess a
state within the hypersphere of interest defined by the proximity algorithm. The
fundamental difference, however, is that a CPQL get performs a single retrieval
of the currently matching states, after which the query is terminated. Its purpose
is to locate only current entities matching the query parameters. A sub assumes
the functionality of a continuous query, continually retrieving all entities whose
current state satisfies the defined proximity function.
GET|SUB PRESENTITY
WHERE DISTANCE [DISTNAME] < [VALUE]
[ORDER [ASC|DESC]]
DEFINING [DISTANCENAME]
AS sqrt(pow(F_lat(P_lat, Q_lat), 2) + pow(F_lon(P_lon, Q_lon), 2));
5 The Architecture
The interactive API sits at the top of our solution and accepts user input
definitions of complex multi-criteria proximity relationships. Implemented as a Java
extension to the MediaSense platform, this component provides several key
functionalities. First, it allows us to define and introduce query comparators into
the distributed architecture. Query comparators are further detailed in Sect. 5.2.
Comparator              | Usage
Hamming Comparator      | Compare string distances
Great Circle            | Compare line-of-sight distances
Urban Commuting         | Compare distances in urban environments
Urban Public Commuting  | Compare distances in urban environments with public transportation
Simple Temperature      | Compare temperatures
Energy Temperature      | Compare temperature as a function of the monetary cost of the energy to reach equilibrium
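Although the platform itself is Java-based, the comparator abstraction behind the table above can be illustrated with a short Python sketch (ours; all class and method names are hypothetical): each comparator maps two attribute values of one dimension to a scalar distance.

```python
# Sketch of a pluggable comparator abstraction (illustrative only).
import math
from abc import ABC, abstractmethod

class Comparator(ABC):
    @abstractmethod
    def distance(self, a, b) -> float: ...

class HammingComparator(Comparator):
    def distance(self, a: str, b: str) -> float:
        # positions that differ, plus the length mismatch
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

class GreatCircleComparator(Comparator):
    R = 6371000.0  # mean Earth radius in metres
    def distance(self, a, b) -> float:       # a, b: (lat, lon) in degrees
        la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
        h = (math.sin((la2 - la1) / 2) ** 2 +
             math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 2 * self.R * math.asin(math.sqrt(h))  # haversine formula
```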
5.3 Parser
The parser resides locally on each node of the distributed platform. It is
implemented as a Java Compiler parsing engine accepting queries as statements
written in Java. Each query is parsed locally for correctness and completeness,
and any errors are returned via the user interface to the calling application.
The output of the parser is a Java-based parse tree representation of the query,
which can be used for execution and for comparing result sets. The resulting
parse tree is routed to the executor.
5.4 Executor
The executor consists of two components: a global executor, which resides local
to the executing query, and the query executor, which is instantiated across all
endpoints in response to a query being distributed on the platform.
Global Executor. The global executor for a given query R resides locally at
the query's originating node. Each query received from the parser is sent to the
global executor, which decomposes the proximity function into its constituent
dimensional queries. Each such query finds the states possessing a context
dimension that satisfies the parameters of the unified relational proximity
function. Each dimension has an upper and a lower limit, the range of the query.
For simplicity, the upper and lower bounds of the application space are used, i.e.,
the bounds of each dimension that constitutes the n-dimensional hypersphere
circumscribing the proximity function. The constituent queries are therefore
range queries, which are executed on the platform and routed using the
platform's native lookup implementation. The query:

get presentity
where distance commutingdistance < 0.5
order by asc
defining distance commutingdistance
as sqrt(pow(F_lat(P_lat, Q_lat), 2) + pow(F_lon(P_lon, Q_lon), 2));
Query Executor. The sub-queries arrive at the query executor of each remote
node capable of answering the query, i.e., nodes storing values within the range
of the sub-query. First, the sub-query executor fetches the comparator and
compares the dimensions. If there are any matching states, the node routes a
response to an intermediate collator. This is done by calling the route(key)
function of the underlying distributed platform, where key is generated from a
triple derived from the ID of the requesting entity P, the matching entity Q and
the executing query R, such that key = key(Pid , Qid , Qryid ). The function used
to generate the key is specific to the routing algorithm implemented by the
distributed platform. All states responding to the query (Pid , Qid , Qryid )
therefore arrive and are collated at the same node. This exploits the distribution
platform to minimize the volume of information sent to the remote endpoints
originating context proximity queries.
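The collation trick can be sketched as follows (illustrative only; the actual key function is platform-specific, as noted above): hashing the triple deterministically makes every response for the same query converge on the same collator node.

```python
# Sketch: derive a deterministic DHT key from the (P, Q, query) triple.
import hashlib

def collation_key(p_id: str, q_id: str, qry_id: str) -> bytes:
    triple = f"{p_id}|{q_id}|{qry_id}".encode()
    return hashlib.sha1(triple).digest()   # 160-bit key, Chord/Pastry style

def respond(route, p_id, q_id, qry_id, matching_state):
    # route() stands in for the platform's native lookup implementation;
    # all responders for the same triple route to the same collator.
    route(collation_key(p_id, q_id, qry_id), matching_state)
```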
5.5 Collator
The collator consists of two parts: the intermediate collator and the global
collator. The intermediate collator for each (Pid , Qid , Qryid ) receives a collection
of entity dimensions matching the originating query. The intermediate node is
responsible for filtering out states that do not match the complete query. Each
entity arriving at an intermediate node is compared and only returned if all of
its states satisfy the context query. The result is then returned to the global
collator at the query's originating node.
6 Evaluation
The CPQL query language provides an approach to querying context information
over heterogeneous architectures. For the evaluation, we measured the number
of entities returned to an endpoint with CPQL and with the address-book
approach, compared to the expected number of entities matching the provided
context query. We simulated the query algorithm by creating a global population
of context entities of size N, where N is between 1000 and 10000 entities. Each
entity is assigned a random number of context dimensions constituting its
profile. An application was created with three random dimensions (da , db , dc )
such that d ∈ D. Each simulation was run 20 times with a new configuration of
dimensions and entities. The results are shown in Table 3.
The number of entities that satisfy the query requirements are shown. CPQL
queries show a reduction in the number of entities located when compared to
the total network size and are comparable to the number of expected entities
7 Conclusion
References
1. Strohbach, M., Vercher, J., Bauer, M.: A case for IMS. IEEE Veh. Technol. Mag.
4(1), 57–64 (2009)
2. Kardeby, V., Forsström, S., Walters, J.: The updated MediaSense framework.
In: 2010 Fifth International Conference on Digital Telecommunications (ICDT) (2010)
3. Ingress
4. Kansal, A., Nath, S., Liu, J., Zhao, F.: Senseweb: an infrastructure for shared
sensing. IEEE MultiMedia 14(4), 8–13 (2007)
5. Petras, D., Baronak, I., Chromy, E.: Presence service in IMS. Sci. World
J. 2013, Article ID 606790 (2013)
6. Baloch, R.A., Crespi, N.: Addressing context dependency using profile context in
overlay networks. In: 2010 7th IEEE Consumer Communications and Networking
Conference (CCNC), pp. 1–5. IEEE, January 2010
7. Schilit, B., Adams, N., Want, R.: Context-aware computing applications. In:
First Workshop on Mobile Computing Systems and Applications (WMCSA 1994),
pp. 85–90. IEEE, December 1994
8. Liu, L., Lecue, F., Mehandjiev, N., Xu, L.: Using context similarity for service rec-
ommendation. In: 2010 IEEE Fourth International Conference on Semantic Com-
puting, pp. 277–284, September 2010
9. Adomavicius, G., Tuzhilin, A.: Context-aware recommender systems. RecSys
16(16), 335–336 (2010)
10. Hong, J.I., Landay, J.: An infrastructure approach to context-aware computing.
Hum. Comput. Interact. 16(2), 287–303 (2001)
11. Zimmermann, A., Lorenz, A., Oppermann, R.: An operational definition of con-
text. In: Kokinov, B., Richardson, D.C., Roth-Berghofer, T.R., Vieu, L. (eds.)
CONTEXT 2007. LNCS (LNAI), vol. 4635, pp. 558–571. Springer, Heidelberg
(2007)
12. Walters, J., Kanter, T., Rahmani, R.: Establishing multi-criteria context relations
supporting ubiquitous immersive participation. Int. J. 4(2), 59–78 (2013)
13. Schmohl, R.: The Contextual Map. deposit.ddb.de (2010)
14. Yoo, S., Son, J.H., Kim, M.H.: A scalable publish/subscribe system for large mobile
ad hoc networks. J. Syst. Softw. 82(7), 1152–1162 (2009)
15. Santa, J., Gomez-Skarmeta, A.F.: Sharing context-aware road and safety informa-
tion. IEEE Pervasive Comput. 8(3), 58–65 (2009)
16. Kanter, T., Österberg, P., Walters, J., Kardeby, V., Forsström, S., Pettersson, S.:
The mediasense framework. In: 2009 Fourth International Conference on Digital
Telecommunications, pp. 144–147. IEEE, July 2009
17. Frey, D., Roman, G.-C.: Context-aware publish subscribe in mobile Ad Hoc net-
works. In: Murphy, A.L., Vitek, J. (eds.) COORDINATION 2007. LNCS, vol. 4467,
pp. 37–55. Springer, Heidelberg (2007)
18. Adomavicius, G., Tuzhilin, A.: Context-aware recommender systems. In: Proceed-
ings of the 2008 ACM conference on Recommender Systems - RecSys ’08, p. 335
(2008)
19. Schmidt, A., Beigl, M., Gellersen, H.W.: There is more to context than location.
Comput. Graph. 23(6), 893–901 (1999)
20. Padovitz, A., Loke, S.W., Zaslavsky, A.: Towards a theory of context spaces. In:
Proceedings of the Second IEEE Annual Conference on Pervasive Computing and
Communications Workshops, pp. 38–42. IEEE, March 2004
21. Rubner, Y., Tomasi, C., Guibas, L.J.: A Metric for Distributions with Applications
to Image Databases, p. 59, January 1998
22. Shahid, R., Bertazzon, S., Knudtson, M.L., Ghali, W.A.: Comparison of distance
measures in spatial analytical modeling for health service planning. BMC Health
Serv. Res. 9, 200 (2009)
23. Walters, J., Kanter, T., Savioli, E.: A distributed framework for organizing an
internet of things. In: Del Ser, J., Jorswieck, E.A., Miguez, J., Matinmikko, M.,
Palomar, D.P., Salcedo-Sanz, S., Gil-Lopez, S. (eds.) Mobilight 2011. LNICST,
vol. 81, pp. 231–247. Springer, Heidelberg (2012)
24. Forsström, S., Kardeby, V., Walters, J., Kanter, T.: Location-based ubiqui-
tous context exchange in mobile environments. In: Pentikousis, K., Agüero, R.,
Garcı́a-Arranz, M., Papavassiliou, S. (eds.) MONAMI 2010. LNICST, vol. 68,
pp. 177–187. Springer, Heidelberg (2011)
Socially-Aware Management of New Overlay
Applications Traffic - The Optimization
Potentials of the SmartenIT Approach
1 Introduction
In the current phase of Internet development, we are observing a significant
coexistence and mutual stimulation of cloud computing and online networking.
Fundamental solutions elaborated for traditional network management are
probably not adequate for this new situation. For the evolving Future Internet
concept and for contemporary overlay applications, we need novel characterization
approaches, new business models, and stakeholder characterizations, leading
further to the design of innovative network and traffic management mechanisms
[12,13].
The SmartenIT consortium has defined social awareness, QoE awareness and
energy efficiency as the key targets for the design and optimization of traffic
management mechanisms for overlay networks and cloud-based applications.
These three main targets, although broadly defined, can be used as design
goals for emerging proposals, e.g., content distribution systems that minimize
energy consumption through the energy awareness of their architecture elements,
and that translate social awareness into an efficient cache structure minimizing
the volume of transferred data.
The remaining part of the paper is organized as follows. In Sect. 2, we
introduce relevant definitions and terminology used to describe stakeholders in the
current service layer of networks. Section 3 provides descriptions of two different
segments of the value chain of service provisioning, i.e., the network-centric and
user-centric cases. Section 4 presents the optimization potential of jointly
considering cloud and overlay, illustrated by descriptions of the HORST
and DTM solutions. The system architecture specified and currently under
practical development within the SmartenIT consortium is presented in Sect. 5.
Finally, Sect. 6 draws conclusions from the specification phase of the SmartenIT
project and briefly overviews the next steps to come.
entered it. While the amount of traffic crossing the Internet and IP WAN
networks is projected to reach 1.3 ZB (i.e., 10^9 TB) per year in 2016, the
amount of DC traffic is already 1.8 ZB per year, and by 2016 will nearly quadruple
to reach 6.6 ZB per year. This represents a 31 % Compound Annual Growth
Rate (CAGR). The higher volume of DC traffic is due to the inclusion of traffic
inside DCs (typically, definitions of the Internet and WAN stop at the DC
boundary). Factors contributing to traffic remaining in the DC include the
functional separation of application servers, storage, and databases, which
generates replication, backup, and read/write traffic traversing the DC. In the
cloud systems era, Cloud Service Providers are becoming the main direct
customers of DCs. The resources provided on demand by DCs allow those providers
to build new innovative services addressing requirements like scalability,
mobility, availability and reliability. Also, the flexibility of Cloud Service
Providers may introduce new business models which result in cost reductions on
the end-user's side. DCOs have been convinced that the use of cloud platforms
to manage their resources is profitable for their business. Using systems like
OpenStack, they can apply the Infrastructure as a Service (IaaS) model to
dynamically offer resources like CPU, memory and storage. According to the
NIST definition [3], the consumer in this cloud model does not manage or
control the underlying cloud infrastructure but has control over operating
systems, storage, deployed applications, and possibly limited control of selected
networking components, which are in fact virtual items.
The SmartenIT approach provides a consistent sequence of terms and concepts,
starting from generally defined scenarios, through more functionality-oriented
use cases, up to detailed solutions [6]. We foresee an environment where
both user-driven and operator-driven actions lead to a mutually accepted
solution, which requires a proper architecture. Internet service provisioning
is enabled by means of commercial business agreements among the Internet
stakeholders, thus creating a multi-provider and user-centric environment,
presented in the next section.
In the remainder of the paper we focus our analysis on the End User Focused
Scenario (EFS), pertaining to the services and interactions visible to the
end-users, and the Operator Focused Scenario (OFS), reflecting the wholesale
agreements among network and overlay service providers, which, though not
visible to the end users, greatly affect Internet services. These two scenarios are
also important from a technical point of view, since they allow different potential
for optimizations and respective traffic mechanisms operating at different
time scales and over different granularities of Internet traffic under a unified
framework, the SmartenIT architecture.
There exist two different segments of the value chain of service provisioning. On
the one hand, there is the wholesale market, with large-time-scale agreements
regarding the traffic aggregates of providers who interact in order to provision
services to the users. On the other hand, users exhibit their demand at small
time scales by means of flows. Consequently, two different scenarios are defined,
one addressing the end-users' view of the system (EFS) and the other addressing
the operators' view of the system (OFS).
The EFS is based on the involvement of the end-user and his devices as active
instances in the system, e.g., for prefetching data to be consumed in the near
future. As opposed to that, the OFS focuses on the interactions among the
different operators acting in the SmartenIT framework, namely Cloud Providers,
Service Providers, and Internet Service Providers. One key feature of the Cloud
Computing paradigm is the fact that cloud services are offered to the end-user
in a transparent way, e.g., the user is not aware of the physical location of the
service.
The SmartenIT scope clearly highlights the fact that multiple stakeholders
are involved, each of them having its own interests, strategies and business
goals. It must be noted that a SmartenIT solution would inevitably have to
accommodate the possibly conflicting goals and needs of the stakeholders in an
incentive-compatible way. Otherwise, the SmartenIT propositions would have
limited or even no chance of being adopted in practice, resulting in limited or
zero impact on the market.
This data can be derived from energy models providing an energy estimate of
the placement. Energy models that estimate energy consumption from network
measurements exist in the literature, e.g., [10]. Moreover, modelling the
energy efficiency of placements also allows for the prediction of future energy
consumption.
Besides providing a platform for placing content and services, the uNaDa
layer accounts for the increasing importance of mobile devices such as
smartphones and tablets. Therefore, means for offloading from cellular technologies to
the uNaDas' local WiFis are offered. The offloading capabilities are combined
with possibilities to access the content stored on the uNaDa; thus, the uNaDa
may act as a wireless cache for relevant content.
The stakeholders in this scenario are the Cloud Application Providers offering
the content and services (e.g., YouTube, Vimeo or Amazon EC2), the Internet
Service Provider, and the end user. Cloud Application Providers and end users
have a clear incentive to collaborate in this scenario, as it will increase the QoE
of the end user, which will in turn increase his willingness to pay for services
and content [11]. Moreover, the cloud application can reduce the provisioning
of its own resources. The ISP, in turn, can benefit from the solution if it is
implemented in an underlay-aware way, which will reduce traffic over peered
links and can relieve the cellular infrastructure.
its (storage, computing) service to offer other services and applications to
end-users. Moreover, data migration/re-location may often be imposed by the need
to reduce the overall energy consumption within the federation by consolidating
processes and jobs to a few DCs only.
On the other hand, the traffic generated by the data replication or migration
performed by the DCOs significantly burdens the networks of the ISPs,
which implies an increase in operating cost for the ISP, mainly in terms of
transit interconnection cost. Consequently, the ISPs would like to employ certain
mechanisms and pricing models that enable efficient traffic management, as
well as the sharing of revenues obtained from cloud services and applications.
Therefore, the OFS scenario defines a series of interesting problems to be
addressed by SmartenIT, specifically:
– the interactions between the members of the Cloud Federation (from a
technical and business point of view),
– the interactions of DCOs and ISPs in terms of traffic crossing ISP WAN links,
fair optimization of resource allocation and sharing among the federated
members, and
– energy efficiency for DCOs, either individually or overall for all members of
the federation.
Home router sharing based on trust (HORST) [1] is a mechanism which addresses
data offloading, content caching/prefetching, and content delivery. The HORST
mechanism eases data offloading to WiFi by sharing WiFi networks among
trusted friends. Moreover, it places the content close to the end-user, such that
users can access it with less delay and higher speed, which generally results in a
higher QoE. The SmartenIT solution consists of a firmware for a home router and
an OSN application. The HORST firmware establishes a private and a shared
WiFi network (with different SSIDs) and manages the local storage of the home
router as a cache.
To participate, a user needs flat-rate Internet access at home and has to
install the HORST firmware on his home router. The owner of the home router
uploads the WiFi access information of the shared WiFi to the OSN application.
Each user can share his WiFi information with other trusted users via the app
and request access to other shared WiFis.
HORST ensures resource contribution by incorporating an incentive mechanism
that couples the different resource contributions of end-users to the system, namely
the provisioning of storage capacity and offloading capacity, to the receivable
QoE, based on the vINCENT incentive mechanism presented in [4]. In order to
enhance the overall system performance, the incentive includes social networking
data as a base of trust, thus increasing market liquidity.
The HORST router has a social monitor component to collect social information
from an OSN about the router's owner and his trusted friends. If a user
approaches the home router of a trusted friend, he is provided with access data
via the OSN app to connect to the shared WiFi. Every HORST system predicts
the content consumption of its owner (i.e., when and where which content will
be requested) based on history and information from the OSN, such as content
popularity and spreading. If a predicted content item, e.g., a video, is not yet
available in the local cache, it will be prefetched (e.g., the first chunks of a video,
as proposed by the Socially-aware Efficient Content Delivery (SECD) mechanism
[7]). If the user is connected to a friend's home router, a prefetch command is
sent to the HORST system on the friend's router. For prefetching, as well as for
actual requests which cannot be served locally, HORST chooses the best source
(either another HORST home router or a cloud source) based on overlay
information, and fetches the desired content. At regular intervals, HORST checks
whether the content in its own local cache is still relevant (either for local
consumption or as a source for content delivery) and decides whether to keep or
replace it.
Finally, HORST federates all home routers to form an overlay content delivery
network (CDN), which allows for efficient content placement and traffic
management. Thereby, ISP costs can be included in the local decisions at the
home router, e.g., by taking into account the location of contents in terms of
Autonomous Systems (AS). For the communication between the home routers
and the distributed storage of meta-information, RB-Tracker [9] is used due to
its efficiency. Thus, RB-Tracker builds the basis of HORST and hides the
complexity of the overlay management.
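A heavily simplified sketch of this caching loop might look as follows (illustrative only; `predictor` and `sources` stand in for the OSN-based prediction and the overlay information, and all names are our own):

```python
# Sketch of HORST's prefetch-and-evict cycle (assumptions ours).
def refresh_cache(cache, predictor, sources, now):
    # prefetch content the owner is predicted to request soon
    for content in predictor.predicted_requests(now):
        if content not in cache:
            # choose the cheapest source in the overlay (router or cloud)
            best = min(sources(content), key=lambda s: s.overlay_cost)
            cache[content] = best.fetch(content)  # e.g., first video chunks
    # keep an entry only while it is relevant locally or as a delivery source
    for content in list(cache):
        if not predictor.still_relevant(content, now):
            del cache[content]
```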
makes a prediction for the next period and finds a better traffic distribution
in which the ISP's cost is minimized; (2) the compensation procedure, which
determines how the traffic distribution should be influenced at a given moment
to achieve the optimal solution at the end of the accounting period, i.e., decides
on the selection of the inter-domain link. To be able to dynamically react to
the situation in the network and to predict costs, periodic traffic measurements
on the links are needed. Finally, to be able to dispatch flows to the appropriate
tunnels, an SDN controller may be used.
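As an illustration of the compensation idea, a simple greedy rule could steer each new flow to the inter-domain link whose measured share lags its optimized target the most (a sketch under our own assumptions, not the DTM specification):

```python
# Sketch: pick the inter-domain link that is most behind its target share.
def select_link(targets, measured):
    """targets: link -> desired traffic share (from the optimization step);
    measured: link -> bytes observed so far in the accounting period."""
    total = sum(measured.values()) or 1
    return max(targets, key=lambda l: targets[l] - measured[l] / total)
```

An SDN controller would then install the corresponding tunnel or flow rule for the selected link.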
5 System Architecture
Based on the description of the scenarios, the identified properties of the
SmartenIT approach, and the high-level description of the solutions, this section
provides an overview of the SmartenIT system architecture [8]. The component
diagram in Fig. 1 shows all the core components of the architecture as well as
the necessary interfaces. The color coding of the components denotes whether a
component already exists in external systems (white) or is SmartenIT-specific
(blue). In the rest of this section, we highlight the most important components
and provide a short overview of their functionality.
With this in mind, scenarios and functionality can be aligned along a common
axis. This further allows mapping each SmartenIT-proposed mechanism to a
set of envisioned functionalities, while at the same time showing the mainly
addressed scenario. A detailed view of this mapping of all mechanisms to the
respective scenarios/functionalities is provided in the appendix of [7].
Acknowledgment. This work has been performed in the framework of the EU ICT
STREP SmartenIT (FP7-ICT-2011-317846). The authors would like to thank the entire
SmartenIT team for discussions and providing insights on major research problems.
References
1. Seufert, M., Burger, V., Hoßfeld, T.: HORST - home router sharing based on trust.
In: Social-aware Economic Traffic Management for Overlay and Cloud Applications
Workshop (SETM 2013), in conjunction with 9th International Conference on Net-
work and Service Management (CNSM), Zurich, Switzerland, October 2013
2. Cisco Systems White Paper: Cisco Global Cloud Index: Forecast and Methodology
2011–2016 (2012)
3. Mell, P., Grance, T.: SP 800–145. The NIST Definition of Cloud Computing,
National Institute of Standards & Technology (2011)
4. Wichtlhuber, M., Heise, P., Scheurich, B., Hausheer, D.: Reciprocity with virtual
nodes: supporting mobile peers in peer-to-peer content distribution. In: Social-aware
Economic Traffic Management for Overlay and Cloud Applications Workshop
(SETM 2013), in conjunction with 9th International Conference on Network
and Service Management (CNSM), Zurich, Switzerland, pp. 406–409, October 2013
5. Duliński, Z., Stankiewicz, R.: Dynamic traffic management mechanism for active
optimization of ISP costs. In: Social-aware Economic Traffic Management for Over-
lay and Cloud Applications Workshop (SETM 2013), in conjunction with 9th
International Conference on Network and Service Management (CNSM), Zurich,
Switzerland, pp. 398–401, October 2013
6. Biancani, M., Cruschelli, P., (eds.): SmartenIT Deliverable 1.2 – Cloud Service
Classifications and Scenarios, October 2013
7. Burger, V. (ed.): SmartenIT Deliverable 2.2 – Definitions of Traffic Management
Mechanisms and Initial Evaluation Results, October 2013
8. Hausheer, D., Rückert, J. (eds.): SmartenIT Deliverable 3.1 – Initial System Archi-
tecture, April 2013
9. Lareida, A., Bocek, T., Waldburger, M., Stiller, B.: RB-tracker: A fully distributed,
replicating, network-, and topology-aware P2P CDN. In: IFIP/IEEE International
Symposium on Integrated Network Management (IM 2013), Ghent, Belgium, pp.
1199–1202, May 2013
10. Schwartz, C., Hoßfeld, T., Lehrieder, F., Tran-Gia, P.: Angry apps: the impact of
network timer selection on power consumption, signalling load, and web QoE. J.
Comput. Netw. Commun. 2013, Article ID. 176217, 13 pp. (2013). doi:10.1155/
2013/176217
11. Reichl, P.: From charging for quality of service to charging for quality of experience.
Ann. Telecommun. 65(3–4), 189–199 (2010)
12. Stiller, B., Hausheer, D., Hoßfeld, T.: Towards a socially-aware management of
new overlay application traffic combined with energy efficiency in the internet
(SmartenIT). In: Galis, A., Gavras, A. (eds.) FIA 2013. LNCS, vol. 7858, pp. 3–
15. Springer, Heidelberg (2013)
13. Hoßfeld, T., Hausheer, D., Hecht, F., Lehrieder, F., Oechsner, S., Papafili, I., Racz,
P., Soursos, S., Staehle, D., Stamoulis, G.D., Tran-Gia, P., Stiller, B.:
An economic traffic management approach to enable the TripleWin for users,
ISPs, and overlay providers. In: Tselentis, G., et al. (eds.) FIA Prague Book –
Towards the Future Internet - A European Research Perspective, pp. 24–34. IOS
Press Books (2009)
14. Fiedler, M., Hossfeld, T., Tran-Gia, P.: A generic quantitative relationship between
quality of experience and quality of service. IEEE Netw. Spec. Issue Improving QoE
Netw. Serv. 24(2), 36–41 (2010)
Implementing Application-Aware Resource
Allocation on a Home Gateway
for the Example of YouTube
Abstract. Today’s Internet does not offer any quality level beyond
best effort for the majority of applications used by private customers.
If multiple customers with heterogeneous applications share a bottle-
neck link to the Internet, this often leads to quality deterioration for the
customers. This particularly holds for home networks with low-bandwidth
Internet access and for home networks with resource limitations such as
poor channel quality within a wireless network. In such cases, the best
effort allocation of resources between heterogeneous applications leads
to an unfair distribution of the application quality among the users. To
provide a similar application quality for all users, we propose to imple-
ment an application-oriented resource management on a home gateway.
To this end, allocation mechanisms such as the prioritization of network
flows need to be implemented. Furthermore, a component that monitors
the application quality and dynamically triggers these mechanisms is
required. We show the feasibility of this concept by implementing
an application monitor for YouTube on a standard home gateway. The
gateway estimates the YouTube video buffers and prioritizes the video
flow before the playback buffer depletes.
1 Introduction
The success of tablet computers, game consoles, and Smart TVs reflects the
increased user demand for Internet-based services at home. The users in the
home network can access value-added services offered directly by the network
provider, such as IPTV. Likewise, they also use Over-The-Top (OTT) services
like YouTube, Netflix, or online gaming and browse the web or download files. All
these services have specific requirements with respect to the network resources
which have to be fulfilled to ensure a good Quality-of-Experience (QoE) for the
users. Furthermore, multiple users may concurrently access different services via
the central Internet access point in the home network, the home gateway.
As stated by the Home Gateway Initiative (HGI) [1], the network at home
and the broadband Internet access link may constitute a bottleneck. This may be
due to the insufficient availability of broadband access, i.e., the network provider
offers only low-bandwidth Internet access, or due to limitations within the home
network, like the varying channel quality of wireless networks.
Traffic in today’s networks is usually transmitted on a best effort
basis. As a result, different services or applications with varying requirements
and capabilities are treated equally on a per-flow basis, resulting in unfairness
in terms of QoE. Long-lasting OS updates on a home computer may thus inter-
fere with video streaming to a Smart TV, which leads to video stalling and a
degradation of the user’s QoE. In such a case, it is necessary to explicitly
allocate the network resources unequally to the involved applications at the network
level to achieve a similar QoE across multiple applications, as introduced in [2].
This explicit network resource control is known as Application-Aware Net-
working. Scalability issues hinder the implementation of such mechanisms within
aggregation and wide area networks, but the small size of the home network
makes it a promising deployment environment. Most of the traffic in home net-
works is forwarded via the home gateway, making it possible to control the network
resources, and therewith the application quality, at this entity. The applicability of this
approach was shown in [3]; however, performance studies on the potential of the
approach on a home gateway are still missing.
In this paper, we evaluate the potential of flexibly allocating network resources
to different applications. We focus on a two-application scenario in which YouTube
flows and a file download compete for resources on a shared bottleneck link.
YouTube maintains a playback buffer to overcome resource limitations on a
short time scale. We take advantage of this in order to react accurately to
impending video stalls. We implemented a network-based buffer estima-
tor for YouTube which allows the accurate monitoring of the application’s video
pre-buffering state. If the local video buffer runs low, the IP flow is priori-
tized; if the buffer is sufficiently filled, the prioritization is turned off or other
applications like browsing are prioritized.
The remainder of this paper is structured as follows. Section 2 summarizes the
home gateway architecture and its components. In Sect. 3, the implementation
details for the specific scenario are described. Section 4 highlights the evaluation
setup, and Sect. 5 presents the results of our evaluation. The related work is
summarized in Sect. 6, and the paper is concluded in Sect. 7.
This means that they are triggered if an indication of a QoE degradation, or the
degradation itself, has been detected, and a resource management action may improve
the QoE of one application without degrading the quality of others. In the
following, we describe the four components.
Details of our approach to estimate the application quality of YouTube are dis-
cussed in this subsection.
To enable a smooth playback of YouTube videos for the customer under fluc-
tuating network bandwidth, the YouTube application buffers a certain amount
of the video data. To achieve an economically efficient service, the pre-
buffered video data has to be kept as small as possible in order to minimize the waste of
transport resources if the user aborts the video playback. This task is performed
by different flow control algorithms. Thus far, YouTube implements two different
approaches to flow control: the “throttling approach”, performed on the server
side by controlling the network bandwidth [4], and the “range request approach”,
performed by the client by requesting the required parts of the video, similar to
the MPEG-DASH approach [5].
YouTube supports multiple media container formats [6]. Each format encap-
sulates the provided content in a different way. The most common formats
are Flash Video (FLV) [7], Third Generation Partnership Project (3GPP) [8],
and MPEG 4 (MP4) [9]. MP4 can be implemented as a continuous or a
fragmented format, the latter currently being the default format for newly
uploaded videos on the platform.
MP4 specifies that media and meta data can be separated and stored in
one or several files, as long as the meta data is stored as a whole. The file is
divided into so-called boxes. Standard files contain one “ftyp”, one “mdat”, and
one “moov” box: “ftyp” specifies the file type, “mdat” contains the media
data, and “moov” the meta data. “moov” has several subboxes that contain the
video header and timescale, information about substreams, and the base media
decode time.
Fragmented MP4 provides the ability to transfer the “mdat” box in multiple
fragments instead of one big blob. As shown in Fig. 1, each fragment consists of a
“moof” and an “mdat” box. The “moof” (movie fragment) box contains subboxes
determining the default sample duration, the number of samples in the current
track, and a set of independent samples.
Fragmented MP4 is currently used for YouTube’s range request algorithm
and is the default format for newly uploaded videos. Therefore, the
application used for the YouTube buffer prediction focuses on this format.
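For illustration, the following is a minimal sketch of how the top-level box structure of such an ISO base media file format (MP4) byte stream can be walked; the helper names are ours and not taken from the paper’s implementation.

```python
import struct

def iter_boxes(data, offset=0, end=None):
    """Yield (box_type, payload_offset, payload_size) for top-level MP4 boxes."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", data, offset)        # 32-bit box size
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        header = 8
        if size == 1:                                         # 64-bit extended size
            size, = struct.unpack_from(">Q", data, offset + 8)
            header = 16
        elif size == 0:                                       # box runs to end of stream
            size = end - offset
        yield box_type, offset + header, size - header
        offset += size

def count_fragments(data):
    # In fragmented MP4, each fragment contributes one "moof"/"mdat" pair.
    return sum(1 for box_type, _, _ in iter_boxes(data) if box_type == "moof")
```

The same walk, applied recursively inside a “moof” box, would expose the sample count and default sample duration used for the playtime bookkeeping described below.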
To detect the requests to the server, the outgoing TCP traffic is monitored
for HTTP GET requests. When a request is detected, video information (the
video id, the format, and the signature) as well as streaming-related information
(such as the utilized flow control approach) is derived.
To accumulate the currently downloaded amount of data for a certain
video, the incoming traffic is monitored for the fragment boxes, and incoming
fragments are added to the estimate of the video’s download progress. The
buffer estimate is then calculated as the difference between the available play-
back time of the already downloaded data and the time elapsed since the beginning
of playback. We compute the application state for both the video and the audio flow.
User interactions like pausing or backward jumps cannot be taken into account.
Forward jumps can be identified using the media data if the user jumps to a
location which has not been downloaded yet.
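The bookkeeping behind this estimate can be summarized in a few lines. The following is a sketch under the assumption that each fragment’s playtime is derived from the sample count and default sample duration in the “moof” box, with the timescale taken from the “moov” box; the class and attribute names are illustrative, not the authors’ code.

```python
import time

class BufferEstimator:
    """Network-side playback buffer estimate for one stream (audio or video)."""

    def __init__(self, timescale):
        self.timescale = timescale     # ticks per second, from the "moov" box
        self.downloaded_ticks = 0      # fetched playtime, in timescale ticks
        self.playback_start = None     # wall-clock time playback is assumed to begin

    def on_fragment(self, sample_count, default_sample_duration):
        # Playtime carried by one "moof"/"mdat" fragment.
        self.downloaded_ticks += sample_count * default_sample_duration
        if self.playback_start is None:
            self.playback_start = time.time()  # assume playback starts with the first fragment

    def buffered_seconds(self):
        if self.playback_start is None:
            return 0.0
        played = time.time() - self.playback_start
        return self.downloaded_ticks / self.timescale - played
```

One such estimator per flow gives the separate audio and video buffer states discussed in the evaluation.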
The most flexible way to control the network resources on a per-flow basis is the
assignment of a dynamic rate limit to each individual flow. This enables a granular
adjustment of each flow, but also requires a lot of state information to maintain
a separate queue per flow. A simpler approach is to define several priority queues
and assign the flows to the different priorities with respect to the current application
state, cf. Fig. 2. Since we evaluate a small home scenario, we implemented the
priority approach using five different queues with a queue size of 25 packets each.
To dynamically re-assign flows to different priorities, we use a Python wrapper
script for the Linux Traffic Control (TC) API [10] via a simple TCP socket interface.
With TC, stateful queuing is performed: incoming packets are classified and sorted
into priority classes, which are emptied by a scheduler according to their priorities.
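The paper does not list the wrapper itself. The following is a minimal sketch of how five priority bands with 25-packet FIFOs and a per-flow filter could be configured through the standard tc command line; the interface name, addresses, and ports are placeholders.

```python
import subprocess

def tc(*args):
    subprocess.run(["tc", *args], check=True)

def setup_prio(dev="eth0", bands=5):
    # One prio qdisc with five bands; the 16-entry priomap sends
    # unclassified traffic to a middle band by default.
    tc("qdisc", "add", "dev", dev, "root", "handle", "1:",
       "prio", "bands", str(bands), "priomap", *(["2"] * 16))
    for band in range(1, bands + 1):
        # Attach a 25-packet FIFO to each band, matching the setup described above.
        tc("qdisc", "add", "dev", dev, "parent", f"1:{band}",
           "handle", f"{band}0:", "pfifo", "limit", "25")

def assign_flow(dev, src_ip, src_port, band):
    # Classify one flow, matched here on source IP and port, into a band.
    tc("filter", "add", "dev", dev, "parent", "1:0", "protocol", "ip",
       "prio", "1", "u32",
       "match", "ip", "src", src_ip,
       "match", "ip", "sport", str(src_port), "0xffff",
       "flowid", f"1:{band}")
```

The socket interface mentioned above would then simply translate decisions of the application monitor into such calls.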
5 Evaluation Results
5.1 YouTube Buffer Level During Video Playback
At first, we look at the YouTube buffer level over time without any
resource management. Using fragmented MP4, the audio and the video stream
are transmitted separately, resulting in different playback buffer states. If one of
the playback buffers is empty, the video clip stalls until the stalling threshold of
5 s is reached.
[Figure: playback buffer [s] over time [s] for the audio and video buffers; the 5 s stalling threshold is marked.]
Fig. 3. Buffered playtime for audio and video without application monitoring and
resource management
Figure 3 shows the playback buffer levels for the audio and video streams. Both
buffer levels increase until the parallel download starts at about second 20; after
that, both buffer levels decrease. In this example, the video buffer first runs empty
at second 40, resulting in a short stalling period although the audio buffer stays at
a sufficient level. Since not enough network capacity is available to allow a smooth
video playback during a concurrent file download, the video playback is interrupted
several times. After 60 s, the audio buffer falls below the threshold of 5 s while the
video buffer still maintains a higher level. Consequently, the audio flow is also
taken into account in the following. (The video clip used is available at
http://www.youtube.com/watch?v=Aeaz4s7q0Ag&wide=1&hd=1.)
[Figure: playback buffer [s] over time [s] for the audio and video buffers with prioritization; the stalling threshold is marked.]
Fig. 4. Buffered playtime for audio and video with application monitoring and resource
management
Figure 4 shows the audio and video playback buffers over time with dynamic pri-
oritization enabled. In contrast to the best-effort case, the buffer levels increase
even though the parallel download has started. Both flows are prioritized until the
playback buffers exceed a threshold of 45 s. After that, the resource control
mechanism is turned off until the buffers fall below 25 s. The TCP flow control
introduces additional dynamics, and it may take some time until a fair bandwidth
share between the flows is reached. This is reflected by the minimal and maximal
playback buffer levels of 15 s and 63 s. In addition, the re-assignment to different
priorities may lead to packet reordering, influencing the TCP control loop and
resulting in a reduction of the TCP sending window, cf. [12].
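The two thresholds form a simple hysteresis. A minimal sketch of such a decision logic, with the threshold values taken from the text and the function name being illustrative:

```python
PRIO_ON_BELOW = 25.0    # re-enable prioritization when the buffer falls below 25 s
PRIO_OFF_ABOVE = 45.0   # disable prioritization when the buffer exceeds 45 s

def update_prioritization(buffered_s, prioritized):
    """Return whether the YouTube flows should be prioritized, with hysteresis."""
    if prioritized and buffered_s > PRIO_OFF_ABOVE:
        return False
    if not prioritized and buffered_s < PRIO_ON_BELOW:
        return True
    return prioritized
```

In practice, the smaller of the audio and video buffer estimates would drive the decision, since either buffer running empty stalls the playback.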
After investigating the time series of the audio and video streams for the sce-
narios with and without prioritization, we focus on a statistically significant com-
parison between both approaches. For that, we conduct 10 runs and compare the
CDFs of the buffer levels. The results of this investigation are illustrated in Fig. 5
with a confidence level of 95 %. It can be seen that the application-aware approach,
using dynamic prioritization with respect to the application state, clearly
outperforms the approach without prioritization, i.e., the best effort case. This
holds for both flows, the audio and the video flow. Stalling is minimized, allowing
a better user-perceived quality for the video streaming user. Further, it can be
seen that the buffer level of the audio buffer is typically higher than that of the video
buffer. Hence, we can conclude that a video stalling is more likely due to a video
buffer underrun.
[Figure: CDFs of the playback buffer levels with and without prioritization; the stalling threshold is marked.]
Figure 5 also shows that the majority of values are located
between 20 and 50 s, closely related to the values of the control hysteresis.
6 Related Work
7 Conclusion
Traffic in today’s networks is typically transmitted on a best effort
basis. As a result, different services or applications with varying requirements
and capabilities are treated equally on a per-flow basis. This may result in unfair-
ness in terms of the user-perceived quality, in particular if the overall resources
are limited. This holds especially for home networks, where different users may
compete for limited network resources.
In this paper, we presented an application-aware networking approach to
overcome this problem. We investigated the feasibility of the approach for a
scenario consisting of two users, a download user and a user watching a YouTube
video clip. We implemented an application monitoring component for YouTube,
a prioritization mechanism to control the resources, and a simple decision logic.
The monitoring component is able to estimate the video and audio playback
buffers. The resulting information is used to trigger a prioritization of the video
streaming if the buffer runs low. Hence, a good QoE for the video streaming
user can be guaranteed. The components were implemented on a typical home
gateway. The evaluation of the scenario indicates the potential of the mechanism
to manage the application quality, and therewith the QoE, for multiple users in
a multi-application scenario.
References
1. Home Gateway Initiative: Home Gateway QoS Module Requirements. White
Paper, December 2012
2. Eckert, M., Knoll, T.M.: ISAAR (Internet Service Quality Assessment and Auto-
matic Reaction) a QoE monitoring and enforcement framework for internet ser-
vices in mobile networks. In: Timm-Giel, A., Strassner, J., Agüero, R., Sargento,
S., Pentikousis, K. (eds.) MONAMI 2012. LNICST, vol. 58, pp. 57–70. Springer,
Heidelberg (2013)
3. Wamser, F., Zinner, T., Iffländer, L., Tran-Gia, P.: Demonstrating the prospects
of dynamic application-aware networking in a home environment. In: Proceedings
of the 2014 ACM Conference on SIGCOMM, pp. 149–150. ACM (2014)
4. Alcock, S., Nelson, R.: Application flow control in YouTube video streams. ACM
SIGCOMM Comput. Commun. Rev. 41(2), 24–30 (2011)
5. Sieber, C., Hoßfeld, T., Zinner, T., Tran-Gia, P., Timmerer, C.: Implementation
and user-centric comparison of a novel adaptation logic for DASH with SVC. In:
IFIP/IEEE International Workshop on Quality of Experience Centric Management
(QCMan), Ghent, Belgium (2013)
6. Wikipedia: Youtube – Wikipedia, the free encyclopedia (2014). https://en.
wikipedia.org/wiki/Youtube. Accessed 20 May 2014
7. Adobe Systems Incorporated: Video File Format Specification, Adobe Systems Incorporated
Std., Rev. 10, November 2008. http://download.macromedia.com/f4v/video_file_
format_spec_v10_1.pdf. Accessed 26 May 2014
8. Singer, D.: 3GPP TS 26.244; Transparent end-to-end packet switched streaming
service (PSS); 3GPP file format (3GP), ETSI 3GPP Std., Rev. 12.3.0, March 2014.
http://www.3gpp.org/DynaReport/26244.htm. Accessed 26 May 2014
9. MPEG 4 standards ISO/IEC 14496–1 ff, International Organization for Standard-
ization Std., Rev. 2010 (1999). http://www.iso.org/iso/iso_catalogue/catalogue_
ics/catalogue_detail_ics.htm?csnumber=24462
10. Hubert, B.: Linux Advanced Routing & Traffic Control, Linux Foundation.
Accessed 26 May 2014
11. Freetz project: Freetz. http://freetz.org/
12. Zinner, T., Jarschel, M., Blenk, A., Wamser, F., Kellerer, W.: Dynamic application-
aware resource management using software-defined networking: implementation
prospects and challenges. In: IFIP/IEEE International Workshop on Quality of
Experience Centric Management (QCMan), Krakow, Poland (2014)
13. IEEE 802.1Q 2011: Standard for Local and Metropolitan Area Networks - Media
Access Control (MAC) Bridges and Virtual Bridge Local Area Networks, August
2011
14. IEEE 802.11e-2005: Standard for Information technology - Part 11: Wireless LAN
Medium Access Control (MAC) and Physical Layer (PHY) specifications, Novem-
ber 2005
15. IEEE 802.16m-2011: Standard for Local and metropolitan area networks - Part 16:
Air Interface for Broadband Wireless Access Systems, Amendment 3: Advanced
Air Interface (802.16m-2011), May 2011
16. Paul, S., Jain, R., Pan, J., Iyer, J., Oran, D.: OpenADN: a case for open appli-
cation delivery networking. In: 2013 22nd International Conference on Computer
Communications and Networks (ICCCN), pp. 1–7. IEEE (2013)
17. RFC 791: DARPA Internet program protocol specification (1981)
18. Nichols, K., Blake, S., Baker, F., Black, D.: RFC 2474: Definition of the differen-
tiated services field (DS field) in the IPv4 and IPv6 headers (1998)
19. Agboma, F., Liotta, A.: QoE-aware QoS management. In: 6th International Con-
ference on Advances in Mobile Computing and Multimedia, pp. 111–116. ACM
(2008)
20. Fiedler, M., Hoßfeld, T., Tran-Gia, P.: A generic quantitative relationship between
quality of experience and quality of service. IEEE Network 24(2), 36–41 (2010).
Special Issue on Improving QoE for Network Services
21. Gross, J., Klaue, J., Karl, H., Wolisz, A.: Cross-layer optimization of OFDM trans-
mission systems for MPEG-4 video streaming. Comput. Commun. 27(11), 1044–
1055 (2004)
22. Khan, S., Peng, Y., Steinbach, E., Sgroi, M., Kellerer, W.: Application-driven cross-
layer optimization for video streaming over wireless networks. IEEE Commun.
Mag. 44(1), 122–130 (2006)
23. Reis, A., Chakareski, J., Kassler, A., Sargento, S.: Quality of experience optimized
scheduling in multi-service wireless mesh networks. In: IEEE Conference on Image
Processing (ICIP), pp. 3233–3236. IEEE (2010)
24. Pries, R., Hock, D., Staehle, D.: QoE based bandwidth management supporting
real time flows in IEEE 802.11 mesh networks. Praxis der Informationsverarbeitung
und Kommunikation 32(4), 235–241 (2010)
25. Ameigeiras, P., Ramos-Munoz, J.J., Navarro-Ortiz, J., Mogensen, P., Lopez-Soler,
J.M.: QoE oriented cross-layer design of a resource allocation algorithm in beyond
3G systems. Comput. Commun. 33(5), 571–582 (2010)
26. Huang, C., Juan, H., Lin, M., Chang, C.: Radio resource management of hetero-
geneous services in mobile WiMAX systems [Radio Resource Management and
Protocol Engineering for IEEE 802.16]. IEEE Wireless Commun. 14(1), 20–26
(2007)
27. Thakolsri, S., Khan, S., Steinbach, E., Kellerer, W.: QoE-driven cross-layer opti-
mization for high speed downlink packet access. J. Commun. 4(9), 669–680 (2009)
28. Superiori, L., Wrulich, M., Svoboda, P., Rupp, M., Fabini, J., Karner, W., Stein-
bauer, M.: Content-aware scheduling for video streaming over HSDPA networks.
In: Second International Workshop on Cross Layer Design, IWCLD 2009, pp. 1–5.
IEEE (2009)
29. Staehle, B., Wamser, F., Hirth, M., Stezenbach, D., Staehle, D.: AquareYoum:
Application and quality of experience-aware resource management for YouTube in
wireless mesh networks. PIK - Praxis der Informationsverarbeitung und Kommu-
nikation (2011)
30. Xiao, M., Shroff, N., Chong, E.: A utility-based power-control scheme in wireless
cellular systems. IEEE/ACM Trans. Netw. 11(2), 210–221 (2003)
31. Andrews, M., Qian, L., Stolyar, A.: Optimal utility based multi-user throughput
allocation subject to throughput constraints. In: IEEE INFOCOM, vol. 4, pp.
2415–2424. IEEE (2005)
32. Song, G., Li, Y.: Utility-based resource allocation and scheduling in OFDM-based
wireless broadband networks. IEEE Commun. Mag. 43(12), 127–134 (2005)
33. Saul, A.: Simple optimization algorithm for MOS-based resource assignment. In:
VTC Spring 2008, pp. 1766–1770. IEEE (2008)
34. Pei, X., Zhu, G., Wang, Q., Qu, D., Liu, J.: Economic model-based radio resource
management with QoS guarantees in the CDMA uplink. Eur. Trans. Telecommun.
21(2), 178–186 (2010)
35. Zinca, D., Dobrota, V., Vancea, C., Lazar, G.: Protocols for communication
between QoS agents: COPS and SDP. In: COST
36. Katchabaw, M., Lutfiyya, H., Bauer, M.: Usage based service differentiation for
end-to-end quality of service management. Comput. Commun. 28(18), 2146–2159
(2005)
37. Martin, J., Feamster, N.: User-driven dynamic traffic prioritization for home net-
works. In: Proceedings of the 2012 ACM SIGCOMM Workshop on Measurements
Up the Stack, pp. 19–24. ACM (2012)
Ambient Assisted Living (AAL)
Architectures
Reliable Platform for Enhanced
Living Environment
Abstract. The aim of this paper is to present an idea for a platform for
enhanced living environments that allows flexible and reliable use of cloud
computing and sensor networks for highly customized services and applications.
The architecture is based on sensors using the IEEE 802.15.4 and ZigBee protocols.
Furthermore, the access to the cloud can be provided by any available wired or
wireless technology. We propose an application layer service using a peer port for
reliable and scalable data transmission. The presented solution is dynamic and
flexible and conforms to the health and home automation standards.
1 Introduction
The aim of this work is to investigate the Quality of Service (QoS) and performance
parameters of the sensor-to-cloud communication. We highlight the QoS parameters
between the Modbus TCP programmable logic controllers (PLC) and the wireless
sensor mesh network of zigbee 802.15.4 devices working at 868.3 MHz and at
2.4 GHz. The traffic parameters considered are throughput, delay, loss, topology dis-
covery, and topology configuration time. The applicability of the solution is proven by
a set of experiments in the lab with up to 20 sensors, few repeaters and one PLC. The
solution could be applied for personal area network (PAN) setup and combined well
with body area network (BAN), home/business automation network, health care net-
work from one side and wide range of cloud applications for data storing and statistical
analyses [1] from the other side. Due to the vitality of the information, the connection
to the cloud is considered to be peer-to-peer (P2P) using a special type of reservation
channel at the application layer called peer port (PP). The peer port application is
simulated and experimented in the lab LAN as well.
During our experiments, we check the network performance in the case of radio
interference at the same radio frequency, with additional radio frequency noise, with
different numbers of devices and Modbus channels, in sleeping/waking-up modes, and
with different ZigBee profiles.
The paper is organized in two main sections. The first one explains the details of the
experiments in the sensor network. The second part concerns lab trials of
the application layer protocol and the peer port implementation. The tests performed show
the viability of the solution and its further perspectives. Future research and imple-
mentation plans are discussed at the end of the paper.
visible at this layer [9]. Most of the QoS parameters estimated in this work are at the
network layer. Some Quality of Experience (QoE) parameters [5] could also be
calculated [12, 13].
Losses are calculated as a fraction of the total data sent. It is not always possible to
distinguish between data sent successfully during the first trial, during the second trial,
etc. For example, EnOcean sensors repeat the same message three times if no
acknowledgment (ack) is received. The ratio of retransmissions to total transmissions
is calculated according to [14]. The availability of the service in ZigBee and EnOcean
technology is up to 99.99 %. This means only about 52 min without service annually [15].
This reliability is due to the robustness of the sensors and their capability to structure
and restructure clusters using flexible Ad hoc On-Demand Distance Vector (AODV) routing.
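As a quick check, the quoted yearly downtime follows directly from the availability figure:

(1 − 0.9999) × 365 × 24 × 60 min ≈ 52.6 min per year.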
Sensors work in a sleeping/waking-up mode with a ratio of 1:100. They are grouped
into ad hoc clusters and form and reshape clusters dynamically. The transmission is per
hop or end-to-end [7, 16, 17]. This feature makes the technology very appropriate for
PANs and BANs. During the experiment series, ZigBee sensors, input/output ZigBee
(I/O ZigBee) devices, and I/O ZigBee controllers are used for home automation [18, 19].
Trials with EnOcean sensors are planned for further analyses and comparison. The lab
experiments are set up with up to 20 sensors on 20 sq. m with a possibility for radio
interference. The transmission interval is expected to increase exponentially with the
amount of information to be sent and the sensor density.
The trials also include measurements when there is a direct transmission line
between sensors in a mesh topology, and when a direct channel is lacking and
sensors have to retransmit, i.e., a semi-mesh topology [9]. Thanks to the
reconfiguration by the ad hoc routing protocol, the clusters usually form a mesh topology
[20]. When a mesh configuration is not possible, the sensors establish a communication
channel using dynamic relays. This feature allows transmission in real time or near-
real time because the time for reconfiguration is less than 30 ms. An additional 15 ms is
necessary for the sensor to wake up and another 15 ms for channel access. Practical
applications show that these time intervals can be up to 10 times longer [9, 21, 22]. The
channel capacity is usually up to 100 kbps. The protocol can tolerate up to 30 %
losses. That is why the ZigBee standards are applicable in the Internet of Things (IoT)
and in Delay (Disruption) Tolerant Networks (DTN).
ZigBee works at a Bit Error Rate (BER) of about 1·10⁻⁵ and at a signal-to-noise ratio
of about 0 dB. Other technologies like WiFi and Bluetooth can reach a BER of up to
1·10⁻¹ [23]. The communication channel at 868 MHz is point-to-point with a rate of up
to 20 kbps. The rate at 2.4 GHz can be up to 250 kbps, and there can be up to 26
communication channels, accessed by the Carrier Sense Multiple Access with Collision
Avoidance (CSMA-CA) protocol. A beacon frame is used for waking up the sensors.
Waking intervals rise significantly without the use of a beacon [24–26]. Theoretically,
the topology with clusters can be scalable and very large: up to 255 clusters with up to
254 nodes each could be connected. This allows the technology to be used in business
and hospital/hotel environments in a scalable manner [27–30].
The Modbus protocol was created for communication between PLCs and remote
terminals using a master/slave scheme. It is asynchronous and supports point-to-point
and point-to-multipoint connections via RS232, RS485, and RS422 interfaces of wired
and wireless modems. In our experiments, we use Modbus RTU (Remote Terminal Unit)
and Modbus TCP. The payload of the data unit is up to 252 bytes. Data units are
encapsulated into TCP segments.
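To illustrate the encapsulation, here is a minimal sketch of a Modbus TCP request frame (MBAP header plus PDU) for the common “read holding registers” function; the register address and count are placeholders.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus TCP ADU: MBAP header + 'read holding registers' (0x03) PDU."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), length of the
    # remaining bytes (unit id + PDU), unit id.
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, 1 + len(pdu), unit_id)
    return mbap + pdu   # this byte string is carried in a TCP segment

# Example: read 4 registers starting at address 0x0010 from unit 1.
request = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                        start_addr=0x0010, count=4)
```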
In addition to the IEEE 802.15.4 physical and MAC layers, the ZigBee protocol stack
supports network layer routing and packet forwarding using peer-to-peer communi-
cation. A star topology is also supported. The ZigBee protocol has an application sublayer
(APS) for cluster, profile, and end-point addressing. At the application layer there is a
ZigBee Device Object (ZDO) for device, service, and network management. IEEE
802.15.4 defines Full Function Devices (FFD) and Reduced Function Devices (RFD).
The ZigBee router, part of the end-devices, and the coordinator are FFDs. Some of
the end-devices are RFDs. The ZigBee coordinator and routers are always awake. FFD
end-devices can sleep (Fig. 1).
[Fig. 1: Gateway architecture: EnOcean and ZigBee modules with network inputs/outputs and serial communication connect via a radio module to a gateway comprising a command module, memory, Ethernet controllers, internal communication, and further inputs/outputs.]
The protocol repeats the same frames three times, using uniformly distributed
intervals to avoid collisions.
The protocol is optimized for energy consumption. Transmission is limited in time:
the Rx and maximal Tx maturity times are 100 ms and 40 ms, respectively. The commu-
nication is near-real-time. The network can also include repeaters with a different
time-slot organization. Repeaters also count how many times a message has been repeated.
The difference between a repeater of type A and a repeater of type B is the retransmission
procedure. A type A repeater retransmits only messages coming from the original devices,
whereas a type B repeater retransmits both messages that have already been retransmitted
once and messages coming from the original device.
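Assuming each frame carries a repeat counter, as EnOcean telegrams do, the distinction can be expressed as a simple predicate (a sketch; the function name is ours):

```python
def should_retransmit(repeat_count, repeater_type):
    """Decide whether a repeater forwards a frame, based on its repeat counter.

    Type A forwards only original frames (never repeated); type B also
    forwards frames that have been repeated exactly once.
    """
    if repeater_type == "A":
        return repeat_count == 0
    if repeater_type == "B":
        return repeat_count <= 1
    return False
```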
The QoS tests organized in the network are summarized in Table 1. The parameters
estimated during the experiments are throughput, losses, and delay. All analyses of the
topology lead to estimates of the wake-up and sleep times, as well as of the times for
network reconfiguration, network access, and network routing. The detailed tests are made
in normal and extreme working environments.
The data exchanged in the sensor network is discrete or analogue and uses serial
communication for reliability. In Fig. 1, the PLC can act as a client and a server at
the same time. HMI (Human Machine Interface) and SCADA (Supervisory Control
and Data Acquisition) systems are usually Modbus TCP clients. The data from sensors
requires the Modbus TCP client/server mode. In this case, it is possible to transmit
critical sensor information to the PLC immediately. Data from sensors can be sent
periodically in sleep or waking-up modes. To save energy, sensors can transmit only
changes in data values. This can be applied in sleep/waking-up mode, or in wake-up-only
mode for delay-critical data, i.e., alarms. The data can also be requested by the gateway,
which is useful for traffic management but needs additional functionality in the gateway.
This functionality is useful when the end-user wants to send a command to the
end-device, e.g., switching it on or off. All types of data transfers can be combined
depending on the customer’s request for a specific application scenario. Data can be
fragmented. Sleep mode is not useful in serial data transmission due to the additional
delay. The EnOcean sensors are of the STM310 and STM300 types and work at
868.3 MHz. The radio module has 3 analogue and 4 digital inputs and software and
hardware wake-up control. The power supply can be conventional or alternative.
Repeaters have a port for serial communication using the RS-232 interface (Fig. 2).
Retransmissions are limited to two. The transmission mode is pure broadcasting because
the messages are not addressed. This allows the formation of a typical mesh topology
for a PAN.
Topological schemas like tree or star are not considered in this paper. The MAC
layer can also be adapted to support guaranteed and contention-based time slots and
beacon- or non-beacon-enabled networks. The binding table also allows service com-
position and decomposition. The connection to the cloud is made via a ZigBee gateway.
Despite possible interference with IEEE 802.11b/g, WLANs and ZigBee
networks can work together. This is feasible because 802.11 can change frequencies
on the one hand, and ZigBee can transmit in guard bands on the other. If necessary,
ZigBee and IEEE 802.11 can use time division multiple access (TDMA) for interop-
erability. Compatibility with Bluetooth is supported by adaptive frequency hopping
(AFH). Due to its retransmission capabilities, ZigBee technology can coexist with
cordless phones and microwave ovens (Fig. 3).
SynthNV and MixNV devices (produced by WindFreak Technologies) are used to
generate radio interference, together with a “Dolphin Sniffer”. The module is connected
to the PC and managed by the DolphinView software. A Faraday cage is used for
interference management. The test equipment also includes a PLC M340 (Schneider
Electric), an operator panel HMI STU 855 (Schneider Electric), a wireless switch
WRT54G (Linksys, Cisco Systems), and SCADA IGSS software for test configuration (Fig. 3).
Part of the measurements in the ZigBee sensor network is shown in Table 2. Only two
types of experiments are presented: full mesh and semi-mesh topologies with up to
4 sensor-to-sensor retransmissions and additional radio noise on the transmission
channel. A transmission time of 5 s corresponds to the case when there is a single
channel and more than one sensor wants to transmit, or there is noise at the transmission
frequency. Loss also occurs in situations of contention or noise. In all other cases,
transmission delays of up to 600 ms are acceptable for sensor data. In all cases the losses
are acceptable because sensors retransmit the data in case of error.
Mobile cloud computing and Internet of Things environments are an interesting area for
QoS and QoE because of their lack of guarantees. End users expect to have the necessary
information and an available communication line continuously. The difference between
physical and logical platforms is invisible from the user perspective. The traffic is often
recovered from the replica in one of the peer ports. Usually, the replica occupies the
resource at the peer port until the data is acknowledged.
The protocol applied in all peers in the overlay is a sliding window protocol [39, 41].
The go-back-N and selective repeat sliding window protocols were developed for
point-to-point connections (referred to below as the standard protocol). Here, we propose
a combination of point-to-point protocols and point-to-multipoint communication [40] at
the application layer. The data from the sending peer is sent simultaneously to the
receiving peer and to the peer port. The receiving peer sends an acknowledgment (ack) or
a negative acknowledgment (nack) for a specified packet to the peer port. In case of
confirmation, the peer port removes this packet, thereby releasing resources and increasing
the capacity to store information (Fig. 5). In active network operation there are many P2P
clients, i.e., many receiving peers, and the peer port must be flexible. Otherwise, upon
reception of a nack from the receiving peer, the peer port prepares the relevant packet
and sends it on the communication channel.
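A minimal sketch of this replica handling at the peer port follows; the class and method names are ours, since the actual trials used a Java application and an ns-2 model.

```python
class PeerPort:
    """Holds packet replicas until the receiving peer confirms them."""

    def __init__(self, send):
        self.send = send        # callback transmitting a packet to the receiving peer
        self.replicas = {}      # sequence number -> stored packet payload

    def on_replica(self, seq, packet):
        # The sender multicasts each packet to the receiver and to the peer port.
        self.replicas[seq] = packet

    def on_ack(self, seq):
        # Confirmation: drop the replica, releasing storage resources.
        self.replicas.pop(seq, None)

    def on_nack(self, seq):
        # The receiver missed the packet: retransmit the stored replica.
        packet = self.replicas.get(seq)
        if packet is not None:
            self.send(seq, packet)
```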
The lab experiments are set up for behavior analyses of the peer port. A Java
application and an ns-2 model are used for performance verification. For simplicity,
the model contains one sending peer, one receiving peer, and one peer port. The services
used during the trials are transfers of books, video, audio, archives, and text messages.
The protocol implemented is the well-known three-way handshake, modified for the peer
port (PP) by adding multicasting. Figure 6 presents the scenario for selective repeat. The
standard link between the sending peer (SP) and the receiving peer (RP) is interrupted
randomly in both models.
At the transport layer, there are options to use UDP or TCP over the different
point-to-point connections. The main experiment assumes TCP connections between
the sending peer on one side and the receiving peer and the peer port on the other side. In
addition, the connection between the peer port and the receiving peer is UDP. Other
combinations of TCP/UDP were also explored. The buffer length is set to 1 MB, and a
2 ms delay limit for standard transmission between the sending and receiving peers is
considered. The packet payloads are 60 and 500 bytes and reflect the behavior of the
queues. Simulations and live experiments are performed with the Java application under
similar conditions for further comparison of the results. All buffers and timers of the
three peers are equal. The go-back-N and selective repeat protocols are applied on the
standard line and on the peer port links.
Table 3 presents data from simulations in which a message is transferred directly
and via the peer port. The broken line is slower due to the peer port retransmission.
The standard link is the fastest way to transmit. The time to create the replica at the peer
port is almost the same as the transmission time over the standard line. The difference
between the standard line and replication to the peer port is comparable to the additional
delay over the broken link. The additional delay due to the broken link and the peer port
may be significant when the transmission is in real time and the nodes are overloaded.
Table 4 presents the same results for the transfer of myfile.wav. A single packet trans-
mission takes about 20–100 ms. The delay when replicating a big file is much higher
and not acceptable in real-time communication.
The difference in transmission time over the peer port and over the direct line for the
myfile.wav file is plotted in Fig. 7. The blue line in the figure represents the standard line
and the red line shows the transmission over the peer port. The steps visible in the graph
represent the fluctuations of the lengths of the sending and receiving buffers due to the
go-back-N or selective repeat protocols. The plots are cumulative for simplicity. After the
line broke, it was not recovered until the end of the file transfer. The additional delay due
to retransmission is far from the acceptable delay bound for real-time applications.
Nevertheless, the technology is fully acceptable for continuous data replication between
storage networks and sensor networks with non-time-critical data. Multihoming
solutions could also benefit from this approach.
Table 3. Simulated results for sending/receiving a message with and without a broken link.

| Information | Bytes | Standard line time, s | Broken line time, s | Peer port time, s | Peer port vs. standard line deviation, s | Additional delay, s |
| Message “Hello world” | 40 | 0,105 | 0,13 | 0,10232 | 0,00268 | 0,025 |
| | 80 | 0,1142 | 0,1414 | 0,10264 | 0,01156 | 0,02722 |
| | 120 | 0,107 | – | – | −0,107 | – |
| | 134 | 0,11478 | 0,1524 | 0,114784 | – | 0,037594 |
| | 174 | 0,1074 | – | – | −0,1074 | 0 |
| | 214 | 0,112 | – | – | −0,112 | 0 |
| | 268 | 0,112 | – | – | −0,112 | 0 |
Table 4. Simulated results for sending/receiving myfile.wav with and without a broken link.

| Information | Bytes | Standard line time, s | Broken line time, s | Additional delay, s |
| Myfile.wav | 40 | 1,2023 | 1,3589 | 0,1566 |
| | 80 | 1,207 | 1,4586 | 0,2516 |
| | 656 | 1,2116 | 1,5953 | 0,3837 |
| | 1232 | 1,2162 | 1,5999 | 0,3837 |
| | 1808 | 1,2208 | 1,6007 | 0,3799 |
| | 2384 | 1,2254 | 1,6501 | 0,4247 |
| | 2960 | 1,23 | 1,7105 | 0,4805 |
| | 1063952 | 9,718 | 18,373 | 8,655 |
| | 1068560 | 9,7548 | 18,41 | 8,6551 |
Fig. 7. The first 60,000 sent/received packets from the file archive over the standard line and via the peer port.
7 Conclusion
This paper presents an idea for a combined sensor-to-cloud platform that is reliable
enough to transport PAN information. We set up lab experiments, separating the sensor
and peer port QoS parameters. All experimental results demonstrate the feasibility of the
proposed solution and its reliability. Sensor network and peer port behavior can
significantly influence the near-real-time and non-real-time services in the distributed
overlays. We deployed ZigBee sensors in a real trial, and implemented the sliding window
protocols in an ns-2 model and in a real-time Java application. We show the additional
delay for telegrams and file transfers over the network. In non-prioritized environments,
the additional delay can be significant. Only messages and continuous packet streams
can conform to the QoS requirements. Future work aims to map the Quality of
Service and Quality of Experience parameters using a cross-layer approach for dis-
tributed quality management [42, 43].
Acknowledgments. Our thanks to ICT COST Action IC1303: Algorithms, Architectures and
Platforms for Enhanced Living Environments (AAPELE); Project No ИФ-02-9/15.12.2012,
Gateway Prototype Modeling and Development for Wired and Wireless Communication Net-
works for Industrial and Building Automation; Comicon Ltd., Bulgaria.
References
1. ZigBee Document 075360r15. ZigBee Health CareTM, Profile Specification, ZigBee
Profile: 0x0108. Revision 15, Version 1.0, March, Sponsored by: ZigBee Alliance (2010)
2. BlackBox ZigBee™ Test Client (ZTC), Reference Manual. Freescale Semiconductor
Literature Distribution Center, Document Number: BSBBZTCRM, Rev. 1.2 (2011)
3. Severino, R., Koubâa, A.: On the Performance Evaluation of the IEEE 802.15.4 Slotted
CSMA/CA Mechanism. IPP-HURRAY Technical Report, HURRAY-TR-080930,
September 2008
4. Agarwal, A., Agarwal, M., Vyas, M., Sharma, R.: A study of Zigbee technology. Int.
J. Recent Innov. Trends Comput. Commun. 1(4), 287–292 (2013). ISSN: 2321–8169
5. Kaur, G., et al.: QoS measurement of Zigbee home automation network using various
modulation schemes. Int. J. Eng. Sci. Technol. (IJEST) 3(2), 1589–1597 (2011). ISSN:
0975-5462
6. Chen, F., Wang, N., German, R., Dressler, F.: Simulation study of IEEE 802.15.4 LR-
WPAN for industrial applications. Wirel. Commun. Mob. Comput. 10, 609–621 (2010).
doi:10.1002/wcm.736
7. Zigbee Specification, Document 053474r17 (2008)
8. ZigBee RF4CE Specification, version 1.01, ZigBee Document 094945r00ZB (2010)
9. Rawat, P., Singh, K.D., Chaouchi, H., Bonnin, J.M.: Wireless sensor networks: a survey on
recent developments and potential synergies. J. Supercomput. 68, 1–48 (2013). doi:10.1007/
s11227-013-1021-9. Springer Science + Business Media New York
10. Ciobanu, R.-I., Marin, R.-C., Dobre, C., Cristea, V., Mavromoustakis, C.X.: ONSIDE:
Socially-aware and interest-based dissemination in opportunistic networks. NOMS 2014, 1–
6 (2014)
11. IEEE 802.15.4/ZigBee Measurements Made Easy Using the N4010A Wireless Connectivity
Test Set. Agilent Technologies, Inc. (2009)
12. Tsitsipis, D., Dima, S.M., Kritikakou, A., Panagiotou, C., Gialelis, J., Michail, H., Koubias,
S.: Priority Handling Aggregation Technique (PHAT) for wireless sensor networks. In: 2012
IEEE 17th Conference on Emerging Technologies & Factory Automation (ETFA), pp. 1–8,
17–21 September 2012. doi:10.1109/ETFA.2012.6489574
13. Tung, H.Y., Tsang, K.F., Tung, H.C., Rakocevic, V., Chui, K.T., Leung, Y.W.: A WiFi-
ZigBee building area network design of high traffics AMI for smart grid. Smart Grid Renew.
Energy 3, 324–333 (2012) http://dx.doi.org/10.4236/sgre.2012.34043
14. 315 MHz Radio Communications in Buildings, EnOcean white paper
15. EnOcean Technology – Energy Harvesting Wireless, EnOcean white paper (2011)
16. EnOcean Wireless Systems – Range Planning Guide, EnOcean white paper (2008)
17. ZigBee-2007 Layer PICS and Stack Profiles, ZigBee Document 08006r03, Revision 03
(2008)
18. Alves, M., Koubaa, A., Cunha, A., Severino, R., Lomba, E.: On the development of a test-
bed application for the ART-WiSe architecture. In: Euromicro Conference on Real-Time
Systems (ECRTS 2006), (WiP Session) July 2006
19. EnOcean_Equipment_Profiles_EEP_V2.5, EnOcean Serial Protocol, March 4 (2013)
20. Woo, S.-J., Shin, B.-D.: Efficient cluster organization method of Zigbee nodes. Int. J. Smart
Home 7(3), 45–55 (2013)
21. ZigBee PRO Stack, User Guide, JN-UG-3048, Revision 2.4, NXP Laboratories UK (2012)
22. Krogmann, M., Heidrich, M., Bichler, D., Barisic, D., Stromberg, G.: Reliable, real-time
routing in wireless sensor and actuator networks. International Scholarly Research Network
ISRN Communications and Networking, vol. 2011, Article ID 943504, 8 p. (2011).
doi:10.5402/2011/943504
23. Zigbee Home Automation Public Application Profile, ZigBee Profile: 0x0104, Revision 26,
Version 1.1 (2010)
24. Singhal, S., Gankotiya, A.K., Agarwal, S., Verma, T.: An investigation of wireless sensor
network: a distributed approach in smart environment. In: Second International Conference
on Advanced Computing & Communication Technologies (2012)
25. Koubaa, A., Severino, R., Alves, M., Tovar, E.: H-NAMe: Specifying, Implementing and
Testing a Hidden-Node Avoidance Mechanism for Wireless Sensor Networks. IPP-
HURRAY Technical Report, HURRAYTR-071113, April 2008
26. Boonma, P., Suzuki, J.: Self-configurable publish/subscribe middleware for wireless
sensor networks. IEEE (2009)
27. Jurčík, P., Severino, R., Koubâa, A., Alves, M., Tovar, E.: Real-time communications over
cluster-tree sensor networks with mobile sink behaviour. In: the 14th IEEE International
Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA
2008), Kaohsiung, Taiwan (2008)
28. FP7-ICT-STREP Contract No. 258280, TWISNet, Trustworthy Wireless Industrial Sensor
Networks. Deliverable D4.1.2, Hardware platform characterization/description (2012)
29. Cuomo, F., Luna, S.D., Monaco, U., Melodia, T.: Routing in ZigBee: benefits from
exploiting the IEEE 802.15.4 association tree. ICC 2007 Proceedings (2007)
30. Terry, J.D., Jensen, C., Thai, S.: The evolution of spectrum management: a technical
framework for DSA management. IEEE (2008)
31. Coulouris, G., Dollimore, J., Kindberg, T.: Distributed Systems: Concepts and Design.
Addison-Wesley, USA (2005)
32. El-Ansary, S., Haridi, S.: An overview of structured P2P overlay network. Swedish Institute
of Computer Science (SICS), Sweden. Royal Institute of Technology – IMIT/KHT, Sweden
(2004)
33. Lua, E.K., Crowcroft, J., Pias, M., Sharma, R., Lim, S.: A survey and comparison of peer-to-
peer overlay network schemes. IEEE communication survey and tutorial, March (2004)
34. Mahlmann, P., Schindelhauer, C.: Peer-to-Peer-Netzwerke: Algorithmen und Methoden.
Springer, Berlin/Heidelberg, Germany (2007)
35. Huang, M.L., Lee, S., Park, S.-C.: A WLAN and Bluetooth coexistence mechanism for
health monitoring systems. IEEE (2009)
36. Stoica, I., Morris, R., Karger, D., Kaashoek, F., Balakrishnan, H.: Chord: A scalable peer-to-
peer lookup service for Internet applications. In: Proceedings of ACM SIGCOMM 2001,
August (2001)
37. Zhao, B., Kubiatowicz, J., Joseph, A.: Tapestry: An infrastructure for fault-tolerant wide-
area location and routing. Technical Report UCB/CSD-01-1141, University of California at
Berkeley, Computer Science Department (2001)
38. Buford, J.F., Yu, H., Lua, E.K.: P2P Networking and Applications. Morgan Kaufmann,
USA (2009)
39. Stainov, R.: Peer ports for layered P2P streaming. In: Proceedings of the 6th International
Conference in Computer Science and Education in Computer Science, CSECS 2010, 26–29
June, Fulda/Munich, Germany, ISBN: 978-954-535-573-8 (2010)
40. Stainov, R., Goleva, R., Genova, V., Lazarov, S.: Peer port implementation for real-time and
near real-time applications in distributed overlay networks. In: 9th Annual International
Conference on Computer Science and Education in Computer Science 2013 (CSECS 2013),
29 June, 2 July, Fulda-Wuertzburg, Germany, pp. 87–92 (2013)
41. Stainov, R.: Peer ports: mobility support in peer-to-peer systems. In: Proceedings of the 5th
International Conference in Computer Science and Education in Computer Science, CSECS
2009, May 2009, Boston, USA (2009). ISBN 978-954-535-573-8
42. Sieber, C., Hossfeld, T., Zinner, T., Tran-Gia, P., Timmerer, C.: Implementation and user-
centric comparison of a novel adaptation logic for DASH with SVC. In: 2013 IFIP/IEEE
International Symposium on Integrated Network Management (IM 2013), pp. 1318–1323,
27–31 May 2013
43. Tyson, G., Mauthe, A., Kaune, S., Grace, P., Taweel, A., Plagemann, T.: A middleware
platform for supporting delivery-centric applications. ACM Trans. Internet Technol. 12(2)
Article 4, 28 (2012). doi:10.1145/2390209.2390210. http://doi.acm.org/10.1145/2390209.
2390210
General Assisted Living System
Architecture Model
1 Introduction
own healthcare, on the one hand, and on the other to provide flexibility in the lives of
patients who lead an active everyday life with work, family, and friends.
There are technical requirements (instrument usability, power supply, reliable
wireless communications and secure transfer of information) for the healthcare systems
based on wearable and ambient sensors [1]. However, there are also concerns about
technology acceptance in healthcare. Many authors have considered this issue. For
example, Cocosila and Archer [2] investigate the factors favoring or disfavoring
the adoption of mobile ICT for health promotion interventions.
Ambient Assisted Living (AAL) has the ambitious goal of improving the quality of
life and maintaining the independence especially of elderly people and people with
disabilities using technology [3]. AAL can improve the quality of life by reducing the
need for caretakers or personal nursing services, or the transfer to nursing homes. In this
context, there are two goals: a social advantage (a better quality of life) and an economic
advantage (a cost reduction for society and public health systems) [4, 5].
Most efforts towards building Ambient Assisted Living systems are based on
developing pervasive devices and use Ambient Intelligence to integrate these devices
into a safe environment [6]. However, a limitation of technology is that it
cannot fully replace the power of human beings and the importance of social connec-
tions. In this respect, the usage of advanced information and communication tech-
nology (social networks) could be helpful in connecting people and organizing
community activities.
It is important for AAL systems to ensure a high quality of service. Essential
requirements of AAL systems are usability, reliability, data accuracy, cost, security,
and privacy. According to [7], to achieve these requirements it is important to involve
citizens, caregivers, the healthcare IT industry, researchers, and governmental organiza-
tions in the development cycle of AAL systems, so that end-users can benefit more
from the collaborative efforts.
The electronic health record (EHR) is a collection of electronic health information
about individual patients and populations, operated by institutions (medical centers) [8].
It is a mechanism for integrating health care information currently collected in both
paper and electronic medical records (EMR) for improving quality of care. A personal
health record (PHR) is a record where health data and information related to the care of
a patient is maintained by the patient [9]. PHR provides a complete and accurate
summary of an individual’s medical history that is accessible online. One of the
advantages of AAL systems is integrating data from AAL systems and smart homes
with data from electronic health or patient records. Although still at an early stage,
aggregating data from different medical devices and integrating them with data in
health records enables a comprehensive view of health data [10]. Presenting these
health data can lead to more efficient and competent decisions by physicians, nurses,
patients, and informal caregivers.
AAL systems are based on the interoperability and integration of various medical
devices. Nevertheless, the lack of standards and specifications is one of the biggest
obstacles to their commercial market penetration. In this context, AAL systems
and platforms rely on different standards and specifications by various initiatives and
groups, such as: Health Level 7 (HL7) [11], supporting clinical practice and the
management, delivery, and evaluation of health services; Continua Health Alliance
[12] which produces industry standards and security for connected health technologies
such as smart phones, gateways and remote monitoring devices; ETSI [13] which
provides harmonized standards for radio & telecommunications terminal equipment;
AAL Europe [14] which is funding projects that involves small and medium enterprises
(SME), research bodies and user’s organizations (representing the older adults).
One major issue concerning AAL systems is the ethical problem due to the mul-
titude of heterogeneous personal information continuously collected by AAL systems
[15]. There is concern about possible negative consequences [16] such as:
– loneliness or isolation, resulting from the use of certain devices that replace human
caretakers, which may be the user’s only regular social contact;
– privacy issues: biometric monitoring and “smart home” systems collect personal
information;
– discrimination: wearable biometric monitors or mobility devices are highly visible
and can make a person’s disability very obvious.
These are the reasons why AAL systems need to be seen as tools for help and assistance
rather than as devices for controlling what people are doing.
AAL is seen as a promising alternative to the current care models and has consequently
attracted a lot of attention. According to [17], there are three categories of
Ambient Assisted Living interoperability services: (1) notification and alarming ser-
vices, (2) health services, and (3) voice and video communication services. However, we
find that systems for assisted living need to be more general and to support more services
in order to be helpful not only for elderly people and people with disabilities, but for all
people who want to live a healthy life in accordance with their everyday obligations.
The System for Assisted Living we present in this paper uses mobile, web, and
broadband technologies. Broadband mobile technology allows the electronic care
environment to move easily between locations, and Internet-based data storage allows
the location of support to change. The most important benefits of our proposed
system model are increased medical prevention, faster response times to
emergency calls for doctors, 24 h monitoring of the patients’ condition, the possibility of
patient notification in different scenarios, automatic transmission of the collected
biosignals (blood pressure, heart rate) to medical personnel, and increased flexibility in
collecting medical data. The proposed system model creates an opportunity for
increasing patient health care within their homes through 24 h monitoring on the one
hand, and for increasing the medical capacity of health care institutions on the other hand.
This results in reduced overall costs for patients and hospitals and improves the
patients’ quality of life.
2 Related Work
In the last several years, Ambient Assisted Living has been one of the most popular
research areas among scientists. Thus, many sensors, technologies, and systems have
been developed.
Ruiz-Zafra et al. [18] present the cloud-transparent m-health platform called
Zappa. Zappa is an extensible, scalable, and customizable cloud platform for the development
of eHealth/mHealth systems. Its main advantage is the ability to operate in the cloud.
By using cloud computing, open technologies (open-source software, open hardware, etc.),
and additional techniques, the platform provides uninterrupted monitoring with the goal of
obtaining information that can subsequently be analyzed by physicians for diag-
nosis. In order to show the applicability of the platform, the authors introduce two m-
health systems, Zappa App and Cloud Rehab, based on the Zappa platform.
In [4], Takacs et al. present a complex wireless and personalized AAL solution that
includes telemonitoring, health management, mental monitoring, mood assessment as
well as physical and relaxation exercises. Their approach is based on a novel com-
putational and communication platform called Virtual Human Interface (VHI), spe-
cifically designed to bridge the gap between people and computers by using virtual
reality and animation technologies. The main goal of the research is to create an open-
architecture and reconfigurable system which is as independent as possible from
individual manufacturers and wireless standards.
AlarmNet [19] is an assisted living and residential monitoring network for pervasive
adaptive healthcare in assisted living communities with residents or patients with
diverse needs. According to the authors (Wood et al.) the primary reason for developing
AlarmNet was to use environmental, physiological and activity data of assisted living
residents in order to improve their health outcomes. AlarmNet unifies and accommo-
dates heterogeneous devices in a common architecture that spans wearable body net-
works, emplaced wireless sensors, user interfaces and back-end processing elements.
Contributions and novelties of this work include an extensible heterogeneous network, novel context-aware protocols, and SenQ, a query protocol for online streaming.
Kleinberger et al. [5] present an approach to, and several evaluations of, emergency monitoring applications (research projects EMERGE and BelAmI). The main goal of EMERGE is to support elderly people with emergency monitoring and prevention by using ambient, unobtrusive sensors and reasoning about arising emergency situations. Experiments were performed in laboratory settings to evaluate the accuracy of recognizing Activities of Daily Living (ADL). The interpretation of the evaluation results has shown that ADLs can be measured accurately enough to detect behavior deviations. However, according to Kleinberger et al., to reach this objective it is very useful to include all stakeholders very early in the requirements analysis and development process for the prototypes, and especially in the setup of the experiments.
López de Ipiña et al. [20] present the CareTwitter AAL platform. They propose the adoption of passive RFID tags as tiny databases in which a person's log can be stored, so that other users with NFC devices can access and manipulate the data in them. The data is encoded in the resident's RFID tag, and such care logs are then transferred to the public micro-blogging service Twitter. Following a data-on-tag approach, the CareTwitter platform stores a log entry for every new care procedure applied on a resident's RFID wristband. CareTwitter keeps the data with the resident at all times, available in real time and without relying on wireless links. The experiments provided in [20] have shown that the storage capacity of either a 1 K (wristband) or a 4 K (watch) Mifare RFID tag is sufficient for storing the care logs of a whole day. The integration of CareTwitter with Twitter demonstrates the high potential of using interactions with everyday objects or people to automatically publish data onto the Internet, in this case the log of residents in a care center, so that their relatives and
friends can be kept up to date about them. The tweets published by CareTwitter are never made publicly available; only users authorized by the residents or their families can follow them.
In [21], an Internet of Things-based AAL architecture to support blood glucose management and insulin therapy is presented. This architecture offers a set of services for monitoring, interconnecting with the Diabetes Information System (a glycemic index database), and ubiquitous access to the information, based on the developed personal device (Movital), the AAL environment gateway (Monere), a web portal, and a management desktop application. An important aspect of the presented solution is that most of the measurements and interactions with the patient are done at home. This enhances self-monitoring blood glucose solutions and allows interaction with nurses and physicians through new technologies such as an RFID-based personal health card and the Web diabetes management portal. According to the authors (Jara et al.), the Internet of Things allows solutions to be defined closer to the patient, physicians, and nurses, which makes them easier to integrate and accept. The evaluation of the proposed architecture has shown that nurses and physicians are very interested in and open to these kinds of solutions, considering them very useful and suitable for inclusion in hospitals.
Mileo et al. [22] present a monitoring system, called SINDI, equipped with a pervasive sensor network and a non-monotonic reasoning engine. The proposed system gathers data about the user and his/her environment through a wireless sensor network. Combining different data sources, the system interprets the evolution of the patient's health state and predicts changes into risky states according to a graph-based computational model of medical knowledge and the clinical profile of the monitored patient. In this system, the results of context-aware interpretation of the gathered data are used to predict and explain possible evolutions of the patient's health state in terms of functional disabilities, dependency in performing daily activities, and risk assessment, as well as to identify correct interaction patterns. The advantage of the system is that it provides various outputs: suggestions (according to medical practice and the results of the prediction reasoning task), alerts (when the system identifies behaviors or situations that are potentially dangerous for the patient), alarms (when specific environmental or clinical conditions are detected), notifications (when the system receives new input or terminates the inference process), and reminders (according to an agenda).
We should also mention some recently developed assisted living technologies for commercial use.
BeClose [23] is an affordable, easy-to-use home monitoring and caregiving technology for the elderly. The system indicates that everything is okay and provides independence and peace of mind for the user. It consists of a base station and a variety of small sensors placed throughout the user's home. These electronic devices are designed to work together to make sure the user is up and about each day. If something is out of the ordinary, the system alerts the user's family members and caregivers.
Basis introduced Body IQ [24] in fall 2013. It is a proprietary technology that automatically recognizes and displays users' activities, such as walking, running, and biking, as well as sleeping. Body IQ ensures that users get credit for their efforts in real time,
including caloric burn, with no need to push buttons, switch modes, or tag activities. It also automatically determines when users fall asleep and when they wake up.
Apple is said to be working on a wrist-worn device that would go far beyond telling time, allowing users to measure and track health and fitness data with a new wearable device, the "iWatch". Apple's iWatch [25] is expected to be able to operate independently of an iPhone or iPad. Reports have suggested that the iWatch would debut in fall 2014.
Based on the preceding brief review of the literature, as well as other works not mentioned in this paper, there is a need to propose a general architecture for a system for assisted living. Such a system should be of help not only for elderly people and people with disabilities, but for all people who want to lead a healthy life.
Body sensor networks (BSNs) are a type of wireless sensor network (WSN) composed of sensors usually attached to, or in some cases implanted inside, the human body. The main purpose of a BSN is to measure physiological signals and to provide information about human behavior. Therefore, the number, type, and characteristics of the sensors may vary and mainly depend on the application and system infrastructure [26]. Two types of sensors can be applied: those that collect continuous time-varying signals, such as accelerometers, pedometers, gyroscopes, electroencephalography (EEG) sensors, electromyography (EMG) sensors, visual sensors, and auditory sensors; and those that collect discrete time-varying signals, such as glucose sensors, temperature sensors, humidity sensors, and blood pressure sensors. State-of-the-art sensors nowadays have a highly compact form factor and thus high wearability and high biocompatibility. Wireless communication technologies such as Bluetooth or ZigBee, radio frequency identification (RFID), and Ultra-Wideband (UWB) can be employed to transmit the collected data.
Environmental sensors read the values of parameters in the user's environment. Moreover, sensor technology can be applied to collect environmental information regarding the location of people and objects, information about their interaction, etc. Additionally, by applying data fusion techniques to the data gathered from both BSNs and environmental sensors, reliable assessments of a person's behavior and the activities performed can be conducted. From a sensor technology perspective, AAL applications face various challenges, one of the most important of which concerns the quality of the collected data, which is the basis for further behavioral analysis [27].
Data collected from the sensors, along with data from clinical centers, medical databases, and social networks, is sent for further processing by assisted healthcare algorithms. The processed data is then sent back to the end users in order to enable the desired services. The logical architecture of the System for Assisted Living is shown in Fig. 1.
Different processes of different users can be integrated in the service layer. This allows non-medical processes, medical processes, care processes, and communications within social networks to be incorporated into the architecture of the System for Assisted Living.
The whole interaction in the proposed system is request/reply based. If additional information is needed, a new request is raised. We should emphasize that the information generated by the social networks is considered reliable, whereas the information from personal profiles (age, weight, height, diagnosis entered by the end user) is considered unreliable. The latter should be confirmed against the medical records from clinical centers before being used by the corresponding algorithms implemented in the social networks. In this way, the tips (recommendations) generated by the social networks remain reliable and valid.
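As a minimal illustration of this confirmation step, consider the following Python sketch; the field names and the schema are hypothetical, since the paper does not fix a data format:

def confirm_profile(profile, clinical_record):
    """Mark each user-entered field as reliable only if the clinical
    record (assumed authoritative) confirms it."""
    confirmed = {}
    for field, value in profile.items():
        official = clinical_record.get(field)
        confirmed[field] = {"value": value,
                            "reliable": official is not None and official == value}
    return confirmed

# Example: the self-reported weight disagrees with the clinical record,
# so any tip derived from it should be treated as unreliable.
profile = {"age": 67, "weight_kg": 80, "diagnosis": "hypertension"}
record = {"age": 67, "weight_kg": 92, "diagnosis": "hypertension"}
print(confirm_profile(profile, record))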
– Medical databases - collect data from clinical centers and other databases, and perform tests and experiments. They process and analyze the collected data and, based on their research, draw conclusions, recommendations, and suggestions for diagnosis, therapy, and activities.
– Government organizations - perform specific analyses of system data and give recommendations for national actions, programs, and strategies.
– Policy makers - can get filtered system data, perform specific analyses of it, and give recommendations for non-government organizations, including programs and strategies.
– Social networks - allow direct communication between users, sharing of their results, and exchange of their experience. Social networks can send tips to the user based on the user's health condition, prior knowledge derived from the user's health history and physical activities, and knowledge derived from the medical histories and physical activities of users with similar characteristics.
– Services for environmental data - supply data about weather conditions (air temperature, atmospheric pressure, air humidity, wind speed) in the user's environment.
Data is collected from body sensor networks; the users' PCs, laptops, tablets, smartphones, or smart TVs; environmental sensors; clinical centers; medical databases; and social networks.
At the integration level, the data collected from different sources is adjusted to the standards and input formats of the assisted healthcare algorithms.
The collected data is processed according to the need or demand:
– to generate recommendations for users;
– for clinical centers, for monitoring the user's condition or for clinical purposes;
– for analyses by medical databases;
– for the purposes of governmental organizations and policy makers;
– to generate knowledge for social networks.
The processed information is sent to users (on their PC, laptop, tablet, smartphone, or smart TV), clinical centers, medical databases, governmental organizations, policy makers, or social networks, as sketched below.
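A minimal sketch of this demand-driven routing, with purely hypothetical consumer names and placeholder processing functions standing in for the assisted healthcare algorithms, could look as follows:

CONSUMERS = {
    "user": lambda data: f"recommendation based on {len(data)} records",
    "clinical_center": lambda data: f"monitoring report ({len(data)} records)",
    "medical_database": lambda data: f"dataset for analysis ({len(data)} records)",
    "policy_maker": lambda data: f"summary statistics ({len(data)} records)",
    "social_network": lambda data: f"derived knowledge ({len(data)} records)",
}

def process_and_route(collected, demand):
    """Process the collected data according to the demanding consumer
    and return the result addressed to that consumer."""
    handler = CONSUMERS.get(demand)
    if handler is None:
        raise ValueError(f"unknown consumer: {demand}")
    return handler(collected)

print(process_and_route([{"hr": 72}, {"hr": 90}], "clinical_center"))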
Medical personnel can remotely monitor a user's medical condition by reviewing the data arriving from the user's personal or mobile device. In this way, medical personnel can quickly respond to the user by suggesting the most suitable therapy and when to receive it, focusing on activities that are necessary for rehabilitation and maintenance of the user's health, and sending him/her (on his/her personal or mobile device) various tips and suggestions for improving his/her health.
The conclusions drawn from research data while exploring medical databases can be routed back to the clinical centers. These data can be used as additional knowledge in the individual analyses of each user's condition. Clinical centers can exchange data and information with the social networks and thus reach a larger group of users who can share the research, recommendations, and suggestions of the medical personnel.
Social networks allow direct communication between users and the sharing of their data. At the same time, an individual user's data can be compared with average data obtained using different collaborative filtering techniques. The social networks can learn from recommendations made by medical personnel and generate notifications and recommendations based on the most successful scenarios. These portals can also provide an interface to, and use data from, a variety of medical databases and environmental databases (temperature, wind speed, humidity).
The complex structure of data from the social networks along with the data arriving
from different clinical centers can be used by different medical databases for further
analysis and research.
Governmental organizations and policy makers can obtain data from social networks, clinical centers, and medical databases, perform specific analyses on it, and give recommendations for national action by governments and non-government organizations, including programs and strategies.
The key stakeholders of the proposed System for Assisted Living are elderly people and people with disabilities who need monitoring of their health condition, reminders for everyday obligations, and assistance in everyday routines and social inclusion. Families of the elderly and of people with disabilities, who need professionals to take care of their family members and want to remotely monitor their health status, can also use the system.
People who want to lead a healthy life can use the system as well; with it, they can monitor their own health condition and physical activities. In addition, clinical centers can remotely monitor their patients and gather all kinds of health data from different patients for further analysis. Governmental organizations, non-governmental organizations, and policy makers can obtain summarized health information from the proposed system that can help in generating health programs, strategies, and policies. Industry, especially the medical and pharmaceutical industries, can benefit from the proposed system by obtaining health information that can help in developing new devices, applications, and therapies that are needed.
The general use case scenarios for the System for Assisted Living are shown in Fig. 3.
Scenario A:
The user switches on the application on his phone and starts his physical activities. The application reads the data (blood pressure, pulse, sugar level, and type of activity, length of path, time interval). If an irregularity occurs while reading the data, such as the patient's blood pressure being considerably higher than normal for the activity being performed, the application sends a signal and a message with those data to the medical center. The application signals to the user that there is some irregularity. Medical personnel review the submitted data and the patient's previous medical records. Based on the patient's diagnosis, the treatment received, and the activity currently carried out, along with the medical data received from the application, a recommendation is issued back to the patient's application: to temporarily stop the activity and take the appropriate medicine (if it has not already been taken), or to reduce the pace of the activity itself. The application signals to the user that a message from the medical center has arrived. The user applies the recommendation from the medical center.
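The irregularity check in this scenario can be sketched in a few lines of Python; the threshold and the field names are illustrative assumptions, not values given in the paper:

SYSTOLIC_LIMIT = 160  # assumed per-activity upper bound (mmHg)

def send_to_medical_center(reading):
    # Placeholder for the signal and message transmission described above.
    print("alert sent to medical center:", reading)

def check_reading(reading):
    """Check a reading taken during physical activity and decide whether
    the medical center must be alerted."""
    if reading["systolic_bp"] > SYSTOLIC_LIMIT:
        send_to_medical_center(reading)
        return "irregularity: user signalled, awaiting recommendation"
    return "ok: activity may continue"

print(check_reading({"systolic_bp": 175, "pulse": 130, "activity": "running"}))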
Scenario B:
Medical personnel review the patient's data (diagnosis, therapy received, activities done) and conclude that the patient did not receive his regular therapy, does not perform the recommended actions, or is excessively active. The medical center sends an urgent message to the patient to undergo an emergency medical examination.
Scenario C:
The user switches on the application on his phone and connects to the social network. He enters his personal data and the therapy he has received, and updates his Personal Health Record (PHR). The user can share his PHR with other users of the network. Additionally, if the user suspects he has a certain heart condition, he can enter that he has heart trouble. Based on his PHR and the results of his physical activities (compared with the average results of other users with a similar problem), the social network suggests whether he may have such a condition and advises him to talk to his physician.
Scenario D:
A medical database sends a request to the clinical centers to send data about their users for a given period, and the clinical centers send their data. The medical database sends a request to the social networks to send data about their users for a given period of time, and the social networks send their data. The medical database analyzes, compares, and investigates the collected data together with its own available data. It draws conclusions, recommendations, and suggestions from the analyzed data and sends the latest information to the clinical centers about diagnostics, recommended therapies, and activities for patients with certain diagnoses, as well as suggestions for appropriately diagnosed patients.
Scenario E:
A policy maker sends a request to the social networks to send data, for a given period of time, about their users diagnosed with heart disease. The social networks use collaborative filtering to extract those data and send them. The policy maker analyzes the data, gives recommendations, and develops a program and strategy for the prevention of heart disease.
5 Conclusions
This paper presents a general model of an assisted living system architecture. The main objectives of the proposed System for Assisted Living are:
(1) To help its users actively participate in their health care and prevention by providing: monitoring of users' health parameters and physical activities (condition); 24-h medical monitoring; recommendations with tips on how to improve their health; the opportunity for health care within users' homes; and increased capacity of health institutions, resulting in a reduction of overall costs for consumers and healthcare institutions.
(2) To align the solution with the current state of technology.
(3) To collect different types of data and combine them into complex structures of health data. The survey, analysis, and research of such structures make it possible to understand the impact of the applied therapy, physical activity, time parameters, and other factors on the development of the user's health condition. Such analysis can be further used by all stakeholders for diagnosis, treatment, therapy, and prevention.
The presented architecture gathers the common features of assisted living systems and outlines possibilities for various assisted living system deployments by presenting use case scenarios derived from the proposed architecture.
Acknowledgement. The authors would like to acknowledge the contribution of the COST Action IC1303 - AAPELE, Architectures, Algorithms and Platforms for Enhanced Living Environments.
References
1. Korhonen, I., Parkka, J., Van Gils, M.: Health monitoring in the home of the future. IEEE
Eng. Med. Biol. 22(3), 66–73 (2003)
2. Cocosila, M., Archer, N.: Adoption of mobile ict for health promotion: an empirical
investigation. Electron. Markets 20(3–4), 241–250 (2010)
3. Cardinaux, F., Bhowmik, D., Abhayaratne, C., Hawley, M.S.: Video based technology for
ambient assisted living: A review of the literature. J. Ambient Intell. Smart Environ. 3(3),
253–269 (2011)
4. Takács, B., Hanák, D.: A mobile system for assisted living with ambient facial interfaces.
Int. J. Comput. Sci. Inf. Syst. 2(2), 33–50 (2007)
5. Kleinberger, T., Jedlitschka, A., Storf, H., Steinbach-Nordmann, S., Prueckner, S.: An
approach to and evaluations of assisted living systems using ambient intelligence for
emergency monitoring and prevention. In: Stephanidis, C. (ed.) UAHCI 2009, Part II.
LNCS, vol. 5615, pp. 199–208. Springer, Heidelberg (2009)
6. Sun, H., De Florio, V., Gui, N., Blondia, C.: Promises and challenges of ambient assisted
living Systems. In: Proceedings of the 6th International Conference on Information
Technology: New Generations, Las Vegas NV, 27–29 April 2009, pp. 1201–1207 (2009)
7. Memon, M., Wagner, S.R., Pedersen, C.F., Beevi, F.H.A., Hansen, F.O.: Ambient assisted
living healthcare frameworks, platforms, standards, and quality attributes. Sensors 14, 4312–
4341 (2014)
8. Gunter, T.D., Terry, N.P.: The emergence of national electronic health record architectures in the
United States and Australia: Models, costs, and questions. J. Med. Internet Res. 7(1), e3 (2005)
9. Tang, P., Ash, J., Bates, D., Overhage, J., Sands, D.: Personal health records: definitions,
benefits, and strategies for overcoming barriers to adoption. JAMIA 13(2), 121–126 (2006)
10. Knaup, P., Schöpe, L.: Using data from ambient assisted living and smart homes in electronic health records. Methods Inf. Med. 53, 149–151 (2014)
11. http://www.hl7.org. Accessed 06 August 2014
12. http://www.continuaalliance.org. Accessed 06 August 2014
13. http://www.etsi.org/standards. Accessed 06 August 2014
14. http://www.aal-europe.eu. Accessed 06 August 2014
15. Viron, G., Sixsmith, A.: Toward information systems for ambient assisted living. In: Proceedings of the 6th International Conference of the International Society for Gerontechnology, Pisa, Tuscany, Italy, 4–7 June 2008
16. Hill, C., Grant, R., Yeung, I.: Ambient assisted living technology. An interactive qualifying project report submitted to the Faculty of Worcester Polytechnic Institute (2013)
17. Mikalsen, M., Hanke, S., Fuxreiter, T., Walderhaug, S., Wienhofen, L.: Interoperability services in the MPOWER ambient assisted living platform. In: Medical Informatics Europe (MIE) Conference, Sarajevo, 30 August–2 September 2009
18. Ruiz-Zafra, Á., Benghazi, K., Noguera, M., Garrido, J.L.: Zappa: An open mobile platform
to build cloud-based m-health systems. In: van Berlo, A., Hallenborg, K., Rodríguez, J.M.
C., Tapia, D.I., Novais, P. (eds.) Ambient Intelligence - Software and Applications. AISC,
vol. 219, pp. 87–94. Springer, Heidelberg (2013)
19. Wood, A., Stankovic, J., Virone, G., Selavo, L., He, Z., Cao, Q., Doan, T., Wu, Y., Fang, L.,
Stoleru, R.: Context-aware wireless sensor networks for assisted living and residential
monitoring. IEEE Netw. 22(4), 26–33 (2008)
20. López-de-Ipiña, D., Díaz-de-Sarralde, I., García-Zubia, J.: An ambient assisted living
platform integrating RFID data-on-tag care annotations and twitter. J. Univers. Comput. Sci.
16(12), 1521–1538 (2010)
21. Jara, A.J., Zamora, M.A., Skarmeta, A.F.G.: An internet of things–based personal device for
diabetes therapy management in ambient assisted living (AAL). Pers. Ubiquit. Comput. 15, 431–
440 (2011)
22. Mileo, A., Merico, D., Bisiani, R.: Support for context-aware monitoring in home
healthcare. J. Ambient Intell. Smart Environ. 2(1), 49–66 (2010)
23. http://www.assistedlivingtechnologies.com/remote-monitoring-elderly/11-beclose.html.
Accessed 28 June 2014
24. http://www.mybasis.com/blog/2013/11/body-iq-intelligence-the-most-advanced-way-to-
recognize-activity-sleep-and-caloric-burn/. Accessed 28 June 2014
25. http://appleinsider.com/futures/iwatch. Accessed 28 June 2014
26. Liolios, C., Doukas, C., Fourlas, G., Maglogiannis, I.: An overview of body sensor networks
in enabling pervasive healthcare and assistive environments. In: Proceedings of the 3rd
International Conference on PErvasive Technologies Related to Assistive Environments,
Samos, Greece, 23–25 June 2010
27. Nugent, C.D., Galway, L., Chen, L., Donnelly, M.P., McClean, S.I., Zhang, S., Scotney, B.
W., Parr, G.: Managing sensor data in ambient assisted living. J. Comput. Sci. Eng. 5(3),
237–245 (2011)
28. Gama, O., Carvalho, P., Alfonso, J.A., Mendes, P.M.: Quality of service support in wireless
sensor networks for emergency healthcare services. In: Proceedings of the 30th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society,
pp. 1296–1299. IEEE Computer Society (2008)
Continuous Human Action Recognition
in Ambient Assisted Living Scenarios
1 Introduction
In human action recognition (HAR), actions like falling, walking, sitting and
bending are recognised. Great advances have been made in order to improve
the recognition rate, support multiple views and view-invariant recognition [2,3]
as well as real-time performance [4,5]. However, it can be observed that HAR has been addressed by classifying short video sequences that contain single actions. Therefore, two strong assumptions have been made: (1) segmented video sequences which each contain only a single action are provided, and (2) all the video sequences necessarily match one of the learnt action classes. While these assumptions are commonly made in the state of the art, and most of the datasets provide such data, they do not hold true in practical situations such as AAL scenarios, nor in human-computer interaction, gaming, or video surveillance. In people's homes, cameras will provide a continuous video stream
which can contain actions at any moment. This leads to continuous human action
recognition (CHAR). In other words, an unsegmented video stream has to be
analysed in order to detect actions at any point. Another restriction that comes along with dealing with the raw video stream of the cameras is that it may not actually record the expected actions: the person could be performing an unknown action, or nothing at all. Therefore, the proposed system needs to be robust enough to discard unknown actions that would otherwise result in misclassifications.
In this paper, continuous human action recognition (CHAR) is addressed in
order to overcome the aforementioned assumptions. The concept of action zones
is introduced and a novel method is proposed to detect the most discriminative
segments of action sequences. For continuous recognition, a method based on
a sliding and growing window technique is presented. Finally, to perform con-
tinuous evaluation considering specific constraints of AAL scenarios, a suitable
evaluation scheme based on segment analysis and F1 score is proposed. Experi-
mental results on two publicly available datasets are provided.
2 Related Work
are tested for the temporal segmentation. The first proposal handles alignment
of the optical flow direction by means of dynamic programming. In the second, assuming that boundaries are characterised by zero velocity, movement begin-end detection is performed with a boosting-based classifier. The first
option performed better, since it does not classify specific temporal moments,
but aligns a globally optimal segmentation taking into account movement direc-
tion. In [9], start and end key frames of actions are identified. Segmentation
is performed based on the posterior probability of model matching considering
recognition rounds. Depending on the accumulated probability, rounds are ended
if a threshold is reached. Adjacent rounds classified as the same action classes
are connected into a single segment. Lu et al. deal with temporal segmenta-
tion of successive actions in [10]. During the learning, only a few characteristic
frames are selected based on change, which leads to an outstanding temporal
performance of the recognition. Likelihood of action segments is computed con-
sidering pair-wise representations of characteristic frames. Although good results
are obtained, no further details are provided on how an actual continuous video stream would be handled.
A very popular technique in video and audio processing is the sliding window approach. Sliding windows make it possible to analyse different overlapping segments of the stream in order to isolate a region of interest and then perform classification by comparing the window to a set of training templates. If a variable size is also considered, both window position and size change dynamically so that all necessary locations and scales are taken into account. Some works have applied the sliding window technique to CHAR [8,11,12]. In [13], a sliding window is employed to
accumulate and smooth the frame-wise predictions of a frame-based low-latency
recognition. Low-latency CHAR is also considered in [14], where so-called action
points are proposed as natural temporal anchors. These are especially useful for
gaming. Two approaches are proposed. The first relies on a continuous observa-
tion hidden Markov model (HMM) with firing states that detect action points.
The second employs a direct classification based on random forest classifiers and a sliding window. In conclusion, by means of sliding window techniques,
the temporal segmentation is simplified, since no specific boundaries have to
be detected. However, due to its computational cost it may only be used if the
applied segment analysis can be performed very efficiently.
As previously mentioned, this work builds upon prior contributions in which HAR has been successfully performed on action sequences that were segmented beforehand. Since this work extends those contributions to support continuous recognition, this section provides a brief summary of the related previous publications.
For pose representation a silhouette-based approach has been chosen due to
its rich spatial information and low computational requirements. More specifi-
cally, a feature representation based on the distance between the contour points
Based on Definition 4.1, for instance, the fall action contains an action zone
corresponding to the segment from where the body is partially bent, until it is
completely collapsed. In other words, the part where the person is standing still
is ignored as well as the part where the person is lying on the floor, since these are
not discriminative with respect to other actions. In this way, the most relevant
segments can be identified in order to ease the differentiation between actions.
Furthermore, action zones are shorter than the original sequences. For this rea-
son, the matching time will be significantly reduced. Since the underlying HAR
method also presents a very low computational cost, a sliding window approach
may be employed without prohibitively reducing the temporal performance.
Initially, the same learning is performed as detailed in Sect. 3. Since seg-
mented sequences are still needed for the learning process, these can easily be
obtained relying on the frame-wise ground truth and discarding the segments
where no action is performed. Action zones may be located at different parts of
the actions depending on the type of action and how the action ground truth has
been labelled. However, based on the provided definition, action zones can be
detected automatically by analysing the transition of key poses. For this purpose,
we first compute the discrimination value $w_{kp}$ of each key pose. All available pose representations are matched with their nearest neighbour among the bag of key poses, and the ratio of within-class matches is obtained ($w_{kp} = \frac{matches_{kp}}{assignments_{kp}}$).
Therefore, this value indicates how valuable a key pose is for distinguishing action
classes. In this way, based on the transition of key poses and their discriminative
value, our action zones, i.e. the most discriminative segments, can be detected.
Specifically, for each training sequence of action class a and a specific tem-
poral instant t, the following steps are taken for the corresponding frame:
1. The feature representation V̄ (t) of the current frame is matched with the
key poses of the bag-of-key-poses model. For each action class a, the nearest
neighbour key pose $kp_a(t)$ is obtained.
2. For the A action classes, the raw class evidence values $H_{raw_1}(t), H_{raw_2}(t), \ldots, H_{raw_A}(t)$ are computed based on the ratio between the discrimination value $w_{kp_a(t)}$ and the distance $dist_{kp_a(t)}$, where $dist_{kp_a(t)}$ denotes the Euclidean distance between the pose representation and the matched key pose $kp_a(t)$. Hence, the discrimination value is taken into account depending on how well the key pose defines the current pose.
H_{raw_a}(t) = \frac{w_{kp_a(t)}}{dist_{kp_a(t)}}, \qquad \forall a \in [1 \ldots A] \qquad (1)
5. Attenuating the resulting value, the final class evidence $H(t)$ is obtained.
Figure 1 shows the H(t) evidence values that have been obtained over the
course of a bend action. In comparison to the raw values, here outliers have
been filtered and the differences between classes have become more pronounced.
As it can be seen, the evidence of the bend class is significantly higher than
the others in the central part of the sequence. This is due to the fact that
the person is initially standing still. He or she then bends down and, finally,
returns to the initial position. The segment that corresponds to the poses in
Fig. 1. Evidence values of each action class before and after processing are shown for
a bend sequence of the Weizmann dataset.
which the person is bent down is the most discriminative one. The poses of this
segment match with the most discriminative key poses of the bend action class,
whereas the ratio between discrimination value and distance is lower for the other
classes. For this reason, action zones can be detected by defining thresholds $HT_1, HT_2, \ldots, HT_A$ that have to be reached by the class evidence values of these segments. Specifically, an action zone is collected from the frame where

H_{action}(t) > H_{median}(t) + HT_{action}, \qquad (5)
where action corresponds to the action class of the current sequence and $H_{median}(t)$ indicates the median value of $H_1(t), H_2(t), \ldots, H_A(t)$. An action zone ends when this condition ceases to be met. The median value is employed because the expected peak of $H_{action}(t)$ would influence the average. Moreover,
this approach also works if action segments present a high evidence value for
more than one action class, which may happen for very similar actions. A second
example is shown in Fig. 2, where the class evidence values that have been
obtained for the cyclic jumping jack action are detailed. Several short action zones can be found by choosing an appropriate threshold $HT_a$. It can also be
seen that the peaks correspond to the discriminative segments in which the
limbs are outstretched.
Fig. 2. Evidence values H(t) of each action class and the corresponding silhouettes of
one of the peaks of evidence are shown for a jumping jack sequence of the Weizmann
dataset.
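Putting Eqs. (1) and (5) together, the detection of action zones can be sketched in Python as follows; the smoothing steps omitted from this excerpt are skipped, and all numeric values are invented for illustration:

import statistics

def raw_evidence(w, dist):
    """Eq. (1): raw class evidence as the discrimination value over the
    distance to the matched key pose, per action class."""
    return {a: w[a] / max(dist[a], 1e-9) for a in w}

def detect_action_zone(H, action, HT):
    """Eq. (5): frames where the evidence of the sequence's own class
    exceeds the per-frame median evidence plus the class threshold.
    H maps each class to a list of (already smoothed) evidence values."""
    zone = []
    for t in range(len(H[action])):
        median_t = statistics.median(H[a][t] for a in H)
        if H[action][t] > median_t + HT:
            zone.append(t)
    return zone

# Toy example with three classes over six frames.
H = {"bend": [0.2, 0.9, 1.4, 1.3, 0.8, 0.2],
     "walk": [0.3, 0.3, 0.2, 0.3, 0.3, 0.3],
     "jump": [0.2, 0.2, 0.3, 0.2, 0.2, 0.2]}
print(detect_action_zone(H, "bend", HT=0.4))  # -> [1, 2, 3, 4]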
5 Continuous Recognition
In this proposal, continuous human action recognition is performed by detect-
ing and classifying action zones. For the continuous recognition of the incoming
multi-view data, a sliding window technique is employed. More specifically, a
sliding and growing window is used to process the continuous stream at different
overlapping locations and scales. At this point, a null class has to be considered
in order to discard unknown actions and avoid false positives. This class corresponds to all the behaviours that may be observed but have not been modelled during the learning.
Algorithm 1 details the process: The sliding and growing window grows δ
frames in each iteration and slides γ frames if the window has reached its maximal
length lengthmax . If at least lengthmin frames have been collected, the segment
of the video stream (or video streams if available) S that corresponds to the
window is compared to the known action zones. The best match is obtained
by matching the segments of key poses using DTW. Then, a threshold value $DT_a$ is taken into account in order to trigger the recognition. This value $DT_a$ indicates the highest allowed distance on a per-frame basis. In this way, only segments which match well enough with an action zone are classified. The unrecognised frames are eventually discarded and considered to belong to the null class.
6 Experimentation
6.1 Parametrisation
Special consideration has been given to the parameters $HT_1, HT_2, \ldots, HT_A$ and $DT_1, DT_2, \ldots, DT_A$. The former define the threshold that has to be surpassed by the class evidence $H_{action}(t)$ in comparison to the $H_{median}(t)$ value. Different values are admitted for each action class, since the class evidence behaves differently for each type of action. In the case of the second set of parameters, each action class is considered to require a specific similarity between sequence segments and action zones in order to confirm the match as a recognition and avoid false positives for 'poor matches'. This leads to two sets of A parameters that are difficult to establish empirically, as exhaustive tests are unaffordable. Among the possible search heuristics, evolutionary algorithms stand out, since they are well suited to scenarios where the shape of the solution space is unknown, which hinders the choice of specialized algorithms. They can also deal with a large number of parameters in a moderate run time. Moreover, a coevolutionary approach can take the intrinsic relationship between our two parameter sets into account. For this reason, a technique based on the cooperative coevolutionary algorithm from [18] has been employed for parameter selection. By means of this method, the best performing combination of HT and DT values can be found.
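A heavily simplified cooperative-coevolution sketch is given below; it is not the exact algorithm of [18], only an illustration of scoring each HT individual against the current best DT individual and vice versa, with a dummy fitness standing in for a full CHAR evaluation run:

import random

def coevolve(A, fitness, pop=10, gens=30, sigma=0.1):
    """Evolve one population of HT vectors and one of DT vectors, each of
    length A; an individual is scored by pairing it with the current best
    of the other population via fitness(HT, DT)."""
    rand_vec = lambda: [random.random() for _ in range(A)]
    pops = {"HT": [rand_vec() for _ in range(pop)],
            "DT": [rand_vec() for _ in range(pop)]}
    best = {"HT": pops["HT"][0], "DT": pops["DT"][0]}
    for _ in range(gens):
        for name in ("HT", "DT"):
            other = best["DT" if name == "HT" else "HT"]
            score = (lambda ind: fitness(ind, other)) if name == "HT" \
                    else (lambda ind: fitness(other, ind))
            pops[name].sort(key=score, reverse=True)
            best[name] = pops[name][0]
            half = pop // 2  # replace the worst half with mutated elites
            pops[name][half:] = [[max(0.0, g + random.gauss(0, sigma))
                                  for g in parent]
                                 for parent in pops[name][:half]]
    return best["HT"], best["DT"]

dummy_fitness = lambda HT, DT: -sum((h - 0.3) ** 2 for h in HT) \
                               - sum((d - 0.7) ** 2 for d in DT)
HT, DT = coevolve(A=5, fitness=dummy_fitness)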
start = 0
length = 0
repeat
    -- Sliding and growing window --
    length = length + δ
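Since Algorithm 1 is only partially reproduced in this excerpt, the following Python sketch reconstructs the described behaviour under stated assumptions: the DTW routine operates on 1-D features instead of key-pose sequences, and restarting the window after a recognition is our own simplification:

def dtw_distance(a, b):
    """Plain dynamic time warping; the real system compares sequences of
    matched key poses instead of raw 1-D features."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

def continuous_recognition(stream, zones, DT, lmin, lmax, delta, gamma):
    """Sliding and growing window: grow by delta frames per iteration,
    slide by gamma frames once lmax is reached, and classify a segment
    only if its per-frame DTW distance to some action zone is below the
    class threshold DT[a]; everything else falls into the null class."""
    start, length, labels = 0, 0, []
    while start + length < len(stream):
        length += delta
        if length >= lmax:
            start, length = start + gamma, lmax
        if length < lmin:
            continue
        segment = stream[start:start + length]
        best = min(((dtw_distance(segment, z) / len(segment), a)
                    for a, zs in zones.items() for z in zs), default=None)
        if best is not None and best[0] <= DT[best[1]]:
            labels.append((start, start + length, best[1]))
            start, length = start + length, 0  # restart after a recognition
    return labels

zones = {"bend": [[0.1, 0.8, 0.8, 0.1]], "walk": [[0.5, 0.5, 0.5, 0.5]]}
stream = [0.0, 0.1, 0.8, 0.8, 0.1, 0.0, 0.0]
print(continuous_recognition(stream, zones, {"bend": 0.2, "walk": 0.2},
                             lmin=3, lmax=6, delta=1, gamma=2))  # [(0, 4, 'bend')]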
In video surveillance, the fact that the action happened at all is more relevant (e.g. punching). In AAL, it is especially important not to miss any actions, because this could result in safety issues (e.g. falling down). A delay of a few seconds may be acceptable if it improves the recognition by avoiding false negatives. As a result, the applied evaluation scheme varies between authors.
A common option is to apply frame-by-frame evaluation as in [10], but the reliability of this approach is arguable due to the lack of correlation between frames and actions. It could happen that only the last few frames of an action are not recognised correctly. This would result in a high frame-by-frame recognition rate (e.g. 90 %), although one correct class label and one or more incorrect predictions have been returned by the system, meaning that 50 % or more of the returned labels were erroneous. For this reason, other levels
of evaluation have been proposed, such as event analysis, where only the activity
occurrence and order is considered, or the hybrid segment analysis [19]. In this
last approach, a segment is defined as “an interval of maximal duration in which
both the ground truth and the predicted activities are constant”. In this way,
despite the fact that segments may have different durations, alignment is given
since each ground truth or prediction change leads to a new unit of evaluation.
Fig. 3. This finite-state machine details the logic behaviour of the applied segment
analysis.
This last level of analysis has been employed in this work, as it provides
a clear way to align the recognitions with the ground truth and avoids the
disadvantages of the frame-based analysis. Figure 3 shows how the null class has
been considered in the segment analysis. As can be seen, only new recognitions (i.e. different from the last predicted action class) are taken into account for evaluation. Recognitions suppressed by the threshold are not reported, and their segments are considered to belong to the null class. In addition, recognitions are accepted up to a delay of τ frames after the ground truth indicates the end of the action. Note that this is only allowed if no prediction was given until that moment, i.e. the null class state was active from the start of the action until the delayed recognition occurred; otherwise, the action would already have been classified (correctly or wrongly).
In view of the multi-class classification that is performed, and since a null class now has to be contemplated, results are measured in terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). These values are accumulated along a cross-validation test. A leave-one-actor-out cross validation (LOAO) is proposed in which each sequence includes several continuously performed actions of an actor. In order to consider both precision and recall rates, the F1-measure is used as follows:
F_1 = 2 \times \frac{precision \times recall}{precision + recall} \qquad (6)

precision = \frac{TP}{TP + FP} \qquad (7)

recall = \frac{TP}{TP + FN} \qquad (8)
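As a small illustration of this bookkeeping (counting a wrongly labelled segment as both a false positive and a false negative is our assumption for the multi-class case), segment-level scoring can be accumulated as follows:

def evaluate_segments(segments):
    """Accumulate TP/FP/FN/TN over segment-level (ground truth, prediction)
    pairs, with None standing for the null class, and return the
    F1-measure of Eqs. (6)-(8)."""
    tp = fp = fn = tn = 0
    for truth, pred in segments:
        if truth is None and pred is None:
            tn += 1
        elif truth is None:
            fp += 1                  # action reported where none was performed
        elif pred is None:
            fn += 1                  # missed action: critical in AAL
        elif truth == pred:
            tp += 1
        else:
            fp += 1; fn += 1         # wrong label counted as FP and FN
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# One hit, one missed fall, one false alarm -> F1 = 0.5.
print(evaluate_segments([("bend", "bend"), ("fall", None), (None, "walk")]))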
6.3 Results
Our approach has been validated on the multi-view INRIA XMAS (IXMAS) [20]
dataset and the single-view Weizmann [21] dataset. The former provides contin-
uous multi-view sequences of different actions performed by the same actor,
whereas the latter provides segmented single-view sequences. In order to sup-
port continuous recognition, the sequences of the same actor are concatenated
into a single continuous sequence. Consequently, unnatural transitions are cre-
ated due to the gaps of information. Nevertheless, tests have been performed on
this dataset for illustrative purposes so that a comparison with other approaches
can be made.
With regard to the introduced parameters, the following empirically chosen values have been used during the experimentation:
Fig. 4. Evidence values H(t) of each action class and the detected action zone are
shown for a scratch head sequence of the IXMAS dataset.
Table 1. Obtained results applying CHAR and segment analysis evaluation over a
LOAO cross validation test. Results are detailed using the proposed approaches based
on action zones (1) or segmented sequences (2).
Table 1 shows the scores achieved by our approach relative to the ideal F1-measure value of 1.0. The IXMAS dataset presents several known difficulties, such as view invariance and noise, which explain the score difference. Furthermore, the segments labelled as null class in which 'other actions' are performed
can easily lead to an increase of false positives. In order to show the benefit
gained from the action zones approach (approach 1), tests have also been per-
formed using the entire segmented sequences instead (approach 2). In this way,
larger segments are considered by the sliding and growing window and these are
compared to the original action sequences provided by the ground truth. It can
be seen that the proposed continuous recognition based on action zones provides
a substantial performance increase and leads to higher scores in general.
Comparison with other state-of-the-art works is difficult in CHAR, due to
different evaluation schemes. In [10], frame analysis is employed and 81.0 % accu-
racy is reported on the IXMAS dataset. In the case of the Weizmann dataset,
for example in [9], CHAR is performed and a score of 97.8 % is reached. Seg-
ment analysis is employed in this case, although the rate of correctly classified
segments is computed based on a 60 % overlap with the ground truth.
The temporal performance has also been evaluated for this continuous
approach. While the sliding and growing window technique is computationally
demanding, this is offset by the proposed action zones. The short lengths of
both action zones and temporal windows make the comparisons between them
very efficient. Using a PC with an Intel Core 2 Duo CPU at 3.0 GHz and
Windows x64, a rate of 196 frames per second (FPS) has been measured on
the Weizmann dataset including all necessary processing stages.
In this work, a method for segmented human action recognition has been
extended to support continuous human action recognition. Improvements have
been made at the learning and recognition stages. The concept of action zones
has been introduced to define and automatically learn the most discriminative
segments of action performances. Relying on these action zones, recognition can
be carried out by finding the equivalent segments that clearly define the action
that is being performed. For this purpose, a sliding and growing window app-
roach has been employed. Finally, segment analysis is used introducing special
considerations for the specific AAL application of our work. Tests have been per-
formed relying on the whole segmented sequences or only on the action zones,
and significant differences can be seen. By means of action zones, higher accu-
racy scores are obtained. Real-time suitability of this continuous approach has
also been verified. This is indispensable for most of the possible applications,
and a necessary premise for online recognition.
In future work, further evaluation should be carried out to ease the comparison with other approaches. It could be useful to implement other state-of-the-art techniques and test them under the same conditions as our proposal. Furthermore, a consensus should be reached on appropriate evaluation schemes. It has also been observed that, regarding CHAR, there is a lack of suitable benchmarks including foreground segmentations or depth data. Therefore, new datasets should be created along with the corresponding evaluation schemes.
References
1. Cardinaux, F., Bhowmik, D., Abhayaratne, C., Hawley, M.S.: Video based tech-
nology for ambient assisted living: a review of the literature. J. Ambient Intell.
Smart Environ. 3(3), 253–269 (2011)
2. Poppe, R.: A survey on vision-based human action recognition. Image Vis. Comput.
28(6), 976–990 (2010)
3. Aggarwal, J., Ryoo, M.: Human activity analysis: a review. ACM Comput. Surv.
43(3), 16:1–16:43 (2011)
4. Shotton, J., Fitzgibbon, A.W., Cook, M., Sharp, T., Finocchio, M., Moore, R.,
Kipman, A., Blake, A.: Real-time human pose recognition in parts from single
depth images. In: IEEE Conference on Computer Vision and Pattern Recognition,
CVPR 2011, pp. 1297–1304 (2011)
Abstract. This paper describes a cloud-based architecture for processing data and providing services for smart living environments and support for assistive technologies. Based on scalable cloud technologies and optimized software architectures, it provides infrastructure for an extendible set of functionalities. The paper describes the core processing module along with several related proof-of-concept services. Several use case scenarios are presented, including a mobile voice navigation tool for the blind, a text-to-sign-language video sequencing tool for the deaf, an image processing tool for a smart home, etc. Details are given about the software development tools used and their integration into a functional multiplatform application. Guidelines for future work and extensions of the system are discussed.
1 Introduction
Cloud-based architectures deliver high-performance yet scalable computing environments and resource sharing, providing a powerful basis for expanding services and platforms for smart living environments and assistive technologies. Systems built on the main principles of cloud computing provide infrastructure for an extendible set of functionalities and services. This paper describes a cloud-based system for assistive technologies and smart living services. Descriptions are given of the core processing module along with several related proof-of-concept assisted living services, addressing security in the cloud as well as interoperability and interconnectivity between various devices and their interaction with the end users. The presented core module relies on the principles and standards of cloud computing, delivering better characteristics than similar solutions built on traditional principles. Several use case scenarios are presented in detail, covering the software development tools used and their integration into a functional multiplatform application. The described architecture is not limited to the concept subsystems described here. Various smart living room functionalities are planned and can be developed
on top of the backbone core module described. Broadening assistive living systems to encompass smart living-room features increases their viability and sustainability. The border between such systems is thin, and a universal approach brings efficiency and quality. Guidelines for future work and extensions are discussed.
This article is structured as follows: Sect. 2 presents related work. Section 3 describes the architecture of the Core cloud-based system and the interaction between its components; security issues are also discussed. Section 4 presents several composite modules and their interaction with the Core system. Section 5 concludes the paper.
2 Related Work
The digital revolution has dramatically increased the usage and functionality of consumer smart devices while simultaneously reducing their cost. Interoperability among IT-based services and wireless smart devices is an indispensable tool for assisted living as well. Numerous IT companies have contributed to the support and deployment of new smart environments, smart cloud-based services, and wireless devices, improving user convenience and quality of living.
Notable examples include the Electronic Cane [9] and the Navigation Tool for Blind
People based on a microcontroller with two vibrators and two ultrasonic sensors [10].
3 System Architecture
The proposed cloud-based system provides an integrated environment for the development of composite, user-driven Assistive and Home Enhancement Technology services. The system defines a protocol for interaction and communication between interconnected components and provides enhanced security, authentication, and accounting protocols, allowing managed Plug&Play functionality of various components and services.
The Core module, shown in Fig. 1, is fully developed on top of the Windows Azure platform [15]. The interconnectivity is based on the pure HTTP protocol [17], providing two-way communication. In addition, the system defines RESTful HTTP endpoints independent of consumer platforms, owing to the simple nature of the HTTP protocol and the JSON data format [18]. The components supported by the system are divided into three main categories based on their functionality: sensors, processors, and triggers. Due to the cloud-oriented architecture of the system, the client platforms pose no limitation on interconnectivity and interoperability; we have so far developed components/clients based on the Android, Windows, and Linux operating systems that interact with the cloud system correctly.
The system provides a Graphical Design Tool (GDT), implemented as a web application, for designing and modeling Assistive and Home Enhancement services and workflows. The GDT is a simple yet functional user interface in which the user can develop a model using drag-and-drop techniques, intuitively describing the rules (wires) for interaction between the components. In addition, multiple components are
supported for integration as sensors, processors, and triggers, including smartphones, tablets, personal computers, laptops, and game consoles. Each component used in a model has a unique identifier (UID) and must be defined as one of the three possible types: sensor, processor, or trigger.
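A hypothetical model definition, as the GDT might serialize it (the schema and all identifiers are illustrative, not the system's actual format), could look like this:

import json

model = {
    "model_id": "fall-alert-demo",
    "components": [
        {"uid": "c-001", "type": "sensor",    "platform": "android"},
        {"uid": "c-002", "type": "processor", "platform": "core"},
        {"uid": "c-003", "type": "trigger",   "platform": "windows"},
    ],
    "wires": [
        {"from": "c-001", "to": "c-002"},  # sensor output feeds the processor
        {"from": "c-002", "to": "c-003"},  # processor result fires the trigger
    ],
}
print(json.dumps(model, indent=2))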
between the components. Due to the flow-control nature of the TR module, the data is sent to either the Output or the Processor module, depending on the type of the component.
The Processor module consists of three sub-modules: Internal, Local, and External. When the data flow reaches the Processor module, it delegates the data to the appropriate sub-module, based on the type of component used as a processor within the parsed model. If the component in the active model is defined as an external processor, such as the Google Text to Speech API or the Google Translate API, the data is routed through the External sub-module. If the processor is the Core system itself, the data is processed by the Local sub-module. Likewise, if the processor is a connected component, the data is routed through the Internal sub-module. In each case, the sub-module returns the processed data to the Core system.
The message lifecycle in the Core system is shown in Fig. 2. Each HTTP request is handled by the Message Handler (MH) module. When the message has been properly deserialized from the JSON object, the request is authenticated and authorized by the AAA module. After successful authentication and authorization against the system, the AAA module loads the corresponding profile from the database. The ML module then loads the deployed model and parses the components, their types, and the rules for interaction. The TR module detects the component type originating the request. If the request is made by a sensor or trigger component, the TR module composes the response and sends it to the MH module. However, if the component is of a processor type, the request is passed to the Processor module, which resolves the type of the processor component and routes the request to the appropriate sub-module. When the corresponding sub-module finishes processing, it sends a response to the MH module, which then sends the response back to the component.
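The lifecycle can be restated schematically in Python; all data structures below are illustrative stubs, not the real MH, AAA, ML, TR, or Processor modules:

def handle_request(request, models, credentials):
    """Schematic message lifecycle: MH (parsing) -> AAA (authentication)
    -> ML (model loading) -> TR (type resolution) -> Processor routing."""
    message = request                              # MH: assume parsed JSON
    user = credentials.get(message["token"])       # AAA: authenticate request
    if user is None:
        return {"status": 401}
    model = models[user]                           # ML: load the deployed model
    ctype = model["components"][message["uid"]]    # TR: resolve component type
    if ctype in ("sensor", "trigger"):
        return {"status": 200, "echo": message["payload"]}
    # Processor component: choose the internal / local / external sub-module.
    sub = model.get("processor_kind", "local")
    return {"status": 200, "processed_by": sub, "result": message["payload"]}

models = {"user1": {"components": {"c-001": "sensor", "c-002": "processor"},
                    "processor_kind": "external"}}
credentials = {"tok-abc": "user1"}
print(handle_request({"token": "tok-abc", "uid": "c-002", "payload": 42},
                     models, credentials))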
Every client application deploys an SHA-1 certificate obtained from the Core service. The Core service verifies that each request originates from a client application matching one of the known certificate fingerprints, to ensure the integrity of clients and to prevent code injection.
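A minimal sketch of the fingerprint check follows; the whitelist and the certificate bytes are faked for illustration:

import hashlib

KNOWN_FINGERPRINTS = {"2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"}

def verify_client(cert_der):
    """Accept the request only if the SHA-1 fingerprint of the presented
    client certificate matches one registered with the Core service."""
    return hashlib.sha1(cert_der).hexdigest() in KNOWN_FINGERPRINTS

print(verify_client(b"The quick brown fox jumps over the lazy dog"))  # True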
A data flow diagram is shown in Fig. 4. The user defines the desired route for navigation in the Google Maps plugin provided by the VNT tool. The VNT then exposes the values provided by the accelerometer, magnetometer, and gyroscope sensors, as well as the wireless and Bluetooth modules, to the Core system. The Core system calculates the movement based on the input data provided by the VNT tool and maps the coordinates against the defined navigation route. The movement calculations are based on the sensor fusion techniques presented in [21]. The Core system then transforms the calculated direction into voice with the Google Text to Speech API. In this way, the end user is provided with voice direction messages for navigation along the defined route.
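One round trip of this data flow could be sketched as follows; the endpoint URL, the payload fields, and the response format are assumptions, not the system's actual API:

import json
import urllib.request

def send_sensor_frame(readings, core_url):
    """Post one batch of accelerometer/magnetometer/gyroscope readings to
    the Core system and return its navigation reply, which the phone then
    plays back to the user as a voice message."""
    body = json.dumps(readings).encode("utf-8")
    req = urllib.request.Request(core_url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

frame = {"accelerometer": [0.1, 0.0, 9.8],
         "magnetometer": [30.2, -12.5, 41.0],
         "gyroscope": [0.01, 0.02, 0.00]}
# send_sensor_frame(frame, "https://core.example.net/vnt")  # hypothetical URL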
and choose to react if the alarm is valid and the patient is in danger, or label the feed as a false alarm. If the signal is labeled as a false alarm, the system gathers the behavior data for the patient for the period of the day when the alarm sounded and adds it to the model, thus updating the behavior model. The system gathers all the behavior data online and updates the model for each 24-h period. If a period contained a real alarm, the data of that period is ignored.
This subsystem is intended to assist both patients and doctors. Patients receive better health service and assistance, while medical personnel get help in observing patients and can care for more patients at the same time. The subsystem is still in the prototyping phase of development and is being tested on non-patient behavior.
5 Conclusion
This paper describes a work-in-progress system platform and architecture with several proof-of-concept modules that offer a glimpse of what modern mobile devices, utilizing scalable and omnipresent cloud systems, can offer. The core processing system along with several satellite modules is described. A text-to-sign-language animated avatar is available for hard-of-hearing people as a lightweight browser web application. Anomalous-behavior detection for stay-at-home elderly people or patients is implemented using the readily available MS Kinect. A voice navigation tool for the blind, not limited to GPS signals, is also presented.
The initial results obtained from the modules in their early stages show promise. Combining the processing power, low price, scalability, and reliability of cloud platforms with the ever more capable sensors available in smartphones and home environments is a combination we expect to bring forward assisted and enhanced living. The cloud environment enables easy upgrades of versions and algorithms, which become instantly available to the users of the system. Transferring the burdensome processing of raw sensor data to the cloud enables advanced algorithms to be used while saving battery power in the sensors and demanding insignificant processing capabilities of smartphones and other client devices. All these advantages are expected to promote this architecture and similar ones to the forefront of ambient assisted living systems.
References
1. Nussbaum, G., Veigl, C., Acedo, J., Barton, Z., Diaz, U., Drajsajtl, T., Garcia, A., Kakousis, K.,
Miesenberger, K., Papadopoulos, G.A., Paspallis, N., Pecyna, K., Soria-Frisch, A., Weiss, C.:
AsTeRICS - toward a rapid integration construction set for assistive technologies. In:
Gelderblom, S., Miesenberger, A. (eds.) Everyday Technology for Independence and Care,
pp. 766–773. IOS Press, Amsterdam (2011)
2. Bashshur, R.L., Reardon, T.G., Shannon, G.W.: Telemedicine: a new health care delivery
system. Annu. Rev. Public Health 21, 613–637 (2000)
3. Wade, V.A., Karnon, J., Elshaug, A.G., Hiller, J.E.: A systematic review of economic
analyses of telehealth services using real time video communication. BMC Health Serv. Res.
10, 233 (2010)
4. Strategic Implementation Plan of the Pilot European Innovation Partnership on Active and Healthy Ageing - Innovation Union - European Commission. http://ec.europa.eu/research/innovation-union/index_en.cfm?section=active-healthy-ageing&pg=implementation-plan
5. 3 Million Lives: Recommendations from Industry on Key Requirements for Building
Scalable Managed Services involving Telehealth, Telecare, & Telecoaching. 3 Million Lives
(2012)
6. Levine, S.P., Bell, D.A., Jaros, L.A., Simpson, R.C., Koren, Y., Borenstein, J.: The
NavChair assistive wheelchair navigation system. IEEE Trans. Rehabil. Eng. 7, 443–451
(1999)
7. Pires, G., Honorio, N., Lopes, C., Nunes, U., Almeida, A.T.: Autonomous wheelchair for
disabled people. In: Proceedings of the IEEE International Symposium on Industrial
Electronics, pp. 797–801. IEEE, Guimaraes (1997)
8. Yelamarthi, K., Haas, D., Nielsen, D., Mothersell, S.: RFID and GPS integrated navigation
system for the visually impaired. In: 53rd IEEE International Midwest Symposium on
Circuits and Systems (MWSCAS), pp. 1149–1152. IEEE, Seattle (2010)
9. Kim, S.Y., Cho, K.: Electronic cane usability for visually impaired people. In: Park, J.H.,
Kim, J., Zou, D., Lee, Y.S. (eds.) ITCS & STA 2012. LNEE, vol. 180, pp. 71–78. Springer,
Heidelberg (2012)
10. Bousbia-Salah, M., Fezari, M.: A navigation tool for blind people. In: Sobh, T. (ed.)
Innovations and Advanced Techniques in Computer and Information Sciences and
Engineering, pp. 333–337. Springer, Netherlands, Dordrecht (2007)
11. Höfer, C.N., Karagiannis, G.: Cloud computing services: taxonomy and comparison.
J. Internet Serv. Appl. 2, 81–94 (2011)
12. Grossman, R.L.: The case for cloud computing. IT Prof. 11, 23–27 (2009)
13. Calheiros, R.N., Ranjan, R., Buyya, R.: Virtual machine provisioning based on analytical performance and QoS in cloud computing environments (2011)
14. Amazon Web Services (AWS) - Cloud Computing Services. http://aws.amazon.com/
15. Azure: Microsoft's Cloud Platform. http://azure.microsoft.com/en-us/
16. Google Cloud Computing, Hosting Services & Cloud Support - Google Cloud Platform. https://cloud.google.com/
17. RFC 2616 - Hypertext Transfer Protocol - HTTP/1.1. http://tools.ietf.org/html/rfc2616
18. RFC 7159 - The JavaScript Object Notation (JSON) Data Interchange Format. http://tools.ietf.org/html/rfc7159
19. RFC 6750 - The OAuth 2.0 Authorization Framework: Bearer Token Usage. http://tools.ietf.org/html/rfc6750
1 Introduction
Population aging is increasingly a global fact that results from public health and medical progress against diseases and injuries, but it also represents one of the most challenging phenomena that families, states, and communities have to face. The need to sustain the relevant part of the population represented by older adults, from a social and an economic point of view, calls for new approaches: a market segment including people aged 50 and older emerges, the so-called "gray market" or "silver market", challenging companies and societies [1] with new requests and needs. Within this market, an important role is expected to be played by assistive technologies that help older people maintain their ability to perform the activities of daily living (ADLs) and, therefore, their independence. Ambient Assisted Living (AAL) and Ambient Intelligence (AI) will support new generations of older adults, for longer and improved-quality living. Several public institutions, at national and wider levels, are carrying out specific initiatives to promote the emergence of new actors in the silver market, to create sustainable economic systems able to offer products, services, and solutions facing the emerging needs.
In this scenario, this paper presents the approach to multimodal user-system interaction adopted in the framework of an AAL architecture designed for an elderly-friendly smart home. Such an architecture, named TRASPARENTE (the acronym means transparent and stands for "assistive network technologies for residential autonomy in the silver age"),
covers several aspects of home living, such as independent living, home security, health monitoring, and environmental control. It is one of the ongoing projects supported through an action co-funded by the Italian Marche Region administration for the development and implementation of AAL integrated platforms, to monitor ADLs and detect any abnormal behavior that may represent a danger or a symptom of an incipient disease.
The smart TV application can also be used to show information about the power consumption of the appliances, these data being acquired periodically by meter nodes and stored in specific databases. Requests towards the databases are not issued directly, but through the server application, as shown in Fig. 2. The same procedure applies to telemedicine-related services, such as consultation of the values collected by electro-medical devices. A further functionality provides the visualization of automatic reminders for drug intake on the TV screen. It is activated when the application is launched, so while the user is watching TV, a pop-up window appears in a corner of the screen at the time of drug intake, indicating the name and quantity of the dose. The reminders list is stored in a database and can be managed through the application itself.
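The reminder check could be sketched as follows; the schedule entries and the one-minute tolerance window are illustrative, with the real list coming from the reminders database.

```python
# Sketch of the periodic reminder check behind the pop-up window.
import datetime

reminders = [  # (time of intake, drug name, quantity) -- sample entries
    (datetime.time(8, 0), "DrugA", "1 tablet"),
    (datetime.time(20, 0), "DrugB", "5 ml"),
]

def due_reminders(now=None, tolerance_min=1):
    now = now or datetime.datetime.now().time()
    def minutes(t):
        return t.hour * 60 + t.minute
    return [(name, qty) for t, name, qty in reminders
            if abs(minutes(t) - minutes(now)) <= tolerance_min]

for name, qty in due_reminders():
    print(f"Pop-up: time to take {name} ({qty})")
```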
To ensure the user’s satisfaction and willingness of using the application
provided, some basic rules [5] have been considered in the design of the smart
TV application, dealing with graphic appearance and feedback on user’s actions:
contents have been organized for an efficient utilization of the available screen
area; text size, background, and icons have been properly selected to better
identify the available functionalities. Other aspects of basic importance are the
insertion of short clear instructions to guide the user, and a robust design with
respect to possible input errors and exceptions. Pop-up messages are shown to
notify the user about the outcomes of the different operations, to identify possible
problems and suggest a solution. Finally, for smart TV equipments supporting
voice recognition, it is possible to define a set of custom commands and associate
them to the corresponding browsing or control functions.
– Layout design: due to the limited screen size, the use of text is minimal, preferring key words to long sentences.
Fig. 4. Flow of operations to show product (food or drug) information from the NFC-
based interaction.
composed through either single or multiple tag readings. A multiple command (scenery control) implements a so-called scenario, i.e., it causes a state change in several devices and pieces of equipment managed by the home automation system, up to a whole room or the whole building. The Smart Panel has been designed in different versions, as shown in Figs. 5(a) and (b), providing the same set of multiple commands for scenery control but differing in the section related to single commands. They also provide an info tag describing how to use the home automation system. The supported multiple commands, expanded into batches of single commands as sketched below, are: (i) Goodbye Home: turns off all the lights and closes all the shutters and windows, reproducing the habitual actions performed when a user leaves the home; (ii) Welcome Home - Night: turns on the lights of the hall, the living room, and the kitchen, reproducing the habitual actions carried out when a user comes back home during the night; (iii) Welcome Home - Day: raises all the dampers, reproducing the habitual actions carried out when a user enters the home during the day; (iv) Good Morning: raises the shutters in the kitchen, the bathroom, and the dining room, reproducing the habitual actions performed when a user wakes up in the morning.
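A sketch of how a scenario tag could expand into a batch of single commands is given below; the device names, zones, and send_command helper are hypothetical, not the actual home automation interface.

```python
# Each scenario maps a tag identifier to a batch of (device, zone, action)
# single commands forwarded to the home automation system.
SCENARIOS = {
    "goodbye_home": [("lights", "all", "off"),
                     ("shutters", "all", "close"),
                     ("windows", "all", "close")],
    "welcome_home_night": [("lights", "hall", "on"),
                           ("lights", "living_room", "on"),
                           ("lights", "kitchen", "on")],
}

def run_scenario(tag_id, send_command):
    for device, zone, action in SCENARIOS.get(tag_id, []):
        send_command(device, zone, action)
```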
detection solution could limit the amount of time that the person remains on the floor and reduce the onset of complications. Fall detection methods may follow three approaches [15]: wearable device based, ambience sensor based, and vision based. Until a few years ago the last one included Red Green Blue (RGB) cameras only, but nowadays new low-cost devices, such as the Microsoft Kinect depth-based sensor, have enabled different opportunities. Moreover, depth cameras increase the acceptability of AAL solutions, because people are not immediately recognisable from the depth maps [16].
The proposed solution exploits raw depth information from a Kinect device positioned on the room ceiling, to identify people and detect whether a fall occurs. Although some native skeleton models are available for Kinect, they are inoperable when the sensor is in a top (ceiling) location. A new approach is proposed, based on the following processing steps: (i) preprocessing and segmentation; (ii) distinguish-objects phase; (iii) identification and tracking of people. First, the raw depth frame is processed to prepare the data for the subsequent steps. A reference frame, created when the system is switched on, is exploited to extract the foreground scene. The distinguish-objects phase classifies different groups of pixels, performing object identification and indexing. The last step analyses object features and recognizes whether people are present. It mainly evaluates three features, starting from the point at the minimum distance from the sensor, for each object: (i) head-ground distance gap; (ii) head-shoulder distance gap; (iii) head size. If a cluster of pixels satisfies all the previous conditions, it is labeled as a person and tracked in the subsequent frames. A fall event is notified when the distance value of the central point associated with the person exceeds an adaptive threshold. A detailed description of the entire solution can be found in [17].
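The notification step then reduces to a comparison against the adaptive threshold, as in the following sketch; the margin value is illustrative, not the calibrated behaviour of [17].

```python
# A fall is notified when the person's minimum-distance point lies close
# to the floor, i.e. its depth exceeds a threshold adapted to the
# sensor-floor distance. The 400 mm margin is an assumed placeholder.
def detect_fall(person_depth_mm, sensor_floor_mm, margin_mm=400):
    threshold = sensor_floor_mm - margin_mm  # adapts to ceiling height
    return person_depth_mm > threshold       # large depth => on the floor
```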
Fig. 6. Model of the person for the drink intake monitoring application.
This process is iterated for each frame, to manage the different aspect ratios of different human bodies.
Unlike supervised neural network solutions, this method does not require any learning phase, which is a relevant advantage in AAL solutions. Before the first use of the SOM, it is necessary to superimpose the initial model on the point cloud (PC), according to the orientation of the person in the scene. The subject to monitor, indeed, is free to sit at any side of the table, so the orientation issue must be taken into account by the system. Starting from the recognition of the table in the depth frame, and using the line version of the Hough transform [22], the algorithm is able to rotate the model in the appropriate direction. Dishes and glasses are too small to be identifiable in the depth frame, so the RGB stream is used to overcome this limitation. Taking into account that these objects usually have a circular shape, the circular version of the Hough transform [23] is exploited. Finally, to allow data fusion of the depth and RGB streams, a mapping algorithm that compensates for the different cameras' points of view is applied. This operation ends with the association of an RGB pixel to the corresponding one in the depth frame [24].
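Using OpenCV, the circular Hough transform step might look like the following sketch; the parameter values are illustrative and would need tuning to the actual camera setup.

```python
# Locate roughly circular objects (glasses, dishes) in the RGB frame.
import cv2
import numpy as np

def find_circular_objects(rgb_frame):
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before the transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                               param1=100, param2=30,
                               minRadius=15, maxRadius=80)
    # Each detected circle is (x, y, radius) in pixel coordinates.
    return [] if circles is None else np.round(circles[0]).astype(int)
```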
4 Preliminary Evaluations
Applications for smart TV and portable devices have essentially the same structure and enable access to the same services; however, the interaction with the system differs between the two tools. Regarding the smart TV interface, interaction is mainly supported by the TV remote controller. On one hand, this brings a number of advantages, related to its familiarity for all categories of users. On the other hand, it is not a practical device for operations that require many steps, such as text input or browsing the available contents. These limitations have been considered during the design stage, to make the interaction as simple as possible.
The touch-based interaction takes into account the specific needs of an elderly user. A study by Apted et al. [25] examined how older people interact through single- and multi-touch gestures. The results show that older users need almost twice as much time as younger users to learn and understand the interaction, a gap that narrows during continued use of the system. For this reason, the usability criteria set out in Sect. 2.2 have been met. At the current development stage, the application for touchscreen portable devices is partially available. The functionality related to drug therapy is ready, and numerous tests have been performed to evaluate its reliability. The service runs in the background from device start-up and effectively notifies the user when it is time to take medicines, while an Android activity allows them to add/edit/delete reminders in a simple and intuitive way (as shown, for example, in Fig. 7), facilitating text insertion by auto-completion. The part of the application that enables video calls between different SIP clients is also available and functioning (see Fig. 8). Currently, video entry-phone/tablet and tablet/tablet calls are enabled. The system works properly, allowing users to start and close a communication, send the audio and video streams, and open the entrance door through a relay. The development of the remaining functions, and their integration into a single application, is ongoing.
The complete architecture is still under development, so not all the designed functions have been implemented yet. A test environment has been set up in the laboratory, where the single parts of the architecture are going to be gradually integrated. As for the smart TV interface, some of the single functionalities have been tested individually: (i) request and storage of the system configuration; (ii) home automation system control; (iii) reminders visualization and management; (iv) request and visualization of medical parameters. In the future, all the functions will be implemented and merged into a single application for the overall validation.
The adoption of NFC both to request information on food and drugs and to control the home automation system simplifies the user interaction, since it just requires tapping the reader near the NFC tag area. The communication among the NFC devices, the local server, and the involved subsystems has been
successfully tested. Figure 9 shows information about drug dosage obtained from
the database, corresponding to a sample medicine.
The fall detection solution provides the distance between the point associated with the person and the sensor positioned on the ceiling. Low depth values represent a standing person, while high distances denote a fall. Figure 10(a) shows the values of this distance for each frame where a person is identified. When the evaluated depth exceeds the threshold of 2600 mm, which is automatically tuned considering the distance between the sensor and the floor, the system detects a fall. Looking at Fig. 10(a), a person who enters the scene, simulates a fall, and rises again before exiting can be noticed. Figure 10(b) shows the depth map for one of the frames where a fall occurred (red spot in Fig. 10(a)); the point associated with the person is the magenta spot.
For the food intake monitoring, the algorithm has been tested on a database of 17 different situations, for a total of 4800 depth frames. The test users, in good health, are between 21 and 38 years old. In all the realizations, the drink intake action is always recognized (31 out of 31) with a simple threshold of 250 mm applied to the distance between head and hands. Figure 11(a) shows the PC, for the 300-th frame, of a person drinking, and the corresponding model calculated by the algorithm described above. Figure 11(b) depicts the distance between head and left hand for each frame. As visible in the graph, in the frame
Fig. 10. (a) Distance values of the point associated with the person during a simulated fall, (b) depth map with a fallen person (Color figure online).
Fig. 11. (a) PC with the skeleton model superimposed, for the 300-th frame, and (b) distance between head and left hand for all frames.
5 Conclusion
The paper presented the ongoing activities of an experimental research project aimed at designing and prototyping an assistive architecture for elderly-friendly smart homes. The focus has been put on the different interfaces the architecture will include, to handle both user-driven and event-driven interaction. They are
Acknowledgment. This work was partially supported by the Regione Marche - INRCA
project “Casa intelligente per una longevità attiva ed indipendente dell’anziano”
(DGR 1464, 7/11/2011). The authors would also like to acknowledge the contribution
of the COST Action IC1303 - AAPELE, Architectures, Algorithms and Platforms for
Enhanced Living Environments. The authors alone are responsible for the content.
References
1. Kohlbacher, F., Herstatt, C. (eds.): The Silver Market Phenomenon - Marketing
and Innovation in the Aging Society. Springer, New York (2011)
2. Spinsante, S., Gambi, E.: Remote health monitoring by OSGi technology and dig-
ital TV integration. IEEE Trans. Cons. Elect. 58(4), 1434–1441 (2012)
3. de Abreu, J.F., Almeida, P., Afonso, J., Silva, T., Dias, R.: Participatory design
of a social TV application for senior citizens – the iNeighbour TV project. In:
Cruz-Cunha, M.M., Varajão, J., Powell, P., Martinho, R. (eds.) CENTERIS 2011,
Part III. CCIS, vol. 221, pp. 49–58. Springer, Heidelberg (2011)
4. Alaoui, M., Lewkowicz, M.: A LivingLab approach to involve elderly in the design
of smart TV applications offering communication services. In: 5th International
Conference, OCSC 2013, Held as Part of HCI International 2013, Las Vegas, NV,
USA (2013)
5. Nielsen, J.: Traditional dialog design applied to modern user interfaces. Commun.
ACM 33(10), 109–118 (1990)
6. Wood, E., Willoughby, T., Rushing, A., Bechtel, L., Gilbert, J.: Use of computer
input devices by older adults. J. Appl. Gerontol. 24(5), 419–438 (2005)
7. csipsimple - SIP application for Android devices - Google project hosting. http://code.google.com/p/csipsimple
8. PJSIP - open source SIP, media, and NAT traversal library. http://www.pjsip.org
9. OpenGL - the industry standard for high performance graphics. http://www.opengl.org
10. Parhi, P., Karlson, A.K., Bederson, B.B.: Target size study for one-handed thumb
use on small touchscreen devices. In: Proceedings of 8th Conference on Human-
computer Interaction with Mobile Devices and Services (2006)
11. Vergara, M., Díaz-Hellín, P., Fontecha, J., Hervás, R., Sánchez-Barba, C., Fuentes, C., Bravo, J.: Mobile prescription: an NFC-based proposal for AAL. In: Proceedings of 2nd International Workshop on NFC (2010)
12. Isomursu, M., Ervasti, M., Tormanen, V.: Medication management support for
vision impaired elderly: scenarios and technological possibilities. In: Proceedings
of 2nd International Symposium on Applied Sciences in Biomedical and Commu-
nication Technologies, pp. 1–6 (2009)
13. Rashid, O., Coulton, P., Bird, W.: Using NFC to support and encourage green
exercise. In: Proceedings of 2nd International Conference on Pervasive Computing
Technologies for Healthcare, Tampere, Finland (2008)
14. Centers for Disease Control and Prevention: Falls Among Older Adults: An
Overview (2013)
15. Mubashir, M., Shao, L., Seed, L.: A survey on fall detection: principles and
approaches. Neurocomputing 100, 144–152 (2013). Special issue: Behaviours in
video
16. Demiris, G., Parker Oliver, D., Giger, J., Skubic, M., Rantz, M.: Older adults’ pri-
vacy considerations for vision based recognition methods of eldercare applications.
Technol. Health Care 17(1), 41–48 (2009)
17. Gasparrini, S., Cippitelli, E., Spinsante, S., Gambi, E.: A depth-based fall detection system using a Kinect® sensor. Sensors 14(2), 2756–2775 (2014). doi:10.3390/s140202756. http://www.mdpi.com/1424-8220/14/2/2756
18. Scales, K., Pilsworth, J.: The importance of fluid balance in clinical practice. Nurs.
Stand. 22, 50–57 (2008)
19. Haker, M., Böhme, M., Martinetz, T., Barth, E.: Self-organizing maps for pose
estimation with a time-of-flight camera. In: Kolb, A., Koch, R. (eds.) Dyn3D 2009.
LNCS, vol. 5742, pp. 142–153. Springer, Heidelberg (2009)
20. Klingner, M., Hellbach, S., Kästner, M., Villmann, T., Böhme, H.J.: Modeling human movements with self-organizing maps using adaptive metrics. In: Proceedings of Workshop New Challenges in Neural Computation 2012 (2012)
21. Kohonen, T.: The self-organizing map. Proc. IEEE 78, 1464–1480 (1990)
22. Duda, R.O., Hart, P.E.: Use of the hough transformation to detect lines and curves
in pictures. Commun. ACM. 15, 11–15 (1972)
23. Hough circle transform, OpenCV. http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
24. Herrera, C.D., Kannala, J., Heikkilä, J.: Joint depth and color camera calibration
with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. 34, 2058–2064
(2012)
25. Apted, T., Kay, J., Quigley, A.: Tabletop sharing of digital photographs for the
elderly. In: Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems, New York (2006)
Analysis of Vehicular Storage and Dissemination
Services Based on Floating Content
1 Introduction
Mobile devices (e.g., smartphones, smart tablets) have become a staple of our society. Today, thanks to smartphone sensors and the possibility to combine their capabilities over advanced wireless technologies, contextual information becomes common input for the applications running in our hand. Mobile applications can offer navigation in unfamiliar places (e.g., Google Maps, Waze), or can socially connect people (e.g., Facebook, Twitter).
2 Floating Content
We assume users to be mobile nodes who are interested in the content generated
by other nodes. They use mobile devices able to handle the amount of data being
exchanged during their participation in the ad-hoc network. Also, we assume a
completely decentralized storage/dissemination infrastructure.
We assume nodes are uniformly distributed and travel independently, with a constant speed. We also assume mobile devices are equipped with wireless interfaces (Bluetooth or WLAN) to enable network communication. The performance analysis of the 802.11p standard presented in [3] has shown that using a bitrate of 6 Mbps and a payload of 500 bytes yields a delivery rate of up to 80 %. This indicates the acceptable reliability and performance of IEEE 802.11p, and confirms the viability of floating up to several megabytes of data (from 10 KB for text messages to 10 MB for photos). This standard is also used in our simulations, discussed further on. Intuitively, contacts cannot last more than several tens of seconds, and in the vehicular case even less, due to the high speeds involved.
The devices also need to be equipped with accurate positioning systems (e.g., GPS tracking, cellular base stations, cell-tower triangulation, WLAN access points, or Wi-Fi tracking). Many factors need to be considered, such as the required accuracy, battery consumption, etc. In order to provide the best location-based service, the equipment must acquire the most accurate location coordinates. Finally, nodes need to synchronize their clocks so that users can process the exchanged information (this can be done with the help of GPS or cellular networks).
As presented in [7], a node generates the information I, which has a size s(I) and a defined lifetime (Time to Live, or TTL, determining a validity period for the information before it becomes obsolete). The information is tagged with the anchor zone, defined by its geo-located center P and two radii (see Fig. 1): r identifies the replication range, inside which nodes replicate the information to other nodes they meet on their way, and a defines the availability range, inside which the information is still stored with limited probability. These parameters are specific for each unit of information and are defined by the creator of each
Fig. 1. Moving nodes inside an anchor zone. Black nodes are information-carrying
nodes, white nodes will eventually get the information from the black ones. The prob-
ability of a node carrying an item tends to 1 inside the replication zone, and decreases
until reaching an availability distance a after which no more copies are found (after [7]).
particular message. Outside the availability zone, no copies of the item exist in the nodes' data storage.
When two nodes meet in the anchor zone defined for a particular piece of data, they share it. As such, all nodes inside the anchor zone should have a copy of the item (obtained while meeting other nodes already in the anchor zone), while nodes which are leaving the anchor zone can remove it at their own discretion. Let us consider two nodes A and B that meet. Node A has an item I tagged with an anchor zone centered at point P with radii a and r. Let h be the distance of node A from the center P. When node A meets node B, item I gets replicated to B with the probability p_r(h):
to B with the probability pr (h):
⎧
⎪
⎨1 if h ≤ r
pr (h) = R (h) if r < h ≤ a
⎪
⎩
0 otherwise
R(h) is a decreasing function which determines the probability of replication between the outer replication border and the availability border of the anchor zone. The area between the replication range and the availability range acts as a buffer zone that prevents the immediate deletion of items. As in [7], and for the sake of simplicity and tractability, in our evaluation we assume no buffer zone.
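A direct transcription of p_r(h) follows; the linear decay chosen for R(h) is only one possible decreasing function, shown for illustration (with no buffer zone, a = r and the middle branch is never taken).

```python
# Replication probability as a function of the distance h from the
# anchor center, for replication radius r and availability radius a.
def replication_probability(h, r, a):
    if h <= r:
        return 1.0
    if h <= a:
        return (a - h) / (a - r)  # one possible decreasing R(h)
    return 0.0
```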
The features of floating content offer exciting opportunities, but also introduce problems. The major challenge is that the communication service makes no guarantees that data will stay around until its lifetime expires. For example, during the night the content is expected to disappear. People's daily activities cause density fluctuations over the day, which may be a problem for the information residing in certain places. Thus, the information is expected to remain available for no more than a few hours.
Multiple use cases can emerge from the floating content concept. One of them is using infrastructure-less local data availability for advertising or selling goods. This type of market could have a dynamic catalog of available merchandise, with the ability to apply updates on the fly. Another one is information sharing between tourists and visitors about local attractions, or notifications about services a hotel is offering. Spreading news while keeping it localized, time-bounded and, most importantly, anonymous can be another use case of floating content, for which best-effort operation perfectly suits the needs.
Overall, floating content can be used in many ways, considering two important aspects of it. First, floating information is location-aware, so developers should consider multiple data-oriented architectures. The second aspect is that spreading the information while containing it to an anchor zone relies on a best-effort approach, unlike the Internet infrastructure, which provides repair mechanisms that can recover lost packets. In floating content, data that expires is effectively unrecoverable.
Similar to our paper, the floating content model has been the subject of analysis in [5]. The most important objective of that work was finding a condition guaranteeing that a specific piece of information remains in its anchor zone until the expiry of its lifetime with high probability. It is called the criticality condition, and it depends on aspects such as the mobility patterns and replication policies of the nodes inside the anchor zone.
While moving inside the anchor zone, a node may come in contact randomly with other nodes. Consider two nodes moving permanently inside the zone, and let υ be the frequency at which they come in contact with each other. Assuming the population of nodes in the anchor zone is N, the total number of pairs is ½N(N−1) ≈ ½N², and the total rate of encounters is ½N²υ. A fraction of these encounters, more precisely 2p(1−p), replicates an item to nodes that do not have it yet; thus the total rate of such events is p(1−p)N²υ. This rate determines the monotonicity of the size of the population holding the item I. Let 1/μ be the mean time spent by a node in the anchor zone. It follows that the total exit rate of nodes is Nμ, and the exit rate of tagged nodes is Npμ. The growth rate is determined by the formula:

$$\frac{d}{dt}(Np) = N^2 p(1-p)\upsilon - Np\mu \qquad (1)$$

The two terms on the right are equal in equilibrium, leading to the stationary value p* = 1 − μ/(υN). In order to have a positive solution, p* > 0, it is required that

$$N \frac{\upsilon}{\mu} > 1. \qquad (2)$$

Equation (2) is called the criticality condition. The left-hand side represents the average number of contacts a randomly chosen node has during its sojourn time. Considering the sign of Eq. (1), it can be seen that the equilibrium is stable: when p > 1 − μ/(υN), p tends to decrease, and when p < 1 − μ/(υN), it tends to increase. The information disappears (even in the fluid model) when the derivative stays negative, leading to the solution p = 0. Moreover, since we need to prevent the accidental disappearance of the information-carrying population by stochastic fluctuations, Np* = N − μ/υ must be large.
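As a numerical sanity check of Eq. (1), the sketch below integrates the fluid model with forward Euler, rewritten for the fraction p; all parameter values are arbitrary illustrations satisfying the criticality condition Nυ/μ > 1.

```python
# Euler integration of dp/dt = N*p*(1-p)*upsilon - p*mu (Eq. 1 divided
# by N); with the values below, p converges to p* = 1 - mu/(upsilon*N).
def simulate_fraction(N=50, upsilon=0.02, mu=0.2, p0=0.05,
                      dt=0.1, steps=2000):
    p = p0
    for _ in range(steps):
        p += dt * (N * p * (1 - p) * upsilon - p * mu)
    return p

print(simulate_fraction())  # ~0.8 = 1 - 0.2/(0.02*50)
```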
infected ones (nodes which have a copy and can send it further to neighbours); R stands for the removed (nodes which deleted their copy of the message, either because their availability time expired or because they are out of the anchor range). As shown in [4], we define r as the infection rate: the number of new infections is proportional to the current numbers of susceptibles and infected, i.e., rSI. We define a as the removal rate: the number of removals is proportional to the number of infected nodes only, i.e., aI. The size of each class can now be computed from the following equations:

$$\frac{dS}{dt} = -rSI, \qquad \frac{dI}{dt} = rSI - aI, \qquad \frac{dR}{dt} = aI$$

These equations ensure that the total population is N = S + I + R, with S0 > 0, I0 > 0 and R0 > 0. Also, when a susceptible gets infected, i.e., receives the message, it becomes immediately infectious.
From these equations, at the beginning of the information exchange, at t = 0, the following relation results:

$$\left.\frac{dI}{dt}\right|_{t=0} = I_0(rS_0 - a) \gtrless 0 \quad \text{if } S_0 \gtrless \frac{a}{r} = \mu \qquad (3)$$

Two cases emerge from (3): if S0 < μ, the number of infectives drops from I0 to 0 and no epidemic can occur; if S0 > μ, the number of infectives increases and the information spreads. Thus, μ can be considered a threshold which determines whether the information will live or not. The presented model helps us understand how the information develops in time, and how the number of neighbours and the radius size influence its spread.
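The SIR dynamics quoted above can be integrated with a few lines of forward Euler; the rates below are illustrative, chosen so that S0 > μ = a/r and an epidemic occurs.

```python
# Euler integration of dS/dt = -rSI, dI/dt = rSI - aI, dR/dt = aI.
def sir(S=100.0, I=1.0, R=0.0, r=0.01, a=0.2, dt=0.01, steps=10000):
    history = []
    for _ in range(steps):
        new_infections = r * S * I  # contacts that replicate the message
        removals = a * I            # copies deleted or expired
        S += dt * (-new_infections)
        I += dt * (new_infections - removals)
        R += dt * removals
        history.append((S, I, R))
    return history

S_end, I_end, R_end = sir()[-1]
print(round(S_end + I_end + R_end))  # N = S + I + R is conserved (101)
```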
selected area), during one week (from 2nd February 2008 to 10th February 2008). For San Francisco, we simulated the movement of all 500 cabs in the dataset, over a period of 12 days (from 17th May 2008 to 29th May 2008).
In the end, we are interested in the criticality, defined as the product of (i) the maximum number of nodes that exist at a given time in an anchor zone, (ii) the average number of contacts during the sojourn time, and (iii) the average sojourn time of vehicles in an anchor zone.
The environment for our simulation required by OMNeT++ was generated using OpenStreetMap [2]. We decided to use two different city scenarios (San Francisco and Beijing) to show that floating content in urban environments is feasible regardless of the particularities of the road architecture (we have other results for different cities, such as Erlangen, but here we present only these two cities due to page limits). To analyse the model, we first vary the anchor zone radii between [200 m, 500 m]. Although in theory IEEE 802.11p aims to provide both V2V and V2I communications at ranges up to 1000 m, due to obstacles and interference we consider ranges between [200 m, 500 m] to reflect more realistic, somewhat dense urban traffic conditions [3]. Figure 2 shows how the anchor zones are distributed.
4 Experimental Results
We analyzed the feasibility of information sharing using the floating content model, considering the particularities of the road network as well as the spatial distribution of vehicles, and also the evolution during the simulation, following the SIR model.
4.1 Beijing
Figure 3a displays the area within the city of Beijing where we conducted our simulations. The figure is a snapshot that displays the road network of an area of 6.7 × 5.2 km². The map is coloured with respect to the amount of time a car has to wait due to the traffic lights placed at intersections. Road segments coloured in red have a waiting time of about 40 s, more than the ones coloured in black, where there is a continuous flow of vehicles (corresponding to a waiting time close to 0 s).
Fig. 3. (b) Beijing criticality map, r = 200 m; (c) average number of contacts map, r = 200 m.
Figure 3b shows the criticality heatmap, and Fig. 3c presents the average number of contacts a vehicle encounters during its sojourn time. The similarity between them is visible. The black (and gray, corresponding to value 0) areas indicate that the criticality factor is nearly 0, and thus the probability of information floating is very low. Such areas occur in places between streets or outside the traffic road network, where neither vehicles nor their radio range can reach. In the 200 m scenario, the anchor zones with a higher probability of floating are scattered across the map. In the blue coloured areas, results show an average criticality slightly above 1, which corresponds to the criticality threshold. According to Fig. 3c, 9 out of 10 vehicles do not have any interaction with other vehicles during their sojourn time. The reason is the small amount of time spent in that location, and the lack of neighbours at the time a beacon is sent out.
The bright yellow areas occur naturally at road intersections. The expectation for a piece of information to live rises with the average waiting time on a road segment. The criticality factor rises well over 8, as does the average number of contacts a vehicle has during its sojourn time.
Increasing the anchor range yields better floating results. Figure 4 shows that information has a high probability of living along the road network. Also, the small light-yellow areas in Fig. 4b reflect that every node has at least one contact during its sojourn time. The criticality condition increases fourfold compared with the one computed for a range of 200 m.
Figure 5 shows the information evolution within an anchor zone during a simulation time of 1000 s: infected and healed. We decided to set the TTL parameter (time-to-live) to 600 s, a realistic number that could be used for short-lasting events and long enough to collect consistent data. We chose an anchor zone with a high number of information exchanges to provide a detailed picture of its evolution, and avoided the ones which disappeared before the expiry time because of stochastic fluctuations.
We recorded the number of replicas a piece of information has during the simulation, denoted by the Replicas label. The other two labels are the two classes belonging to the epidemic SIR model: infected denotes the information-carrying nodes, and healed the ones which previously had a copy of the information and deleted it. The deletion could occur if the distance between the vehicle and the geo-location origin was bigger than the anchor radius, or if the information availability expired.
It is important to determine whether the information will spread or not, and if it does, for how long and how it will develop in time. As mentioned before, if S0 > Sc = a/r, the content will spread. The plot in Fig. 6 confirms the condition for a 500 m anchor radius: the threshold μ is around 0 during most of the information lifetime, and the number of susceptibles is greater than the
Fig. 5. Information life evolution with 500 m range. The number of copies is equal to the total number of “infections” minus the total number of “healings”.
Fig. 6. When S > µ, the number of replicas can increase; when S < µ, the number of replicas decreases towards 0. µ represents the epidemic threshold; the anchor zone radius is 500 m.
threshold. This means that they could possibly receive a copy in the near future and become “infected”. Towards the end of the period, the number of susceptibles falls below μ, and thus the replication process comes to an end. The reason the threshold suddenly becomes larger is that the application deletes the item once the lifetime expires. The “infected” then become susceptible again, a fact confirmed by the figure (see the slight increase).
Figure 7 depicts how the information develops with respect to the radius size. Establishing the optimum range enables the equipment and the application to save battery energy and achieve better resource management.
Anchor zones with r ∈ [100 m, 200 m] yield a short life expectancy. In our scenario the content lasts for about 100 s, just 1/6 of the due time. Such distances are probably more suitable for indoor public places like shopping malls and metro stations, usually overcrowded places where message exchange is possible. Ranges within [300 m, 400 m] increase the lifetime up to 500 s, but the information still disappears ahead of time. The range of 500 m keeps the item alive until its termination time. As long as the range is big enough to hold a considerable number of vehicles inside, the information will last.
The area selected from San Francisco to be used in the simulations covers 4.5 × 2.5 km² and contains a road network made up of streets with different numbers of lanes in each direction. We removed the bicycle, pedestrian, city-rail, and public transport road segments, and left only the segments for regular road vehicles.
Figure 8 depicts the results using an anchor range of 500 m. While the average number of contacts increases only slightly, from 2 to 2.5, the criticality reaches 120 in the top spots, nearly 2.5 times the former value. This is possible because the anchor zones now cover larger areas, thus involving more cars in the exchange process. The bigger radius makes the anchor zones more diffuse in the heatmap, confirming that a user may receive many more items spread over a larger area. In the 200 m scenario, the areas are more concentrated, requiring proximity to high-traffic zones for the replication to be possible. The large values that appear over a big portion of the map are an indicator that, even in the probabilistic system used, the information will float.
Again the epidemic SIR model is confirmed, as shown in Fig. 9. During the item's lifetime the removal rate is close to 0. Around t = 475 s, a spike occurs in the epidemic threshold μ, which is a sign that vehicles deleted their copies of the item when they got out of range. The number of susceptibles stays above the threshold, so the epidemic is sustained and the number of replicas stays around 70.
When the lifetime expires, the replication rate drops to 0 and the deletion rate starts increasing. The threshold thus rises above the number of susceptibles. The number of susceptibles also changes, because the information-carrying nodes start deleting the information and transform into susceptibles again. Judging by the number of susceptibles, the anchor zone is crossed by a higher number of vehicles than in Beijing. Another indicator is the fact that the average number of replicas is much higher: less than 20 in Beijing and over 70 in the current scenario.
Fig. 9. When S > µ, the number of replicas can increase; when S < µ, the number of replicas decreases towards 0. µ represents the epidemic threshold; the anchor zone radius is 500 m.
Figure 10 describes how the spread develops in time with respect to the anchor radius size. It indicates that when r ∈ [300 m, 500 m], the information floats for the entire due time. Intuitively, the distinctive aspect is that the average number of replicas grows proportionally with the range size. A lower radius kills the item prematurely, as no peer exists to exchange messages with.
5 Conclusion
We have presented an analysis of distributed storage and dissemination services based on the floating content design, which works exclusively on mobile devices without relying on a central network infrastructure. We performed our evaluations using realistic scenarios, and the results revealed that floating content could be feasible in urban environments.
There are several important factors that influence information floatability. A high density inside the anchor location sustains the life of the information regardless of the radius size (but the number of copies would be small for small radii). Having a big radius may cluster more vehicles and thus increase the probability of floating. The radio range can also affect information sharing: a small radio range compared to the information availability range may prevent the application from spreading content in the entire zone, making it rely on vehicle density.
We believe information sharing applications between mobile entities will experience a growth in the near future, a prediction supported by the improvements made in wireless communication, the increasing spread of mobile phones worldwide, and the construction of vehicles with communication equipment.
References
1. Bojic, I., Podobnik, V., Kusek, M., Jezic, G.: Collaborative urban computing:
serendipitous cooperation between users in an urban environment. Cybern. Syst.
42(5), 287–307 (2011)
2. Haklay, M. (Muki), Weber, P.: OpenStreetMap: user-generated street maps. IEEE Pervasive Comput. 7(4), 12–18 (2008)
3. Han, C., Dianati, M., Tafazolli, R., Kernchen, R., Shen, X.: Analytical study of the IEEE 802.11p MAC sublayer in vehicular networks. IEEE Trans. Intell. Transp. Syst. 13(2), 873–886 (2012)
4. Hethcote, H.W.: The mathematics of infectious diseases. SIAM Rev. 42(4),
599–653 (2000)
5. Hyytiä, E., Virtamo, J., Lassila, P., Kangasharju, J., Ott, J.: When does content float? Characterizing availability of anchored information in opportunistic content sharing. In: Proceedings of IEEE INFOCOM 2011, pp. 3137–3145 (2011)
6. McCluskey, C.: Complete global stability for an SIR epidemic model with delay-distributed or discrete. Nonlinear Anal. Real World Appl. 11, 55–59 (2010)
7. Ott, J., Hyytia, E., Lassila, P., Vaegs, T., Kangasharju, J.: Floating content: infor-
mation sharing in urban areas. In: 2011 IEEE International Conference on Perva-
sive Computing and Communications (PerCom), pp. 136–146. IEEE (2011)
8. Piorkowski, M., Sarafijanovic-Djukic, N., Grossglauser, M.: CRAWDAD data set
epfl/mobility (v. 2009–02-24), February 2009. Downloaded from http://crawdad.
org/epfl/mobility/
9. Podobnik, V., Lovrek, I.: Transforming social networking from a service to a plat-
form: a case study of ad-hoc social networking. In: Proceedings of the 13th Inter-
national Conference on Electronic Commerce, p. 8. ACM (2011)
10. Smailovic, V., Podobnik, V.: Bfriend: context-aware ad-hoc social networking for
mobile users. In: 2012 Proceedings of the 35th International Convention MIPRO,
pp. 612–617. IEEE (2012)
11. Yuan, J., Zheng, Y., Zhang, C., Xie, W., Xie, X., Sun, G., Huang, Y.: T-drive: driving directions based on taxi trajectories. In: Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 99–108. ACM (2010)
An Approach for the Evaluation of Human
Activities in Physical Therapy Scenarios
Abstract. Human activity recognition has been widely studied over the last decade in ambient intelligence scenarios. Remarkable progress has been made in this domain, especially in research lines such as ambient assisted living, gesture recognition, behaviour detection and classification, etc. Most of the works in the literature focus on activity classification or recognition, prediction of future events, or anomaly detection and prevention. However, it is hard to find approaches that not only recognize an activity, but also provide an evaluation of its performance according to an optimality criterion. This problem is of special interest in applications such as sports performance evaluation, physical therapy, etc. In this work, we address the problem of the evaluation of such human activities in monitored environments using depth sensors. In particular, we propose a system able to provide an automatic evaluation of the correctness of the performance of activities involving motion, and more specifically, diagnosis exercises in physical therapy.
1 Introduction
Human activity recognition is one of the most complex tasks within Ambient Intelligence [1]. However, in the last few years there have been approaches that provide successful gesture and activity recognition in different domains. The most successful techniques that have provided the best results in this field are data-driven approaches, more specifically those based on Hidden Markov Models [16] or time series matching methods [15,17], among others. The integration of unobtrusive sensors in our daily life allows smart spaces to detect users and their activities, with the aim of providing assistance to people, as for instance in the projects [13,14]. The technologies used to acquire information about human activity are wide-ranging: smartphone accelerometer data, passive/active sensors located in the environment, cameras, and depth sensors. Despite these advances,
few approaches have aimed at the simultaneous detection and evaluation of user activity, even though this application is of great interest to fields such as sports or physical therapy. Recent works have shown that the activity acquired with depth sensors is not as accurate as with other physical activity monitoring technologies such as Vicon systems [3,4], but depth sensors provide enough accuracy for motion monitoring systems at home [2].
This work focuses on this research line, and describes a system that uses a depth sensor to acquire information about a performer of diagnosis exercises in physical therapy, detects the exercise being executed, and provides an evaluation of its performance. Traditionally, recovery from injuries has been carried out following a sequence of steps starting with the initial medical intervention, a period of rest, and finally a rehabilitation program monitored by experts [5]. However, due to staff shortages, limited economic resources, and space limitations, the latter is usually limited to scheduled sessions, daily or weekly, in physiotherapy centres.
During the last decade, the advances in computer and sensor technologies have enabled the development of telerehabilitation systems to improve patient recovery [6–8]. Projects such as the Stroke Recovery with Kinect project [20] are an example of the interest in this new research field, not only from academia but also from industry. We can find two types of approaches: those focused on technology exploitation, either to improve the performance of the rehabilitation exercise or to obtain a better monitoring of the movement that provides the therapist with accurate data for later analysis and diagnosis; and those that are designed to automatically assist the user in performing the exercises correctly.
For the first type, we cite example projects such as the Rehabilitation Gaming System (RGS) [9,10] and ICT4Rehab [11], among others. RGS proposes a complete methodology to use videogames for stroke rehabilitation. The game is designed to automatically adapt the difficulty to each patient, in order to allow an unsupervised and personalized rehabilitation. The main conclusion of this work is that it proved the correlation between virtual and real kinematic movements, thereby establishing games as a powerful tool in rehabilitation. On the other hand, the project ICT4Rehab uses videogames for muscle spasticity rehabilitation in children. They developed a set of serious games to exercise rehabilitation movements suggested by therapists, which automatically evaluate the performance of each patient after playing a game. Considering the increasing number of approaches and articles, we may conclude that serious games have arisen as a promising tool in rehabilitation research [12]. However, despite the motivation factor for the patient and the accuracy of the movements that sensors may measure and provide to a computer-aided therapist, these approaches do not usually provide personalized support to the patient, and the medical expert is still required during the execution of exercises and the later diagnosis.
In our approach, we attempt to overcome this limitation by including an evaluation method for diagnosis exercises performed by patients and monitored using a depth sensor camera. Our final goal is to build a decision support system able to assess the patient semantically and to automatically evaluate his/her
2 System Overview
Our main objective is to develop a system able to acquire and automatically evaluate diagnosis exercise performances in physical therapy. To that end, we have built a prototype application called PReSenS (Proactive Rehabilitation Sensing System) [19], able to acquire and save template exercises involving motion or posture holding, to create exercise plans for patients, and to acquire and save their exercise performances. We use a depth sensor to fetch the 3D rotation data of the skeleton joints of the human body, from both physical therapists and patients, in real time. The typical system operation comprises three stages. First, the physiotherapist saves exercise templates using the recording editor (one template for each exercise). These are stored in a remote database containing the information about exercises, patient plans, and patient performances. In the second stage, the physical therapist proposes an exercise plan to the patient, who performs the exercises. In this stage, the system is able to provide the patient with a preliminary automatic evaluation of the plan execution, and stores this information in the remote database. Finally, in the last stage, both patients and physiotherapists can access the system remotely using a web front-end, to make an in-depth motion signal analysis and/or to obtain suggestions and professional evaluations that help performers improve. Figure 1 summarizes these stages and describes the general architecture of the system.
In this paper, we focus on the problem of automatic performance evaluation of diagnosis exercises in physical therapy, highlighted in stage 2. According to the experts interviewed in the field, most of these exercises can be classified into two main categories: motion exercises and posture holding. A physical therapist typically evaluates the performance of these types of exercises according to different absolute measures, such as the angle between joints, amplitude of moves, etc., and asks the patient to perform an exercise either for a number of repetitions or for a limited period of time before its evaluation. We included these features in PReSenS so that the system fits the requirements of professionals in the field as accurately as possible.
As the first step to detect the activity being performed by a user, we selected time series matching methods to solve the problem of exercise detection, because they provide a set of advantages in the scope of our work: (1) they allow selecting and changing the dimensionality of a template/performance exercise without retraining; (2) they ease the management of specific dimensions (features of interest being monitored) for kinematic analysis; (3) they require a single template to be saved by the therapist before use; etc. Moreover, their accuracy in fields such as gesture recognition and activity detection in previous works has provided promising results that make them suitable for the problem addressed.
To achieve a fast and effective activity detection, the system PReSenS uses
a single template for each exercise, acquired from the monitorization of the
physical therapist using the depth sensor. This template is a recording of a
suitable exercise performance carried out by the therapist, and will be used to
match and find similar activities during the patient performance. It is composed
by a multidimensional signal of features of interest to evaluate the exercise,
which must be previously selected by the therapist. For instance, if an exercise
is Left leg up, only the joints involving the move should be selected to avoid
monitorization of irrelevant body parts, as for example, the head.
After the template is recorded into the system, a performer can execute an exercise plan including that activity. We detect the associated exercise using time series matching algorithms. In particular, we have implemented in PReSenS a modified variant of the subsequence matching Dynamic Time Warping (DTW) algorithm [18], with a ground truth segmentation method able to detect the beginning and the end of the exercise. Due to space limitations, we cannot describe this method in depth in this paper. For the remainder of the manuscript, we therefore assume an algorithm that returns a matching between the therapist's template and the patient's performance, aligning these two multidimensional signals.
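Since the modified DTW variant and its segmentation method are not described here, the following is only a generic sketch of subsequence DTW matching a short template inside a longer performance recording; the function names and toy signals are ours, not PReSenS internals.

import numpy as np

def subsequence_dtw(template, performance):
    """Textbook subsequence DTW: locate the best match of a short
    multidimensional template inside a longer performance signal.
    Rows are time steps, columns are features (e.g. joint angles)."""
    n, m = len(template), len(performance)
    # Pairwise Euclidean cost between every template/performance frame pair
    cost = np.linalg.norm(template[:, None, :] - performance[None, :, :], axis=2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = 0.0  # a match may start at any position in the performance
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],
                                               D[i, j - 1],
                                               D[i - 1, j - 1])
    end = int(np.argmin(D[n, 1:])) + 1  # frame where the best match ends
    return D[n, end], end

# Toy usage: one repetition of a joint-angle curve inside three repetitions
template = np.sin(np.linspace(0, 2 * np.pi, 30))[:, None]
performance = np.sin(np.linspace(0, 6 * np.pi, 100))[:, None]
distance, end = subsequence_dtw(template, performance)
print("best match ends at performance frame", end)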
We believe that the simplest way that a system may provide an automatic eval-
uation of an exercise performance is using a score value; for instance, between
0 and 100. The evaluation process will consist of scoring the correctness of a
specific exercise performance, obtaining a degree of similarity after comparing
the performance with the therapist’s template. We are able to carry out this
comparison since in the previous subsection we found a matching between an
exercise performance and its corresponding therapist’s template. Thus, next we
describe how to compare these two signals to compute a final single score value.
We need to evaluate 2 types of exercises: Posture holding and motion. A pos-
ture holding exercise template is represented as a single posture, while a motion
template is an ordered sequence of postures in time. A patient's performance, in turn, is always an ordered sequence of postures in time. For posture holding
comparison, each posture in the patient’s performance is compared with the
single template posture, while motion exercises are compared using the match-
ing between data series provided by the method described in the previous section
using DTW. To obtain a single evaluation score, we evaluate each posture in a movement separately and then aggregate all these evaluations into the final score value.
We build on membership functions and aggregation operators from Fuzzy Set theory to compute the evaluation of a posture. More specifically, we use the Gaussian-shape function (GF) and the Generalized Bell function (GBF) (Eq. 2). Each function returns a membership value μ ∈ [0, 1] expressing the degree to which a number X is approximately A (Gaussian function), or lies between approximately C − A and C + A (Generalized Bell function). The σ values help define each fuzzy membership function depending on the context and application. Figure 2 shows examples of both functions with different parameters for illustration. The fuzzy paradigm helps us here to represent the uncertainty inherent in the addressed application.
$$ f_{GF}(X) = e^{-\frac{(X-A)^2}{\sigma}}, \qquad f_{GBF}(X) = \frac{1}{1 + \left|\frac{X-C}{A}\right|^{2\sigma}} \qquad (2) $$
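For illustration, both membership functions of Eq. 2 translate directly into code. This is a minimal sketch in which we make the minus sign of the Gaussian exponent explicit, since the membership value must stay within [0, 1]; the parameter values in the usage lines are arbitrary.

import numpy as np

def f_gf(x, a, sigma):
    """Gaussian-shape membership (Eq. 2): degree to which x is approximately a."""
    return np.exp(-((x - a) ** 2) / sigma)

def f_gbf(x, a, c, sigma):
    """Generalized Bell membership (Eq. 2): degree to which x lies
    between approximately c - a and c + a."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * sigma))

# An angular error of 0.2 rad scored by each function
print(f_gf(0.2, a=0.0, sigma=0.5))        # single target value
print(f_gbf(0.2, a=0.3, c=0.0, sigma=2))  # tolerated range of values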
For the posture of the exercise performance at time t, each joint feature i, $J_i(t)$ (the angle/s of interest described in the previous section) in a patient's posture is compared and evaluated with respect to its corresponding matching posture in the template, $\hat{J}_i(t)$. We remark that this value is independent of signal shifting, since the patient and template signals have been aligned using DTW. The difference between these features, $J_i(t) - \hat{J}_i(t)$, is fed into either a GF or a GBF membership function, selected beforehand by the therapist during template creation, which returns the degree of similarity of the feature in the range [0, 1]. The therapist would select the GF function if a single feature value must match the posture joint, or the GBF if a range of values (angles) is accepted as valid for the maximum evaluation value. After that, the score of the posture at time t, $P_{score}(t)$, is obtained as the aggregation of these values, considering a relevance value $r_i$ of each feature joint for performing the exercise, according to Eq. 3, where N is the number of features considered in the problem. The final score of the exercise is then calculated as the average score of all postures in the patient performance; its range is the interval [0, 1], where 1 means a perfect match between the patient motion and the template.
$$ P_{score}(t) = \sum_{i=1}^{N} r_i \, f_i\!\left(J_i(t) - \hat{J}_i(t)\right), \quad \text{subject to: } \sum_{i=1}^{N} r_i = 1 \qquad (3) $$

$$ Score = \frac{1}{T} \sum_{t=1}^{T} P_{score}(t) $$
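A compact sketch of this scoring stage, assuming DTW-aligned sequences of feature vectors and using the Gaussian membership of Eq. 2 (the hat marks the template features, as in our notation above):

import numpy as np

def posture_score(patient, template, relevance, sigma=0.5):
    """P_score(t) of Eq. 3: weighted aggregation of the per-feature
    memberships of J_i(t) - Jhat_i(t); relevance weights sum to 1."""
    membership = np.exp(-((patient - template) ** 2) / sigma)
    return float(np.sum(relevance * membership))

def exercise_score(patient_seq, template_seq, relevance):
    """Final score: average of P_score(t) over the T aligned postures."""
    return float(np.mean([posture_score(p, q, relevance)
                          for p, q in zip(patient_seq, template_seq)]))

# Toy usage: 3 features over 4 aligned postures, equal relevance weights
patient = np.random.rand(4, 3)
template = patient + 0.05 * np.random.randn(4, 3)  # near-perfect performance
print(exercise_score(patient, template, np.full(3, 1 / 3)))  # close to 1.0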
4 Experiments
correctly every exercise. The data for the experimentation were obtained from a
sample of 10 healthy participants (5 men and 5 women) covering an age range
from 21 to 40 years old.
Participants were asked to perform correctly a set of repetitions of diagnosis exercises widely used in physical therapy, covering 3 motion and 2 posture holding exercises: Arms up, Left arm extension, Left leg up, Flamenco, and Cross arms (see Table 1).
Fig. 3. Snapshot of the exercise performance PReSenS view: A template move is shown
in the upper right corner (Flamenco exercise in the image), together with the current
participant performance evaluation (center right) and the remaining time for the exer-
cise. On the left, the current posture of the patient is shown with an avatar, together with a countdown to start the exercise execution.
4.2 Results
In this section we look for answers to the following questions: (1) How does data series summarization influence the evaluation of motion exercises in physical therapy? (2) How much does the choice of membership function affect the evaluation of an exercise performance, in terms of final results and scoring? The first question is key to achieving near real-time human activity detection and evaluation, since summarization techniques help reduce the complexity and computational time of the operations involved. The second question aims at finding a suitable model to evaluate a posture in a motion performance.
Our first interest is whether data signal summarization influences the score of an exercise performance. To that end, we applied Piecewise Aggregate Approximation (PAA) with rates 1, 2 and 3 to the signals before calculating the score, and performed a paired t-test at a 95 % confidence level for each activity using the Gauss scoring function, to determine whether the score distributions differ significantly.
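PAA itself is a simple block-averaging operation. The sketch below shows a PAA reduction and the paired t-test just described; the score arrays are placeholders, not the paper's measurements.

import numpy as np
from scipy import stats

def paa(signal, rate):
    """Piecewise Aggregate Approximation: average every `rate` consecutive
    samples, shrinking the series length by that factor."""
    n = (len(signal) // rate) * rate
    return signal[:n].reshape(-1, rate, *signal.shape[1:]).mean(axis=1)

# Paired t-test at the 95 % confidence level between per-participant scores
# obtained with PAA rate 1 (raw signals) and PAA rate 2
scores_rate1 = np.array([0.91, 0.88, 0.95, 0.90, 0.87])  # illustrative only
scores_rate2 = np.array([0.90, 0.89, 0.94, 0.91, 0.86])
t_stat, p_value = stats.ttest_rel(scores_rate1, scores_rate2)
print("distributions differ significantly:", p_value < 0.05)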
Fig. 4. Score calculation differences between the application of PAA with values 1 and
2 (left), and PAA with values 1 and 3 (right)
Table 1. Time (in seconds) required to obtain a score of an activity performance, per second of data, depending on the PAA summarization rate
PAA Arms up Left arm ext. Left leg up Flamenco Cross arms
1 0.1348 0.0813 0.0754 0.0004 0.0004
2 0.0353 0.0218 0.0236 0.0007 0.0007
3 0.0161 0.0099 0.0098 0.0005 0.0005
Fig. 5. Final score distribution for all diagnosis exercises and membership functions (Ga = Gauss function; GB = Generalized Bell function). [Figure: box plots of the final Score, ranging from 0.65 to 1.0, for Arms Up, L. Arm E., L. Leg Up, Flamen. and C. Arms under each membership function.]
Fig. 6. Matching between a participant performance (blue) and the template (red) for
the Arms up exercise. The picture shows the angle between the left elbow and the left
shoulder (in radians) for 10 repetitions (Color figure online).
5 Conclusions
In this work we have described PReSenS, a system able to acquire motion data of a user performing diagnosis exercises in physical therapy, using a depth sensor camera. We have proposed a method for the automatic evaluation of exercise performance, providing a simple evaluation value as a score. We have discussed how
References
1. Hein, A., Kirste, T.: Activity recognition for ambient assisted living: potential and
challenges. In: Proceedings of 1st German Ambient Assisted Living - AAL-Congress
(2008)
2. Fernandez-Baena, A., Susin, A., Lligadas, X.: Biomechanical validation of upper-
body and lower-body joint movements of kinect motion capture data for rehabilita-
tion treatments. In: Proceedings of Fourth International Conference on Intelligent
Networking and Collaborative Systems, pp. 656–661. IEEE Press, New York (2012)
3. Bonnechere, B., Jansen, B., Salvia, P., Bouzahouene, H., Omelina, L., Cornelis, J.,
Rooze, M., Van Sint Jan, S.: What are the current limits of the Kinect sensor?
In: Proceedings of 9th International Conference on Disability, Virtual Reality &
Associated Technologies, pp. 287–294 (2012)
4. Bonnechere, B., Sholukha, V., Moiseev, F., Rooze, M., Van Sint Jan, S.: From
Kinect to anatomically-correct motion modelling: preliminary results for human
application. In: Proceedings of the 3rd European Conference on Gaming and Play-
ful Interaction in Health Care, pp. 15–26. Springer Fachmedien Wiesbaden (2013)
5. Beam, J.W.: Rehabilitation including sport-specific functional progression for the
competitive athlete. J. Bodywork 6(4), 205–219 (2002)
6. Zheng, H., et al.: SMART rehabilitation: implementation of ICT platform to sup-
port home-based stroke rehabilitation. In: Stephanidis, C. (ed.) HCI 2007. LNCS,
vol. 4554, pp. 831–840. Springer, Heidelberg (2007)
7. Zheng, H., Davies, R., Hammerton, J., Mawson, S.J., Ware, P.M., Black, N.D.,
Eccleston, C., Hu, H., Stone, T., Mountain, G.A., Harris, N.: SMART project:
application of emerging information and communication technology to home-based
rehabilitation for stroke patients. Int. J. Disabil. Hum. Dev. 5, 271–276 (2006)
8. Zhou, H., Hu, H.: A survey - human movement tracking and stroke rehabilitation.
Technical report, University of Essex (2004)
Abstract. The current paper presents an approach for analyzing the Electronic
Health Records (EHRs) with the goal of automatically identifying morphologic
negation such that swapping the truth values of concepts introduced by negation
does not interfere with understanding the medical discourse. To identify mor-
phologic negation we propose the RoPreNex strategy that represents the adap-
tation of our PreNex approach to the Romanian language [1]. We evaluate our
proposed solution on the MTsamples [2] dataset. The results we obtained are
promising and ensure a reliable negation identification approach in medical
documents. We report precision of 92.62 % and recall of 93.60 % in case of the
morphologic negation identification for the source language and an overall
performance in the morphologic negation identification of 77.78 % precision
and 80.77 % recall in case of the target language.
1 Introduction
The evolution of technology acquaints us frequently with specialized gadgets that can
influence our everyday life. We gain access to devices that manage our eating and
exercise habits, monitor our heart rate and inform us of the calories we burn and
consume or translate our activity to statistical dimensions. The English language became
ubiquitous; we use English terms when referring to our computer components and the
actions we can carry out using them or when sharing our thoughts and feelings on the
social media. The devices nowadays have English imprints on them; when entering a
store we usually find the Open/Closed sign more often than the corresponding infor-
mation for the native language in each country. For the young generation these aspects
do not represent issues as a large percentage of the young population in every country is
familiar with the English language. Issues arise when these devices are used by elderly people who have not grown up with the English language and the rapid evolution of technology. Usually, the main topic of interest for this category of the population is the evolution of their health.
As technology evolves, the number of medical devices that we gain access to grows
as well; nowadays, we can easily send our health status by means of these devices to
our medical doctors, who automatically fill in our electronic health records with this new information. The problem is what to do when the persons who need to use these devices are not familiar with the language in which the instructions, or the information displayed on them, are presented.
The EHRs capture the medical history and current condition with detailed infor-
mation about symptoms, surgeries, medications, illnesses or allergies. They are an
important source of new information and knowledge if exploited correctly. From these
documents we can discover how diseases interact with each other, the influence of demographics on patients' conditions, and more. But in order to do this, the documents need to be clear, carry trustworthy information, and be unambiguous. In most cases EHRs are unstructured documents and may contain recurrent information. The problem we address in this paper is defining a strategy for identifying negation in EHRs, towards retrieving relations among medical concepts. We propose an approach for adapting our already established methodology for English [1] to the Romanian language. In both languages negation is of syntactic and morphologic types, and a correspondence between the negation concepts is easily noticeable.
The main contribution of the paper addresses the existing drawback of several
negation identification approaches that do not consider negation represented using
negation prefixes in both languages. We propose a strategy that includes negation
prefixes in identifying and dealing with the negative concepts in the EHRs. In order to
tackle morphologic negation, our strategy interprets the structure of the words and evaluates the existence of the words with and without prefix in the language, taking into account the definitions provided in dictionaries specific to the source and target languages, respectively.
The rest of the paper is organized as follows. In Sect. 2 we present similar systems dealing with EHRs and negation. In Sect. 3 the EHRs are briefly introduced along with the two most common negation types. Our solution is detailed in Sect. 4, where we describe the RoPreNex strategy, and the experiments we performed are presented in Sect. 5. The last two sections include the conclusion of our work and future enhancements for the approach we propose.
2 Related Work
In both Romanian and English, the task of identifying negation is mostly focused on negation expressed with specific words, like nu, fara, nici in Romanian and their English counterparts not, without, nor. Morphologic negation is disregarded, and there are even cases when it is misclassified. For example, the authors in [3] refer to the word not as a negation prefix, whereas they are dealing with syntactic negation according to Givon's negation classification [4]. They present a system that identifies the n-words that
represent negative quantifiers in order to determine the negative concord for the
Romanian language.
A cross-lingual approach for document summarization is proposed in [5]. The
authors evaluate how a cross language approach could help in spreading the news
around the world when dealing with ordinary, non-breaking news that is easily propagated among websites. The authors use Romanian as the source language and translate the summarized information into English, using a bidirectional English-Romanian translation tool. They evaluate the performance of their approach by asking judges a set of questions, first about the Romanian summaries and then about the translated summaries. An accuracy of 43 % is reported for giving correct answers based on the summarized documents. Most of the questions that could not be answered were due to the translated summaries not being clearly understood.
Negation in medical documents is a subject of interest in the medical domain, as in the diagnosis process the stated and the missing or denied symptoms are weighted differently. There are several approaches dealing with identifying and labeling negation, like NegEx [6], Negfinder [7], or the tool presented in [8], developed for the BioScope negation-annotated corpus. The main drawback of these tools is that they do not treat morphologic negation, hence leaving out several negated terms, especially when dealing with medical documents.
One reason for not considering morphologic negation is the few occurrences of such terms [9], or the view that prefixes do not determine negation [10]. In [7], negation is recognized only when negation terms (no, without, negative) negate subject or object concepts; the authors specify that concepts with negative connotations (like akinesia) are disregarded, and report these cases as miscellaneous errors. These approaches are valid when dealing with data that is not domain dependent, or when the negation algorithm is meant to find all concepts that can be determined by a single negative identifier. In the case of medical records (domain-dependent documents), such negations are prevalent, as negation prefixes are broadly used in medical language. In medical documents it is expected that negation is clearly formulated, as these documents should be clear and carry as few ambiguous terms as possible.
The analysis of morphologic negation is presented as a future enhancement for the
work of the authors in [9], where they predict a growth in the performance of identi-
fying the scope and focus of negation by removing the prefix and determining the
validity of the obtained word. They describe how negation is characterized in natural language and present an approach for automatically determining the scope and focus of negation. The frequency of negation-bearing words in the corpus they use leads them to consider as negation only the determiners not and n't. The scope of negation was identified with 66 % accuracy.
Negation in medical documents is approached by Averbuch et al. in [11], who report that including negation in information retrieval improves precision from 60 % to 100 %
with no significant changes in recall. They also state that the presence of a medical
concept in the record, like a symptom, does not always imply that the patient actually
suffers from that condition as the symptom can be negated.
Capturing word’s semantics and relationships with the help of a dictionary in order
to categorize a text is presented in [12]. The text is disambiguated and represented as
features using the concepts and hypernymy relations in WordNet. The authors compare
the results of text categorization when using a bag of words approach for document
representation and when using the WordNet information for selecting the features.
They evaluate the methods on two datasets and observe that the WordNet approach outperforms the bag-of-words approach in all test conditions.
Negation has application in sentiment analysis when the opinion (positive, negative
or neutral) is in question. In sentiment analysis the goal is to identify the polarity of
assertions that can be positive or negative [13]. Usually this is done using specific
words for the polarity categories [14]. The BioScope negation-annotated corpus is used for evaluation in order to extract the polarity of sentences using a Conditional Random Field approach and a dependency parser [9]. The authors report achieving a 75.5 % F1 score on the BioScope corpus, a medical corpus, and an 80 % F1 score when using a product reviews corpus.
3 Theoretical Background
This section attempts to set the background of the work presented in this paper by
introducing the main concepts we operate with: EHRs and the need to structure them,
the role of handling negation in EHR concept extraction and structuring, and the cross-language strategies.
3.2 Negation
When performing the anamnesis for a patient, medical doctors are interested in whether or not the patient suffers from different symptoms and, based on the patient's responses, a diagnosis is established. When querying a data source for patients
with similar conditions, it is important for the machine to distinguish between the
negated and affirmed symptoms. Negation can be expressed in different ways: a patient may state that he has fever, or that he is afebrile. It is thus important to treat all types of negation that occur in documents.
Givon classifies negation as syntactic negation in the case of explicit negation and
morphologic negation when using prefixes [4]. To express the symptomatology of a patient, the following three sentences can be used.
• The patient has no symptoms.
• The patient is asymptomatic.
• The patient doesn’t have symptoms.
As underlined in the three examples, negation can be expressed using explicit terms like no and n't, but it can also be expressed with the prefix a- (asymptomatic).
Syntactic Negation is introduced by specific negation terms commonly found in natural language, like no, without, deny, not, rule out in English, or fara, neaga, elimina in Romanian (the two lists of terms are not in correspondence). Unlike morphologic negation, which is associated with a single word, syntactic negation can determine several words, as in the case of an enumeration of symptoms: The patient presented without fever, neck pain or tiredness.
Morphologic Negation is expressed with specific negation prefixes placed in front
of the words to alter their meaning by swapping their truth value. They enrich the vocabulary by increasing the number of words. The prefixes can
also be used in learning new terms as presented in the study in [16]. The separation of a
word into prefix and root form helps in understanding the meaning of the words.
Table 1 captures the negation prefixes in the English and Romanian languages and their
correspondence.
EHRs. However, the approach is not limited to specific languages and/or tasks. Once an efficient text processing solution has been identified in a source language, the approach defines a way of adapting it to the target language.
A cross language strategy to perform sentiment analysis for identifying subjective
sentences is proposed in [17]. While translating queries from English to Indonesian the
authors in [18] show that using a collection of dictionaries rather than a single dic-
tionary significantly improves the results.
The English language covers a large share of everyday-life subjects, so we should try to benefit from this in any possible way. The concepts used especially in Computer Science, but also in other scientific and professional fields, tend to be English. The medical domain is no exception. Terms like bypass or follow-up have become familiar to every single one of us [19].
In our work so far, we have implemented several strategies for identifying negation in
medical documents (needed to further structure the EHRs). In [20], we employed a
vocabulary of terms and a binary bag of words feature vector, while in [1] we replaced
the vocabulary obtained with a dictionary of the English language. Our current work
proposes cross-language strategy that deals with identifying morphologic negation in a
target language based on an already established strategy for a source language.
1 http://wordnet.princeton.edu/.
2 http://dexonline.ro/.
The false positives introduced by the proposed solution are “infusion”, “absolute”,
“intensity” or “another”.
4.3 Dataset
In order to evaluate our approach we used a dataset of English EHRs provided by [2].
These are semi-structured documents that contain medical information about
hospitalized patients. They present the evolution of the patient from the point they were admitted to the hospital to the point of discharge. The documents capture the symptoms,
medical history, the procedures performed and the administered medication. There are
cases when the patient is required to return to the hospital for a follow-up examination,
in which case the conditions and details about the appointment are also established in the
document.
As our current goal is to analyze medical documents for the Romanian language,
we propose an adaptation of the morphologic negation identification proposed for
English. As we want to make sure that the documents we send for analysis are com-
pliant with the medical standards, we translated the documents to Romanian. Also, the
amount of available annotated EHRs for Romanian is not satisfactory for a reliable
analysis. In order to obtain the Romanian version of the EHRs, we used an online
translation tool to obtain the correspondence of the medical documents between the two
languages. There were cases where the translation tool employed could not translate all terms because the words were not found in the English dictionary or in the English-Romanian dictionary. This issue was encountered with the words nontender and nonfasting, which are domain-specific terms, in our case medical ones.
The proposed methodology of evaluating the negation identification for the
Romanian language follows the steps in Fig. 1. First, the corpus of documents used in
the English language is translated for a reliable comparison. Then we preprocess the
documents and apply the proposed negation identification rules. The last step employed
is the proposed adapted strategy for identifying negated concepts.
There are several changes that had to be applied to our PreNex algorithm. As the DexOnline dictionary also contains definitions for words that are not frequently used in the language (such as regionalisms or rural expressions), we must include an additional verification step so that these words do not interfere with our search. In the case of rural expressions, the words may be truncated or their first letters removed, in which case we might deal with a false root word.
The Romanian lemmatization tools are less efficient than the ones existing for English. Moreover, they must be accessed via a web service, which induces time overhead (and is a less reliable solution), so we propose a lemmatization process able to bring the words in the documents to their dictionary form. The approach works as follows. For each word in the documents that was selected as a possible negated concept, we remove its prefix and, before determining its truth value, we preprocess it. The approach we propose considers the termination of the words. Usually the difference between the words in the document and their dictionary form lies in an added termination marking an inflection (e.g. e is added for the plural of nouns, or em in the case of verb tenses). When a match is found between the preprocessed word from the document and a word in the dictionary, we send the preprocessed word to the negation identification rules.
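A minimal sketch of this termination-aware lookup follows; the suffix list merely illustrates the inflections mentioned above, and the dictionary is assumed to be a plain set of dictionary-form words.

INFLECTION_SUFFIXES = ["e", "em", "uri", "i", "le"]  # illustrative terminations

def to_dictionary_form(word, dictionary):
    """Return the dictionary form of `word` by stripping common
    inflectional terminations; None if no dictionary entry matches."""
    if word in dictionary:
        return word
    for suffix in INFLECTION_SUFFIXES:
        if word.endswith(suffix) and word[:-len(suffix)] in dictionary:
            return word[:-len(suffix)]
    return None

# Toy usage with a two-word "dictionary"
print(to_dictionary_form("simptome", {"simptom", "febra"}))  # -> "simptom"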
The RoPreNex strategy is presented in the algorithm below. For the morphologic negation identification task we proposed the following rules: Literal words, Definition content, and Undefined prefixed word.
Literal words. The DexOnline dictionary also contains definitions for words that are not used in common language, such as regionalisms or rural expressions. In these cases the words are shortened and their first letters removed, so they could falsely appear to carry prefixes. This rule is a preprocessing step applied to the words in the dictionary; the following rules are the actual negation identification rules.
Definition content. The definition content rule identifies negation based on the defi-
nition of the word. First, we identify the prefix, remove it from the word, and obtain the
root of the word. If the root and the prefixed word exist in DexOnline, we check
whether the word’s definition contains at least one negation identifier.
Undefined prefixed word. The undefined prefixed word rule is applied in the cases
when the prefixed word is not defined in the dictionary as it could represent a domain
specific term. In this case we remove the prefix and determine whether the root of the
word is defined in the dictionary.
Algorithm notations:
ω – the possible prefixed word, with a negation prefix
ω̃ – the root of ω
ρ – the prefix of ω: { anti, dez, des, de, ne, in, a, im, contra }
definition(ω) – the definition of a word
defined(ω) – the word is defined in the dictionary
literal(ω) – the word is in its literal form
The algorithm of determining whether ω is prefixed with a negation prefix works as
follows:
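The algorithm body is not reproduced here; the sketch below is our reading of the three rules using the notation above, with the dictionary interfaces (defined, definition, literal) and the list of negation identifiers passed in as assumed callables and data.

NEG_PREFIXES = ["anti", "dez", "des", "de", "ne", "in", "a", "im", "contra"]

def is_morphologic_negation(word, defined, definition, literal, neg_identifiers):
    """Sketch of the RoPreNex decision for one candidate word.
    defined(w) -> bool, definition(w) -> str, literal(w) -> bool
    mirror the notations introduced above."""
    for prefix in NEG_PREFIXES:
        if not word.startswith(prefix) or len(word) <= len(prefix):
            continue
        root = word[len(prefix):]
        # Literal words rule: skip truncated regionalisms that would only
        # falsely look like a prefix plus a valid root
        if defined(root) and literal(root):
            continue
        # Definition content rule: both forms defined -> the definition of
        # the prefixed word must contain at least one negation identifier
        if defined(word) and defined(root):
            if any(n in definition(word) for n in neg_identifiers):
                return True
        # Undefined prefixed word rule: a domain-specific term missing from
        # the dictionary counts as negated if its root is defined
        elif not defined(word) and defined(root):
            return True
    return False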
This section presents the experiments performed with the proposed morphologic
negation identification strategy for Romanian medical documents, and a comparison
with PreNex, our original solution for English [1]. We evaluate our approach on the
translated MTsamples dataset, presented in more detail in Sect. 4.3.
In our proposed strategy, most of the prefix-negated concepts are identified by the Definition content rule, as most of them are defined in both representations, as prefixed word and as root form (Table 3, line 2). A smaller percentage of the prefixed concepts are not defined in the dictionary, and for this case we had to introduce the second rule, Undefined prefixed word (Table 3, line 3), which covers 21.43 % of the correctly identified concepts.
Table 5. Performance of the negation identification strategies for Romanian and English.
Approach Precision (%) Recall (%)
Romanian approach (RoPreNex) 77.78 80.77
English approach (PreNex) 92.62 93.60
5.2 Discussion
The reported performance of our proposed approach is promising; the problems we encountered lie at the level of the translation tool employed, at the word level, and at the dictionary level.
Translation level issues. The translation tool we employed did not manage to perform a one-to-one translation. There were cases when we encountered English words like nontender or nonfasting in the translated document.
Word level issues. Other issues we came across relate to the fact that for Romanian we could not employ a lemmatizer to normalize the words into their dictionary form. Using the lemmatization approach we implemented, we managed to increase the recognition rate, but cases where the inflectional form of a word changes the structure of its root remain an issue to be addressed. We also found cases where words are shortened in the source language and the translation tool could not translate them into the target language, as with the word noncontrib.
Dictionary level issues. The DexOnline dictionary we used in our approach is populated with most of the words that exist in the Romanian language, together with the newest terms that have entered the language. But there still are cases when the dictionary fails to capture information about specialized terms like atraumatic.
The false positives introduced by our algorithm are usually words that carry a negation-specific prefix but are not actually negated, as in the case of the word informat, whose English correspondent is informed. Such a word matches all the rules we defined, since the word and its root are both defined in the dictionary, yet it does not represent a negated entity.
medical related concepts. Our current work consists of enhancing the identification performance of morphologic negation for the Romanian language.
We consider improving the performance of this strategy by first preprocessing the documents with a spell checker for each language, or with a distance-measure algorithm that could correct misspelled words. Another task we consider is the treatment of abbreviations and word shortenings, as medical terms can appear as codes, mostly in the case of diseases.
Acknowledgments. The authors would like to acknowledge the contribution of the COST
Action IC1303 - AAPELE.
References
1. Barbantan, I., Potolea, R.: Exploiting Word Meaning for Negation Identification in
Electronic Health Records, IEEE AQTR (2014)
2. MTsamples: Transcribed Medical Transcription Sample Reports and Examples. Last accessed 23.10.2012
3. Iordachioaia, G., Richter, F.: Negative concord in Romanian as polyadic quantification. In:
Muller, S. (ed.) Proceedings of the 16th International Conference on Head-Driven Phrase
Structure Grammar Georg-August-Universitat Gottingen, pp. 150–170. CSLI Publications,
Germany (2009)
4. Givon, T.: English Grammar: A Function-Based Introduction. Benjamins, Amsterdam, NL
(1993)
5. Orasan, C., Chiorean, O.A.: Evaluation of a cross-lingual Romanian-English multi-document summariser. In: Proceedings of the LREC 2008 Conference, Marrakech, Morocco
(2008)
6. Chapman, W., Bridewell, W., Hanbury, P., Cooper, G.F., Buchanan, B.G.: A simple
algorithm for identifying negated findings and diseases in discharge summaries. J. Biomed.
Inform. 34(5), 301–310 (2001)
7. Mutalik, P.G., Deshpande, A., Nadkarni, P.M.: Use of general-purpose negation detection to
augment concept indexing of medical documents: a quantitative study using the UMLS.
J. Am. Med. Inform. Assoc. 8(6), 80–91 (2001)
8. Vincze, V., Szarvas, G., Farkas, R., Móra, R., Csirik, J.: The BioScope corpus: biomedical
texts annotated for uncertainty, negation and their scopes. In: Natural Language Processing
in Biomedicine (BioNLP) ACL Workshop Columbus, OH, USA (2008)
9. Blanco, E., Moldovan, D.: Some issues on detecting negation from text. In: Proceedings of
the Twenty-Fourth International Florida Artificial Intelligence Research Society Conference
(2011)
10. Councill, I.G., McDonald, R., Velikovich, L.: What’s great and what’s not: learning to
classify the scope of negation for improved sentiment analysis. In: Proceedings of the
Workshop on Negation and Speculation in Natural Language Processing, Uppsala, pp. 51–59
(2010)
11. Averbuch, M., Karson, T.H., Ben-Ami, B., Maimond, O., Rokachd, L.: Context-sensitive
medical information retrieval, In: Proceedings of AMACL, pp. 282–286 (2003)
12. Elberrichi, Z., Rahmoun, A., Bentaalah, M.A.: Using WordNet for text categorization. Int.
Arab. J. Inf. Technol. 5(1), 16–24 (2008)
13. Turney, P.D.: Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, pp. 417–424 (2002)
14. Rokach, L., Romano, R., Maimon, O.: Negation recognition in medical narrative reports.
Inf. Retrieval 11(6), 499–538 (2008)
15. Fischetti, L., Mon, D., Ritter, J., Rowlands, D.: Electronic Health Record – system
functional model, Chapter Three: direct care functions (2007)
16. Stahl, S.A., Shiel, T.G.: Teaching meaning vocabulary: productive approaches for poor
readers. Read. Writ. Q. Overcoming Learn. Difficulties 8(2), 223–241 (1992)
17. Riloff, E., Wiebe, J.: Learning extraction patterns for subjective expressions. In: Proceedings
of the Conference on Empirical Methods in Natural Language Processing (EMNLP-03)
(2003)
18. Hayuran, H., Sari, S., Adriani, M.: Query and document translation for English-Indonesian
cross language IR. In: Peters, C., et al. (eds.) Evaluation of Multilingual and Multi-modal
Information Retrieval. Lecture Notes in Computer Science, vol. 4730, pp. 57–61. Springer-
Verlag, Heidelberg (2007)
19. Frinculescu, I.C.: An overview of the english influence on the Romanian medical language.
Sci. Bull. Politehnica Univ. Timişoara Trans. Mod. Lang. 8, 1–2 (2009)
20. Barbantan, I., Potolea, R.: Towards knowledge extraction from electronic health records -
automatic negation identification. In: International Conference on Advancements of Medicine and Health Care through Technology, Cluj-Napoca, Romania (2014)
Energy Consumption Optimization Using
Social Interaction in the Mobile Cloud
Abstract. This paper addresses the issue of resource offloading for energy
usage optimization in the cloud, using the centrality principle of social networks.
Mobile users take advantage of the mobile opportunistic cloud, in order to
increase their reliability in service provision by guaranteeing sufficient resources
for the execution of mobile applications. This work elaborates on the
improvement of the energy consumption for each mobile device, by using a
social collaboration model that allows for a cooperative partial process off-
loading scheme. The proposed scheme uses social centrality as the underlying
mobility and connectivity model for process offloading within the connected
devices to maximize the energy usage efficiency, node availability and process
execution reliability. Furthermore, this work considers the impact of mobility on
the social-oriented offloading, by allowing partitionable resources to be executed
according to the social interactions and the associated mobility of each user
during the offloading process. The proposed framework is thoroughly evaluated
through event-driven simulations, towards establishing the validity and efficiency of the proposed offloading policy in conjunction with the energy consumption of the wireless devices.
1 Introduction
of applications and services [1]. Users are connecting to social networks by using small
mobile devices, such as smart phones and tablets that are able to form opportunistic
networks. Such networks form a potential infrastructure for increased resource avail-
ability to all users in the network, especially to those that face reduced resource
availability (e.g. energy, memory, processing resources etc.). Opportunistic wireless
networks exhibit unique properties, as they depend on users’ behavior and movement,
as well as on users’ local concentration. Predicting and modeling their behavior is a
difficult task but the association of the social interconnectivity factor may prove part of
the solution, by successfully tapping into the resources they are offering. Resource
sharing in the wireless and mobile environment is even more demanding, as applications require resource sharing to happen in a seamless manner, unobtrusive to the user, with minimal delays, in an unstructured and ad hoc changing system, and without affecting the user's Quality of Experience (QoE) [2]. This forms a highly ambitious objective: on the one hand, wireless environments cannot reliably commit to sharing resources for establishing reliable communication among users, since there is no way of guaranteeing resource allocation; on the other hand, even if that were overcome, the limited capabilities of the devices exacerbate the problem further. The mobility factor imposes additional constraints, as the constantly changing network topology produces fluctuations in bandwidth usage and resource availability. Dependency on device capabilities restricts solutions to particular devices, limiting their general applicability.
context and by considering all the above-mentioned issues, this work uses social
interactivity as a method for modeling and achieving resource sharing in the wireless
mobile environment.
As social platforms are used by a staggering majority of 87 % of mobile users for
communication and message exchange, they form an underlying web interconnecting
mobile users and possibly enabling reliable resource sharing [3]. Using social con-
nectivity and interactivity patterns, we should be able to provide adaptability to device
capabilities and operating environment, enabling devices to adapt to frequent changes
in location and context. One of the scarcest resources in the wireless mobile environment is energy. Battery-stored energy is the only source for mobile device operation and, as new and more power-demanding applications are created every day, energy usage optimization forms a challenging field, approached by both hardware and software solutions.
This work proposes a model of energy usage optimization for mobile devices in an
opportunistic wireless environment, using the social interaction model. The social
interaction model is based on the social centrality principle. With the social centrality
principle users are able to share resources when a shared contact threshold is satisfied.
Energy intense processing and other actions are disseminated using the proposed model
enabling nodes running low on energy resources to extend or alleviate their energy
demands and thus extend their life and availability. In the proposed model, the centrality principle and the “ageing” timing rule are applied in order to produce a more efficient use of the available energy. Thus, opportunistic energy conservation takes
place enabling efficient management of the energy available to other wireless peer
users, and guaranteeing end-to-end availability for the longest time possible, in a
wireless mobile environment.
This introduction of the social interaction model for achieving optimum resource
usage forms the key innovation of the proposed framework. The framework evaluates
the energy state of each node according to its type, energy demands and usage, and combines this with its social centrality, determining whether the node is to receive energy from or provide energy to the network. Through the proposed framework, the ability to adaptively perform tasks for another node increases, depending on the node's current energy
state, as well as on its “friendship” degree. Furthermore, the proposed framework
strengthens or relaxes the energy usage and the task allocation scheme, according to the
social contacts and the user’s interaction parameters. In Sect. 2, we describe the related
work, while Sect. 3 elaborates on presenting the proposed social-enabled mechanism
for opportunistic and socially oriented energy sharing and process off-loading.
Section 4 presents the performance evaluation of the proposed scheme through the
experimental evaluation and Sect. 5 concludes this paper, by proposing future potential
directions for further research.
2 Related Work
Social networking started as an online tool for forming connections and information
sharing. Its appeal and huge popularity primarily came from the fact that social activity was enhanced in the online environment with the use of multimedia, giving users instant access to information. Another aspect of the online environment was the ability of social network users to share their location with others, instantly advertising their present coordinates, either using programs such as FourSquare or through automatic tracking exploiting the mobile devices' GPS capabilities. The use
of user mobility in opportunistic networks will help to realize the next generation of
applications based on the adaptive behavior of the devices for resource exchange. The
problem of energy usage optimization, which treats energy as a finite resource that needs to be shared among users, providing the most processing power whilst maintaining group connectivity, will greatly benefit from the social centrality model. Opportunistic networks will also benefit from the capability of the mobile devices to gather
information from any hosted application, in order to better utilize network resources.
The task allocation and load balancing can be strictly or voluntarily associated with the
social communication. Works such as [4] propose architectures that rely on the local information derived by the devices and their local views to optimize load balancing and energy management, and even offer some self-managing properties, like self-organization. In [4] resource manipulation optimization is offered; however, this occurs without considering social parameters such as friendship, contact rate, or temporal parameters (i.e. users' location).
The contribution of this work is to combine the energy management scheme with
the proposed social parameters and model for each node, in order to optimize the
energy management and load sharing process. In the game-theoretic approach [5], the energy usage optimization problem is translated into a contention game, where the nodes compete to access the energy resources, reaching the Nash equilibrium; an approach that improves on the random and individualized one. In [5] the proposed system supports fine-grained offload to maximize energy savings with minimal
burden on the programmer. The model decides at runtime which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constraints. In [6] offloading is viewed as potentially energy saving, but the overheads of privacy, security and reliability need to be accounted for as well. The integration of social connectivity into the process is an unexplored area. Social connectivity takes into consideration users' associations, location profiles and social interactions as a basis for creating an index of users' resources over time for subsequent resource offloading.
In this work, a social-oriented methodology is used for minimizing energy con-
sumption for highly demanding applications with high memory/processing require-
ments. The social-oriented model, with the associated friendships as the basis for social mobility, utilizes the introduced social centrality for selecting and offloading energy-hungry partitionable tasks (parts of executable applications and processes) under the availability optimization objective. In addition, this work considers the motion coefficients of each user (as a normalized [0..1] parameter) and encompasses these characteristics in the proposed energy utilization scheme, enabling maximum temporal node availability without reducing the processing capabilities of the system as a whole. The proposed scheme uses both the pre-scheduled opportunistic offloading [7] and the social interactions that take place among the collaborative users, with their associated strength of friendship. The scheme improves the prediction of user mobility under the end-to-end availability objective. In order to assess the effectiveness of the proposed scheme, exhaustive simulations take place, considering the energy offered by the social-collaborative network within the mobility context. These lead to thorough measurements of the energy consumption optimization for mobile nodes/users.
Wireless mobile networks allow unrestricted access to mobile users under a changing topology, whose implications cannot be fully determined over time since the topology changes dynamically. In our work, the mobility model is based on probabilistic Fractional Brownian Motion (FBM), where nodal motion follows certain probabilities in accordance with location and time. Assume that we need to support a mobile node that is low on energy reserves and requires an energy-heavy application to run. This implies that in a non-static, multi-hop environment, there is a need to model the motion of the participating nodes in the end-to-end path such that the requesting nodes can move through the network and conserve their resources. We
also assume a clustered-mobility configuration scenario presented in [2], where each
node has its own likelihood for the motion it follows. To predict whether a node will
remain within the cluster, we aggregate these probabilities. This also shows the
probabilities for the other nodes remaining in the cluster. The mobility scenario used in
this work is modelled and hosted in a scheme that enables the utilization of social
feedback into the model. Unlike the predetermined relay path in [7] and the known
location/region, the mobility scenario used in this work is a memoryless FBM [8],
with no stationary correlation among users’ movements. FBM can be derived
probabilistically from the random walk mobility model and can be expressed as a
stochastic process that models the random continuous motion. The mobile node moves
from its current location with a randomly selected speed, in a randomly selected
direction in real time as users interact. In real life, however, the mobility that users exhibit can be expressed as an ordinary walk, where users spot environmental stimuli and are attracted to them; their decisions may be relayed through their respective social communication. In the proposed scenario, the walking speed and direction are set for the mobile users and are both chosen from predefined ranges, $[v_{min}, v_{max}]$ and $[0, 2\pi)$, respectively [9]. The new speed and direction are maintained for an arbitrary length of time randomly chosen from $(0, t_{max}]$. The node makes a memoryless decision for a new speed and direction when the chosen time period elapses. The movements can be described as a Fractional Random Walk on a weighted graph [1], with the total likelihood $P^L_{i,j}$ in $L_n$.
We model the movement of each device using a graph-theoretical model, in which a device can move randomly over a topological graph G = (V, E) that comprises a pair of sets: the vertices V (or V(G)) and the edges E (or E(G)), where the edges join pairs of vertices. This walk considers a connected graph with n nodes labeled {1, 2, 3, …, n} in a cluster $L_n$, with weight $w_{ij} \geq 0$ on the edge (i, j). If edge (i, j) does not exist, we set $w_{ij} = 0$. We assume that the graph is undirected, so that $w_{ij} = w_{ji}$. A node walks from one location to another in the graph in the following random-walk manner. Given a node i, the next location j is chosen from among the neighbors of i with probability:
$$ p^L_{ij} = \frac{w_{ij}}{\sum_k w_{ik}} \qquad (1) $$
where $p_{ij}$ in (1) above is proportional to the weight of the edge (i, j). The sum of the weights of all edges in the cluster L is

$$ w^L = \sum_{i,j\,:\,j>i} w_{ij} \qquad (2) $$

and the probability of residing at node i is

$$ p^L_i = \frac{w^L_i}{2w} \qquad (3) $$
where it can be seen that the preceding distribution satisfies the relationship $pP = p$ when a movement is performed by a node/device i to location j (the stationary distribution of the Markov chain, as each movement of a user usually follows a selected predetermined path, e.g. a corridor):

$$ \sum_i p_i P_{ij} = \sum_i \frac{w_i}{2w}\,\frac{w_{ij}}{w_i} = \frac{1}{2w}\sum_i w_{ij} = \frac{w_j}{2w} = p_j \qquad (4) $$
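A few lines of numpy confirm this stationarity property on an arbitrary weighted graph; the weights below are illustrative.

import numpy as np

W = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])           # symmetric edge weights w_ij

P = W / W.sum(axis=1, keepdims=True)   # Eq. (1): p_ij = w_ij / sum_k w_ik
p = W.sum(axis=1) / W.sum()            # Eq. (3): p_i = w_i / 2w
                                       # (W.sum() counts every edge twice)
print(np.allclose(p @ P, p))           # Eq. (4): pP = p holds -> True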
express the track of requests as a function of the location (i.e. movements and updates $p^L_{ij}$) as $R_i(I_{ij}, p^L_{ij})$, where $R_i$ is the request from node i and $I_{ij}$ is the interaction coefficient measured in Eq. 2. We represent the interactions by utilizing the notation of weighted graphs (Eq. 1).
Different types of links or involvements are expressed in different ways in social
connectivity modeling. Consequently, several types of centralities are defined in the
directed or undirected graphs [1]. Users may or may not have a certain type of association with any other user in the global network, and this is modelled with the concept of the social network. Nodes carry weights that represent the degree of associativity with
other nodes. These weights are associated with each edge linking two nodes and are
used to model the interaction strength between nodes [10]. This models the degree of
friendship that each node has with the other nodes in the network. The weights are
assigned and used to measure the degree of the strength of the association of the
connecting parts. Consequently the degree of social interaction between two devices
can be expressed as a value in the range of [0, 1]. A degree of 0 signifies that the two
nodes/devices are not socially connected and therefore no social interaction exists
between them. As social interaction increases, so does the weight, with 1 indicating very strong social interaction. The strength of the social interaction and the energy state
of each node will form the basis for offloading processes to other nodes in the network.
In this work, we propose such a model for efficient energy management prolonging
node lifetime based on the social association scheme.
We propose that the strength of social interaction will also affect the offloading
process, which as the next sections show will affect the energy conservation mechanism.
The social interaction can be represented by a 5 × 5 symmetric matrix (Eq. 5; its size is based on the social population in the network), where the node names correspond to both rows and columns and the entries are based on interaction and connectivity. This matrix forms the Interaction Matrix, which represents the social relationships between nodes. The generic element (i, j) represents the interaction between the two individual elements i and j; the diagonal elements represent the relationship an individual has with itself and are set to 1. In (5), $I_{ij}$ represents all the links associated with a weight, before applying the threshold values that indicate the stronger associations between two nodes.
$$ I_{ij} = \begin{bmatrix} 1 & 0.766 & 0.113 & 0.827 & 0 \\ 0.132 & 1 & 0.199 & 1 & 0.321 \\ 0 & 0.231 & 1 & 0.542 & 0.635 \\ 0.213 & 0 & 0 & 1 & 0.854 \\ 0 & 0 & 0.925 & 0.092 & 1 \end{bmatrix} \qquad (5) $$
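For illustration, the thresholding of this matrix takes only a couple of lines; the threshold value is our assumption, not taken from the paper.

import numpy as np

I = np.array([[1,     0.766, 0.113, 0.827, 0    ],
              [0.132, 1,     0.199, 1,     0.321],
              [0,     0.231, 1,     0.542, 0.635],
              [0.213, 0,     0,     1,     0.854],
              [0,     0,     0.925, 0.092, 1    ]])

THRESHOLD = 0.5                       # illustrative friendship threshold
strong = np.argwhere(np.triu(I, k=1) >= THRESHOLD)
print(strong)                         # node pairs with a strong association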
$$ f^{d}_{i \to j} = \mathrm{norm}\left[c(t) \cdot P(t)\right]_{0..1} \quad \forall i, j \qquad (6) $$

where $f^{d}_{i \to j}$ is defined as the direct friendship evaluation from node i to node j, and $P(k_t)$ is the probability P(k) of a node being connected to k other nodes at time t. In the network it decays as a power law, given by $P(k) = k^{-\gamma}$, where the value of the exponent is estimated as $2 < \gamma < 3$, as explored in various real networks [11]. This results in a large number of nodes having a small node degree and therefore very few neighbors, but a very small number of nodes having a large node degree and therefore becoming hubs in the system. c(t) consists of the duration of the communication among
“friends”, and is determined as a function of the communication frequency and the
number of roundtrip “friendships”. The roundtrip “friendships” are determined by
the “hop-friendships” of the node i to a node k, as Fig. 1 presents. These are the
“friends-of-friends”, where, according to node i, any “friend-of-friend” can reach the node i again on a roundtrip basis.
Fig. 1. Roundtrip “friendship” of a node i via other peers, and the “reach-back” notation to the
node via the intermediate peers.
Then, the c(t) of any of the “friendship” peers can be evaluated as $c(t) = \frac{1}{N} f^{d}_{i \to j}$, where N is the number of peers away from i for reaching a friendship within $f^{d}_{i \to k}$ for a specified time slot t. Each element in $I_{ij}$ is re-estimated and varies through time according to the enhancement of the relation between the individuals, as follows:

$$ I_{ij} = \frac{I_{ij} + \Delta I_{ij}}{1 + \Delta I_{ij}} \qquad (7) $$
where $I_{ij}$ is the association between two individuals, which is strengthened or weakened (if less than $\sigma I_{ij} = I_{ij(s)} - I_{ij(s-1)}$), and $\sigma I$ represents the difference from the previous $I_{ij}$ association between i and j. As associations and friendships vary over time, resulting in the strengthening or weakening of different links, we incorporate this element by adding a time-varying parameter, enabling an association to fade if two individuals are not in contact for a prolonged time period. This is expressed using the following equation:

$$ \Delta I_{ij} = \frac{a}{t_{age}} + b, \quad \forall\, t_{age} < T_{RL} \qquad (8) $$
where $t_{age}$ is the time that has passed since the last contact, measured until the individuals abandon the clustered plane L. The empirical constants a and b are chosen by the network designer [12], with typical values of 0.08 and 0.005, respectively.
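A direct transcription of this update, using the typical constants just mentioned, might read:

def update_interaction(i_ij, t_age, a=0.08, b=0.005):
    """Eqs. (7)-(8): re-estimate an association I_ij; the increment shrinks
    as t_age (time since last contact) grows, so stale links barely move."""
    delta = a / t_age + b                # Eq. (8), valid while t_age < T_RL
    return (i_ij + delta) / (1 + delta)  # Eq. (7), keeps the value in [0, 1]

print(update_interaction(0.4, t_age=2.0))   # recent contact: clear reinforcement
print(update_interaction(0.4, t_age=50.0))  # old contact: almost unchanged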
The proposed model encompasses the impact of mobility on the interaction elements $I_{ij}$ through a derived matrix consisting of the elements $w^L_{ij}$ and $I_{ij}$, where the element $w_{ij}$, derived from the $p^L_{ij}$ matrix of the plane area L, is the likelihood of an individual moving from i in a certain direction towards j, as Fig. 1 shows.
$$ b_{a_i} = \frac{\sum_{j=1} P_{a_j \to a_k}}{\sum_{k=1} P_{ij}}, \quad \forall\, P \in a_i \qquad (11) $$
with $P_{a_j \to a_k}$ representing the number of paths in the cluster via which the requested memory/capacity resources can be served between nodes $a_j$ and $a_k$, and $P_{ij}$ representing the number of paths in the social cluster that include $a_i$, $\forall P \in a_i$. Based on the latter, we introduce the social-oriented stability parameter $\sigma_C(t)$ for a specified time t, as:

$$ \sigma_C(t) = \frac{R_{ij|t}\,\big(1 - \mathrm{norm}(b_{a_i})\big)\, N_{C(i \to j|t)}}{\inf(C_r)\; R_{C(t)}}\; m_{ij}(t) \qquad (12) $$
where $R_{ij}$ is the normalized communication ping delay between nodes i and j at time t, $b_{a_i}$ is the normalized [0..1] social betweenness centrality indicating a strong ability to interact with other nodes in the cluster L, $N_{C(i \to j|t)}$ is the successfully offloaded capacity/memory units over the total allowed capacity, $C_r$ is the multi-hop channel's available capacity, $m_{ij}(t)$ is the interaction measure derived from Eq. 8 at the time interval t, and $R_{C(t)}$ is the end-to-end delay along the cluster's pathway. The social-oriented stability parameter $r_C(t)$ indicates the capability of node i to offload a certain process according to the ranked criteria of each process in L for time t.
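The sketch below combines the path-based betweenness of Eq. (11) with the stability parameter of Eq. (12). All variable names, the assumption that b_ai is already normalized, and the use of min() as the infimum over a finite set of observed capacities are illustrative choices, not the paper's implementation.

```python
def betweenness(paths_via_node: int, paths_total: int) -> float:
    """Eq. (11) sketch: share of cluster paths that include node a_i."""
    return (paths_via_node - 1) / (paths_total - 1) if paths_total > 1 else 0.0

def stability(r_ij: float, b_ai: float, offloaded_ratio: float,
              channel_caps: list, m_ij: float, rc_delay: float) -> float:
    """Eq. (12) sketch: r_C = R_ij * (1 - norm(b_ai)) * N_C / (inf(C_r) * R_C) * m_ij."""
    inf_cr = min(channel_caps)  # infimum over the finite set of observed capacities
    return (r_ij * (1.0 - b_ai) * offloaded_ratio) / (inf_cr * rc_delay) * m_ij

b = betweenness(paths_via_node=6, paths_total=11)  # 0.5
print(stability(r_ij=0.8, b_ai=b, offloaded_ratio=0.7,
                channel_caps=[2.0, 1.5, 3.0], m_ij=0.9, rc_delay=0.4))  # ~0.42
```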
The relative execution energy at a candidate server-device $a_j$ is then evaluated as:

$$ E_{r(a_j)} = \frac{C}{S_{a_j}} \, EC_{a_j} \qquad (13) $$
where C is the parameter indicating the number of instructions that can be processed within $T_t$, $S_{a_j}$ represents the processing time at the server-device, and $EC_{a_j}$ represents the relative energy consumption, which is expressed as:
$$ EC(r_i) = \frac{Cost_C(r_i)}{S_C(r_i)} \, W_C \qquad (14) $$
where $S_C$ is the server's instruction-processing speed for the computation resources, $Cost_C$ is the instruction-processing cost for the computation resources, and $W_C$ signifies the energy consumption of the device in mW.
Each mobile device should satisfy an energy threshold level and a specified centrality degree in the system in order to proceed with process-execution offloading. Using N devices within a 2-hop vicinity (evaluated based on measurements of the maximum signal strength and the data-rate model [14]), the following should be satisfied:
$$ W_C \, \frac{Cost_C(r_i)}{S_C(r_i)} \bigg|_{r_i} > W_C \, \frac{Cost_C(r_i)}{S_C(r_i)} \bigg|_{1,2..N} \qquad (15) $$
$$ W_{r_i} > W_C \quad \forall f^{d}_{i \to j} \ \text{devices} \qquad (16) $$
The energy consumption of each device should satisfy (15)-(16) for each of the resources (executable processes) running on the device $MN_{m1}$ hosting the $r_i$ resource. The $r_1, r_2, r_3, \dots, r_i$ parameters represent the resources that can be offloaded to run on another device based on the resources' availability, as in [15]. In this respect, the $r_i$ with the maximum energy consumption is run in a partitionable manner to minimize the energy consumed by other peer devices. These actions are shown in the steps of the proposed algorithm in Table 1.
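The decision rule of Eqs. (14)-(16) can be sketched as follows: a device offloads a resource only to a peer whose relative energy cost is lower. The class layout, units, and the peer-selection loop are illustrative assumptions, not the Table 1 algorithm itself.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    w_c: float    # W_C: device energy consumption in mW
    speed: float  # S_C: instruction-processing speed
    cost: float   # Cost_C: instruction-processing cost

def energy(dev: Device) -> float:
    """Eq. (14) sketch: EC = (Cost_C / S_C) * W_C."""
    return dev.cost / dev.speed * dev.w_c

def choose_offload_target(local: Device, peers: list) -> Device | None:
    """Eqs. (15)-(16) sketch: offload only if both thresholds hold for some peer."""
    cheaper = [p for p in peers if energy(p) < energy(local)  # Eq. (15)
               and local.w_c > p.w_c]                         # Eq. (16)
    return min(cheaper, key=energy) if cheaper else None

local = Device("MN_m1", w_c=900.0, speed=1.0e6, cost=5.0e5)
peers = [Device("peer1", 400.0, 2.0e6, 5.0e5), Device("peer2", 800.0, 1.2e6, 5.0e5)]
target = choose_offload_target(local, peers)
print(target.name if target else "execute locally")  # -> peer1
```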
Resource allocation takes place so as to meet the performance requirements, as in [2] and [15]. A significant measure in the system is the availability of memory and processing power of the mobile cloud devices, as well as of the server-based terminals. The processing-power metric is designed to measure the processing losses for the terminals to which the $r_i$ will be offloaded, as in (17), where $a_j$ is an application, $T_{k_j}$ is the number of terminals forming the cloud (mobile and static) rack that host application $a_j$, and $T_{a_j}(r)$ is the number of mobile terminals hosting a process of the application across all different cloud terminals (racks).
$$ C_{a_j} = \frac{T_{k_j}}{\sum_{k} T_{a_j}(r)} \quad \forall \min\big(E_c(r_i)\big) \in f^{d}_{i \to j} \qquad (17) $$
Equation 17 shows that if there is minimal loss in capacity utilization, i.e., $C_{a_j} \simeq 1$, then the sequence of racks $T_{a_j}(r)$ is optimally utilized. The latter is shown through the simulation experiments conducted in the next section. The dynamic resource-migration algorithm is shown in Table 1, with the primary steps for obtaining efficient execution of a partitionable resource that cannot be handled by the mobile device in reference, so that the offloading policy ensures execution continuation onto either $MN_{m1}$ neighbouring nodes or so-called server nodes (as in [15]), based on the delay and temporal criteria of the collaborating nodes.
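As a rough illustration of Eq. (17), the ratio below approaches 1 when the racks hosting the application are utilized without loss; the variable names are assumptions made for this sketch.

```python
def capacity_utilization(t_kj: int, t_aj_per_rack: list) -> float:
    """Eq. (17) sketch: C_aj = T_kj / sum_k T_aj(r)."""
    return t_kj / sum(t_aj_per_rack)

# 12 terminals form the rack for application a_j; 12 processes hosted across racks
print(capacity_utilization(12, [5, 4, 3]))  # 1.0 -> racks optimally utilized
```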
Performance-evaluation results encompass comparisons with other existing schemes for the offered reliability degree, in contrast to the energy-conservation efficiency. The mobility model used in this work is based on the probabilistic Fractional Brownian Motion (FBM) adopted in [15], where nodes move according to certain probabilities, locations, and times. The simulated scenario uses 80 nodes that are randomly initialized with a social parameter; through the transient state during simulation, the system estimates the social betweenness centrality with regard to the ability to interact with other nodes in L and to successfully offload memory- or processing-intense processes for partial execution on socially collaborating peers, based on the criteria depicted in the Table 1 pseudocode.

Fig. 2. (a) and (b). Friendship degree with the completed requested offloads, and the CCDF of the degree of friendship.
The "friendship" degree versus the completed requested offloads is shown in Fig. 2(a) for three different schemes. It is important to point out that, by using the social interactions, the number of completed offloading processes is greater, outperforming the applied scheme with no social interactions at all. Fig. 2(b) shows the Complementary Cumulative Distribution Function (CCDF, or tail distribution) of the degree of "friendship" for the respective values of the ageing factor (Eq. 8).
The proposed social-enabled scheme allows partitionable resources to be offloaded to "friendship" peers, where the degree of "friendship" among peers plays a catalytic role in offloading executable resources with respect to the location of each user. These measures were extracted for a social centrality parameter > 0.6. In addition, when resources are offloaded, a critical parameter is the execution time while nodes move from one location to another.

Fig. 3. (a)-(d). Comparative evaluations and results obtained for social offloading: (a) execution time through simulation; (b) successful delivery rate with the end-to-end resource-offloading capacity based on the "friendship" model; (c) average node-lifetime extension with the number of mobile devices for three different schemes in the evaluated area (evaluated for the most energy-draining resources); and (d) energy consumption (EC) with the number of mobile users participating in an interactive game.
Figure 3(a) shows the execution time during simulation for mobile nodes with different mobility patterns, evaluated for GSM/GPRS, Wi-Fi/WLAN, and for communication from one Wi-Fi/WLAN to another, remotely hosted, Wi-Fi/WLAN. The latter scenario exhibits a significant reduction in execution-time duration, hosting the minimum execution time under the FBM with distance-broadcast mobility pattern. Figure 3(b) shows the Successful Delivery Rate (SDR) with the end-to-end resource-offloading capacity based on the "friendship" model, whereas Fig. 3(c) shows that the proposed scheme significantly extends the average node lifetime as the number of mobile devices increases.
As interactive game playing requires GPU/CPU-level resources, the lifetime is an important metric for evaluating the overall performance of the scheme and its impact on node lifetime. Measurements in Fig. 3(c) were extracted for a total of 150 mobile terminals configured to host interactive gaming applications using Wi-Fi/WLAN access technology. The proposed scheme outperforms the other compared schemes by significantly extending the lifetime of each node. This is a result of the offloading procedure, incorporated into a social centrality framework, that takes place on each node and evaluates the energy consumption of each device according to Eqs. 15-17 for the cost associated with each of the executable processes. It is also worth mentioning that the proposed scheme outperforms the scheme in [15] by 11-48 %, extending the lifetime of the mobile devices by a maximum of 48 % when the number of devices reaches 150. The Energy Consumption (EC) with the number of mobile users participating in an interactive game (demanding in GPU/CPU processing) is shown in Fig. 3(d). During the interactive game-playing process, the processing requirements of each device increase dramatically. Figure 3(d) presents the evaluation of the energy consumed (EC) for three schemes, including a non-cloud-oriented method, for 150 mobile terminals. The proposed scheme outperforms the other compared schemes, keeping the associated EC at relatively low levels.
5 Conclusions
Acknowledgment. The work presented in this paper is co-funded by the European Union,
Eurostars Programme, under the project 8111, DELTA “Network-Aware Delivery Clouds for
User Centric Media Events”.
The research is partially supported by COST Action IC1303 Algorithms, Architectures and
Platforms for Enhanced Living Environments (AAPELE).
References
1. Hu, F., Mostashari, A., Xie, J.: Socio-Technical Networks: Science and Engineering Design,
1st edn. CRC Press, 17 November 2010. ISBN-10: 1439809801
2. Mavromoustakis, C.X.: Collaborative optimistic replication for efficient delay-sensitive
MP2P streaming using community oriented neighboring feedback. In: 8th Annual IEEE
International Conference on Pervasive Computing and Communications (PerCom 2010),
Mannheim, Germany March 29–April 2 (2010)
3. Tang, J., Musolesi, M., Mascolo, C., Latora, V., Nicosia, V.: Analysing information flows and key mediators through temporal centrality metrics. In: 3rd Workshop on Social Network Systems (SNS 2010), Paris, France (2010)
4. Sachs, D. et al.: GRACE: A Hierarchical Adaptation Framework for Saving Energy.
Computer Science, University of Illinois Technical Report UIUCDCS-R-2004-2409,
February 2004
5. Cuervo, E. et al.: MAUI: Making smartphones last longer with code offload. In: 8th
International Conference on Mobile Systems, Applications, and Services MobiSys 2010,
pp. 49–62. ACM, New York (2010)
6. Shaolei, R., van der Schaar, M.: Efficient resource provisioning and rate selection for stream
mining in a community cloud. IEEE Trans. Multimedia 15(4), 723–734 (2013)
7. Khamayseh, Y.M., BaniYassein, M., AbdAlghani, M., Mavromoustakis, C.X.: Network size
estimation in VANETs. J. Netw. Protoc. Algorithms 5, 136–152 (2013)
8. Camp, T., Boleng, J., Davies, V.: A survey of mobility models for Ad Hoc network research.
Wireless Commun. Mobile Comput. (WCMC) 2(5), 483–502 (2002). Special Issue on
Mobile Ad Hoc Networking: Research Trends and Applications
9. Lawler, G.F.: Introduction to Stochastic Processes. Chapman & Hall, London (1995)
10. Scott, J.: Social Networks Analysis: A Handbook, 2nd edn. Sage Publications, London
(2000)
11. Mavromoustakis, C.X., Dimitriou, C.D., Mastorakis, G.: On the real-time evaluation of two-level BTD scheme for energy conservation in the presence of delay-sensitive transmissions and intermittent connectivity in wireless devices. Int. J. Adv. Netw. Serv. 6(3-4), 148-161 (2013)
12. Mastorakis, G. et al.: Maximizing energy conservation in a centralized cognitive radio
network architecture. In: Proceedings of the 18th IEEE International Workshop on Computer
Aided Modeling Analysis and Design of Communication Links and Networks (CAMAD),
Berlin, Germany, 25–27 September 2013, pp. 190–194 (2013)
13. Mavromoustakis, C.X., Dimitriou, C.D.: Using social interactions for opportunistic resource sharing using mobility-enabled contact-oriented replication. In: Proceedings of the 2012 International Conference on Collaboration Technologies and Systems (CTS 2012), in cooperation with ACM, IEEE, Internet of Things, Machine to Machine and Smart Services Applications (IoT 2012), Denver, Colorado, USA, pp. 195-202 (2012)
3D Printing Assistive Devices
1 Introduction
While 3D printers have existed since the 1980s, mass usage and production started around 2010, and their popularity has increased rapidly since. The first 3D printer was made in 1983 by Chuck Hull, who patented it using the word stereolithography [1]. In those days, 3D printers were large, expensive, and highly limited in their performance, in contrast to today's 3D printers, which are easily affordable, not of such a massive physical scale, and have satisfactory performance.
In general, the printing is done in three phases. The first phase is modelling, which results in the creation of a three-dimensional model that the printer is going to "print", or additively manufacture. This model can be generated in almost any of the available 3D modelling packages (such as Maya, Blender, or 3D Studio Max). We used Autodesk Maya and recommend it, although any modelling software can be used to create the required meshes.
The second phase is printing. In this phase, the printer reads the design from the computer model, usually as an .STL file, and creates the final shape. Most commercial 3D printers use plastics to build the final shape in this phase. There are three common types of plastics: PLA (polylactic acid), a biodegradable thermoplastic derived from renewable resources such as corn starch and sugar cane, is the easiest material to use; ABS (acrylonitrile butadiene styrene) is the second easiest and is robust and solid; PVA (polyvinyl alcohol) is the hardest type of plastic and is the kind we use in printing the hand immobilizer and assistive devices. Various other materials are available for different printers, including powders, resins, titanium, stainless steel, silver, and gold [2, 3].
The third phase is completion, when all the supports used for building the object are removed and, if the object consists of more than one part, it is assembled.
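Since the printing phase consumes the model as an .STL file, the following sketch writes a minimal ASCII STL (a single triangle) by hand, just to show the structure a slicer expects; real models exported from Maya or Blender contain thousands of such facets. The helper name and the toy geometry are assumptions for illustration.

```python
def write_ascii_stl(path: str, triangles: list) -> None:
    """Write triangles [((v1, v2, v3), normal), ...] as an ASCII STL file."""
    with open(path, "w") as f:
        f.write("solid model\n")
        for (v1, v2, v3), n in triangles:
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n")
            f.write("    outer loop\n")
            for v in (v1, v2, v3):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid model\n")

# One facet in the z = 0 plane, normal pointing up
tri = (((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)), (0.0, 0.0, 1.0))
write_ascii_stl("model.stl", [tri])
```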
The design of forearm and thumb orthoses (immobilizers) is currently limited by
the methods used to fabricate the devices, particularly in terms of geometric freedom
and potential to include innovative new features. 3D printing technologies, where
objects are constructed via a series of sub-millimeter layers of a substrate material, may
present an opportunity to overcome these limitations and allow novel devices to be
produced that are highly personalized for the individual, both in terms of fit and
functionality. The immobilizers we present offer an easier way to handle temporary disability of the hand.
Digital technology has made a large impact on hearing-instrument processing and fitting, and it is now making a large impact on hearing-aid shell manufacturing. Current manufacturing of custom-fit hearing-aid shells can be a highly manual and labour-intensive process, and quality control of the fit can be difficult. Using additive manufacturing technologies, the process of creating hearing-aid shells for people with hearing disabilities can be facilitated. The measurements can be taken with modern 3D ear scanners such as OTOSCAN [4].
The third sort of accessories covered in this paper are assistive devices for people with a temporary or permanent fine-motor handicap. We manufactured various types of customized assistive devices that facilitate eating, washing, hair brushing, writing, and other everyday activities for people with disabled hands. We also manufactured assistive-technology gadgets to help people with complete arm disability use modern communication devices such as tablets and computers. The main contributions of this paper relate to the modeling process and the development of 3D printed devices customized to fit each person's hand.
2 Background Work
Even before 3D printers were commercialized, scientists were working on many problems that 3D printing can solve, including in medicine. Back in 2003, a team of scientists started researching artificial bone replacement using inkjet 3D technologies, printing the bones layer by layer with bioprinting for regenerative medicine [5]. When the first 3D printers able to print metal appeared, their method was used for a jaw later implanted in an 83-year-old woman [6]. As the technology improved, the field of medicine and 3D printing advanced in general, to the point of printing human tissue, which may lead to the printing and replacement of human organs [7]. Recently, a number of papers have been published presenting foot orthoses (FOs) and ankle-foot orthoses (AFOs) fabricated using AM techniques, successfully demonstrating the feasibility of this approach [8, 9].
Another useful application domain is the generation of graspable three-dimensional objects for surgical planning, prosthetics, and related applications using 3D printing or rapid prototyping [10]. Graspable 3D objects overcome the limitations of 3D visualizations, which can only be displayed on flat screens. 3D objects can be produced from CT or MRI volumetric medical images: using dedicated post-processing algorithms, a spatial model can be extracted from the image data sets and exported as machine-readable data, which special printers then use to generate the final rapid-prototype model.
There are several recent examples of the use of 3D printing in biomedicine. A titanium-printed pelvis was implanted into a British patient [11]. To create the 3D printed pelvis, the surgeons scanned the man's pelvis to take exact measurements of how much 3D printed bone needed to be produced and passed the data to a 3D printing company. The company used the scans to create a titanium replacement by fusing layers of titanium together and then coating it with a mineral that allows the remaining bone cells to attach.
Besides titanium, plastic printing is also used. A plastic tracheal splint was printed for an American infant [12]: in an infant with tracheobronchomalacia, a customized, bio-resorbable tracheal splint was implanted, created with a computer-aided design based on a computed tomographic image of the patient's airway and fabricated using laser-based three-dimensional printing. A 3D printed skull replacement has also been used on a woman in the Netherlands; the patient fully regained her vision, has no more complaints, has gone back to work, and there are almost no traces that she had any surgery at all.
The hearing-aid and dental industries are expected to be the biggest areas of future development using custom 3D printing technology.
3 Immobilization
Getting anatomically accurate measurements is crucial for creating a good immobilization. The data required for this step can be acquired using different methods:
immobilizer that doesn't always fit their arms (or any other body part being prototyped). Bearing this in mind, together with the fact that a fracture occurs every 3 s and that 2 out of 100 people are likely to suffer a fracture each year [15], we made an effort to ease those 4 weeks for people who experience an undislocated fracture of the distal radius at its typical place. Such fractures do not need repositioning of the fracture fragment, only immobilization in the correct position for 4 weeks, causing temporary disability and a need for assistive devices. Because we think there is a chance of additional compression on the soft tissues of the distal forearm, we made the anatomical model 2 mm larger to prevent compression.
Also, on the end parts of the model we used a soft material, which cannot cause stasis in the forearm. We first made a 3D scan of the hand to make sure the immobilizer would fit the patient's hand perfectly. Afterwards, we adjusted our model to the dimensions received from the 3D scanner, as shown in Fig. 1, and sent it to printing.
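The 2 mm enlargement mentioned above can be approximated by offsetting every vertex of the scanned mesh along its unit vertex normal. This is a simplified sketch under the assumption that vertex normals are available from the scan; production tools additionally handle the self-intersections a naive offset can create.

```python
import numpy as np

def offset_mesh(vertices: np.ndarray, normals: np.ndarray, mm: float = 2.0) -> np.ndarray:
    """Move each vertex `mm` millimetres outward along its unit vertex normal."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + mm * unit

# Toy example: two vertices with outward-pointing normals
verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
norms = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(offset_mesh(verts, norms))  # x-extent grows from 10 mm to 14 mm
```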
The model is printed with lightweight but solid plastic that is hypoallergenic and not itchy. The lightweight plastic is thin enough to fit under any piece of clothing that fits the regular hand. Our model has gaps so that the skin can breathe, and to allow control of the degree of swelling of the arm and of any soft-tissue damage or skin wounds. The gaps also make washing the hand easier. The model is easy to apply and just as easy to remove, because it is divided into two parts (upper and lower) with supports on both sides, as shown in Fig. 2. It is also fully recyclable, so there is no medical waste.
created a cuff that sticks to a palm with partial hand disability. We then started attaching different helping devices for everyday usage; we created a spoon for eating, as shown in Fig. 3A.
Fig. 3. (A) 3D spoon model attached to a cuff in Maya; (B) Penholder 3D model in Maya
We made a toothbrush holder; the toothbrush itself could not be 3D printed because of the firmness of the material. Instead, a regular toothbrush is attached to the holder with a screw, and almost every regular toothbrush fits the model. The holder itself is ergonomic and custom-designed for a specific person's palm or hand. The toothbrush attachment is based on the same holder as the spoon, and it can be made generic to hold any tool similar to a toothbrush, including a hair brush, a shoe brush, and other eating utensils such as forks and knives.
We also manufactured a holder that facilitates writing with a pen or pencil for the same group of people with partial hand disabilities. We could not reuse the holder model described above, because different positioning of the hand and thumb was needed. The model we made enables easy writing with no discomfort to the hand; it can be seen in Fig. 3B.
5 Hearing Aids
A hearing aid is an electronic device that picks up sound waves with a microphone and sends them to the ear through a tiny speaker. Hearing aids are frequently custom-formed to fit a given ear. A typical custom-fitting process starts with taking an ear-canal impression of the patient at the office of an otolaryngologist. The impression is then shipped to a manufacturer's laboratory, where skilled technicians make each shell by manual operations. The quality and consistency of the fit of each shell vary significantly with the technician's skill level.
A 3D scanner scans the ear impression using a laser light source with micron precision. Because a laser loses its effectiveness in tracing the contour of a transparent impression, powder and liquid materials should not be used to take an ear impression; rather, an opaque substance such as silicone should be used. The material is presented in Fig. 4; the colour of the silicone is light green so it can be scanned easily, since the darker the object, the harder it is to scan with our 3D scanner.
After the model is obtained using 3D scanning technologies, it can be printed using additive manufacturing, reducing the waiting time for the patient and the development time for the manufacturer.
6 Evaluation
fully anatomically correct, fits very well on the arm, and may be widely applicable in their everyday work. This can be concluded from the survey we conducted about the doctors' experience with the immobilizer. The results shown in Fig. 5 come from a questionnaire answered by a test group of five doctors. Each question was a statement graded by the users with one of the grades: Strongly Disagree, Disagree, Undecided, Agree, Strongly Agree. The questions we used can be seen in the Appendix.
It is very important that the area where the bone fracture occurred is more densely covered with smaller polygonal objects, so that it is firmer, in contrast to the rest of the model, where we want elasticity to relax the hand and to accommodate large tissue swelling. Using the model, the doctors confirmed that there are no signs of any local irritation or allergic reactions.
The assistive devices covered in this paper were given for use to a Macedonian non-profit organisation, from which we obtained descriptions of the general needs of disabled people. We made a questionnaire, and the devices we manufactured were evaluated positively, since the visitors there had already used similar, but not customized, devices. The test group that responded to the questionnaire comprised 10 people with hand disabilities. The results of the questionnaire are shown in Fig. 6.
The ear impression presented in this paper eases the manufacturer's traditional work in that the thickness of the wall of the hearing-aid device can easily be specified, including different thicknesses for different areas, using computer-aided design.
be even avoided when taking an ear impression. Therefore, an accurate ear impression is the first crucial step in ensuring an accurately fitting hearing-aid shell.
Appendix: Questionnaire
The questions in the questionnaire for the hand immobilizer were as follows:
A1. There are signs of local irritation or allergic reaction.
A2. Those immobilizers may replace the traditional ones.
A3. There is no itchiness while wearing the immobilizer.
A4. The immobilizer provides good skin respiration, washing and tissue control.
A5. The immobilizer is fully anatomically correct.
A6. 3D printed immobilizers may be widely applicable in my everyday work.
The questions in the questionnaire for the assistive holder were as follows:
References
1. 30 Years of Innovation. www.3dsystems.com, http://www.3dsystems.com/30-years-innovation
2. What materials do 3D Printers use? Find out now at 3D Printer Help. http://www.
3dprinterhelp.co.uk/what-materials-do-3d-printers-use/
3. 3D Printing Materials 2015-2025: Status, Opportunities, Market Forecasts. http://3dbusinesses.com/listings/3d-printing-materials-2015-2025-status-opportunities-market-forecasts/
4. OTOSCAN® 3D. http://earscanning.com/
5. Saijo, H., Igawa, K., Kanno, Y., Mori, Y., Kondo, K., Shimizu, K., Suzuki, S., Chikazu, D.,
Iino, M., Anzai, M., Sasaki, N., Chung, U., Takato, T.: Maxillofacial reconstruction using
custom-made artificial bones fabricated by inkjet printing technology. J. Artif. Organs Off.
J. Jpn. Soc. Artif. Organs. 12, 200–205 (2009)
6. 83-Year-Old Woman Gets the World’s First 3-D Printed Jaw Transplant. http://www.popsci.
com/technology/article/2012-02/83-year-old-woman-gets-worlds-first-3-d-printed-jaw-
transplant
7. Mironov, V., Boland, T., Trusk, T., Forgacs, G., Markwald, R.R.: Organ printing: computer-
aided jet-based 3D tissue engineering. Trends Biotechnol. 21, 157–161 (2003)
8. Faustini, M.C., Neptune, R.R., Crawford, R.H., Stanhope, S.J.: Manufacture of Passive
Dynamic ankle-foot orthoses using selective laser sintering. IEEE Trans. Biomed. Eng. 55,
784–790 (2008)
9. Mavroidis, C., Ranky, R.G., Sivak, M.L., Patritti, B.L., DiPisa, J., Caddle, A., Gilhooly, K.,
Govoni, L., Sivak, S., Lancia, M., Drillio, R., Bonato, P.: Patient specific ankle-foot orthoses
using rapid prototyping. J. Neuroeng. Rehabil. 8, 1 (2011)
10. Rengier, F., Mehndiratta, A., von Tengg-Kobligk, H., Zechmann, C.M., Unterhinninghofen,
R., Kauczor, H.-U., Giesel, F.L.: 3D printing based on imaging data: review of medical
applications. Int. J. Comput. Assist. Radiol. Surg. 5, 335–341 (2010)
11. UK Surgeon Implanted A 3D-Printed Pelvis - Business Insider. http://www.businessinsider.
com/uk-surgeon-implanted-a-3d-printed-pelvis-2014-2
12. Zopf, D.A., Hollister, S.J., Nelson, M.E., Ohye, R.G., Green, G.E.: Bioresorbable airway
splint created with a three-dimensional printer. N. Engl. J. Med. 368, 2043–2045 (2013)
13. 3D-DOCTOR, medical modeling, 3D medical imaging. http://www.ablesw.com/3d-doctor/
surgmod.html
14. van Overveld, K., Wyvill, B.: Shrinkwrap: an efficient adaptive algorithm for triangulating
an iso-surface. Vis. Comput. 20, 362–379 (2004)
15. Facts and Statistics. International Osteoporosis Foundation. http://www.iofbonehealth.org/
facts-statistics