Mobile Edge Caching
Mobile Edge Caching
Abstract—With the widespread adoption of various mobile less than 1 millisecond of network latency as compared to 4G
applications, the amount of traffic in wireless networks is grow- networks [5].
ing at an exponential rate, which exerts a great burden on mobile The major standardization bodies of 5G consist of
core networks and backhaul links. Mobile edge caching, which
enables mobile edges with cache storages, is a promising solu- the 3rd Generation Partnership Project (3GPP) (which
tion to alleviate this problem. In this paper, we aim to review provides complete system specifications including core
the state-of-the-art of mobile edge caching. We first present an network, radio access network and service capabili-
overview of mobile edge caching and its advantages. We then ties), Telecommunication Standardization Sector of the
discuss the locations where mobile edge caching can be realized International Telecommunications Union International
in the network. We also analyze different caching criteria and
their respective effects on the caching performances. Moreover, Mobile Communications 2020/Study Group 13(ITU-T
we compare several caching schemes and discuss their pros and IMT2020/SG13) (which is responsible for all 5G no-radio
cons. We further present a detailed and in-depth discussion on network segments including overall network architecture,
the caching process, which can be delineated into four phases softwarization and management), Internet Engineering Task
including content request, exploration, delivery, and update. For Force (IETF) (which covers 5G no-radio network segments
each phase, we identify different issues and review related works
in addressing these issues. Finally, we present a number of chal- including network slicing), the European Telecommunications
lenges faced by current mobile edge caching architectures and Standards Institute (ETSI) (which is responsible for network
techniques for further studies. function virtualization, mobile edge computing, next gener-
Index Terms—Mobile edge computing, mobile edge caching, ation protocols), and Institute of Electrical and Electronics
5G, content management, content delivery. Engineers (IEEE) (which provides the WiFi and WiMAX
standards) [6]. In order to achieve lower network latency
and larger network capacity, 5G solutions in radio access
I. I NTRODUCTION network (RAN) include modification of physical subframe,
OWADAYS, mobile data traffic is experiencing explo- polar coding, turbo decoding, quick path-switching methods,
N sive growth due to pervasive mobile services, ubiqui-
tous social networking, and resource-intensive applications.
control channel sparse encoding, outer-loop link adaptation,
mmWave based air interface, location-aware communication,
According to Cisco [1], mobile data traffic is predicted to and cloud radio access networks [7]. The 5G core network
increase 500-fold in the next decade. It is estimated that a solutions include software defined networks (SDN), network
mobile user will download around 1 terabyte of data in a year function virtualization (NFV) and mobile edge computing
by 2020 [2]. The increasing mobile traffic is mainly incurred (MEC) [7].
by the newly emerging applications of mobile devices, such Mobile devices are now running more effective and power-
as the Internet of Things (IoT), Internet of Vehicles (IoV), ful applications, which require more computational capacity,
e-healthcare, machine to machine (M2M) communications storage, bandwidth and energy. These applications gener-
and virtual/augmented reality, which require higher network ally include computationally and data intensive tasks, like
throughput and stricter network latency [3]. These use cases computer vision, image processing, face recognition, optical
accelerate the standardization process of the fifth generation character recognition, and augmented reality. However, the
(5G) wireless networks in aspects of the network capac- performance of mobile devices is often impoverished due to
ity and latency, which cannot be fulfilled by the current the limited computation, storage capacity and battery life. An
fourth generation (4G) wireless networks [4]. 5G wireless outstanding solution to address these limitations is to offload
networks are planned to be standardized by 2020, and are some computation to the cloud [8]. This solution is referred to
supposed to provide 1000 times more network capacity and as mobile cloud computing (MCC) [9], where the terminology
“cloud” refers to a collection of servers usually located in a
Manuscript received May 2, 2018; revised October 8, 2018, January 8, distant data center that provide adequate computing, storage
2019, and February 21, 2019; accepted March 22, 2019. Date of publication and networking resources for mobile devices.
March 29, 2019; date of current version August 20, 2019. (Corresponding
author: Nirwan Ansari.) Since the delay of MCC is contributed by the core network,
J. Yao and N. Ansari are with the Advanced Networking Laboratory, radio access network (RAN) and the backhaul links between
New Jersey Institute of Technology, Newark, NJ 07102 USA, and also with them, the long latency between users and cloud becomes a
the Helen and John C. Hartmann Department of Electrical and Computer
Engineering, New Jersey Institute of Technology, Newark, NJ 07102 USA challenge. New network architecture is required to satisfy the
(e-mail: [email protected]; [email protected]). network latency required by 5G wireless networks for MCC.
T. Han is with the Department of Electrical and Computer Engineering, MEC, where cloud servers are placed in close proximity to
University of North Carolina at Charlotte, Charlotte, NC 28223 USA
(e-mail: [email protected]). users to provide computing and storage capacities, is proposed
Digital Object Identifier 10.1109/COMST.2019.2908280 to address this challenge [12]. The architecture of MCC and
1553-877X c 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2526 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2527
Fig. 2. Classification structure and differences from the two most related works (flag: [10], star: [11]).
TABLE I
S UMMARY OF R ELATED W ORKS
popularity on caching strategies. We summarize the overall still required. To fill this gap, we conduct a survey regard-
related survey papers in Table I, where we list various aspects ing different aspects of issues in mobile edge caching. For
of caching in our work and compare ours against existing sur- better readability, we summarize all abbreviations used in this
veys. We also construe the structure of our work and detail paper in Table II. Hereafter, the “content”, “file” and “data” are
the differences between our work and the two most related used interchangeably since all of them refer to the cacheable
works [10] and [11] in Fig. 2. The flag marker represents the content. Note that MEC makes no assumption on the underly-
reference [10] and the star marker reflects the work [11]. ing mobile network infrastructure [26]. The MEC system can
Although many works have attempted to address various be deployed in the existing mobile networks and transited to
issues in mobile edge caching, a comprehensive summary is future 5G (which is specified in [27]) by software upgrading
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2528 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
TABLE II
S UMMARY OF A BBREVIATIONS compare several caching schemes and analyze their
advantages and disadvantages.
4) We summarize the caching process into four phases
including the content request phase, content exploration
phase, content delivery phase and content update phase.
For each phase, we identify the related issues and review
the corresponding publications.
5) We identify and analyze the challenges and research
directions to motivate further research.
The rest of the survey is organized as follows. Section II
provides an overview of mobile edge computing and mobile
edge caching. Section III summarizes the possible locations
where cache storages can be located. In Section V, several
caching schemes are introduced. The content request analysis
is discussed in Section VI. Section VII presents several issues
in the content exploration phase. The content delivery problem
is described in Section VIII. In Section IX, content update
problems are discussed. Section X discusses several challenges
faced by current mobile edge caching. Section XI concludes
the survey.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2529
B. Mobile Edge Caching performance of mobile edge caching, there are several basic
Mobile edge caching, which utilizes the storages provided criteria that should be followed. First, the cache hit probabil-
by mobile edge servers, is a use case of MEC [34]. A cache- ity should be high. Cache hit probability refers to the ratio
enabled mobile edge server can be an independent server of the number of cached files requested by the users over the
attached to a mobile edge node or the storage of the edge total number of files in the caches. Second, SE and EE are
node. Without mobile edge caching, content requests of mobile major performance metrics in 5G, caching schemes should be
users are usually served by remote Internet content servers pro- designed to improve both of them. Third, minimizing content
vided by the content providers (e.g., Web servers). When users retrieving delay should be accounted for, as it directly relates
retrieve the same popular contents from remote servers, remote to user quality of experience (QoE). Fourth, caching popular
servers have to send the same files repeatedly that may lead to contents at the edge can offload traffic from backhaul links,
duplicate traffic and network congestion. However, by caching and hence maximizing traffic offloading can be one of the
popular contents closer to users, the latency for retrieving caching criteria.
contents can be greatly reduced and the duplicate transmis- Several caching schemes have been studied recently.
sions from content servers to the cache-attached network nodes Reactive and proactive caching are proposed with regard
can be avoided. Moreover, as the cost of storages is lower- to deciding whether to cache a content after or before it
ing, deploying caches at wireless edge becomes more cost is requested. Based on where the caching decisions are
effective [35]. made, caching schemes can be classified as centralized and
In mobile edge caching, content requests, which are issued distributed caching. Centralized caching uses a central con-
by user equipments (UEs), are responded by one of the nodes, troller to determine all content placement schemes while
which contain the requested content. Usually, the domain name distributed caching is only aware of neighboring UEs’ or
system (DNS) redirects the request of the user to the near- neighboring BSs’ information and makes decisions with
est cache capable of satisfying the content. There are several respect to local caches. Since the caching space at the
advantages brought by mobile edge caching. First, as mobile edge node is limited, designing caching policy for each
edge caching is facilitated at the network edge which is closer node individually may cause insufficient utilization of caches.
to users than the remote Internet content servers, it reduces the Cooperative caching copes with this problem because dif-
latency of acquiring user requested contents. Second, mobile ferent caching nodes can share contents with each other.
edge caching avoids the data transmissions through the back- The under-utilized caches can be used by other nodes
haul links, and hence reduces the backhaul traffic. Third, and hence the utilization rates of all caching nodes can
mobile edge caching helps reduce energy consumption. For be improved. Coded caching utilizes the network coding
example, when the requested data are cached at the small techniques where data messages are aggregated (encod-
cell base station, the energy consumption for transmitting data ing), forwarded to the same destination and then sepa-
from the macro base station can be avoided. Fourth, the spec- rated into different messages (decoding). This technique can
trum efficiency can be improved by mobile edge caching. increase network throughput and reduce delays by reduc-
For instance, when multiple users request the same content, ing the number of transmissions. Probabilistic caching deals
the serving BS can transmit the cached file by multicast- with the problem of uncertainty and movement of user
ing, which shares the same spectrum [11]. Fifth, mobile edge locations. Game theory based caching is used to analyze
caching can leverage the network information collected by the the cooperations and competitions among different service
mobile edge servers (e.g., user preferences, file popularities, providers (SPs).
user mobility information, user social information and channel Deciding what to cache requires the awareness of content
state information) to improve caching efficiency. For exam- request patterns. Content types consists of multimedia files
ple, the user social relationships can be explored to cache and (e.g., videos and files) and IoT data, which exhibit more
disseminate contents via D2D communications. dimensions and shorter lifetimes. It is challenging to make
decisions in an optimal manner without knowing the request
patterns. In order to obtain the probability of a particular
C. Issues Regarding Mobile Edge Caching content being requested, we need to estimate the content pop-
The problems of where, how and what to cache are the ularity and user preference. User mobility pattern affects how
key research issues in mobile edge caching. Where to cache users are associated with BSs and can further modify the
refers to the selection of caching locations. Caching schemes request patterns received by mobile networks. The user associ-
can be implemented on UEs by utilizing their own storages. ation problem [37] determines which user is served by which
The cached content on UEs can be shared via D2D commu- BS and hence can affect the requests received by each BS.
nications. Popular contents can also be cached at BSs, e.g., The practical mobile edge caching process carries out the
relays, femto base stations (FBSs), pico base stations (PBSs), following phases.
small base stations (SBSs), and macro base stations (MBSs). 1) Content request phase (Section VI): requests are initiated
In cloud radio access network (C-RAN) [36], the content can by users.
be cached at both remote radio heads (RRHs) and baseband 2) Content exploration phase (Section VII): the requested
unit (BBU) pools. content is searched in mobile network to determine
How to cache involves the problem of choosing caching whether the content has been cached at one of the cache
criteria and designing caching schemes. To improve the storages.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2530 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
3) Content delivery phase (Section VIII): the requested content. If none of nearby devices possess the requested con-
content is delivered to users either from a cache node tent, the BS then provides the content through downlink
or from remote content service providers. transmission. Different UEs can form different clusters and
4) Content update phase (Section IX): caches are updated usually only the UEs within one cluster can communicate.
by evicting old ones and caching new ones according to Obviously, D2D caching can inherit all the benefits promised
the popularity information. by D2D communications. These benefits include improving
In the content exploration phase, when the mobile network spectrum utilization, energy efficiency and throughput, and
receives a request, it searches for the content through the enabling new peer-to-peer and location-based applications and
network. How to search for the content is referred to as services [10]. Furthermore, UEs can cache contents accord-
the content query problem. Where to find the content is ing to their own preferences, thus providing higher caching
determined by the content placement problem. The con- flexibility as compared to caching at network entities [22].
tent placement problem addresses which content is cached Caching at UEs has long been applied as a technique
at which caching storage. In the content delivery phase, to improve user QoE [38]. Pyattaev et al. [39] proposed
the multicast transmission is usually utilized to improve an integrated two-tier wireless network including 3GPP LTE
the network throughput. Furthermore, the content delivery is in licensed frequency bands and D2D system in unlicensed
closely related to the content placement results, thus justify- bands. In their network system, UEs can obtain content either
ing the joint optimization of them. As network traffic and user by downloading from the BS or by establishing D2D links
demands are dynamic, utilizing overly expired information with other proximate UEs. They considered the characteris-
may not guarantee the caching performance. Hence, caches tics of mass events (e.g., festivals, concerts and sport events)
should be updated every time interval, which is specified and focused on a particular isolated cellular cell. They also
according to the variance of the traffic (e.g., one week for assumed that the frequency channels of the D2D and cellular
movies and two or three hours for news). In the content systems are non-overlapping and hence there is no interference
update phase, the content replacement problem should be well across two tiers. In the D2D system tier, however, transmis-
designed with regard to determining when to update the caches sions from different neighboring UEs may still interfere with
and which content should be removed from the caches. each other. Instead of a hierarchical two tier architecture,
Ji et al. [40] proposed a wireless D2D grid network formed by
III. C ACHING L OCATIONS UE nodes on the unit square without a central BS. Only the
UE nodes, which are within a certain pre-defined distance, can
We classify the existing research according to the caching communicate with each other. Each UE node arbitrarily caches
locations in this section. As elicited in Fig. 3, the cache nodes several packets of a file from the file library and obtains the
can be deployed at UEs, MBSs, SBSs, PBSs, FBSs and relays file by exchanging packets with one another. Their subsequent
in traditional cellular network, and RRHs and BBU pools in work [41] extended their previous grid network with clusters
C-RAN. and considered hybrid D2D links including microwave links
at 2.45 GHz carrier frequency and high capacity unlicensed
A. Caching at UEs millimeter-wave links at 38 GHz. They assumed that only UEs
Current smartphones are becoming more sophisticated within one cluster can communicate. If a UE finds its requested
with enhanced computing and storage capabilities. Therefore, file in its cluster, it first checks whether the millimeter-wave is
mobile user devices can act as caches to store content locally available (because it suffers from strong path loss and is easily
and share content with other UEs directly via D2D links blocked by obstacles) and obtain the file by millimeter-wave
using licensed-band (e.g., LTE) or unlicensed-band protocols links, and otherwise by microwave D2D links. If the requested
(e.g., Bluetooth and WiFi). This caching strategy is called file cannot be obtained by D2D communications, the BS will
D2D caching (Fig. 4) in which the BS usually keeps track transmit the file by downlinks at 2.1 GHz carrier frequency.
of the caching status (e.g., availability of cached content and One challenge of D2D caching is attributed to the relatively
caching storage) in each UE and also directs requests from small storage capabilities and limited batteries of UEs that may
one UE to suitable nearby devices which have the requested degrade the caching performance. Sheng et al. [42] proposed
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2531
TABLE III
a multilayer caching and delivery architecture consisting of C OMPARISON OF BS S
both SBSs and UEs. They utilized multihop D2D communi-
cations to help reduce the energy consumption and battery
life of UEs. Multihop D2D communications can reduce the
transmission coverage for each hop and hence the transmis-
sion power of UEs, which results in decreasing the UE energy
consumption and battery life. In order to improve the effi-
ciency of caching storage usage, Ji et al. [43] investigated the
coded caching scheme in D2D wireless networks. Different
from uncoded caching, coded caching does not require to store
the whole complete file from the file library, thus provision-
ing UEs with more flexility and storage efficiency. Moreover,
Zhang et al. [44] studied cooperative D2D caching. In their including the social network layer and the physical network
system model, if two UEs both need to cache two files, instead layer. In the social network layer, the network links denote
of downloading both two files for each UE, one UE downloads the social relationships between UEs. The network links in
one file and the other UE downloads the other file, and then the physical network layer reflect the physical connections of
they share the files with each other. Therefore, cooperative network infrastructures (e.g., BS and UEs). Furthermore, the
caching can greatly reduce the storage consumption of both social network layer is divided into two sublayers including
UEs. online sublayer (interest intimacy) and offline sublayer (user
Another challenge of D2D caching is that D2D transmis- mobility). Wu et al. [48] then formulated a submodular func-
sions are easily interfered by other D2D pairs within the tion maximization problem to maximize the cache hit ratio in
collaboration distance. Hence, there is a tradeoff between mobile social networks. The user interest similarity matrix and
cache hit probability and interference. A smaller collabora- contact probability matrix are defined to characterize the social
tion distance introduces less interference and hence increases relationships. A semigradient-based algorithm is designed to
the frequency reuse and potential throughput. On the contrary, obtain the optimal content placement strategy iteratively. They
a larger collaboration distance introduces more interference demonstrated that their semigradient-based algorithm provides
but can increase the probability of finding the requested file lower computational complexity and faster convergence than
cached at nearby UEs. This tradeoff was studied in [45], where the classical scheme proposed in [49].
a closed-form expression for the optimal collaboration dis-
tance was derived. In their work, they tried to maximize the B. Caching at BSs
frequency reuse for a given popularity distribution and stor- As mobile users require higher throughput and lower
age capacity. However, their work is limited because they network latency, caching at BSs, which brings contents closer
only consider a fixed number of uniformly distributed UEs to mobile users, becomes a promising solution. Caching at
in the grid-based D2D wireless network. Altieri et al. [46] BSs faces the challenges of limited coverage, uncertainty of
proposed a stochastic geographic model to maximize the num- wireless connections and inter-cell interference (ICI) [50].
ber of requests served by D2D caching. UEs are distributed Caching at BSs becomes more complicated when consid-
as a Poisson point process (PPP) and can exchange contents ering overlapped densely deployed SBSs and heterogeneous
through D2D links. In their model, interference can be avoided networks.
by the time division multiple access (TDMA) scheme, in Heterogeneous networks (HetNets), which augment macro
which time is divided into equally-length time slots and only base stations (MBSs) by deploying small base stations (SBSs)
one request is served in each time slot. with small coverage areas, are introduced to increase the
Owing to the characteristic of D2D communications, the network coverage and throughput [51]. In HetNets, SBSs, e.g.,
social relationships have a great impact on the D2D caching pico base stations (PBSs) and femto base stations (FBSs), are
problem, especially on the content delivery routing path selec- densely deployed. We compare the configurations of different
tion. Social characteristics determine the user mobility pattern BSs in Table III and discuss caching at different types of BSs
and willingness of sharing resources, and hence they can help in the following subsections.
predict future requests. Wang et al. [47] proposed a traf- 1) Caching at MBSs: Many works have studied the frame-
fic offloading framework for a 5G system by exploiting user work of caching at SBSs with the whole file library at
social relationships by assuming that UEs can cache contents the mobile core network. MBSs, as parts of the wireless
according to their own preferences or the group’s demands. network entities, can also serve as cache nodes in cache
Their framework can measure and analyze user access patterns enabled networks. Zaidi et al. [52] presented a two-tier cel-
and delays, and then disseminate contents of interest. They lular network including the MBS tier and SBSs tier with the
concluded that user social behaviors have several properties: objective to maximize the cache hit probability. All MBS and
1) a small number of users can significantly impact other users; SBSs are equipped with cache storages. Users can retrieve
2) clusters of users, where they transfer and share contents, are the requested contents via four schemes: 1) directly from the
usually formed by interests; 3) users, who are geographically serving SBS; 2) directly from the serving MBS; 3) from the
close, have higher trends of sharing information. Wu et al. [22] MBS via the relay of the serving SBS; 4) from other SBSs
proposed a two-layer cache-enabled social network model via the relay of the MBS. Chang et al. [53] proposed an
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2532 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
algorithm based on the college admission matching to assign 3) Caching at PBSs: Caching at PBSs can help reduce
contents to cache entities including both MBSs and SBSs in network backhaul traffic load. Li et al. [60] proposed a
heterogenous small cell networks. The drawback of this work weighted network traffic offloading problem with cache stor-
is that they assumed that each content can only be cached age located at PBSs. In their system model, the MBS, acting
at one cache entity. This assumption will limit the caching as a central controller which determines the content place-
performance when users at different geolocations request the ment strategies, is connected with geographically distributed
same content and hence several users have to suffer from cache-enabled PBSs. The users can obtain the requested con-
great path loss if the content is only cached at a remote SBS tents from the caches at their serving PBSs or from their
or MBS. neighboring PBSs because PBSs are assumed to be able to
2) Caching at SBSs: Bringing small BSs closer to the users share contents with each other. Meanwhile, Cui and Jiang [61]
can potentially support the low transmission power and high jointly optimized the caching and multicasting in a two-tier
data rate. In SBS caching, each SBS is equipped with limited HetNet including the MBS tier and the PBS tier to maximize
storage for caching, and users can retrieve contents directly the successful transmission probability. The location distribu-
from the SBS instead of from the remote servers. This can tions of MBSs and PBSs follow PPPs with different densities.
offload some traffic from the backhaul and alleviate back- They considered two caching schemes, i.e., identical caching
haul congestion. Golrezaei et al. [54] first proposed caching in the MBS tier and random caching in the PBS tier. Identical
at SBSs, which only have low-bandwidth backhaul links but caching means that each MBS stores the same set of files
are equipped with large storage capacity. They studied the while random caching enables PBS to store different files ran-
content placement problem at SBSs to minimize the average domly. The intuition of the hybrid caching schemes is that
delay of all mobile users. Blasco and Gündüz [55] also stud- user requests are either served by its nearest MBS or multiple
ied the content placement problem in the cache-enabled SBS nearby PBSs. Therefore, identical caching in MBS tier does
network. Meanwhile, they considered the cache replacement not lose much optimality when the density of MBSs is low
phrase in which the cache content can be refreshed at each and random caching in PBS tier can take advantage of multiple
time interval according to the varying file popularity. They spatially distributed PBSs by multicasting content delivery.
designed a learning-based algorithm to maximize the system 4) Caching at FBSs: An FBS (also called helper node),
reward, which was defined as the bandwidth alleviation of which usually has low bandwidth backhaul links (e.g., wire-
backhaul links. However, caching and transmission policies are less backhaul links) and large storage capacities, is one kind
considered separately among these works. Gregori et al. [56] of SBSs. They are more flexible and cost efficient to be
jointly optimized caching and delivery strategies to minimize deployed than traditional BSs. Caching at FBSs is usually
the MBS energy consumption. In their system model, SBSs referred to as femtocaching. Golrezei et al. [49] presented
and UEs are equipped with cache storages. A SBS can serve a femtocaching architecture for video contents dissemination
multiple users simultaneously and UEs can share data through in wireless networks. Femtocaching equips FBSs with cache
D2D communications. However, these works do not consider storages for storing popular video contents. They also demon-
the interference between different cells. Khreishah et al. [57] strated femtocaching improves the system throughput by one
proposed a coordinated SBS cellular system where each SBS to two orders of magnitude as compared with the architecture
can use a set of secondary channels to communicate with other without helper nodes. Shanmugam et al. [62] discussed the
SBSs, and the MBS stores all files and can always serve users content placement problem in a femtocaching network with the
if a requested content is not found at any of the SBSs. They objective to minimize the expected download time of all files.
jointly optimized the channel allocation and the content place- The matching theory was utilized to match UEs to helpers in
ment problem to maximize the system throughput. In order a bipartite graph formed by UEs and helpers. They further
to address the channel interference, they introduced a con- extended their work in [63], which introduced the concept of
flict graph and then formed several independent sets where dynamic femtocaching by considering the user mobility and
channels do not interfere with each other in one independent changing topology.
set.
Although the content caching and spectrum sharing have
been widely exploited recently, most works only consider them C. Caching at Relays
separately. Tamoor-ul-Hassan et al. [58] characterized the out- Wireless relays are usually deployed to extend the wireless
age probability in cached-enabled SBS networks, defined as coverages and improve the spectrum efficiency. They can also
the probability of not satisfying users’ requests over a given be used as urban hot spots where contents can be cached [64].
coverage area, as a function of cache size and SBS density. Wang et al. [65] proposed the cache-enabled relay cellular
However, they did not discuss the impact of spectrum on the network with one BS and multiple relays installed at the cell
outage probability. Their following work [59] jointly opti- edge to serve users within their coverages. They provided
mized the spectrum allocation and caching in cache-enabled insights on how and when to cache by a Markov decision pro-
SBS networks where the SBSs are distributed following a cess aiming at minimizing the energy consumption of all relays
homogeneous PPP. Their objective is to minimize the cache and the BS. In their work, the caches can be refreshed by drop-
miss probability (defined as the probability of requests not ping the least popular contents. However, they assumed that
fulfilled in a given area) under the constraint of cache storage the transmit power at relays is adjustable, which is impractical
capacities. when relays are serving as hot spots. For a similar network
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2533
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2534 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
similar network architecture, Tao et al. [73] studied a con- relays in traditional cellular networks, and RRHs and BBU
tent transmission design by integrating both multicasting and pools in C-RAN [81].
caching. In their work, RRHs can form multiple RRH clusters 1) Caching at UEs: Caching at UEs is also referred to
and each UE is served by all RRHs in one cluster. Meanwhile, as D2D caching. D2D networks allow direct communications
UEs, with similar content preferences, can act as a multicast between UEs using license-band (e.g., LTE) or unlicensed-
group in which UEs can receive data concurrently by multicast band protocols (e.g., bluetooth and WiFi). The devices are
transmission. They addressed the problem of how to form often organized into clusters, which are controlled by the BS.
RRH clusters and UE multicast groups with the objective A user’s content request can only be satisfied by other users
to minimize the system cost incurred from fronthaul traf- within the same cluster. The physical locations of users play
fic cost and RRH energy consumption. Mosleh et al. [74] an important role in designing D2D caching strategies, so that
jointly optimized the content placement and cooperative trans- a user can easily find its requested content at one of the neigh-
mit beamforming to minimize the system cost (i.e., fronthaul boring users. Designing caching strategies for D2D networks
cost and transmission power) under the constraints of QoS, faces four major challenges:
peak transmission power and cache capacity. They further sep- • Each user may find its requested content at multiple
arated this joint problem into two subproblems including the neighboring users and each content may also be requested
content placement problem and beamforming design problem, by multiple neighboring users. How to establish the
and then designed heuristic algorithms for both subproblems. D2D communication links to transmit content in order
Stephen and Zhang [75] tried to optimize the energy efficient to maximize the active D2D links is very complicated
transmission in C-RAN which uses the orthogonal frequency and challenging.
division multiple access (OFDMA) scheme. The constraints • UEs usually have limited storage and battery capacity,
in their problem include the fronthaul capacity and required which may degrade the caching performance. Multihop
minimum user data rates. D2D communications can be adopted to reduce the
The above works only consider caching at the RRH level, energy consumption of each UE.
and there is no collaboration among these caches. Tran and • The D2D transmission is easily interfered by those
Pompili [76] proposed a novel caching framework, Octopus, from different neighboring UEs within the collaboration
which equips RRHs with distributed edge-caches at RRHs and distance. Larger collaboration distance introduces more
BBUs with cloud cache. Their overall system aims to provision interference while increasing the probability of finding
optimized caching at multiple layers such that the total con- the requested file; this leads to a tradeoff between the
tent access delay is minimized. Yao and Ansari [77] addressed interference and content finding probability. An optimal
the content placement problem in C-RAN at both the RRH collaboration distance should be explored to address this
and BBU level with the objective to minimize the average tradeoff.
file download latency. Different from most works on C-RAN • User social relationship plays an important role in D2D
caching, multiple BBU pools are included in their network caching because it determines the user willingness to
system. They further investigated the joint optimization of share contents. How to exploit the user social relation-
content placement and storage allocation problem in [78]. ships is also a challenging issue.
Most existing works in the context of caching in C-RAN 2) Caching at BSs and Relays: Caching at BSs can reduce
neglect the effect of user’s mobility and social relationships. backhaul traffic by installing caching storages at MBSs, SBSs,
With the similar architecture of joint caching at both BBU PBSs and FBSs, because users can obtain the requested con-
and RRH level, Chen et al. [79] addressed the content place- tents directly from the BSs rather than from the remote Internet
ment problem to minimize the network delay in C-RAN and content servers. As MBSs usually have larger cache storages
introduced a framework of echo state networks to predict the and coverage areas than SBSs, the objective of caching at
user request and mobility patterns with the aid of machine MBSs usually involves minimizing backhaul traffic, network
learning technologies. By extracting the information (e.g., age, latency and successful transmission probability. However, the
job and location) from user content requests, the echo state caching storages of SBSs are relatively small. Hence, coop-
network can track the current network status and predict the erative caching (i.e., different SBSs can share contents with
future information. From the perspective of users’ social rela- each other) is usually adopted. In addition, in order to make
tionships, Wang et al. [80] discussed the impact of mobile the most use of the limited caching storages of SBSs, cached
social networks on the performance of edge caching schemes contents should be updated more frequently according to
in F-RANs. They aimed at minimizing the bandwidth con- the dynamic content popularities. PBSs and FBSs are spe-
sumptions of both fronthaul links and RANs by caching at cial SBSs. PBSs require high-speed backhaul links connected
RRHs and UEs. UEs can share contents according to their to the MBS that may incur high traffic cost. Caching at
social ties and behaviors. PBSs reduces the traffic through the backhaul links and hence
reduces the traffic cost. In contrast, FBSs usually have smaller
backhaul links than those of PBSs, and may even have wireless
backhaul links. Hence, FBSs are flexible and cost efficient to
F. Summary and Discussion be deployed but within smaller coverages. Relays are usually
In this section, we classify the existing research according to deployed to extend the wireless coverages or act as urban hot
the caching locations at UEs, MBSs, SBSs, PBSs, FBSs and spots. When designing caching schemes for relays, the energy
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2535
TABLE IV
consumption of relays along with their locations should be S UMMARY OF C ACHING C RITERIA
taken into consideration.
HetNet improves area spectral efficiency by densely deploy-
ing SBSs. It consists of MBSs, SBSs (e.g., FBSs and PBSs),
and relay nodes, where the coverage of macrocells overlaps
with those of the small cells and relays and so they can share
the same spectrum. Hence, the interferences between SBSs and
the MBS cannot be avoided. In a typical cache-enabled HetNet
model, the MBS can act as the central controller to determine
the caching strategies. The users can obtain the requested con- 69
89
tents from their serving SBSs (or relays) or their neighboring 90 92
ones if cooperative caching is adopted, in which different SBSs
and relays can share contents with each other. In the HetNet, 93
users may frequently pass through different small cells. Hence,
user mobility should be taken into consideration in designing
caching schemes.
3) Caching in C-RAN: C-RAN aggregates the computing cache hit probability. They discussed the content placement
capabilities to the BBU pools by adopting cloud computing problem by designing a decentralized algorithm. By further
technology, which introduces flexibility and agility for wire- considering the scarcity of bandwidth resource, the D2D com-
less networks. The densely deployed RRHs at cell sites help munications technique is introduced in the caching strategy
improve the network capacity and coverage. The fronthaul design [83] to improve the cache hit probability. Chen and
links, connecting RRHs to the BBU pools, may be congested Kountouris [84] compared the performances of D2D caching
due to the growing mobile traffic. Caching at RRHs can reduce and SBS caching in terms of cache hit probability. They con-
the traffic in the fronthaul links and also the backhaul links cluded that D2D caching performs better when user density is
which connect BBU pools to the mobile core network. Caching high because more user requests can be served simultaneously
at BBU pools help reduce the traffic in the backhaul links. The through short-distance D2D communications. Otherwise, SBS
caching strategies in C-RANs are usually designed to mini- caching is more beneficial because cache storages of SBSs
mize the traffic in fronthaul and backhaul links, and the energy are usually larger and hence the cache hit probability can be
consumption of RRHs. In C-RANs, users are usually served by improved. Blaszczyszyn and Giovanidis [85] studied the con-
a group of RRHs by using the coordinated multipoint transmis- tent placement policy in cellular networks with the objective to
sion technique to increase the network capacity. Hence, how to maximize the cache hit ratio. In their network system model,
form groups of RRHs to serve users, which is referred to as the a user can be covered by multiple cache-enabled BSs and con-
RRH clustering problem, is critical. Owing to the coordinated nect to any of the BSs. They considered several BS coverage
multi-point transmission technique, each RRH is equipped models. The first model is the signal-to-interference plus noise
with multiple antennas and several RRHs in a multicast group ratio (SINR) based model where a user can only connect to the
can cooperatively transmit contents to users using multicast BS with the received SINR larger than a pre-defined thresh-
beamforming techniques. Hence, how to design the beam- old. The second model defines that each BS has a coverage
forming vectors to minimize the backhaul cost is a critical radius and can only connect to the users within its radius. In
issue. the third model, when multiple network operators (e.g., 3G/4G
BS and WiFi hotspots) coexist over an area, the user connects
IV. C ACHING C RITERIA to different operators following predefined probabilities.
When designing a caching scheme, we should try to improve
the caching performance in terms of several caching criteria. B. Spectrum Efficiency
We summarize these criteria in Table IV and discuss them in
SE is referred to as the supported data rate over a given
this section.
frequency bandwidth. To improve the area SE, network den-
sification by deploying more SBSs in a macro cell has been
A. Cache Hit Probability utilized. Caching can also improve SE by reducing network
The cache hit probability refers to the ratio of the number traffic and improve network throughput. Liu and Yang [86]
of cached files requested by the users over the total num- studied the performances of the area spectral efficiency in a
ber of files in the caches. A higher cache hit probability two-tier HetNet including helper nodes (with high capacity
means that more user requests are satisfied by the cached caches but without backhaul connections) and PBSs (without
contents. Increasing the cache size can improve the cache hit caches but with limited backhaul links). The HetNet consists
probability, and hence lower the required backhaul capacity. of the MBS tier and the helper node/PBS tier, where the MBS
Therefore, there is a tradeoff between the cache size and the can serve multiple users at each time slot while both the PBS
required backhaul capacity. Pantisano et al. [82] proposed a and helper node can only associate with one user. The essen-
collaborative framework, where SBSs can form a collation tial difference between a PBS and a helper node is that the
and share contents with each other in order to improve the downlink transmission rate for the PBS can be limited by
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2536 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
the PBS’s weak backhaul links while it only depends on the (PSO), where a number of particles are utilized to iteratively
wireless channel between the helper node and the user. They calculate the optimal solution. In the first iteration, each parti-
assumed that each helper caches the most popularity files until cle calculates its optimal solution (i.e., local optimal solution)
it reaches its cache storage capacity and then derived the closed in its search space, and then the global optimal solution is
forms of area spectrum efficiency for the two kinds of HetNets. the maximum (or minimum) local optimal solution among all
They concluded that 1) deploying PBSs and helper nodes particles. In the next iterations, all particles move to other posi-
can both double the spectrum efficiency as compared with tions to attain another local optimal solutions, and the global
only MBS; 2) deploying helper nodes achieves more spectrum optimal solution can be obtained by taking the maximum (or
efficiency improvement and requires less cost of deployment minimum) local optimal solution. After several iterations, the
and management as compared with PBSs; 3) increasing cache overall optimal solution of the problem is determined by the
capacities of helper nodes can achieve comparable spectrum maximum (or minimum) of all global optimal solutions among
efficiency improvement as deploying more helper nodes. all iterations.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2537
and bound algorithm to find the optimum spectrum alloca- cache-enabled helpers and users are modeled as two indepen-
tion and caching distribution. Without a central coordinator, dent PPPs. The users can acquire contents from helpers, other
Liu et al. [91] proposed a distributed algorithm to address the UEs or MBS. Helpers can transmit data to users within their
joint content placement and transmission problem to minimize coverage ranges and only UEs within a certain distance can
the average download delay. In their system model, BSs are establish D2D connections. They derived the expressions of
equipped with cache storages. Upon receiving a user request, the cache hit ratios at helpers and UEs as Ph and Pu , respec-
the serving BS decides to either transmit the requested content tively. Then, the traffic offloading probability can be calculated
directly, or cooperate with other BSs and transmit the content as 1 − (1 − Ph )(1 − Pu ).
to the user with cooperative beamforming. If none of the BSs
cache the file, the serving BS will retrieve the content from the
content server via backhaul links and then transmit the content G. Summary and Discussion
to the user. They assumed that the channel gains are identi- In this section, we classify the existing research according to
cally and independently distributed over the time slots. Hence, different caching criteria including cache hit probability, spec-
the content delivery rate, which is related to the channel gain, trum efficiency, energy efficiency, network throughput, content
is a random variable. In each time slot, they chose the trans- retrieving delay and offloaded traffic.
mission scheme with less transmission delay between direct The cache hit probability reflects the percentage of cached
transmission scheme and cooperative beamforming transmis- files that are used and is an important metric to evaluate the
sion scheme. They then calculated the expectation of delays performance of caching placement algorithms. Hence, most
from all time slots as the average download delay. existing works try to design a caching placement policy, which
Most works do not take the request forwarding problem determines the content to be cached at different caches, to
into consideration. Dehghan et al. [92] jointly optimized the maximize the cache hit probability. Another important factor
content placement and user request forwarding problem to that affects the cache hit probability is the caching storage
determine which content should be cached at each cache and size. A larger cache size implies that more contents can be
how user requests are forwarded to minimize the average con- cached, hence increasing the cache hit probability. How to
tent access delay. In their network model, there are several allocate caching storages to different locations is still under
caches and an one-hop backend server which can always serve investigation. Intuitively, the locations with more users should
the users. User’s requests can be forwarded either to the back- be allocated with a larger cache size. Furthermore, the cache
end server by costly, congested uncached paths or to the caches hit probability can also be increased by cooperative caching,
by cheaper and faster cached paths. However, if the cache, to which allows multiple caches share contents with each other.
which a user routed, does not cache the requested content, it For example, different BSs can cache different contents and
will first download the content from the backend server and share with each other to serve users, thus improving the cache
then transmit the content to the user, resulting in additional hit probability.
content access delay. They discussed two delay models: 1) the SE reflects how much network capacity can be provided by a
congestion-insensitive model in which the delay is indepen- unit of spectrum resource. The area spectrum efficiency (ASE)
dent of the traffic load and can be considered as a constant; is also measured to indicate how many users in a certain area
2) the congestion-sensitive delay model in which the delay is can be accommodated. In wireless networks, HetNet is usually
related to the traffic congestions and the paths are modeled deployed to improve spectrum efficiency by deploying several
as M/M/1 queues with fixed request arrival rates and service SBSs in each macro cell. Hence, most caching policies, aiming
rates. They also demonstrated that for both delay models, the at maximizing the SE, is in the context of HetNet. As the MBS
joint optimization problem is NP-complete and hence cannot and SBSs share spectrum resources, how to allocate the spec-
be solved in polynomial time and approximation algorithms trum resources is a challenging issue. Caching helps increase
were designed to address the problem. the network capacity to accommodate more users, thus increas-
ing the SE. On the other hand, increasing the density of SBSs
also helps improve the SE. Therefore, the SBS density can be
F. Traffic Offloading traded off by increasing the cache size to achieve a targeted SE.
Mobile edge caching can offload the traffic from backhaul In addition, D2D caching with unlicensed-bands increases the
links. Maximizing traffic offloading leads to better caching SE by establishing direct communications among UEs instead
performance. Li et al. [60] tried to maximize the traffic offload- of obtaining contents from BSs via downlinks. In that case,
ing by caching in HetNets. They formulated their problem maximizing the SE is equivalent to maximizing the number of
as a minimization of expected sum of traffic loads subject D2D communication links; this has been widely investigated
to cache storage capacities and backhaul link limitations. in D2D networks.
The traffic loads are incurred between PBS and user, PBS EE reflects the supported data rate per given energy con-
and PBS, MBS and user, as well as mobile core network sumption. The energy consumption of a BS is attributed to
and MBS. A suboptimal greedy algorithm was designed to both the BS transmission and the backhaul traffic. Caching at
obtain the optimum content placement decisions. By incor- BSs helps reduce the backhaul traffic and hence reduces the
porating user mobility, Rao et al. [93] explored the content energy consumption. The energy consumption of BS downlink
placement problem in a two-tier wireless caching network to transmission can be reduced by optimizing the BS transmis-
maximize the traffic offloading probability. The locations of sion power. Note that, in HetNet, caching at SBSs can achieve
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2538 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
TABLE V
higher EE than caching at MBSs because the limited backhaul C OMPARISON OF C ACHING S CHEMES
capacity of SBSs (i.e., links between MBSs and SBSs) may
limit the network throughput when caching at MBSs and may
hence limit the EE. However, caching at SBSs suffers from the
limited cache storages of SBSs. Therefore, the content place-
ment problem, which places the contents at both SBSs and
MBSs should be well designed, is a crucial issue to be inves-
tigated in order to maximize the EE. Caching policies should
also consider the impact of interference because too many
interferences may degrade the network throughput and hence
reduce the EE. In addition, the EE is very critical for D2D
caching because UEs are usually battery constrained. How to
minimize the energy consumption of UEs while satisfying all
user’s content requests is still an ongoing research.
The network throughput reflects the maximum data rate that
the network provides. It affects the user QoS especially in the
video streaming application, where users usually require the
minimum network throughput. In order to improve the network
throughput, the contents should be cached as close to the users
as possible. Otherwise, any hop of the multiple network hops
between the content and users could be the bottleneck and
limit the network throughput. The network throughput is also
highly related to the caching storage size. A larger cache size
implies more contents can be cached and hence higher network
throughput. The cache storage allocation problem to maximize
the network throughput is still under investigation.
Content retrieval delay directly affects the user QoS. Most
delay-sensitive services (e.g., disaster response and virtual
reality) usually define strict deadlines for content retrievals.
In wireless networks, content retrieval delay usually consid-
ers the downlink delay (i.e., the delay between the caches
or content servers and users) and neglect the uplink delay
(i.e., the duration of forwarding the requests). The one-hop the offloaded traffic maximization problem into the traffic load
network delay usually consists of three parts including the minimization problem. The second way is to evaluate the traf-
data transmission delay, queueing delay and propagation delay. fic offloading probability which is defined as the number of
Most works only consider the data transmission delay. The requests served by one of the caches over the total number of
content retrieval paths may include several network hops, requests. Hence, a higher cache hit probability usually leads
which depend on where the content is located. If the con- to a higher traffic offloading probability.
tent is obtained from the mobile core network, the content
retrieval delay includes the backhaul delay and the wire- V. C ACHING S CHEMES
less transmission delay. The backhaul delay is depends on
We will next discuss different caching schemes, delineating
the link length, backhaul capacity and traffic load while the
their advantages and disadvantages as summarized in Table V.
wireless transmission delay depends on the wireless channel
conditions, bandwidth, SINR and interferences. For cooper-
ative caching, the content transmission latency between two A. Proactive Caching
cache nodes should also be considered. The content retrieval The reactive caching policy determines whether to cache
delay is affected by both the content placement strategy (which a particular content after it has been requested. It typi-
determines where the contents to be placed) and the content cally happens when the network is at peak-traffic hour and
delivery strategy (which decides how the contents are deliv- cannot effectively cope with the peak traffic. On the other
ered). Hence, minimizing the content retrieval delay is usually hand, a proactive caching policy determines which contents
the objective of the content placement and delivery problems. should be cached before they are requested based on the
Offloaded traffic reflects how much traffic can be reduced prediction of user demands [95]. Proactive caching usually
by caching in the network. The traffic in the backhaul links utilizes the estimations of request patterns (e.g., user mobil-
is usually measured to evaluate the offloaded traffic. More ity patterns, user preferences and social relationships) to
offloaded traffic implies better caching performance. Offloaded improve caching performance and guarantee QoS require-
traffic is highly related to the cache storage sizes. A larger ments. As machine learning and big data analytics advance, it
cache size implies more offloaded traffic. The offloaded traffic is advantageous to cache popular contents locally before the
is mainly evaluated in two ways. The first way is to transform requests truly arrive [32], [49]. Proactive caching improves the
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2539
caching efficiency by pre-downloading popular contents dur- to minimize the average file downloading delay. Their network
ing off-peak times and serving predictable peak-hour demands. architecture consists of a MBS and cache-enabled SBSs
Bastug et al. [32] proposed a proactive networking paradigm to which UEs’ requests are preferentially forwarded. They
which leverages social networks and content popularity distri- divided the files in the file library into several file groups
butions to improve the caching performance in terms of the and assumed that each SBS can only cache one file group.
number of satisfied requests and the offloaded traffic. They A distributed BP algorithm was proposed with the aid of a
demonstrated that proactive caching performs better than reac- factor graph, which is a bipartite graph consisting of factor
tive caching. Tadrous et al. [96] considered the system in nodes and variable nodes. A factor node refers to the utility
which the popularity of services can be predicted. Cache nodes function of a user and is related to the average file down-
can proactively cache services during off-peak hours accord- load delay. Each variable node indicates a cache status vector
ing to their popularities. They explored the proactive caching of each SBS. Only if a UE is under the coverage of a SBS,
scheme by considering the resource allocation to maximize the there can be an edge connecting the UE (factor node) to the
cost reduction which is related to the offloaded traffic incurred SBS (variable node). The BP algorithm is then implemented
by proactive caching. by iteratively passing messages between the factor nodes and
To further improve the performance of proactive caching, variable nodes. In each iteration, the message is represented by
it is desirable to jointly optimize the caches among multiple a probability mass function based on the UE’s utility function;
nodes. Hou et al. [97] exploited a learning-based approach each variable node updates its message to be sent to connected
for proactive caching to maximize the cache hit ratio. In factor nodes and each factor node updates its message to be
their system model, different caches can share information and sent to connected variable nodes. The BP algorithm termi-
contents. They first estimated the content popularity by a learn- nates when the messages do not change. Different from [101]
ing method and then designed a greedy algorithm to obtain which assumed that each UE can only served by one BS,
the suboptimal content distribution solutions. However, the Liu et al. [91] proposed a distributed BP algorithm to minimize
caching performance highly depending on the prediction accu- the average download delay in cellular networks where each
racy is the major drawback of proactive caching. Prediction user can be served by multiple cache-enabled BSs. The data
errors can gravely degrade the caching performance [98]. transmission scheme depends on the cache placement. If only
one BS caches the requested file, the BS will transmit the file
to the user directly; otherwise, multiple BSs can transmit the
B. Distributed Caching file via cooperative beamforming. In their BP model, each BS
Centralized caching uses a central controller, which pos- iteratively collects local information (e.g., user requests and
sesses a global view of all network status, to determine caching CSIs), runs computations, and exchanges messages with the
schemes. The central controller usually tracks the information neighboring BSs until convergence. They demonstrated that
of user mobility patterns and the channel state information the distributed BP algorithm requires less calculations than
(CSI) by extracting and analyzing the received requests. the centralized one.
Hence, the centralized caching is able to achieve the opti-
mum caching performance with optimum caching decisions
(e.g., content placement). However, obtaining full network C. Cooperative Caching
information is challenging especially in the context of dynamic Since the caching space in a BS is relatively small, design-
5G wireless networks, which are expected to serve an increas- ing a caching policy for each BS independently may result in
ing number of mobile users [99]. Furthermore, the central an insufficient utilization of caches. This happens when some
controller has to process a large amount of traffic, which incurs of the caches are overly used while others have many vacant
a great burden on the controller as well the links between the spaces. In order to address this issue, cooperative caching poli-
controller and network entities. In that case, the central con- cies have been proposed to improve the caching efficiency. In
troller can be the bottleneck of the mobile caching system. the cooperative caching, BSs are able to share cached con-
In distributed caching, which is also referred to as decen- tents with each other [98]. However, the delay of searching
tralized caching, cache nodes make decisions (e.g., content and retrieving contents from other caches may also be signifi-
placement and update) only based on their local information cant and hence should be taken into consideration. In order to
and the information from adjacent nodes. Distributed caching actualize cooperative caching, network nodes should be aware
is applied in [49] where adjacent BSs are jointly optimized to of the caching status of other nodes by information exchanges
increase the cache hit probability. By fetching contents from that may induce significant signaling overheads. Hence, we
multiple neighboring caches, the total cache size seen from need to find a solution to share the caching status with the
the user can be increased. minimum overhead. Jiang et al. [102] developed the coopera-
The believe propagation (BP) method has been proposed tive caching policy for HetNets where users can fetch contents
as an efficient way to distributively solve the resource alloca- from FBSs, D2D communications or MBS. They formulated
tion problems in wireless networks. In BP, the complex global the cooperative content placement and delivery problem as an
optimization problem is usually decomposed into multiple integer linear programming (ILP) problem to minimize the
subproblems, which can be effectively addressed in the par- average downloading latency. A Lagrangian relaxation algo-
allel and distributed manner. A tutorial of BP can be found rithm was then designed to decouple the original problem into
in [100]. Li et al. [101] discussed the file placement problem two smaller subproblems which can be solved more efficiently.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2540 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
Additionally, the content delivery problem was also formulated layer and then combined the two layers by providing coded
and solved by the Hungarian algorithm. multicasting opportunities across different layers.
Most researches on cooperative caching assume the static
popularity; the joint consideration of the cooperation and E. Probabilistic Caching
learning of the time-varying popularity still requires further
Different from wired networks with fixed and known topolo-
investigation. Song et al. [103] explored the content caching
gies, wireless networks face the uncertainty about which user
problem with an unknown popularity distribution. They incor-
will connect to which BS due to undetermined user locations
porated the learning of the popularity distribution, and then
and the variance of user requests. Caching in wireless networks
jointly optimized the content caching, content sharing and cost
becomes more complex when a user moves from one cell to
of content retrieving.
another during the content delivery. An approach to solve this
problem is to employ a probabilistic caching policy in which
the content can be placed in the caches according to some
D. Coded Caching random distributions. To reflect the uncertainty, Blaszczyszyn
and Giovanidis [85] modeled the user locations as a spatial
In a traditional switching network, the network node for-
random process. They optimized the probability of each con-
wards packets one after another: two packets are present in the
tent being cached at each BS with the aim to maximize the
node at the same time; one of the two packets is forwarded
cache hit probability. They also demonstrated that the widely
while the other one is queued even if both are headed for
used greedy algorithm, which caches the most popular files,
the same destination. This traditional packet forwarding mech-
cannot always guarantee optimization in a general network
anism requires separate transmissions and hence decreases
unless no BS coverage overlaps exist. Ji et al. [43] discussed
the network efficiency. Network coding is a technique which
the random caching strategy with the aid of coded multicast-
merges two separate messages into one coded message and
ing in D2D networks where UEs are uniformly distributed in
forwards them to the destination. After receiving the coded
a grid network and can share contents with each other. They
message, the network node separates them into two original
pointed out that the drawback of deterministic caching is that
messages. To enable the network coding technique, transmit-
the optimal cache placement cannot always be implemented
ted data are encoded at network nodes and then decoded at
without errors especially when D2D caching is considered.
the destinations. Hence, the network coding technique requires
They demonstrated that their random caching strategy, where
fewer transmissions to transmit all the data. However, this
users make arbitrary requests for files, performs better as the
scheme requires coding and decoding processes, and hence
network size grows.
incurs more processing overheads to the network nodes. The
complexity of network coding can be lowered by efficient
packet transmissions [104]. F. Game Theory Based Caching
In network coded caching, files in the file library are usually In wireless networks, multiple parties coexist, including the
divided into coded packets and then any linear combina- service providers (SPs) who provide contents, mobile network
tion of these code packets can reconstruct the entire original operators (MNOs) who manage the radio access networks
object [43]. For example, the file library has the file C, which (RANs), and mobile users who consume different contents.
is divided into C1 ⊕ C2 . Owing to the cache storage lim- When applying a specific caching strategy, the benefits of dif-
itation, a user, who requests file C for the first time, only ferent parties could conflict with each other. For example,
caches packet C1 after having received file C. When the user bringing more contents to BSs is beneficial to users while
requests the same file C for the second time, the BS only increasing the cost of MNOs due to the additional storages
needs to transmit C2 to the user. On the contrary, in uncoded and power consumption. Since each party only cares about
caching, file C has to be transmitted for both the first and its own profit, competitions among them are unavoidable. To
second time. Therefore, coded caching helps reduce network effectively cope with the competition and guarantee high over-
traffic (C + C2 < C + C ). Maddah-Ali and Niesen [105] all user experience, game theory is adopted to analyze the
jointly optimized the caching and coded multicast delivery and interactions among these parties. An auction game is suitable
demonstrated that the joint optimization problem can improve to characterize the competition among SPs. In this setting, the
the caching gain when the demand for the cached content is cache storages are considered as objects to be auctioned and
uniformly distributed. They further showed the near-optimal the price should be paid to MNOs by SPs. MNO should be
performance of coded caching achieved by a random caching the one who is in charge of the auction process.
scheme [106]. They also presented that caching gain can be Hu et al. [109] applied game theory to analyze how the
exploited from coded multicast transmissions in [107]. They selfishness of different parties may impact the overall caching
proposed in [107] a decentralized coded caching scheme and performance by considering the relations and interactions
discussed how to handle scenarios with asynchronous user among different parties. They considered two scenarios includ-
demands, nonuniform content popularity, and online cache ing the SBS caching and D2D caching. In the former one,
updating. Most works only consider the single layer coded multiple SPs aim to cache their own contents into SBSs with
caching, Karamchandani et al. [108] proposed a hierarchical limited cache storages, and an auction game is proposed to
coded caching scheme by considering a two-layer hierarchical solve the problem. For the latter one, they adopted a coali-
cache. They first utilized the coded caching schemes in each tion game to analyze how a cooperative group can be formed
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2541
to download contents together. They extended their work by all network status, distributed caching usually cannot obtain
introducing the concept of caching as a service in [12], where the optimal solutions. Hence, designing distributed caching
they utilized the wireless network virtualization technology strategies with performance guarantee still requires further
and each SP has to pay for the SBS cache storages owned by research.
the MNO. A multi-object auction mechanism was proposed Cooperative caching allows multiple caches to share con-
to characterize the competition among SPs. Since all SPs tend tents with each other, and hence it can alleviate the shortage
to cache more contents to improve the service performance, of caching storages. In cooperative caching, a cache is usu-
they intented to act as the bidders and compete for limited ally aware of the caching status of its neighboring caches by
cache storages. The utility function is related to the average exchanging information; this may incur significant signal over-
content download file. Their mechanism was carried out by heads. Most works on cooperative caching do not consider
a series of auctions, which are solved by the market match- these overheads. Hence, further research is needed to minimize
ing algorithm [110]. Hamidouche et al. [111] assumed that all the content retrieval latency while minimizing the overhead.
SBSs in a cache-enabled small cell network could choose their Coded caching allows files to be divided into coded chunks
backhaul link types among wired links, mmW and sub6 GHz with coding, which are then cached in different cache nodes.
bands. They formulated a backhaul management minority Users obtain different coded chunks from different cache
game where the SBSs are the players and independently decide nodes and then decode these chunks into a complete requested
their backhaul link types and the numbers of files to down- file. Coded caching is usually coupled with a multicast tech-
load and cache from the MBS without sacrificing the current nique to provision content delivery. However, coded caching
requests’ QoS. The characteristic of a minority game is that aggravates the network system complexity and introduces
players prefer the action selected by the minority group. The more processing at network intermediary and terminal nodes.
existence of a unique Nash equilibrium was then proved. This drawback of coded caching is neglected by most works
By considering the social ties among UEs, and hence needs further investigation.
Hamidouche et al. [112] utilized the game theoretic approach Probabilistic caching allows contents to be cached at differ-
to determine the content placement strategies to SPs. A many ent caches with different probabilities. It is proposed to address
to many matching game was formulated between SPs and the uncertainty of wireless networks caused by varying wire-
SBSs, where each file in SPs can be matched to a set of less channel conditions and user mobilities. The objective of
SBSs. SPs specify their preferences based on the average file probabilistic caching is usually to maximize the cache hit prob-
download delay while SBSs prefer to store more popular files. ability by optimizing the probabilities with which contents are
The stable solution can be obtained by iteratively update the cached at different locations. Most works assume the deter-
matching solution according to SPs’ and SBSs’ preferences, ministic network status and so probabilistic caching is still an
until neither of them can find a better preference. ongoing research.
Game theory based caching investigates the interactions of
multiple coexisted parties (e.g., service providers and mobile
G. Summary and Discussion network operators). Each party selfishly optimizes its own ben-
In this section, we survey several caching schemes and efits which may conflict among different parties. A typical case
compare their pros and cons, including proactive caching, is the auction game where the service providers act as the bid-
distributed caching, cooperative caching, coded caching, prob- ders and compete for the limited caching storages in order to
abilistic caching and game theory based caching. improve their own caching performances. Most works only
Proactive caching, contrary to reactive caching, caches the consider the non-cooperative game, and so the cooperative
contents prior to receiving the requests. It helps improve the game requires further investigation.
caching efficiency by pre-downloading popular contents dur-
ing off-peak hours and serving users during peak hours. Hence,
accurate prediction of user demands plays an important role in
VI. C ONTENT R EQUEST A NALYSIS
proactive caching. Most works characterize the user demands
by estimating user mobility patterns, content popularity distri- In the next four sections, we will discuss the specific caching
butions and user social relationships via machine learning and processes as categorized in Fig. 7. The benefits and solutions
big data analytics. Further research is still required to provide of caching highly depend on the traffic characteristics (e.g.,
higher estimation accuracy. user demand profiles). In this section, we discuss the request
Distributed caching, contrary to centralized caching, does types and patterns.
not rely on the central controller to make caching deci-
sions. Hence, it avoids the great burden on the single control
node. In distributed caching, the caching strategies are usu- A. Content Types
ally made based on the local information (e.g., user requests Contents such as files and videos are the most commonly
and CSIs) and that from the neighboring ones, and hence cached ones. File downloading (e.g., software or data library
can be addressed in the parallel and distributed manner. Most updates, music or video downloads) is applicable for delay
works investigate how to utilize the local and neighboring tolerant data because files can only be used after they have
information to solve the content placement problem. However, been delivered. On the other hand, video streaming requires
unlike centralized caching, which owns a global view of a low initial delay, high video quality and few stalls during
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2542 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
B. Request Patterns
It is challenging to make optimum decisions for caching at
the mobile edge without the knowledge of user request pat-
terns. Request patterns can be extracted and analyzed to predict
future requests and provide insights on the proactive caching.
Observation of past request arrivals can be a feasible solu-
tion to obtain request patterns. However, it is challenging to
either predict or model the request arrivals in the real world.
Hence, intelligent algorithms based on predictions and stochas-
tic models are required. In order to characterize the request
patterns, most works assumed that the request arrivals follow
the Poisson process [118]. Urgaonkar et al. [119] discussed the
service migration problem from the cloud to the mobile edge
clouds. They provided insights on where and when to migrate
the services by incorporating the user mobility and request
variation. They modeled request arrivals as a Markov process
and proposed algorithms to minimize operational costs.
C. Content Popularity
The content popularity is defined as the ratio of the num-
Fig. 7. Caching process. ber of requests for a particular content over the total number
of requests from users. It is usually obtained for a certain
region during a given period of time. The key feature of the
content popularity is that most people are interested in a few
playback, and users prefer to start playing the video immedi- popular contents within a certain time period and hence these
ately after sending the request; video streamed data are thus few contents account for major traffic loads. Li et al. [120]
considered as time sensitive data. illustrated that top 5% videos in Youku contribute over 80%
Next generation 5G networks will be enabling and empow- of contents inside mobile networks in China. In general, the
ering various emerging IoT applications [31], [113]. IoT content popularity distribution changes at a relatively slow
applications generate a large amount of monitoring, measure- speed [49]. Hence, the content popularity distribution is usu-
ment, and automation data. Requesting for these data may ally considered as a constant over a long time (e.g., one week
incur enormous traffic to the network. Furthermore, frequently for movies, and two or three hours for news) [49]. In addition,
activating IoT devices to fetch data drains their batteries [114]; the global popularity in a large region like in a city or a coun-
this is a major challenge. Therefore, it is beneficial to cache try is often different from local popularity in a small region
IoT data to reduce the frequency of activating IoT devices like in a campus [95]. The popularity prediction has become
and alleviating traffic loads in the network [80]. IoT data have an active research field recently because it can be incorporated
much shorter lifetimes. Therefore, more intelligent caching into many applications such as caching, online marketing, rec-
strategies are called for to address this issue. Vural et al. [115] ommendation system, and media advertising. Many prediction
first proposed caching IoT data at network routers according methods have been proposed such as the cumulative statistics
to the data lifetime and hops between the respective router and from the popularity correlation over time [121].
the data source. A shorter data freshness indicates that caches Several time series models have been proposed, e.g., autore-
should be updated more frequently and more requests should gressive integrated moving average [122], regression mod-
be sent to the IoT devices, thus resulting in more traffic load els [123] and classification models [124]. These model-based
in the network. They further exploited the tradeoff between forecasting schemes are usually carried out by machine learn-
the traffic load and data freshness. Niyato et al. [116] dis- ing methods. Leconte et al. [125] utilized a dynamic request
cussed a novel caching scheme for IoT sensing service where model, the shot noise model (SNM) for mobile edge caching,
IoT devices sense the ambient environment and send the data to maximize the cache hit ratio. In SNM, each shot is
to the users. Considering the rapidly changing environment, a associated with one content and is characterized by four
timer was set up to determine the freshness of IoT contents. dimensions including shape, duration, shot arrivals, and vol-
The caches should be removed when they exceed the timer ume. For the shape, they utilized the rectangular pulse whose
threshold. In order to maximize the cache hit ratio, a thresh- height represents the content popularity; the duration reflects
old adaptation algorithm was then designed to find the optimal the content life span; shot arrivals follow the Poisson pro-
threshold. Yao and Ansari [117] investigated caching schemes cess; volume is determined by a power-law distribution [121].
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2543
Famaey et al. [126] designed a content placement strategy respectively [135]. We discuss these two categories in the
based on the estimated content popularity. They proposed a following subsections.
general popularity prediction algorithm which learned from the 1) Spatial Property: The spatial property refers to features
historical request patterns and chose the best fit functions from regarding the physical location. The commonly used model
constant function, power-law distribution, exponential distribu- for user mobility in mobile networks is the random way-
tion and Gaussian distribution. Then, future request patterns point model [136]. In this model, a user randomly moves to
can be estimated based on the chosen best fit function. a destination point (i.e., waypoint) following a straight line
The content popularity is reported to follow a Zipf distribu- towards the waypoint at a constant speed. After some wait-
tion which belongs to the power law distribution [121]. The ing time in the waypoint, the user moves to another waypoint.
Zipf distribution [127] defines the probability of a user in Hence, the transition time and length between two waypoints
requesting the f th file as are important parameters to measure. The Markov model can
also be used to model the user mobility [137]. In the Markov
f −δ
p f = N , (1) model, a state represents the serving BS for each user and a
f −δ
j =1 j transition probability indicates the probability of the network
status moving from one state to another. Lv et al. [138]
where δ is the skewness parameter. δ reflects the different
investigated the effect of living habits on the models of spatio-
levels of skewness of the Zipf distribution. A larger skewness
temporal prediction and next-place prediction based on the
implies larger deviations among different content popularities,
hidden Markov model.
i.e., most users request a small number of popular contents.
2) Temporal Property: Temporal property characterizes the
Particularly, all the contents have the same popularity if δ
time-related features. The parameters to describe the tempo-
equals 0, i.e., users request all contents with equal probability.
ral property include the contact time and inter-contact time.
However, the content popularity is time-varying and the fresh
Content time refers to the time duration where users are within
view of the system is required to know the content popularity.
the transmission range and can be connected with each other.
The massive data collection and processing are needed and
The inter-contact time refers to the time intervals between two
hence the content popularity prediction is a complex task to
contact times. Wang et al. [38] proposed a caching placement
handle.
strategy with the aim to maximize the data offloading with the
Liu et al. [128] proposed a Hadoop-based distributed com-
consideration of user mobility. They modeled the user mobility
puting platform for monitoring large-scale network traffic data
as an inter-contact model which collects the user connectivity
and demonstrated that their platform is efficient and cost-
information. Users within the transmission range are defined to
effective for analyzing user behaviors. Zeydan et al. [129]
be in contact and can share contents with each other by D2D
also focused on the deployment of a Hadoop-based big data
communications. Rao et al. [93] jointly optimized caching in
processing platform inside a mobile core network in order to
D2D caching and caching at helpers to maximize the traf-
monitor the performance gains of caching with real data tri-
fic offloading probability. They considered the impact of user
als. They utilized machine learning algorithms to predict the
mobility which was modeled as the contact time and inter-
content popularity and demonstrate improvements of QoE by
contact time. They assumed that user locations remain static
the proactive caching at the edge.
during the contact time and move to another location after the
contact time.
D. User Mobility Pattern
Mobility is an important factor to be considered in mobile
networks because it impacts mobile network topologies (e.g., E. Summary and Discussion
the varying user-BS association) over time [130]. The latency In this section, we discuss the request types, patterns,
of mobile networks can be attributed to unpredictable topology content popularity estimation and user mobility pattern.
changes owing to mobility [131]. User mobility also contains The traditional cached contents are mainly files and videos.
many helpful information (e.g., social relationships and traf- Files are usually time tolerant data while videos are time sen-
fic patterns), which can be utilized to improve the caching sitive data. On the other hand, the IoT data have much less
performance. Poularakis and Tassiulas [132] discussed the lifetimes because they are usually sensed data obtained by
storage allocation problem in Femtocaching networks. As the the IoT sensors. Stale IoT data cannot reflect the actual envi-
future position after the movement is highly related to the cur- ronment status. Hence, caching strategies should consider the
rent position, user mobility can be modeled by a Markov chain. freshness of the IoT data.
As users with more similar mobility patterns tend to have Request patterns are usually extracted and analyzed to
stronger social relationships, the user mobility pattern relies predict future request patterns, which provide information for
on social relationships [133]. Musolesi et al. [134] proposed a proactive caching. Most works assume that the request arrivals
user mobility model based on user social relationships. They follow the Poisson process or Markov process. However, these
first created a social graph containing all social groups and mathematical models are not necessarily applicable to real traf-
then mapped these social groups to physical connected groups. fics. Machine learning or big data analysis techniques may be
The user mobility pattern is generally classified into two cat- utilized by observing the past requests.
egories including the spatial and temporal properties, which The content popularity is an indispensable factor for the
reflect the location-based and time-related characteristics, content placement problem. It reflects the probabilities of users
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2544 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2545
backhaul but high storage capacity or the BS which pos- C. Summary and Discussion
sesses the whole file library. They transformed the average In this section, we discuss the content exploration phase of
download delay minimization problem into the cache hit ratio the caching process, including the content query problem and
maximization problem. However, their work did not con- content placement problem.
sider the wireless channel conditions and assumed the delays The content query problem determines how the requests are
between users and helpers are constant. Song et al. [143], forwarded in order to find the contents and which network
instead, considered the wireless channel conditions in femo- nodes should serve the users (i.e., user association problem).
caching networks. They formulated the content placement In general, the requested content is first searched in neigh-
problem as an ILP model and demonstrated that the closed- bor UEs, then in BSs (e.g., FBSs, PBSs and MBSs), and
form solution of this model was difficult to obtain. They finally from the mobile core network or remote Internet con-
then designed a greedy algorithm to obtain the subopti- tent servers. Unlike the traditional user association problem in
mal solution. In their simulations, they demonstrated that wireless network, users may not be associated with the nearest
caching the most popular files was not always the best BSs in mobile edge caching. Instead, it is more beneficial to
choice and wireless channel conditions also affected the associate users to the caches that have the requested content.
caching performance. On the other hand, Peng et al. [144], Since the cache locations affects the user association strategies,
by considering the backhaul constraints, studied the con- the user association problem is usually jointly optimized with
tent placement problem in a cache-enabled wireless network the content placement problem in most works. In cooperative
where BSs are equipped with caches and a central con- caching where caches share contents, the joint optimization of
troller can transmit files to users if files are not cached. content placement and user association has not been addressed
They aimed to minimize the file transmission delay con- yet and needs further study.
sisting of the delay in the backhaul links and wireless The content placement problem, which determines how to
transmissions. They first formulated this problem as a mixed- distribute contents to different caches, is the most important
integer nonlinear programming problem and then designed a issue in caching related studies. The objective of this problem
relaxation-based heuristic algorithm to obtain the suboptimal is usually to minimize the average file download delay and
solutions. maximize the cache hit probability. As the placement deci-
User social relationships can impact the content place- sion involves the 0-1 variables, most works formulate this
ment strategies in mobile edge caching. Wu et al. [22] problem as an ILP model which is usually solved by design-
explained that the challenges of social-aware caching root ing heuristic algorithms to obtain suboptimal solutions. The
in the fact that the contact among UEs is usually oppor- content placement problem in D2D caching usually consid-
tunistic in practical systems owing to the short-range D2D ers the user social relationships which affects the user contact
communications. Because of the scarce spectrum resources times and location distances. However, most works neglect the
and contact durations, the transmitted contents can be very additional traffic overhead caused by caching the contents. The
limited. They also discussed four content placement strate- content placement strategies considering the overhead are still
gies: 1) global content placement based on the user con- required.
tact information; 2) social aware content placement based
on the respective community; 3) individual content place-
ment based on the user’s own interest; 4) random content VIII. C ONTENT D ELIVERY
placement. The content delivery problem addresses the issues of how
User mobility is also a critical factor of the content place- to transmit contents to the users. Specifically, these issues
ment problem. Users can move from one location to another involves the locations the content should be transmitted from,
and stay in one cell for a certain time (i.e., cell sojourn time). the transmission power and the frequency bands the content
During the cell sojourn time, the user is associated with one should be transmitted. Hence, the specific physical condi-
BS. Therefore, the user can only receive the data from certain tions (e.g., available spectrums, wireless channel conditions
BS during that time. Hence, the cell sojourn time can impact and interference) should be taken into consideration when
the content placement strategy in BSs. Wang et al. [135] designing content delivery strategies.
investigated the content placement problem by utilizing the
user mobility information in mobile edge caching. They dis-
cussed caching at BSs and UEs separately with the objective A. Multicast Transmission
of maximizing the traffic offloading probability. Caching at Multicast enables simultaneous content transmission to
BSs is constrained by the wireless transmission rate and the multiple destinations by broadcasting. Via multicast, a BS
sojourn time. The content placement problem for caching can concurrently serve multiple users who request identical
at BSs is formulated as a convex optimization problem for contents. Hence, multicast can help reduce duplicated trans-
uncoded caching and a mixed integer programming (MIP) missions and energy consumption. Users’ requests may be
problem for coded caching. Caching at UEs is constrained initiated at different times. For the delay-tolerant file down-
by the UE transmission distance and user contact times. loading, users can endure some initial delays so they can
The content placement problem for caching at UEs falls wait for each other before initiating the multicast transmis-
into the problem of maximizing the monotone submodular sion. On the other hand, users may suffer from the initial
function. delays and the QoE degradation for delay sensitive video
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2546 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
streaming. Therefore, multicast schemes should consider dif- Hence, the content delivery is carried out group by group. They
ferent data types. Maddah-Ali and Niesen [105] proposed a discussed the problem of how to form efficient groups to min-
novel coded multicast strategy, which coded parts of popular imize the power consumption of all UEs. They demonstrated
contents and pre-cached them. In the content delivery phase, by simulations that the total power consumption can be saved
the content server can serve multiple requests with a single significantly by grouping UEs in multihop D2D networks.
multicast transmission. However, this method scales badly in
practice because the coding complexity grows exponentially. C. Joint Optimization of Content Placement and
Poularakis et al. [145] proposed a caching policy for small Content Delivery
cell networks to minimize the service cost which is related
In practice, the content placement and content delivery can
to the incurred traffic. In their system model, the SBSs can
impact each other. On one hand, the content placement deter-
use multicast to transmit cached contents to users under their
mines the distribution of contents and impacts the content
coverages, and the MBS can also transmit contents to users by
delivery paths. On the other hand, the statistics of the content
multicast transmission but incur more service cost than that of
delivery over a long time can be utilized to explore the popular-
SBSs. They designed several heuristic algorithms to address
ity distribution. The caches can be periodically updated based
this problem and then demonstrated that the multicast-aware
on these statistics. Therefore, studying the coupling between
caching scheme performs 52% better than the multicast-
the content distribution and delivery is very important.
agnostic one. Liao et al. [146] utilized the multicast technique
Maddah-Ali and Niesen [105] jointly optimized the caching
to deliver content from MBS to SBSs in a cache-enabled small
and coded multicast delivery, and demonstrated that the joint
cell network. They designed caching policies to minimize the
optimization of caching and delivery can improve the caching
long-term average backhaul load for two different networks
performance. Cui and Jiang [61] jointly considered the caching
(i.e., small scale networks and large scale networks). For the
placement and multicast delivery in a cache-enabled two-tier
small scale networks, they proposed a greedy algorithm to
HetNet consisting of the MBS tier and the PBS tier. They con-
provision content delivery. For the large scale networks, a
sidered identical caching in the MBS tier and random caching
multicast-aware in-cluster cooperative caching algorithm was
in the PBS tier. In the MBS tier, all MBSs cache the same
developed in which different SBSs can share content with each
set of files. In the PBS tier, all PBSs randomly cache dif-
other.
ferent files except the files that have already been cached at
Instead of utilizing multicast to optimize the caching pol-
MBSs. Both MBSs and SBSs can transmit files to user by
icy, Zhou et al. [147] tried to optimize the multicast content
multicasting. They formulated the joint problem as a mixed
delivery policy under a given content distribution in HetNets
discrete-continuous optimization problem with the objective to
consisting of a MBS and SBSs to minimize the average
maximize the successful transmission probability and designed
network delay and power. The MBS provides full coverage
a near optimal algorithm to solve it. Gregori et al. [56] jointly
of the network and generates higher power consumption. The
optimized caching and transmission policies to minimize the
SBS cells are not overlapped and there is no interference
MBS energy consumption. In their system model, SBSs and
among all SBSs. Considering the special structure of mono-
UEs are equipped with cache storages. A SBS can serve
tonicity of the value function, they developed an approximate
multiple users simultaneously by multicasting and UEs can
dynamic programming algorithm to solve this problem. They
share data through D2D communications. They formulated this
extended their work in [148], where they additionally con-
joint problem as a finite dimensional convex problem and then
sidered the impact of content sizes and the wireless channel
designed a projected subgradient algorithm to solve it.
conditions on the multicast scheduling policy. They formulated
the optimization problem as a Markov decision process and
designed a suboptimal algorithm to solve this problem. D. Summary and Discussion
In this section, we discuss the content delivery problem on
how to transmit contents to users.
B. Relay Multihop The delivery strategies can be generally classified into uni-
The D2D delivery enables one UE to retrieve contents via cast and multicast transmission. In unicast transmission, the
D2D links from nearby UEs. A higher UE density can lead to BS uses different time and frequency resources to serve differ-
a higher probability that the required contents can be pro- ent users and content delivery strategies are usually designed
vided. The multi-hop delivery through D2D relays allows to minimize the network latency and backhaul traffic. In con-
neighboring UEs to serve as relays for content delivery. This trast, multicast transmission enables the BS to simultaneously
relay-based mechanism allows a broader range of content serve multiple users, who request the same content in the same
delivery. Moreover, when the requested contents are cached in cell, at the same time and using the same frequency resources.
multiple UEs, they can cooperatively deliver contents to pro- However, requests do not necessarily arrive in time. Hence,
vide a higher transmission rate. Xia et al. [149] investigated the serving BS has to delay some requests, collect multiple
the scenario where UEs cooperate to download the content requests in a certain time window, and then serve multiple
via multihop relaying. Specifically, their scheme grouped UEs users at the same time. Therefore, how to determine this time
into multicast groups which can download files from the BS by window is critical in order not to compromise the user QoS.
wireless multicast transmissions. Then, each multicast group A longer time window implies that more users can be served
can act as the relay to transmit files to other multicast groups. at the same time and thus increases the spectrum efficiency.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2547
On the contrary, a shorter time window leads to a shorter method (PopCaching) to learn and estimate the content popu-
delay and hence higher user QoS. Therefore, there is a trade- larity. They then determined which content should be evicted
off between spectrum efficiency and user QoS in determining from the cache. They demonstrated that PopCaching converges
the time window; this tradeoff requires further research. quickly and approximates the optimal cache hit rate.
The content placement and delivery strategies couple with
each other. Content placement determines the locations of the B. Summary and Discussion
cached content, and further affects the content delivery, which In this section, we discuss the content update phase of
decides the transmission paths from the caching locations to the caching process, which involves the content replacement
the users. On the other hand, the statistics of content deliv- problem.
ery paths can be leveraged to explore the content popularity, Caches should be updated, i.e., the unpopular content should
which impacts the content placement strategies. Moreover, if be replaced with the popular ones, because the content popu-
multicast is adopted for the content delivery strategy, different larity varies with time. Caches prefer to maintain the popular
BSs prefer to cache different contents to increase the cache contents so that more users can be served. The content replace-
hit ratio (i.e., an important metric to evaluate the content ment problem determines which contents should be removed
placement strategy). and when to remove them. Conventional content replacement
strategies include LFU and LRU which maintain the least fre-
quently used and least recently used contents, respectively.
IX. C ONTENT U PDATE
As the content popularity is an important factor in updating
Caching strategy using the outdated information (e.g., con- caches, machine learning can be utilized to learn and esti-
tent popularities, user locations, network traffic loads, etc) mate the content popularity. The frequency of content update
may degrade performance because it may not reflect the cur- is also a pressing issue to investigate. Updating contents too
rent network status. Hence, it is critical to update caches frequently introduces heavy traffic to the network. In con-
at intervals. The cache update process generally takes place trast, the caches may not satisfy the user demands without
after the content delivery is completed. The content replace- updating caches for a long time. Hence, the tradeoff between
ment problem is about what contents should be removed from the network traffic and content update frequency is yet to be
caches, when to remove them and how to cache new contents. resolved.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2548 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2549
XI. C ONCLUSION
can manage network resources more efficiently and flexibly.
Mobile edge caching is a practical means to reducing dupli-
Coordinated decisions can be made by the controller and guide
cated traffic through wireless backhaul links and improving
the network to optimal operating conditions. Hence, integrat-
user QoE for 5G networks, which require stricter latency and
ing SDN into mobile networks has been studied in many
higher throughput. In our work, we have conducted a com-
researches [165]. Furthermore, the user mobility information
prehensive survey regarding different aspects of mobile edge
can also be obtained and used for designing caching schemes
caching. We began with a brief introduction of mobile edge
to achieve better performance. The architecture of SDN based
computing and mobile edge caching. We have discussed the
mobile edge caching is illustrated in Fig. 10. However, the
caching schemes based on different caching locations and dif-
centralized SDN controller faces high risks of a single point
ferent performance criteria. In addition, we have delineated
of failure and can be a target for attacks. If the SDN con-
the caching process, summarized as four phases including the
troller is broken down, how to guarantee service continuity
content request, exploration, delivery, and update, respectively.
still remains a challenging issue.
At the end, we have enlisted the challenges of mobile edge
caching that require further investigation.
E. Integration of Wireless Caching and Wired Caching
Caching mechanisms have been well studied in both the
wired and wireless networks. However, fixed mobile conver- R EFERENCES
gence networks has not been studied yet, i.e., caching in wired [1] Cisco Visual Networking Index: Global Mobile Data Traffic Forecast
and wireless networks have only been investigated separately. Update, 2016–2021 White Paper, Cisco Syst., San Jose, CA, USA,
Mar. 2017.
As content requests are first forwarded to the wireless segment [2] M. Agiwal, A. Roy, and N. Saxena, “Next generation 5G wireless
and then to the wired segment of the network, the latency is networks: A comprehensive survey,” IEEE Commun. Surveys Tuts.,
attributed from these two parts. The throughput can also be vol. 18, no. 3, pp. 1617–1655, 3rd Quart., 2016.
[3] M. R. Rahimi, J. Ren, C. H. Liu, A. V. Vasilakos, and
limited by the capacity of both segments. For example, the user N. Venkatasubramanian, “Mobile cloud computing: A survey, state
QoS is generally considered as the total latency including the of art and future directions,” Mobile Netw. Appl., vol. 19, no. 2,
wired and wireless latencies (i.e., T = Twired + Twireless ). pp. 133–143, Apr. 2014.
[4] A. Kiani and N. Ansari, “Edge computing aware NOMA for 5G
Latency incurred via the wired segment can be lessened by networks,” IEEE Internet Things J., vol. 5, no. 2, pp. 1299–1306,
better caching strategies (i.e., Twired decreases). Hence, the Apr. 2018.
latency requirement for wireless network can be reduced (i.e., [5] J. G. Andrews et al., “What will 5G be?” IEEE J. Sel. Areas Commun.,
vol. 32, no. 6, pp. 1065–1082, Jun. 2014.
Twireless increases) when a certain user QoS is maintained
[6] “View on 5G architecture,” Valencia, Spain, 5GPPP Initiative,
(i.e., T keeps the same). Therefore, the BS transmission power 5GPPP Archit. Working Group, White Paper, Dec. 2017. [Online].
can be reduced accordingly as Twireless increases, thus reduc- Available: https://5g-ppp.eu/wp-content/uploads/2018/01/5G-PPP-5G-
ing the energy consumption. As a result, a better caching Architecture-White-Paper-Jan-2018-v2.0.pdf
[7] I. Parvez, A. Rahmati, I. Guvenc, A. I. Sarwat, and H. Dai, “A survey
strategy in the wired network can help reduce the power on low latency towards 5G: RAN, core network and caching solu-
consumption in the wireless network. In order to further pro- tions,” IEEE Commun. Surveys Tuts., vol. 20, no. 4, pp. 3098–3130,
vide better network performances in terms of the energy 4th Quart., 2018.
[8] X. Sun and N. Ansari, “Latency aware workload offloading in the
consumption, system throughput and network delay, caching cloudlet network,” IEEE Commun. Lett., vol. 21, no. 7, pp. 1481–1484,
incorporated with joint optimization of both wireless and wired Jul. 2017.
networks is worth further investigation. [9] N. Chalaemwongwan and W. Kurutach, “Mobile cloud computing:
A survey and propose solution framework,” in Proc. 13th Int. Conf.
Elect. Eng. Electron. Comput. Telecommun. Inf. Technol. (ECTI-CON),
F. Net Neutrality Jun. 2016, pp. 1–4.
[10] S. Wang et al., “A survey on mobile edge networks: Convergence
Net neutrality originally aims to ensure equal and fair pas- of computing, caching and communications,” IEEE Access, vol. 5,
sage of all packets in IP networks so that Internet end-users pp. 6757–6779, 2017.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2550 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
[11] L. Li, G. Zhao, and R. S. Blum, “A survey of caching techniques in [37] T. Han and N. Ansari, “A traffic load balancing framework for software-
cellular networks: Research issues and challenges in content placement defined radio access networks powered by hybrid energy sources,”
and delivery strategies,” IEEE Commun. Surveys Tuts., vol. 20, no. 3, IEEE/ACM Trans. Netw., vol. 24, no. 2, pp. 1038–1051, Apr. 2016.
pp. 1710–1732, 3rd Quart., 2018. [38] R. Wang, J. Zhang, S. H. Song, and K. B. Letaief, “Mobility-aware
[12] Z. Hu, Z. Zheng, T. Wang, L. Song, and X. Li, “Caching as a service: caching in D2D networks,” IEEE Trans. Wireless Commun., vol. 16,
Small-cell caching mechanism design for service providers,” IEEE no. 8, pp. 5001–5015, Aug. 2017.
Trans. Wireless Commun., vol. 15, no. 10, pp. 6992–7004, Oct. 2016. [39] A. Pyattaev, O. Galinina, S. Andreev, M. Katz, and Y. Koucheryavy,
[13] “5G vision-the 5G infrastructure public private partnership: The next “Understanding practical limitations of network coding for assisted
generation of communication networks and services,” Valencia, Spain, proximate communication,” IEEE J. Sel. Areas Commun., vol. 33, no. 2,
5G Infrastruct. PPP Assoc., White Paper, Feb. 2015. pp. 156–170, Feb. 2015.
[14] Mobile Edge Computing (MEC) Terminology, V1.1.1, ETSI Standard [40] M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of dis-
GS MEC 001, Mar. 2016. tributed caching in D2D wireless networks,” in Proc. IEEE Inf. Theory
[15] Mobile Edge Computing (MEC); Framework and Reference Workshop (ITW), Seville, Spain, Sep. 2013, pp. 1–5.
Architecture, V1.1.1, ETSI Standard GS MEC 002, Mar. 2016. [41] M. Ji, G. Caire, and A. F. Molisch, “Wireless device-to-device caching
[16] Mobile Edge Computing (MEC); Technical Requirements, V1.1.1, ETSI networks: Basic principles and system performance,” IEEE J. Sel. Areas
Standard GS MEC 003, Mar. 2016. Commun., vol. 34, no. 1, pp. 176–189, Jan. 2016.
[17] W. Ali, S. M. Shamsuddin, and A. S. Ismail, “A survey of Web [42] M. Sheng et al., “Enhancement for content delivery with proximity
caching and prefetching,” Int. J. Adv. Soft Comput. Appl., vol. 3, no. 1, communications in caching enabled wireless networks: Architecture
pp. 18–44, 2011. and challenges,” IEEE Commun. Mag., vol. 54, no. 8, pp. 70–76,
[18] G. Zhang, Y. Li, and T. Lin, “Caching in information centric Aug. 2016.
networking: A survey,” Comput. Netw., vol. 57, no. 16, pp. 3128–3141, [43] M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of caching
2013. in wireless D2D networks,” IEEE Trans. Inf. Theory, vol. 62, no. 2,
[19] M. Zhang, H. Luo, and H. Zhang, “A survey of caching mechanisms in pp. 849–869, Feb. 2016.
information-centric networking,” IEEE Commun. Surveys Tuts., vol. 17, [44] Y. Zhang et al., “A game-theoretic approach for optimal distributed
no. 3, pp. 1473–1499, 3rd Quart., 2015. cooperative hybrid caching in D2D networks,” IEEE Wireless Commun.
[20] T. Han, N. Ansari, M. Wu, and H. Yu, “On accelerating content delivery Lett., vol. 7, no. 3, pp. 324–327, Jun. 2018.
in mobile networks,” IEEE Commun. Surveys Tuts., vol. 15, no. 3, [45] N. Golrezaei, A. G. Dimakis, and A. F. Molisch, “Scaling behavior
pp. 1314–1333, 3rd Quart., 2013. for device-to-device communications with distributed caching,” IEEE
[21] D. Liu, B. Chen, C. Yang, and A. F. Molisch, “Caching at the wire- Trans. Inf. Theory, vol. 60, no. 7, pp. 4286–4298, Jul. 2014.
less edge: Design aspects, challenges, and future directions,” IEEE [46] A. Altieri, P. Piantanida, L. R. Vega, and C. G. Galarza, “On
Commun. Mag., vol. 54, no. 9, pp. 22–28, Sep. 2016. fundamental trade-offs of device-to-device communications in large
[22] Y. Wu et al., “Challenges of mobile social device caching,” IEEE wireless networks,” IEEE Trans. Wireless Commun., vol. 14, no. 9,
Access, vol. 4, pp. 8938–8947, 2016. pp. 4958–4971, Sep. 2015.
[23] E. Nygren, R. K. Sitaraman, and J. Sun, “The Akamai network: A [47] X. Wang, M. Chen, Z. Han, D. O. Wu, and T. T. Kwon, “TOSS:
platform for high-performance Internet applications,” ACM SIGOPS Traffic offloading by social network service-based opportunistic sharing
Oper. Syst. Rev., vol. 44, no. 3, pp. 2–19, 2010. in mobile social networks,” in Proc. IEEE Conf. Comput. Commun.
[24] S. Scellato, C. Mascolo, M. Musolesi, and J. Crowcroft, “Track glob- (IEEE INFOCOM), Toronto, ON, Canada, Apr. 2014, pp. 2346–2354.
ally, deliver locally: Improving content delivery networks by tracking [48] Y. Wu, S. Yao, Y. Yang, Z. Hu, and C.-X. Wang, “Semigradient-based
geographic social cascades,” in Proc. 20th Int. Conf. World Wide Web, cooperative caching algorithm for mobile social networks,” in Proc.
2011, pp. 457–466. IEEE Glob. Commun. Conf. (GLOBECOM), Washington, DC, USA,
[25] S. Borst, V. Gupta, and A. Walid, “Distributed caching algorithms for Dec. 2016, pp. 1–6.
content distribution networks,” in Proc. IEEE INFOCOM, San Diego, [49] N. Golrezaei, A. F. Molisch, A. G. Dimakis, and G. Caire,
CA, USA, 2010, pp. 1–9. “Femtocaching and device-to-device collaboration: A new architecture
[26] “MEC deployments in 4G and evolution towards 5G,” Sophia Antipolis, for wireless video distribution,” IEEE Commun. Mag., vol. 51, no. 4,
France, ETSI, White Paper, Feb. 2018. pp. 142–149, Apr. 2013.
[27] 3rd Generation Partnership Project; Technical Specification Group [50] D. Liu and C. Yang, “Energy efficiency of downlink networks with
Services and System Aspects; System Architecture for the 5G System; caching at base stations,” IEEE J. Sel. Areas Commun., vol. 34, no. 4,
Stage 2 (Release 15), V15.0.0, 3GPP Standard TS 23.501, Dec. 2017. pp. 907–922, Apr. 2016.
[28] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, “A survey of mobile [51] M. Sheng, W. Han, C. Huang, J. Li, and S. Cui, “Video delivery
cloud computing: Architecture, applications, and approaches,” Wireless in heterogenous CRANs: Architectures and strategies,” IEEE Wireless
Commun. Mobile Comput., vol. 13, no. 18, pp. 1587–1611, 2013. Commun., vol. 22, no. 3, pp. 14–21, Jun. 2015.
[29] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, “Mobile [52] S. A. R. Zaidi, M. Ghogho, and D. C. McLernon, “Information cen-
edge computing a key technology towards 5G,” Sophia Antipolis, tric modeling for two-tier cache enabled cellular networks,” in Proc.
France, ETSI, White Paper, 2015. IEEE Int. Conf. Commun. Workshop (ICCW), London, U.K., Jun. 2015,
[30] A. Kiani and N. Ansari, “Toward hierarchical mobile edge computing: pp. 80–86.
An auction-based profit maximization approach,” IEEE Internet Things [53] Z. Chang, Y. Gu, Z. Han, X. Chen, and T. Ristaniemi, “Context-aware
J., vol. 4, no. 6, pp. 2082–2091, Dec. 2017. data caching for 5G heterogeneous small cells networks,” in Proc. IEEE
[31] X. Sun and N. Ansari, “EdgeIoT: Mobile edge computing for the Int. Conf. Commun. (ICC), May 2016, pp. 1–6.
Internet of Things,” IEEE Commun. Mag., vol. 54, no. 12, pp. 22–29, [54] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and
Dec. 2016. G. Caire, “FemtoCaching: Wireless video content delivery through
[32] E. Bastug, M. Bennis, and M. Debbah, “Living on the edge: The role distributed caching helpers,” in Proc. IEEE INFOCOM, Orlando, FL,
of proactive caching in 5G wireless networks,” IEEE Commun. Mag., USA, Mar. 2012, pp. 1107–1115.
vol. 52, no. 8, pp. 82–89, Aug. 2014. [55] P. Blasco and D. Gündüz, “Learning-based optimization of cache con-
[33] N. Ansari and X. Sun, “Mobile edge computing empowers Internet tent in a small cell base station,” in Proc. IEEE Int. Conf. Commun.
of Things,” IEICE Trans. Commun., vol. E101B, no. 3, pp. 604–619, (ICC), Sydney, NSW, Australia, Jun. 2014, pp. 1897–1903.
2018. [56] M. Gregori, J. Gómez-Vilardebó, J. Matamoros, and D. Gündüz,
[34] M. T. Beck, M. Werner, S. Feld, and S. Schimper, “Mobile edge com- “Wireless content caching for small cell and D2D networks,” IEEE
puting: A taxonomy,” in Proc. 6th Int. Conf. Adv. Future Internet, 2014, J. Sel. Areas Commun., vol. 34, no. 5, pp. 1222–1234, May 2016.
pp. 48–55. [57] A. Khreishah, J. Chakareski, and A. Gharaibeh, “Joint caching, routing,
[35] M. Peng, Y. Sun, X. Li, Z. Mao, and C. Wang, “Recent advances and channel assignment for collaborative small-cell cellular networks,”
in cloud radio access networks: System architectures, key techniques, IEEE J. Sel. Areas Commun., vol. 34, no. 8, pp. 2275–2284, Aug. 2016.
and open issues,” IEEE Commun. Surveys Tuts., vol. 18, no. 3, [58] S. Tamoor-ul-Hassan, M. Bennis, P. H. J. Nardelli, and M. Latva-Aho,
pp. 2282–2308, 3rd Quart., 2016. “Modeling and analysis of content caching in wireless small cell
[36] “C-RAN: The road towards green RAN,” Hong Kong, China Mobile, networks,” in Proc. Int. Symp. Wireless Commun. Syst. (ISWCS),
White Paper, 2011. Brussels, Belgium, Aug. 2015, pp. 765–769.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2551
[59] S. Tamoor-ul-Hassan, M. Bennis, P. H. Nardelli, and M. Latva-Aho, [83] N. Golrezaei, P. Mansourifard, A. F. Molisch, and A. G. Dimakis,
“Caching in wireless small cell networks: A storage-bandwidth trade- “Base-station assisted device-to-device communications for high-
off,” IEEE Commun. Lett., vol. 20, no. 6, pp. 1175–1178, Jun. 2016. throughput wireless video networks,” IEEE Trans. Wireless Commun.,
[60] X. Li, X. Wang, and V. C. M. Leung, “Weighted network traffic offload- vol. 13, no. 7, pp. 3665–3676, Jul. 2014.
ing in cache-enabled heterogeneous networks,” in Proc. IEEE Int. Conf. [84] Z. Chen and M. Kountouris, “D2D caching vs. small cell caching:
Commun. (ICC), May 2016, pp. 1–6. Where to cache content in a wireless network?” in Proc. IEEE 17th Int.
[61] Y. Cui and D. Jiang, “Analysis and optimization of caching and multi- Workshop Signal Process. Adv. Wireless Commun. (SPAWC), Jul. 2016,
casting in large-scale cache-enabled heterogeneous wireless networks,” pp. 1–6.
IEEE Trans. Wireless Commun., vol. 16, no. 1, pp. 250–264, Jan. 2017. [85] B. Blaszczyszyn and A. Giovanidis, “Optimal geographic caching in
[62] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and cellular networks,” in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2015,
G. Caire, “FemtoCaching: Wireless content delivery through dis- pp. 3358–3363.
tributed caching helpers,” IEEE Trans. Inf. Theory, vol. 59, no. 12, [86] D. Liu and C. Yang, “Cache-enabled heterogeneous cellular networks:
pp. 8402–8413, Dec. 2013. Comparison and tradeoffs,” in Proc. IEEE Int. Conf. Commun. (ICC),
[63] T. Wang, L. Song, and Z. Han, “Dynamic femtocaching for mobile May 2016, pp. 1–6.
users,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), [87] B. Perabathini, E. Baştuǧ, M. Kountouris, M. Debbah, and A. Conte,
New Orleans, LA, USA, Mar. 2015, pp. 861–865. “Caching at the edge: A green perspective for 5G networks,” in
[64] R. Pabst et al., “Relay-based deployment concepts for wireless and Proc. IEEE Int. Conf. Commun. Workshop (ICCW), Jun. 2015,
mobile broadband radio,” IEEE Commun. Mag., vol. 42, no. 9, pp. 2830–2835.
pp. 80–89, Sep. 2004. [88] A. A. Almomani, A. Argyriou, and M. Erol-Kantarci, “A heuristic
[65] X. Wang, Y. Bao, X. Liu, and Z. Niu, “On the design of relay caching in approach for overlay content-caching network design in 5G wireless
cellular networks for energy efficiency,” in Proc. IEEE Conf. Comput. networks,” in Proc. IEEE Symp. Comput. Commun. (ISCC), Jun. 2016,
Commun. Workshops (INFOCOM WKSHPS), Shanghai, China, 2011, pp. 621–626.
pp. 259–264.
[89] T. M. Nguyen, W. Ajib, and C. Assi, “Designing wireless backhaul het-
[66] M. Erol-Kantarci, “Cache-at-relay: Energy-efficient content placement
erogeneous networks with small cell buffering,” IEEE Trans. Commun.,
for next-generation wireless relays,” Int. J. Netw. Manag., vol. 25, no. 6,
vol. 66, no. 10, pp. 4596–4610, Oct. 2018.
pp. 454–470, 2015.
[67] A. Liu and V. K. N. Lau, “Cache-enabled opportunistic cooperative [90] H. Hsu and K.-C. Chen, “A resource allocation perspective on
MIMO for video streaming in wireless systems,” IEEE Trans. Signal caching to achieve low latency,” IEEE Commun. Lett., vol. 20, no. 1,
Process., vol. 62, no. 2, pp. 390–402, Jan. 2014. pp. 145–148, Jan. 2016.
[68] Y. Wang, X. Tao, X. Zhang, and G. Mao, “Joint caching placement and [91] J. Liu, B. Bai, J. Zhang, and K. B. Letaief, “Content caching at the
user association for minimizing user download delay,” IEEE Access, wireless network edge: A distributed algorithm via belief propagation,”
vol. 4, pp. 8625–8633, 2016. in Proc. IEEE Int. Conf. Commun. (ICC), May 2016, pp. 1–6.
[69] C. Yang, Y. Yao, Z. Chen, and B. Xia, “Analysis on cache-enabled wire- [92] M. Dehghan et al., “On the complexity of optimal routing and con-
less heterogeneous networks,” IEEE Trans. Wireless Commun., vol. 15, tent caching in heterogeneous networks,” in Proc. IEEE Conf. Comput.
no. 1, pp. 131–145, Jan. 2016. Commun. (INFOCOM), Apr. 2015, pp. 936–944.
[70] J. Yao and N. Ansari, “QoS-aware joint BBU-RRH mapping and [93] J. Rao, H. Feng, and Z. Chen, “Exploiting user mobility for D2D
user association in cloud-RANs,” IEEE Trans. Green Commun. Netw., assisted wireless caching networks,” in Proc. 8th Int. Conf. Wireless
vol. 2, no. 4, pp. 881–889, Dec. 2018. Commun. Signal Process. (WCSP), Oct. 2016, pp. 1–5.
[71] S.-H. Park, O. Simeone, and S. S. Shitz, “Joint optimization of cloud [94] J. Yao and N. Ansari, “QoS-aware rechargeable UAV trajectory
and edge processing for fog radio access networks,” IEEE Trans. optimization for sensing service,” in Proc. IEEE Int. Conf. Commun.
Wireless Commun., vol. 15, no. 11, pp. 7621–7632, Nov. 2016. (ICC), May 2019.
[72] R. Tandon and O. Simeone, “Cloud-aided wireless networks with [95] H. Ahlehagh and S. Dey, “Video-aware scheduling and caching in
edge caching: Fundamental latency trade-offs in fog radio access the radio access network,” IEEE/ACM Trans. Netw., vol. 22, no. 5,
networks,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Barcelona, pp. 1444–1462, Oct. 2014.
Spain, Jul. 2016, pp. 2029–2033. [96] J. Tadrous, A. Eryilmaz, and H. E. Gamal, “Proactive content download
[73] M. Tao, E. Chen, H. Zhou, and W. Yu, “Content-centric sparse multicast and user demand shaping for data networks,” IEEE/ACM Trans. Netw.
beamforming for cache-enabled cloud RAN,” IEEE Trans. Wireless (TON), vol. 23, no. 6, pp. 1917–1930, Dec. 2015.
Commun., vol. 15, no. 9, pp. 6118–6131, Sep. 2016. [97] T. Hou, G. Feng, S. Qin, and W. Jiang, “Proactive content caching by
[74] S. Mosleh, L. Liu, H. Hou, and Y. Yi, “Coordinated data assign- exploiting transfer learning for mobile edge computing,” in Proc. IEEE
ment: A novel scheme for big data over cached Cloud-RAN,” in Proc. Glob. Commun. Conf. (GLOBECOM), Dec. 2017, pp. 1–6.
IEEE Glob. Commun. Conf. (GLOBECOM), Washington, DC, USA, [98] A. Gharaibeh, A. Khreishah, B. Ji, and M. Ayyash, “A provably effi-
Dec. 2016, pp. 1–6. cient online collaborative caching algorithm for multicell-coordinated
[75] R. G. Stephen and R. Zhang, “Green OFDMA resource allocation in systems,” IEEE Trans. Mobile Comput., vol. 15, no. 8, pp. 1863–1876,
cache-enabled CRAN,” in Proc. IEEE Online Conf. Green Commun. Aug. 2016.
(OnlineGreenComm), Piscataway, NJ, USA, Nov. 2016, pp. 70–75. [99] L. Marini, J. Li, and Y. Li, “Distributed caching based on decentralized
[76] T. X. Tran and D. Pompili, “Octopus: A cooperative hierarchical learning automata,” in Proc. IEEE Int. Conf. Commun. (ICC), 2015,
caching strategy for cloud radio access networks,” in Proc. IEEE 13th pp. 3807–3812.
Int. Conf. Mobile Ad Hoc Sensor Syst. (MASS), Oct. 2016, pp. 154–162.
[100] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and
[77] J. Yao and N. Ansari, “Joint caching in fronthaul and backhaul con-
the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2,
strained C-RAN,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM),
pp. 498–519, Feb. 2001.
Dec. 2017, pp. 1–6.
[78] J. Yao and N. Ansari, “Joint content placement and storage allocation [101] J. Li et al., “Distributed caching for data dissemination in the downlink
in C-RANs for IoT sensing service,” IEEE Internet Things J., vol. 6, of heterogeneous networks,” IEEE Trans. Commun., vol. 63, no. 10,
no. 1, pp. 1060–1067, Feb. 2019. pp. 3553–3568, Oct. 2015.
[79] M. Chen, W. Saad, C. Yin, and M. Debbah, “Echo state networks for [102] W. Jiang, G. Feng, and S. Qin, “Optimal cooperative content caching
proactive caching in cloud-based radio access networks with mobile and delivery policy for heterogeneous cellular networks,” IEEE Trans.
users,” IEEE Trans. Wireless Commun., vol. 16, no. 6, pp. 3520–3535, Mobile Comput., vol. 16, no. 5, pp. 1382–1393, May 2017.
Jun. 2017. [103] J. Song, M. Sheng, T. Q. S. Quek, C. Xu, and X. Wang, “Learning-
[80] X. Wang, S. Leng, and K. Yang, “Social-aware edge caching in fog based content caching and sharing for wireless networks,” IEEE Trans.
radio access networks,” IEEE Access, vol. 5, pp. 8492–8501, 2017. Commun., vol. 65, no. 10, pp. 4309–4324, Oct. 2017.
[81] K. Zheng, Q. Zheng, P. Chatzimisios, W. Xiang, and Y. Zhou, [104] T. Ho and D. Lun, Network Coding: An Introduction. New York, NY,
“Heterogeneous vehicular networking: A survey on architecture, chal- USA: Cambridge Univ. Press, 2008.
lenges, and solutions,” IEEE Commun. Surveys Tuts., vol. 17, no. 4, [105] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,”
pp. 2377–2396, 4th Quart., 2015. IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
[82] F. Pantisano, M. Bennis, W. Saad, and M. Debbah, “In-network caching [106] M. A. Maddah-Ali and U. Niesen, “Decentralized coded caching attains
and content placement in cooperative small cell networks,” in Proc. 1st order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Netw., vol. 23,
Int. Conf. 5G Ubiquitous Connectivity, Nov. 2014, pp. 128–133. no. 4, pp. 1029–1040, Aug. 2015.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
2552 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 3, THIRD QUARTER 2019
[107] M. A. Maddah-Ali and U. Niesen, “Coding for caching: Fundamental [132] K. Poularakis and L. Tassiulas, “Exploiting user mobility for wire-
limits and practical challenges,” IEEE Commun. Mag., vol. 54, no. 8, less content delivery,” in Proc. IEEE Int. Symp. Inf. Theory, 2013,
pp. 23–29, Aug. 2016. pp. 1017–1021.
[108] N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. N. Diggavi, [133] D. Wang, D. Pedreschi, C. Song, F. Giannotti, and A.-L. Barabasi,
“Hierarchical coded caching,” IEEE Trans. Inf. Theory, vol. 62, no. 6, “Human mobility, social ties, and link prediction,” in Proc. 17th ACM
pp. 3212–3229, Jun. 2016. SIGKDD Int. Conf. Knowl. Disc. Data Min., 2011, pp. 1100–1108.
[109] Z. Hu, Z. Zheng, T. Wang, L. Song, and X. Li, “Game theoretic [134] M. Musolesi, S. Hailes, and C. Mascolo, “An ad hoc mobility model
approaches for wireless proactive caching,” IEEE Commun. Mag., founded on social network theory,” in Proc. 7th ACM Int. Symp.
vol. 54, no. 8, pp. 37–43, Aug. 2016. Modeling Anal. Simulat. Wireless Mobile Syst., 2004, pp. 20–24.
[110] G. Demange, D. Gale, and M. Sotomayor, “Multi-item auctions,” J. [135] R. Wang, X. Peng, J. Zhang, and K. B. Letaief, “Mobility-aware
Political Econ., vol. 94, no. 4, pp. 863–872, 1986. caching for content-centric wireless networks: Modeling and method-
[111] K. Hamidouche, W. Saad, M. Debbah, J. B. Song, and C. S. Hong, ology,” IEEE Commun. Mag., vol. 54, no. 8, pp. 77–83, Aug. 2016.
“The 5G cellular backhaul management dilemma: To cache or to [136] C. Bettstetter, H. Hartenstein, and X. Pérez-Costa, “Stochastic proper-
serve,” IEEE Trans. Wireless Commun., vol. 16, no. 8, pp. 4866–4879, ties of the random waypoint mobility model,” Wireless Netw., vol. 10,
Aug. 2017. no. 5, pp. 555–567, 2004.
[112] K. Hamidouche, W. Saad, and M. Debbah, “Many-to-many matching [137] J.-K. Lee and J. C. Hou, “Modeling steady-state and transient behaviors
games for proactive social-caching in wireless small cell networks,” in of user mobility: Formulation, analysis, and application,” in Proc. 7th
Proc. 12th Int. Symp. Model. Optim. Mobile Ad Hoc Wireless Netw. ACM Int. Symp. Mobile Ad Hoc Netw. Comput., 2006, pp. 85–96.
(WiOpt), May 2014, pp. 569–574. [138] Q. Lv, Y. Qiao, N. Ansari, J. Liu, and J. Yang, “Big data driven hid-
[113] J. Yao and N. Ansari, “Reliability-aware fog resource provisioning for den Markov model based individual mobility prediction at points of
deadline-driven IoT services,” in Proc. IEEE Glob. Commun. Conf. interest,” IEEE Trans. Veh. Technol., vol. 66, no. 6, pp. 5204–5216,
(GLOBECOM), Dec. 2018, pp. 1–6. Jun. 2017.
[114] J. Yao and N. Ansari, “QoS-aware fog resource provisioning and [139] K. Poularakis, G. Iosifidis, and L. Tassiulas, “Approximation algorithms
mobile device power control in IoT networks,” IEEE Trans. Netw. for mobile data caching in small cell networks,” IEEE Trans. Commun.,
Service Manag., vol. 16, no. 1, pp. 167–175, Mar. 2019. vol. 62, no. 10, pp. 3665–3677, Oct. 2014.
[115] S. Vural et al., “In-network caching of Internet-of-Things data,” in [140] K. Poularakis, G. Iosifidis, A. Argyriou, and L. Tassiulas, “Video
Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2014, pp. 3185–3190. delivery over heterogeneous cellular networks: Optimizing cost and
[116] D. Niyato, D. I. Kim, P. Wang, and L. Song, “A novel caching performance,” in Proc. IEEE INFOCOM IEEE Conf. Comput.
mechanism for Internet of Things (IoT) sensing service with energy Commun., Apr. 2014, pp. 1078–1086.
harvesting,” in Proc. IEEE Int. Conf. Commun. (ICC), May 2016, [141] F. Pantisano, M. Bennis, W. Saad, and M. Debbah, “Cache-aware user
pp. 1–6. association in backhaul-constrained small cell networks,” in Proc. 12th
[117] J. Yao and N. Ansari, “Caching in energy harvesting aided Internet of Int. Symp. Model. Optim. Mobile Ad Hoc Wireless Netw. (WiOpt), 2014,
Things: A game-theoretic approach,” IEEE Internet Things J., to be pp. 37–42.
published. doi: 10.1109/JIOT.2018.2880483. [142] T. Han and N. Ansari, “Network utility aware traffic load balanc-
[118] H. Che, Y. Tung, and Z. Wang, “Hierarchical Web caching systems: ing in backhaul-constrained cache-enabled small cell networks with
Modeling, design and experimental results,” IEEE J. Sel. Areas hybrid power supplies,” IEEE Trans. Mobile Comput., vol. 16, no. 10,
Commun., vol. 20, no. 7, pp. 1305–1314, Sep. 2002. pp. 2819–2832, Oct. 2017.
[119] R. Urgaonkar et al., “Dynamic service migration and workload schedul- [143] J. Song, H. Song, and W. Choi, “Optimal caching placement of
ing in edge-clouds,” Perform. Eval., vol. 91, pp. 205–228, Sep. 2015. caching system with helpers,” in Proc. IEEE Int. Conf. Commun. (ICC),
[120] G. Li et al., “Understanding user generated content characteristics: Jun. 2015, pp. 1825–1830.
A hot-event perspective,” in Proc. IEEE Int. Conf. Commun. (ICC), [144] X. Peng, J.-C. Shen, J. Zhang, and K. B. Letaief, “Backhaul-aware
Jun. 2011, pp. 1–5. caching placement for wireless networks,” in Proc. IEEE Global
[121] A. Tatar, M. D. de Amorim, S. Fdida, and P. Antoniadis, “A survey on Commun. Conf. (GLOBECOM), 2015, pp. 1–6.
predicting the popularity of Web content,” J. Internet Services Appl., [145] K. Poularakis, G. Iosifidis, V. Sourlas, and L. Tassiulas, “Multicast-
vol. 5, no. 1, p. 8, 2014. aware caching for small cell networks,” in Proc. IEEE Wireless
[122] D. Niu, Z. Liu, B. Li, and S. Zhao, “Demand forecast and performance Commun. Netw. Conf. (WCNC), 2014, pp. 2300–2305.
prediction in peer-assisted on-demand streaming systems,” in Proc. [146] J. Liao, K.-K. Wong, Y. Zhang, Z. Zheng, and K. Yang, “Coding,
IEEE INFOCOM, Apr. 2011, pp. 421–425. multicast, and cooperation for cache-enabled heterogeneous small
[123] Z. Wang, L. Sun, C. Wu, and S. Yang, “Enhancing Internet-scale video cell networks,” IEEE Trans. Wireless Commun., vol. 16, no. 10,
service deployment using microblog-based prediction,” IEEE Trans. pp. 6838–6853, Oct. 2017.
Parallel Distrib. Syst., vol. 26, no. 3, pp. 775–785, Mar. 2015. [147] B. Zhou, Y. Cui, and M. Tao, “Stochastic content-centric multicast
[124] M. Rowe, “Forecasting audience increase on YouTube,” in Proc. scheduling for cache-enabled heterogeneous cellular networks,” IEEE
Workshop User Profile Data Soc. Semantic Web 8th Extended Semantic Trans. Wireless Commun., vol. 15, no. 9, pp. 6284–6297, Sep. 2016.
Web Conf. (ESWC), May 2011, pp. 1–15. [148] B. Zhou, Y. Cui, and M. Tao, “Optimal dynamic multicast schedul-
[125] M. Leconte et al., “Placing dynamic content in caches with small popu- ing for cache-enabled content-centric wireless networks,” IEEE Trans.
lation,” in Proc. IEEE INFOCOM 35th Annu. IEEE Int. Conf. Comput. Commun., vol. 65, no. 7, pp. 2956–2970, Jul. 2017.
Commun., Apr. 2016, pp. 1–9. [149] Z. Xia, J. Yan, and Y. Liu, “Cooperative content delivery in
[126] J. Famaey, F. Iterbeke, T. Wauters, and F. De Turck, “Towards a multicast multihop device-to-device networks,” IEEE Access, vol. 5,
predictive cache replacement strategy for multimedia content,” J. Netw. pp. 6314–6324, 2017.
Comput. Appl., vol. 36, no. 1, pp. 219–227, 2013. [150] M. Z. Shafiq, A. X. Liu, and A. R. Khakpour, “Revisiting caching in
[127] L. Breslau, C. Pei, L. Fan, G. Phillips, and S. Shenker, “Web content delivery networks,” ACM SIGMETRICS Perform. Eval. Rev.,
caching and Zipf-like distributions: Evidence and implications,” in vol. 42, no. 1, pp. 567–568, 2014.
Proc. INFOCOM 18th Annu. Joint Conf. IEEE Comput. Commun. Soc., [151] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, “Online coded
vol. 1, 1999, pp. 126–134. caching,” IEEE/ACM Trans. Netw., vol. 24, no. 2, pp. 836–845,
[128] J. Liu, F. Liu, and N. Ansari, “Monitoring and analyzing big traffic data Apr. 2016.
of a large-scale cellular network with Hadoop,” IEEE Netw., vol. 28, [152] S. Li, J. Xu, M. V. D. Schaar, and W. Li, “Popularity-driven con-
no. 4, pp. 32–39, Jul./Aug. 2014. tent caching,” in Proc. IEEE INFOCOM 35th Annu. IEEE Int. Conf.
[129] E. Zeydan et al., “Big data caching for networking: Moving from cloud Comput. Commun., 2016, pp. 1–9.
to edge,” IEEE Commun. Mag., vol. 54, no. 9, pp. 36–42, Sep. 2016. [153] J. Yao and N. Ansari, “Energy-aware task allocation for mobile IoT
[130] H. Nakayama, Z. M. Fadlullah, N. Ansari, and N. Kato, “A novel by online reinforcement learning,” in Proc. IEEE Int. Conf. Commun.
scheme for WSAN sink mobility based on clustering and set pack- (ICC), May 2019.
ing techniques,” IEEE Trans. Autom. Control, vol. 56, no. 10, [154] X. Sun and N. Ansari, “Traffic load balancing among brokers at the IoT
pp. 2381–2389, Oct. 2011. application layer,” IEEE Trans. Netw. Service Manag., vol. 15, no. 1,
[131] H. Nishiyama, T. Ngo, N. Ansari, and N. Kato, “On minimizing the pp. 489–502, Mar. 2018.
impact of mobility on topology control in mobile ad hoc networks,” [155] J. Quevedo, D. Corujo, and R. Aguiar, “A case for ICN usage in IoT
IEEE Trans. Wireless Commun., vol. 11, no. 3, pp. 1158–1166, environments,” in Proc. IEEE Glob. Commun. Conf. (GLOBECOM),
Mar. 2012. Dec. 2014, pp. 2770–2775.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.
YAO et al.: ON MOBILE EDGE CACHING 2553
[156] X. Sun and N. Ansari, “Dynamic resource caching in the IoT appli- Tao Han (S’08–M’15) received the Ph.D. degree in
cation layer for smart cities,” IEEE Internet Things J., vol. 5, no. 2, electrical engineering from the New Jersey Institute
pp. 606–613, Apr. 2018. of Technology, Newark, NJ, USA. He is cur-
[157] M. Mukherjee et al., “Security and privacy in fog computing: rently an Assistant Professor with the Department
Challenges,” IEEE Access, vol. 5, pp. 19293–19304, 2017. of Electrical and Computer Engineering, University
[158] D. Kim, J. Bi, A. V. Vasilakos, and I. Yeom, “Security of cached of North Carolina at Charlotte, Charlotte, NC,
content in NDN,” IEEE Trans. Inf. Forensics Security, vol. 12, no. 12, USA. He serves as an Associate Editor for the
pp. 2933–2944, Dec. 2017. IEEE C OMMUNICATIONS L ETTERS. His research
[159] J. Leguay, G. S. Paschos, E. A. Quaglia, and B. Smyth, “CryptoCache: interests include mobile edge networking, mobile X
Network caching with confidentiality,” in Proc. IEEE Int. Conf. reality, 5G, Internet of Things, and smart grid.
Commun. (ICC), May 2017, pp. 1–6.
[160] C. Kolias, G. Kambourakis, and S. Gritzalis, “Attacks and countermea-
sures on 802.16: Analysis and assessment,” IEEE Commun. Surveys
Tuts., vol. 15, no. 1, pp. 487–514, 1st Quart., 2013.
[161] J. H. Abawajy, M. I. H. Ninggal, and T. Herawan, “Privacy preserving
social network data publication,” IEEE Commun. Surveys Tuts., vol. 18,
no. 3, pp. 1974–1997, 3rd Quart., 2016.
[162] T. Wang, Z. Zheng, M. H. Rehmani, S. Yao, and Z. Huo, “Privacy
preservation in big data from the communication perspective—A sur-
vey,” IEEE Commun. Surveys Tuts., vol. 21, no. 1, pp. 753–778, Nirwan Ansari (S’78–M’83–SM’94–F’09) received
1st Quart., 2019. the B.S.E.E. degree (summa cum laude with a
[163] W. Xia, Y. Wen, C. H. Foh, D. Niyato, and H. Xie, “A survey on perfect GPA) from the New Jersey Institute of
software-defined networking,” IEEE Commun. Surveys Tuts., vol. 17, Technology (NJIT), the M.S.E.E. degree from the
no. 1, pp. 27–51, 1st Quart., 2015. University of Michigan, and the Ph.D. degree from
[164] Q. Duan, N. Ansari, and M. Toy, “Software-defined network virtual- Purdue University.
ization: An architectural framework for integrating SDN and NFV for He is a Distinguished Professor of electrical and
service provisioning in future networks,” IEEE Netw., vol. 30, no. 5, computer engineering with NJIT. He has also been
pp. 10–16, Sep./Oct. 2016. a visiting (chair) professor with several universities.
[165] I. T. Haque and N. Abu-Ghazaleh, “Wireless software defined He has authored the book entitled Green Mobile
networking: A survey and taxonomy,” IEEE Commun. Surveys Tuts., Networks: A Networking Perspective (Wiley-IEEE,
vol. 18, no. 4, pp. 2713–2737, 4th Quart., 2016. 2017) with T. Han, and has coauthored two other books. He has also
[166] T. Wu, “Network neutrality, broadband discrimination,” J. Telecommun. (co)authored more than 600 technical publications, over 250 published in
High Technol. Law, vol. 2, no. 1, pp. 141–176, 2003. widely cited journals/magazines. He has guest-edited a number of special
[167] D. Miorandi, I. Carreras, E. Gregori, I. Graham, and J. Stewart, issues covering various emerging topics in communications and networking.
“Measuring net neutrality in mobile Internet: Towards a crowdsensing- He has served on the editorial/advisory board of over ten journals. He has also
based citizen observatory,” in Proc. IEEE Int. Conf. Commun. been granted 38 U.S. patents. His current research focuses on green commu-
Workshops (ICC), Jun. 2013, pp. 199–203. nications and networking, cloud computing, drone-assisted networking, and
various aspects of broadband networks.
Dr. Ansari was a recipient of several Excellence in Teaching Awards,
a few best paper awards, the NCE Excellence in Research Award,
the Communications Society (ComSoc) TC-CSR Distinguished Technical
Achievement Award, the ComSoc AHSN TC Technical Recognition Award,
Jingjing Yao (S’17) received the B.E. degree in
the IEEE TCGCC Distinguished Technical Achievement Recognition Award,
information and communication engineering from
the NJ Inventors Hall of Fame Inventor of the Year Award, the Thomas Alva
the Dalian University of Technology and the M.E.
Edison Patent Award, Purdue University Outstanding Electrical and Computer
degree in information and communication engineer-
Engineering Award, and NCE-100 Medal. He was elected to serve in the
ing from the University of Science and Technology
IEEE ComSoc Board of Governors as a Member-at-Large, has chaired some
of China. She is currently pursuing the Ph.D.
ComSoc technical and steering committees, has been serving in many com-
degree in computer engineering at the New Jersey
mittees, such as the IEEE Fellow Committee, and has been actively organizing
Institute of Technology, Newark, NJ, USA. Her
numerous IEEE International Conferences/Symposia/Workshops. He has fre-
research interests include cloud computing, cloud
quently been delivering keynote addresses, distinguished lectures, tutorials,
radio access networks, and Internet of Things.
and invited talks. He is a ComSoc Distinguished Lecturer.
Authorized licensed use limited to: Jordan University of Science & Technology. Downloaded on March 17,2021 at 05:04:12 UTC from IEEE Xplore. Restrictions apply.