
MAPCaching: A Novel Mobility Aware Proactive Caching over C-RAN


Jianmei Dai
Beijing Laboratory of Advanced Information Network
Beijing Key Laboratory of Network System Architecture and Convergence
Beijing University of Posts and Telecommunications, Beijing, P. R. China 100876
Academy of Equipment, Beijing, P. R. China 101416
Email: Jammy [email protected]

Danpu Liu
Beijing Laboratory of Advanced Information Network
Beijing Key Laboratory of Network System Architecture and Convergence
Beijing University of Posts and Telecommunications, Beijing, P. R. China 100876
Email: [email protected]

Abstract—Caching at the wireless edge is a promising way to alleviate the heavy burden on the backhaul links and to reduce the latency of transmission and handover. Although some effective caching schemes have been introduced to the Cloud based Radio Access Network (C-RAN), most of them were designed without consideration of users' mobility. In this paper, we investigate the mobility property in cache-enabled C-RAN and propose a mobility aware proactive caching strategy. We aim to design a novel controller, which is able to utilize the computation and storage resources in C-RAN and is responsible for making cache decisions for both the baseband unit (BBU) pool and the Remote Radio Heads (RRHs). A transmission delay model is introduced to formulate the cache placement optimization problem. To solve the problem, we propose an algorithm named MAPCaching. Numerical simulation results show that MAPCaching significantly outperforms the Greedy and EPC caching strategies in terms of average content access delay and cache hit rate by 30% and 20%, respectively.

Index Terms—Mobility Aware, Proactive Caching, C-RAN, Caching Management Controller

978-1-5386-3531-5/17/$31.00 © 2017 IEEE

I. INTRODUCTION

Cisco's latest forecast report shows that global mobile data traffic has increased by 18 times over the past five years and increased by over 63% in 2016, of which video data accounted for 60%. Mobile operators will face huge network upgrade pressure with the sharp increase in data traffic in mobile networks [1].

Caching is an important technology to cope with traffic growth [2, 3] and has been extensively studied in many scenarios [4–11], from information-centric wireless networks and social networks to conventional cellular networks. However, the caching problem obtains an interesting new twist with the advent of C-RAN networks [12, 13]. On the one hand, various solutions have been proposed in which the functions of PHY, or even MAC and RLC, are pulled down to the RRH, so the cache can be deployed not only on the BBU but also on the RRHs. On the other hand, the CPRI links between RRHs and BBU are all high-bandwidth and low-latency, which makes them suitable for user data transportation and allows each RRH to retrieve cached contents from the BBU quickly; that is, caching in C-RAN can be hierarchical and cooperative.

Some caching schemes have been introduced to C-RAN. For example, a cluster content caching scheme is designed in [14], and hierarchical caching frameworks are designed in [15] and [16]. The above schemes exploit the collaborative features of C-RAN, but they assume fixed network topologies and ignore the users' mobility, which cannot capture the actual scenario.

Some caching schemes consider users' mobility, yet they do not reflect the specific features of C-RAN and cannot be used in C-RAN due to several drawbacks. For example, a Greedy caching strategy is proposed and a specific message flow is designed in [17]. It aims to cache the same content object in all potential neighbor proxy servers, and users will always cache the data as long as the cache space is sufficient, but the caching efficiency is low and it needs a high cache capacity to maintain the cache hit rate. An EPC cache decision algorithm based on the idea of congestion pricing is designed in [18]. It uses the predicted mobile probability to cache content near the user's new location to reduce the data access time, but much data would probably be missed during handover from the old access point to the new one, and retransmissions would be issued. A caching strategy using the mobile probability predicted by a Markov chain mobility model is proposed in [19]. It considers the frequent user hand-offs among small-cells and aims at minimizing the load of macro-cells, but the prediction data it relies on is a few days outdated and cannot reflect the users' instantaneous mobility characteristics.

Therefore, we design a novel mobility aware proactive caching (MAPCaching) strategy for C-RAN, motivated by the analysis of C-RAN features and
the above-mentioned caching schemes. Unlike most caching schemes, we consider proactive caching, hierarchical caching and users' mobility together in this paper. Lower average transmission latency can be achieved by the proposed scheme, which gives users a high quality of experience (QoE), especially when they are watching video online. Furthermore, the efficiency of proactive caching is considered by allocating the appropriate cache on the BBU pool and RRHs for different users.

The rest of the paper is structured as follows: In Section II, the system model is described, and an average transmission latency optimization problem is formulated and solved. In Section III, a mobility aware cache management controller is designed in the BBU pool, its composition and functions are described, and the flow of the MAPCaching algorithm is given. In Section IV, the performance evaluations are illustrated, and finally, in Section V we conclude the paper and shed light on future directions in caching over C-RAN.

II. SYSTEM MODEL AND PROBLEM FORMULATION

In this section, the system model is described, an average transmission latency optimization problem is formulated, and the problem is solved by the linear programming method.

A. System Model

[Fig. 1: System model — a content server reaches users through the core network and a BBU pool (with CMC and cloud cache) connected to RRHs, each with an edge cache; users move among the RRHs.]

As in Fig. 1, we consider a set R = {r1, r2, ..., rR} of RRHs distributed within a 500m×500m area and connected to a BBU pool via low-latency, high-bandwidth fronthaul links in a C-RAN system. Although RRHs carry only radio functions at present, various solutions have been proposed in which the functions of PHY, or even MAC and RLC, could be pulled down to the RRH in the future 5G network. Therefore, we consider that each RRH is equipped with an edge cache, denoted as Cr, with the same storage capacity of CR bytes, and the BBU pool is equipped with a cloud cache, denoted as Cb, with a storage capacity of CB bytes (usually CB ≫ CR). The coverage radius of each RRH is 50m and the coverage areas may overlap one another, but a user can be served by only one RRH at a time. A set U = {u1, u2, ..., uU} of users moves randomly among the RRHs; considering that the speed of the mobile user and the precision of the mobile probability estimation will affect the cache hit ratio, the average moving speed of users is set to 1.2m/s and the mobile probability is predicted by a Markov chain mobility model. The set N = {N1, N2, ..., NU} denotes the video segment sizes which should be downloaded by each user u at fixed schedule times.

We denote the transmission latency from the RRH as t0, the transmission latency from the BBU pool as t1, and the latency from the source as t2; t2 is usually 10 times bigger than t0 and t1, and t1 is a little bigger than t0. By taking the mobility property of users into account, we denote q_ru as the probability of user u moving to RRH r; i.e., q_21 represents the probability that user 1 moves to RRH 2.

B. Problem Formulation

Considering the actual needs of moving users to watch high resolution video continuously and smoothly, we can distribute video content from the remote video server to the BBU pool or candidate RRHs ahead of the user association, so that the video content required by a mobile user will be immediately available when it moves into a new cell.

The model aiming to minimize the average transmission latency T̄ is as follows:

    min T̄ = (1/U) · Σ_{u∈U} (t0·P0 + t1·P1 + t2·P2)    (1)

where P0 is the probability of obtaining data from an RRH, P1 is the probability of obtaining data from the BBU pool, and P2 is the probability of obtaining data from the source.

Considering the users' mobility, we can get that P0 = Σ_{r∈R} (q_ru · x_ru), P1 = x_bu, and P2 = 1 − P0 − P1, where x_ru is the fraction of the data cached in the RRH relative to the overall data that should be cached, and x_bu is the fraction of the data cached in the BBU pool relative to the overall data that should be cached in the caching decision cycle.

By simplification, the original objective function can be represented as

    min_{x_ru, x_bu} T̄ = t2 − (1/U) · Σ_{u∈U} [ (t2 − t0) · Σ_{r∈R} (q_ru · x_ru) + (t2 − t1) · x_bu ]    (2)

t2 is constant; therefore, the objective function can be translated into the following function:

    max_{x_ru, x_bu} Γ̄ = (1/U) · Σ_{u∈U} [ (t2 − t0) · Σ_{r∈R} (q_ru · x_ru) + (t2 − t1) · x_bu ]    (3)
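The maximization of Γ̄ in (3), together with the capacity constraints (3-a)–(3-e) given below, is a linear program. The following is a minimal sketch using SciPy's linprog with made-up values for q_ru, the delays and the cache budgets (an assumption for illustration, not the paper's setup); for simplicity the per-RRH and BBU-pool budgets are expressed directly in data units and the bandwidth caps are folded into the [0, 1] bounds:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative dimensions and parameters (assumptions, not the paper's setup)
R, U = 3, 4                                  # number of RRHs and users
rng = np.random.default_rng(0)
q = rng.dirichlet(np.ones(R), size=U).T      # q[r, u]: prob. that user u moves to RRH r
t0, t1, t2 = 0.1, 0.3, 1.0                   # delays (s): RRH, BBU pool, source
N = np.ones(U)                               # data size to cache per user (normalized)
C_R, C_B = 2.0, 6.0                          # remaining edge / cloud cache budgets

# Decision vector: x = [x_ru (flattened r-major), x_bu]
n = R * U + U
# Objective (3): maximize sum_u [(t2 - t0) * sum_r q_ru x_ru + (t2 - t1) x_bu]
c = np.concatenate([-(t2 - t0) * q.ravel(), -(t2 - t1) * np.ones(U)])

A_ub, b_ub = [], []
for r in range(R):                           # per-RRH cache budget, cf. (3-a)
    row = np.zeros(n)
    row[r * U:(r + 1) * U] = N
    A_ub.append(row); b_ub.append(C_R)
row = np.zeros(n); row[R * U:] = N           # BBU-pool cache budget, cf. (3-b)
A_ub.append(row); b_ub.append(C_B)
for u in range(U):                           # expected cached fraction <= 1, cf. (3-c)
    row = np.zeros(n)
    for r in range(R):
        row[r * U + u] = q[r, u]
    row[R * U + u] = 1.0
    A_ub.append(row); b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, 1))
x_ru = res.x[:R * U].reshape(R, U)           # cache fractions at each RRH
x_bu = res.x[R * U:]                         # cache fractions at the BBU pool
gamma_bar = -res.fun / U                     # value of the objective (3)
```

In Algorithm 1 this corresponds to the linprog(f, A, b, [], []) call; here the unused equality-constraint arguments are simply omitted.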
[Fig. 2: The framework of the Mobility Aware Proactive Cache Management Controller — the User Video-Segment-Request Table (UVSRT) and the User-RRH Mobile Probability Table (URMPT) feed the Mobility Aware Caching Management Controller (MACMC) in the BBU pool, whose modules include a Mobility Estimator, Bandwidth Estimator, Video Request Handler, Cache Scheduler, Proactive Caching Manager and Replacement Manager; the controller places video segments into the RRH and BBU caches and fetches them from the content server.]
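As stated in Section II-A, the mobile probability q_ru is predicted by a first-order Markov chain over RRH associations. A minimal sketch of such an estimator follows, where the transition matrix is learned from an observed association trajectory; the function name and the additive smoothing constant are illustrative choices, not from the paper:

```python
import numpy as np

def estimate_mobility(trajectory, n_rrh, smoothing=1e-6):
    """Estimate a first-order Markov transition matrix P[r, r'] =
    Pr(next RRH = r' | current RRH = r) from a sequence of RRH indices.
    A tiny additive-smoothing term keeps unseen transitions non-zero."""
    counts = np.full((n_rrh, n_rrh), smoothing)
    for cur, nxt in zip(trajectory, trajectory[1:]):
        counts[cur, nxt] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Example: a user mostly moving between RRH 0 and RRH 1, rarely visiting RRH 2
traj = [0, 1, 0, 1, 2, 1, 0, 1, 0, 2, 0, 1]
P = estimate_mobility(traj, n_rrh=3)
q_u = P[traj[-1]]   # q_ru for this user: the row of the currently associated RRH
```

Each row of P would populate one entry of the URMPT; the row for a user's current RRH gives the q_ru values used in the placement problem.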

s.t.

    Σ_{u∈U} x_ru ≤ min((C_R − C_rl)/N_u, B_r/N_u)    (3-a)
    Σ_{u∈U} x_bu ≤ min((C_B − C_bl)/N_u, B_b/N_u)    (3-b)
    Σ_{r∈R} q_ru·x_ru + x_bu ≤ 1    (3-c)
    x_ru ≤ min(1, B_r/N_u)    (3-d)
    x_bu ≤ min(1, B_b/N_u)    ∀r ∈ R, ∀u ∈ U    (3-e)

where B_r is the maximum data size that can be obtained by an RRH, and B_b is the maximum data size that can be obtained by the BBU pool during the cache decision cycle. C_rl and C_bl denote the already cached size in the RRH and the BBU pool, respectively.

The above constraints represent that the cache size of all users in one RRH or the BBU pool cannot exceed the remaining cache size of the RRH or BBU pool and the currently available bandwidth; for each user, the sum of cached data on the RRHs and the BBU pool should be no more than the total data size to be cached; and the cached data size in one RRH or the BBU pool for one user should not exceed the data size to be cached and the currently available bandwidth.

We can see that the problem can be solved by the linear programming (LP) method; in practice, however, large instances are too costly to solve within a caching decision cycle, and only small-scale instances are tractable. Therefore, to keep the problem tractable, we divide the whole C-RAN network into several independent sets. Only a small number of RRHs, users and data sources are included in one set. The simplex method is applied in this paper, and a mobility aware cache management controller is needed for the computation and communication. In the next section, the framework of the central controller is designed and the flow of the MAPCaching algorithm is described.

III. CACHING SCHEME

A. Mobility Aware Cache Management Controller (MACMC)

As in Fig. 2, a MACMC is designed in the BBU pool, and it mainly includes the following modules:

(1) Mobility Estimator. The module records the motion trajectory of the user through the signal strength or other information, and predicts the probability of the user moving to a candidate RRH with a Markov chain mobility model. A User-RRH Mobile Probability Table (URMPT) is established at the module, and q_ru is recorded in the table. The table is queried in the caching scheme to obtain the optimal cache size.

(2) Bandwidth Estimator. According to the URMPT and the request history, the module calculates the available bandwidth for each RRH and estimates all the video versions and segments to be requested. The total data size to be cached for each user is computed according to the current request information.

(3) User Video-Segment-Requests Table (UVSRT). The UVSRT records the indices of the video segments requested by users at the current slot, and it can be used to estimate the bandwidth and to inform the caching decision.

(4) Cache Scheduler. The module is primarily responsible for placing the data to be cached and replacing the cached data. The caching decision is based on the algorithm proposed in this paper. When the free cache space drops below a threshold, cache replacement is started using the least frequently used (LFU) algorithm.

B. Mobility Aware Proactive Caching Scheme

As shown below, there are two parts in the mobility aware proactive caching scheme. One part is cache placement. Initially, the cached size of the RRH and BBU pool is null, namely, C_rl = 0 and C_bl = 0. In every caching decision cycle, the matrix for the linear inequality constraints A and the linear objective function vector f are computed from the parameters q_ru, B_r, and B_b, and
the vector for the linear inequality constraints b is computed from C_rl and C_bl. Finally, x_ru and x_bu are computed and the caching data is determined by solving the optimization problem, looking up the UVSRT and filtering identical requests from different users. The other part is cache replacement. The remaining cache space of the RRH and BBU pool is computed after every caching decision cycle; if the remaining cache space is less than a threshold, the LFU algorithm starts up.

Algorithm 1 Mobility Aware Proactive Caching Scheme
Input: nUser, nRRH, q_ru, B_r, B_b, URMPT, UVSRT.
Output: optimal x_ru, x_bu.
 1: initialize C_rl = 0, C_bl = 0;
 2: while TRUE do
 3:   compute A, f with q_ru, B_r, B_b;
 4:   compute b with C_rl, C_bl;
 5:   compute x_ru, x_bu with linprog(f, A, b, [], []);
 6:   look up the UVSRT;
 7:   for i = 1 : U, j = 1 : U, i ≠ j do
 8:     if SEG_i = SEG_j then
 9:       merge x_ru and x_bu
10:     end if
11:   end for
12:   compute C_rl, C_bl with x_ru, x_bu;
13:   if C_R − C_rl < N_u or C_B − C_bl < N_u then
14:     caching replacement with LFU
15:   end if
16: end while

IV. PERFORMANCE EVALUATION

In this section, we present numerical simulations to evaluate the performance of the proposed MAPCaching algorithm. We simulate a C-RAN network with 5 RRHs and a BBU pool, and more than 50 mobile users uniformly distributed among the RRHs. In particular, we compare the proposed MAPCaching scheme with two baseline proactive cache algorithms: the Greedy algorithm and the EPC algorithm. For the Greedy algorithm, the data is cached to all the potential RRHs in advance considering the users' mobility, and then the data is cached to the BBU pool when there is not enough space in the RRHs. For the EPC algorithm, the cache decision is based on the congestion price [18]. It should be noted that the cache replacement used in all three caching algorithms is the LFU algorithm.

The initial simulation conditions are set as follows: considering that users may watch different versions of one video or different videos, the data sizes to be cached are set to be different across users; and considering that many users will terminate video watching within 30 seconds, the caching decision is made every second. To be consistent with the actual situation, t0 and t1 are set to 100ms and 300ms, and t2 is 10 times that of t0. Initially, the cache spaces of the RRH and BBU pool are 20 gigabytes and 200 gigabytes, and the maximum bandwidths of the RRH to the BBU pool and of the BBU pool to the source server are 320Mbps and 640Mbps, respectively. No cache space is occupied at the beginning. The system related parameters are shown in TABLE I.

TABLE I: SYSTEM RELATED PARAMETERS

  q_ru   — the probability that a user moves to an RRH
  nRRH   — the number of RRHs
  nUser  — the number of users
  C_B    — the maximum cache space of the BBU pool
  C_R    — the maximum cache space of an RRH
  T      — cache decision cycle
  N_u    — the data size to be cached in T
  t0     — the transmission delay from the RRH to the user
  t1     — the transmission delay from the BBU pool to the user
  t2     — the transmission delay from the source to the user
  B_R    — maximum bandwidth between RRH and BBU pool
  B_B    — maximum bandwidth between BBU pool and source server
  C_rl   — the occupied cache space of an RRH
  C_bl   — the occupied cache space of the BBU pool
  SEG_u  — information of the video requested

A. The Average Latency Performance

It can be seen in Fig. 3 that MAPCaching always achieves superior performance in terms of average transmission delay. Fig. 3(a) shows that the delay for all algorithms goes up with the increase of t2, and the delay of MAPCaching is the lowest owing to its consideration of users' mobility. Fig. 3(b) shows that the delay decreases with the increase of the cache size. When the C_B cache space is increased to 2TB, the latency of the MAPCaching algorithm is only 460ms, which is nearly 20% and 33% lower than that of the Greedy and EPC algorithms, respectively. Fig. 3(c) shows that the delay increases significantly with the number of users. The reason is that the cache space assigned to each user shrinks as the number of users grows, and more data has to be fetched from the source server directly. However, the MAPCaching algorithm still gains more than the Greedy and EPC algorithms. When the number of users is 50, the delay is 18% and 32% lower than that of the Greedy and EPC algorithms, and when it comes to 150 users, the delay of MAPCaching is still 5% lower than that of the EPC algorithm.

B. The Cache Hit Rate Performance

It can be seen in Fig. 4 that MAPCaching always achieves superior performance in terms of cache hit rate. In Fig. 4(a), the cache hit rate increases with the cache space of the BBU pool and RRHs, and the cache hit rate of the MAPCaching algorithm is nearly 75% at a 2TB C_B size, which is almost 20% higher than the others. In Fig. 4(b), the cache hit rate decreases with
the increase of the users dramatically, and it tends to become consistent for all algorithms, but the cache hit rate of the MAPCaching algorithm is nearly 35% at 150 users, which is still higher than the other two algorithms.

[Fig. 3: Latency with different parameters — (a) latency with different t2; (b) latency with different C_B size; (c) latency with different user size. Each plot shows min Delay (s) for the Greedy, EPC and MAPCache algorithms.]

[Fig. 4: Cache hit rate with different parameters — (a) cache hit rate with different C_B size; (b) cache hit rate with different user size. Each plot shows the cache hit rate for the Greedy, EPC and MAPCache algorithms.]

V. CONCLUSION

In this paper, a proactive caching strategy considering the users' mobility is designed over C-RAN, and a mobility aware cache management controller is designed in the BBU pool to fully utilize the computation resources. Extensive simulations are carried out to analyze the performance of the proposed algorithm. Simulation results show that, by considering the mobile probability, a lower average transmission latency and a higher cache hit rate can be achieved by the proposed MAPCaching algorithm. In the best case, the average transmission latency is 30% lower than the others and the cache hit rate is as high as 75%, which is 20% higher than the others.

It should be noted that the performance of the algorithm is significantly impacted by the forecast accuracy of the mobile probability. The current research mainly simulated the users' mobile probability by a Markov model, and other mobility estimators can be used in the algorithm too. On the other hand, the paper only considers the users' mobility without concerning the popularity of content. We will take the two factors into account simultaneously for a better caching performance in future research. In addition, other characteristics of the data can be considered as well, such as the hierarchical encoding of video, the correlation of video streams and the motion prediction of virtual reality video, etc.

ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China under Grant 61171107 and Grant 61271257.

REFERENCES

[1] V. N. I. Cisco, "Cisco visual networking index: Global mobile data traffic forecast update, 2016–2021," 2017.
[2] J. Hoydis, M. Kobayashi, and M. Debbah, "Green small-cell networks," IEEE Vehicular Technology Magazine, vol. 6, no. 1, pp. 37–43, 2011.
[3] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. Leung, "Cache in the air: exploiting content caching and delivery techniques for 5G systems," IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139, 2014.
[4] S. Borst, V. Gupta, and A. Walid, "Distributed caching algorithms for content distribution networks," in Proc. IEEE INFOCOM, 2010, pp. 1478–1486.
[5] J. Dai, Z. Hu, B. Li, and J. Liu, "Collaborative hierarchical caching with dynamic request routing for massive content distribution," in Proc. IEEE INFOCOM, 2012, pp. 2444–2452.
[6] M. Taghizadeh, K. Micinski, S. Biswas, C. Ofria, and E. Torng, "Distributed cooperative caching in social wireless networks," IEEE Transactions on Mobile Computing, vol. 4, no. 4, pp. 1037–1053, 2014.
[7] J. Dai, F. Liu, B. Li, B. Li, and J. Liu, "Collaborative caching in wireless video streaming through resource auctions," IEEE Journal on Selected Areas in Communications, vol. 30, no. 2, pp. 458–466, 2012.
[8] R. Huo, F. R. Yu, T. Huang, R. Xie, J. Liu, V. C. M. Leung, and Y. Liu, "Software defined networking, caching, and computing for green wireless networks," IEEE Communications Magazine, vol. 54, no. 11, pp. 185–193, 2016.
[9] C. Liang, F. R. Yu, and X. Zhang, "Information-centric network function virtualization over 5G mobile wireless networks," IEEE Network, vol. 29, no. 3, pp. 68–74, 2015.
[10] C. Liang and F. R. Yu, "Virtual resource allocation in information-centric wireless virtual networks," in Proc. IEEE International Conference on Communications, 2015, pp. 3915–3920.
[11] M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, "Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience," IEEE Journal on Selected Areas in Communications, vol. 35, no. 5, pp. 1046–1061, 2016.
[12] M. Peng, Y. Sun, X. Li, Z. Mao, and C. Wang, "Recent advances in cloud radio access networks: System architectures, key techniques, and open issues," IEEE Communications Surveys & Tutorials, vol. 18, no. 3, pp. 2282–2308, 2016.
[13] Z. Zhao, M. Peng, Z. Ding, and C. Wang, "Cluster formation in cloud-radio access networks: Performance analysis and algorithms design," in Proc. IEEE International Conference on Communications, 2015, pp. 3903–3908.
[14] Z. Zhao, M. Peng, Z. Ding, W. Wang, and H. V. Poor, "Cluster content caching: An energy-efficient approach to improve quality of service in cloud radio access networks," IEEE Journal on Selected Areas in Communications, vol. 34, no. 5, pp. 1207–1221, 2016.
[15] T. X. Tran, A. Hajisami, and D. Pompili, "Cooperative hierarchical caching in 5G cloud radio access networks (C-RANs)," IEEE Network, 2016.
[16] Z. Zhang, D. Liu, and Y. Yuan, "Layered hierarchical caching for SVC-based HTTP adaptive streaming over C-RAN," in Proc. IEEE Wireless Communications and Networking Conference, 2017, pp. 1–6.
[17] Y. Rao, H. Zhou, D. Gao, H. Luo, and Y. Liu, "Proactive caching for enhancing user-side mobility support in named data networking," in Proc. International Conference on Innovative Mobile & Internet Services in Ubiquitous Computing, 2013, pp. 37–42.
[18] V. A. Siris, X. Vasilakos, and G. C. Polyzos, "Efficient proactive caching for supporting seamless mobility," 2014, pp. 1–6.
[19] K. Poularakis and L. Tassiulas, "Code, cache and deliver on the move: A novel caching paradigm in hyper-dense small-cell networks," IEEE Transactions on Mobile Computing, vol. 16, no. 3, pp. 675–687, 2017.