
Open Radio Access Networks (O-RAN) Experimentation Platform: Design and Datasets

J. Xavier Salvat, Jose A. Ayala-Romero, Lanfranco Zanzi, Member, IEEE,
Andres Garcia-Saavedra, and Xavier Costa-Perez, Senior Member, IEEE

Abstract—The Open Radio Access Network (O-RAN) Alliance is driving the latest evolution of RAN deployments, moving from traditionally closed and dedicated hardware implementations towards virtualized instances running over shared platforms characterized by open interfaces. Such progressive decoupling of radio software components from the hardware paves the road for future efficient and cost-effective RAN deployments. Nevertheless, many aspects remain open on the way to successful O-RAN deployments, such as the real-time configuration of network parameters to maximize performance, how to reliably share processing units among multiple virtualized base station (vBS) instances, how to reduce their energy consumption, or how to deal with the couplings between vRANs and other services co-located at the edge. To shed light on these aspects, in this article we showcase the design principles of an O-RAN compliant testbed and present different datasets collected over a wide set of experiments, which we make public to foster research in this field.

Index Terms—O-RAN, vRAN, RAN Intelligent Control.

J. X. Salvat, J. A. Ayala-Romero, L. Zanzi, and A. Garcia-Saavedra are with NEC Laboratories Europe GmbH, Heidelberg, Germany. X. Costa-Pérez is with NEC Laboratories Europe GmbH, Heidelberg, Germany, and i2CAT Foundation and ICREA, Barcelona, Spain. The work was supported by the European Commission through Grants No. SNS-JU-101097083 (BeGREEN) and 101017109 (DAEMON). Additionally, it has been supported by MINECO/NG EU (No. TSI-063000-2021-7) and the CERCA Programme.

I. INTRODUCTION

The O-RAN Alliance is a joint effort in the mobile industry to redesign future Radio Access Network (RAN) technologies [1]. Its key principles are threefold: (i) intelligent RAN control at different timescales to foster innovation; (ii) open interfaces between control-plane components and network functions to break the traditional vendor lock-in; and (iii) virtualization to improve flexibility and reduce costs.

However, the advent of O-RAN raises novel technical challenges. First, the higher level of flexibility comes at the cost of less predictable performance and computing resource demand. In contrast to traditional hardwired base stations (BSs), the computing resources needed by a virtualized BS (vBS) vary with the context, including network load, the modulation and coding scheme (MCS) used, channel quality, etc. The mapping between this high-dimensional set of parameters and the requirements for computing resources or energy consumption is very complex and hard to predict [2].

Second, virtualizing RAN functions over a shared infrastructure can provide high flexibility and cost efficiency, but the overhead introduced when contending for a shared resource compromises the reliability of executing signal processing tasks within tight time deadlines. In fact, different works show that resource contention between NFs sharing computing infrastructure may lead to up to 40% performance degradation compared to dedicated platforms [3]. This coupling between radio resource allocation and computing requirements poses new challenges [2].

Third, RAN virtualization exhibits a new energy consumption profile compared to traditional BSs that operate with dedicated hardware [4]. The energy consumption of vBSs depends not only on the network state (e.g., traffic load, SNR), but also on the general-purpose hardware (e.g., CPU/GPU) and the software implementation of the radio stack.

Finally, when considering AI services running at the edge of the network, the edge and network configurations are intertwined [5]. That is, the configuration of the edge services (e.g., QoS) and of the network (e.g., channel capacity) jointly impacts both service performance and power consumption of the whole system. Therefore, evaluating and orchestrating the system as a whole, although challenging, can bring global benefits in terms of performance and energy.

To shed light on these aspects, we present an O-RAN testbed that provides a prototypical environment to experiment with different network settings and evaluate machine learning (ML) solutions to the above problems. Using this testbed, we collect three datasets aimed at contributing to different relatively unexplored aspects of O-RAN. The datasets are publicly available at [6] and are described as follows:
• The Computing dataset characterizes the computing usage of a vBS as a function of several contextual (e.g., traffic load, channel quality) and configuration (e.g., MCS, CPU time) parameters. We also evaluate the effect of several vBS instances sharing the same platform [2].
• The Energy dataset measures the energy consumption of a vBS as a function of a wide range of parameters (e.g., MCS, airtime, computing platform, bandwidth). The energy measurements are taken in parallel using software tools and an external digital power meter [4], [7].
• The Application dataset considers an AI service running in an edge server. It characterizes, at the same time, the service performance and the consumed energy of the vBS and edge server as a function of their joint configuration [5].

These datasets are the result of the study of different problems addressed in our previous work. In particular, [2] proposes a deep reinforcement learning approach to allocate the computing resources of a virtualized RAN (vRAN). In [7], the energy consumption of the uplink is studied and characterized using an analytical model. In [4], a Bayesian learning
algorithm is proposed to allocate vRAN radio resources, balancing energy and performance. Finally, [5] uses online learning to jointly configure the vRAN and an edge AI service to save energy while providing performance guarantees.

Other related works consider the deployment of vRANs on commodity hardware [8], [9]. The authors in [8] propose a CPU scheduling framework to collocate the vRAN with general-purpose workloads while meeting latency requirements. In [9], an optimized data processing pipeline is proposed to handle the high computational demand of massive MIMO processing in software-only systems. Finally, ColO-RAN [10] presents an SDR-enabled large-scale framework to test O-RAN RIC algorithms. For example, OrchestRAN [11], a RIC algorithm that orchestrates other data-driven algorithms based on mobile operators' intents, is prototyped and evaluated in ColO-RAN.

II. O-RAN ARCHITECTURE

Fig. 1 provides an overview of the O-RAN architecture. Like 3GPP, O-RAN distributes all the functions of a gNB across three main Network Functions (NFs): (i) a Radio Unit (O-RU), (ii) a Distributed Unit (O-DU), and (iii) a Central Unit (O-CU) [12]. The O-RU hosts the lowest physical layer (PHY) tasks, including amplification, signal sampling, and FFT operations; the O-DU hosts the RLC, MAC, and higher PHY operations such as forward error correction (FEC). Finally, the O-CU accommodates the RRC, SDAP, and PDCP layers. In addition, O-RAN specifies an O-Cloud platform to host virtualized NFs (VNFs), including an acceleration abstraction layer (AAL) to offload signal processing operations such as FEC or FFT.

Fig. 1. O-RAN architecture. On the left, the O-gNB and the O-Cloud are shown with the O-RAN RAN Intelligent Controllers (RICs). On the right, a detailed scheme of the O-gNB functionalities.

In the control plane, O-RAN introduces a non-real-time RAN intelligent controller (non-RT RIC) and a near-real-time RAN intelligent controller (near-RT RIC). The non-RT RIC is hosted by the Service Management and Orchestration (SMO) framework and enables control loops at large time scales (i.e., seconds or minutes). Formally, the different control applications that run within the non-RT RIC are called rApps, and they support tasks such as analyzing RAN monitoring information or issuing control policies. Conversely, the near-RT RIC supports control loops over sub-second time scales (i.e., ~10 ms) through the so-called xApps.

O-RAN defines four key interfaces (O1, A1, E2, and O2), which allow information exchange among the components of the architecture. Specifically, the O1 interface enables operation and management procedures, such as FCAPS (Fault, Configuration, Accounting, Performance, and Security) and software and file management. The A1 interface connects the non-RT RIC with the near-RT RIC and enables the enforcement of control policies defined at the upper architectural levels. The near-RT RIC connects to the O-gNB components (O-CU, O-DU, and O-RU) by means of the E2 interface, enabling the enforcement of control policies and data collection. Finally, the O2 interface connects the SMO with the O-Cloud to enable infrastructure monitoring and management. In addition, O-RAN leverages the 3GPP fronthaul interfaces, which are an enabler of the gNB disaggregated architecture.

III. TESTBED DESIGN AND IMPLEMENTATION

This section presents an O-RAN compliant testbed that enables experimentation with vRAN deployments and evaluation of resource allocation and orchestration algorithms. We also detail its design principles and its implementation. Fig. 2 depicts the main functional blocks and overall architecture, while Fig. 3 shows the real testbed.

A. Virtualized RAN Computing Platform

As depicted in Fig. 2, the testbed hosts (1) multiple user equipments (UEs), each one attached to (2) a virtualized vBS instance running in (3) a shared computing platform. Each UE consists of a radio head and a set of dedicated computing resources provided by a laptop. Such resources host the complete radio protocol stack and processes from heterogeneous mobile applications. Both UEs and vBSs use a USRP B210 board as a radio head and the srsRAN [13] software to implement the radio protocol stack. The vBSs' USRP boards are attached to the computing pool via a USB 3.0 connector, while the srsRAN vBSs run as containerized software instances using Docker. To ensure repeatability, the UEs' and vBSs' radio front-ends are connected with RF SMA cables and 20 dB attenuators. Each UE is connected to one vBS, emulating the aggregated traffic volumes generated over a cell. In our testbed, we support a maximum of 5 UEs and 5 vBSs.

The computing platform (3) features commercial off-the-shelf components, such as an Intel i7-7700K CPU with 8 (logical) cores and the Ubuntu operating system (OS). For the purposes of our tests, the kernel has been compiled with the CONFIG_RT_GROUP_SCHED option, so that resource allocation can be performed on real-time threads. Six computing cores are reserved for the shared pool by means of systemd's CPUAffinity. Specifically, considering the CPU topology depicted in Fig. 4, we use cores 0 and 4 to run the OS processes and cores 1-3 and 5-7 to run the Docker instances containing vBSs. Thus, the access to the L1-L2 caches of both core sets is isolated, minimizing the residual computing noise coming from the OS.

B. Service Management and Orchestrator and Mobile Core
Fig. 2. Detailed testbed architecture.

Fig. 3. Picture of the testbed.

Fig. 4. CPU architecture of the vRAN shared computing platform.

As shown in Fig. 2, the SMO (4) and the mobile core functions (5) are deployed in computing nodes separated from the vBSs. The near-RT RIC is co-located with the vBSs, while the SMO hosts the non-RT RIC and the functionalities to manage and orchestrate the O-Cloud infrastructure. We use a custom version of the non-RT RIC and the near-RT RIC. As the SMO hosts the functions to orchestrate and manage the O-Cloud infrastructure using the O1 and O2 interfaces, it allows us to set up different virtual networks and to start, stop, and remove a dynamic number of vBSs, configuring resources such as the computing time and the computing cores that shall be allocated to each instance. Furthermore, it can start and stop UEs and the mobile core. We support different testbed scenarios that require ad-hoc solutions to orchestrate the different entities. We developed a set of functions for this purpose using Python's Docker library. We also set up Docker daemon instances in the different hosts to retrieve and enforce SMO policies.

Besides, the SMO allows us to define and enforce new configuration policies via the non-RT RIC, which forwards them to the near-RT RIC. In turn, the near-RT RIC enforces them in the vBSs. The SMO also features an AI/ML engine to support different rApps. The O-RAN RIC control interfaces A1 and E2 are implemented using the ZMQ message library. The vRAN orchestration algorithms under test are deployed using the AI/ML engine. This entity has access to a time-series database (8) to retrieve the monitoring metrics.

We use a containerized version of srsEPC [13] to emulate the mobile core functionalities. srsEPC is deployed on a separate host reachable by the edge application server and the attached UEs. We connect the vBS to the mobile core using Docker's host networking.

Finally, the operator dashboard (9) is a custom Python framework that allows us to interact with the SMO. The operator dashboard enables the configuration of the experimental scenario, including traffic and SNR patterns, the use of an AI service located at the edge server, the number of active vBSs and UEs, and the vRAN algorithm to be tested.

C. Traffic Generators and Other Applications

Our testbed has an edge application server (6) equipped with an NVIDIA GeForce RTX 2080 Ti GPU. In some scenarios, we use this server as a source for downlink (DL) traffic and a sink for uplink (UL) traffic, using MGEN for this purpose.
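The vBSs and traffic generators above run as Docker containers driven from Python (Sec. III-B). As an illustrative sketch of how an SMO-side function can spawn a vBS pinned to a given CPU core set, the snippet below assembles the equivalent docker run invocation; the image name, container naming scheme, and use of host networking are assumptions for illustration, not the testbed's actual code.

```python
import subprocess

def vbs_run_cmd(instance_id: int, cpu_cores: str) -> list[str]:
    """Build a `docker run` command for one containerized vBS.

    `cpu_cores` follows Docker's --cpuset-cpus syntax, e.g. "1" or "1,5";
    pinning two vBSs to cores "1" and "2" isolates their L1/L2 caches,
    while "1" and "5" makes them share a physical core (cf. Fig. 4).
    """
    return [
        "docker", "run", "--detach",
        "--privileged",                  # needed for USRP access over USB 3.0
        "--name", f"vbs-{instance_id}",
        "--cpuset-cpus", cpu_cores,      # CPU pinning for this instance
        "--network", "host",             # reach the mobile core via host networking
        "srsran-vbs:latest",             # assumed image name
    ]

# To actually start the instance:
# subprocess.run(vbs_run_cmd(1, "1"), check=True)
```

The same pattern extends to stopping instances or starting UEs, which is how the SMO reconfigures the number of active vBSs between experiments.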
In other scenarios, this server hosts edge AI services. In our experiments, we select an object recognition service due to its popularity in computer vision applications (e.g., vehicle navigation, surveillance systems, mobile health, etc.) and its high resource demand (GPU processing is required). In particular, we deployed detectron2, an open-source object recognition software. In our experiments with the edge service, the UE sends an image from the well-known COCO dataset, and the server replies with the bounding boxes and labels computed by detectron2. Both the traffic generators and the edge AI services are deployed using Docker containers.

IV. METRICS AND DATA STORAGE

To gather monitoring metrics from the vRAN platform and the O-Cloud, we use an O-RAN compliant monitoring system. The near-RT RIC subscribes to the deployed O-RAN components so that it retrieves the different radio metrics through the E2 interface [14]. Afterward, the near-RT RIC passes the data to the non-RT RIC using the A1 interface. We developed an rApp to push the data coming from the different vBSs into the time-series database. Moreover, the SMO can set up performance management (PM) jobs to gather metrics from the O-Cloud platform, mobile core, and edge server. We use Telegraf and its file input as a metric collection agent to gather the data from all the PM jobs and send it to the time-series database periodically. To ease the final processing of multi-host data sources, we keep all hosts clock-synchronized using the Precision Time Protocol (PTP). To store the monitoring metrics, we use the InfluxDB time-series database. We also use Grafana to visualize data in real time. In the following, we present a complete description of the metrics that can be collected from our testbed.

A. Computing Metrics

The computing utilization of the vBS Docker instances deployed in the computing pool can be gathered by a PM job that periodically reads the information in the /proc filesystem (for each thread and in each container) and returns the computing utilization of each computing core in use. The scripts save the information to a JSON file, which can be easily read and processed by Telegraf, giving xApps and rApps access to this information. Furthermore, we also use the kernel tool perf to measure low-level metrics for each container, such as the number of cache misses, core cycles, and instructions.

B. srsRAN Metrics

We collect metrics from all the srsRAN software instances (UEs and vBSs). In the case of the vBSs, we enhance srsRAN by adding the E2 interface, allowing the near-RT RIC to subscribe and periodically receive monitoring information from the different layers of the protocol stack, such as the SNR, the uplink and downlink MCS, or the traffic demand in both directions, as well as the uplink decoding time and the subframe processing time. In the case of the UEs, we modified srsRAN to save standard metrics into a JSON file to be read by Telegraf.

C. Per-flow Metrics

We gather per-flow metrics of the traffic generated/received by UEs to/from the application server by using iptables packet and byte counters. Upon starting a new UE and its traffic generator or application instance, we add two new chains to each container's iptables, namely TRAFFIC_ACCT_IN and TRAFFIC_ACCT_OUT, to track traffic in the INPUT and OUTPUT directions. We add dedicated rules to match the IP addresses of the UE and the application and obtain the cumulative count of packets and bytes hitting these rules, saving them into a file that is periodically read by Telegraf.

D. Energy Consumption Metrics

We use software tools and an external digital power meter (7) to measure the energy consumption of different testbed components. In particular, for the software energy measurements of vBSs, we use Intel's Running Average Power Limit (RAPL) functionality through the Linux tool turbostat. RAPL estimates the power consumed by the CPU by using hardware performance counters and I/O models. Similarly, we obtain the GPU power consumption from the NVIDIA driver via nvidia-smi.

Note that software measurements only consider the main processing unit (CPU or GPU). In contrast, hardware measurements capture the entire platform's power (e.g., CPU, GPU, motherboard, RAM, etc.) and the radio head. We use the GW-Instek GPM-8213 digital power meter along with the GW-Instek GPM-001 measuring adapter to retrieve this data. These measurements are collected by the edge application server via an SCPI interface and saved into a file to be read by Telegraf.

E. Radio Control Policies

The vRAN orchestration algorithms can enforce different radio policies on vBSs. As shown in related works using this testbed [2], [4], [5], [7], [15], the use of different radio policies is fundamental, for example, to balance energy consumption and performance or to adapt to the available computing resources. We use the O-RAN E2 interface to dynamically control the following radio parameters:
• Modulation and Coding Scheme: upper-bound and fixed values. This radio policy is used in [2], [4], [5], [7] to set the available computing resources.
• Transmission Gain: to evaluate different SNR patterns or to save energy.
• Airtime (UL and DL Physical Resource Blocks): we configure the maximum number of resource blocks per subframe in the uplink and downlink directions, which modifies the ratio of used radio resources.

V. DATASETS

In this section, we describe the organization and metrics of three datasets collected with our testbed and saved in CSV format. The datasets are available on the IEEE DataPort portal [6].
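As a minimal example of consuming these CSVs, the sketch below loads the energy dataset and contrasts full-platform power (external meter) with CPU-only power (RAPL). The underscore-separated column labels follow Table I and the file name follows Sec. V-B; both are assumptions about the published files, not guaranteed by this text.

```python
import pandas as pd

def platform_overhead(df: pd.DataFrame) -> pd.Series:
    """Per-row gap between the power measured by the external meter
    (whole platform plus radio head) and RAPL (CPU only), cf. Sec. IV-D."""
    return df["pm_power"] - df["rapl_power"]

def power_by_mcs(df: pd.DataFrame) -> pd.DataFrame:
    """Average hardware and software power versus the selected UL MCS."""
    return df.groupby("selected_mcs_ul")[["pm_power", "rapl_power"]].mean()

# Typical use with the uplink-only traces of [7]:
# df = pd.read_csv("dataset_ul.csv")
# print(power_by_mcs(df))
```

Similar one-liners suffice to regress power against airtime or transmission gain, the other configuration knobs listed in Table I (middle).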
TABLE I
RELEVANT FIELDS IN THE COMPUTING, ENERGY, AND APPLICATION DATASETS [6].

COMPUTING DATASET
Configuration parameters (column, label, description):
  1       mcs_dl_i       vBS i DL MCS
  2       mcs_ul_i       vBS i UL MCS
  3       dl_kbps_i      vBS i DL load
  4       ul_kbps_i      vBS i UL load
  5       cpu_set_i      vBS i CPU set
Measurements:
  6-13    cpu_i          Avg. CPU usage
  14      explode        Successful?

ENERGY DATASET
Configuration parameters (column, label, description):
  2       BW                        Bandwidth
  5(6)    traffic_load_dl(ul)       DL(UL) load
  7(8)    txgain_dl(ul)             TX gain
  9(10)   selected_mcs_dl(ul)       DL(UL) MCS alloc.
  11(12)  selected_airtime_dl(ul)   DL(UL) airtime alloc.
Measurements:
  23(24)  thr_dl(ul)                Avg. DL(UL) throughput
  25(26)  bler_dl(ul)               Avg. DL(UL) block error rate
  28      pm_power                  Avg. HW power
  29      pm_var                    Var. HW power
  30      pm_median                 Median HW power
  31      n_pm                      Nr. of HW power samples
  32      rapl_power                Avg. SW power
  33      rapl_var                  Var. SW power
  34      n_rapl                    Nr. of SW power samples

APPLICATION DATASET
Configuration parameters (column, label, description):
  3       BW                  Bandwidth
  4       img_resolution      Image size
  5       airtime_ratio       Airtime alloc.
  6       gpu_power           GPU alloc.
Measurements:
  7       av_end2end_delay    Avg. delay
  12-17   AP(1-6)             Avg. precision
  18-23   AR(1-6)             Avg. recall
  24      powermeter_av       Avg. HW power
  25      powermeter_var      Var. HW power
  26      powermeter_median   Median HW power
  27      rapl_av             Avg. SW power
  28      rapl_var            Var. SW power
  29      gpu_av              Avg. GPU power
  30      gpu_var             Var. GPU power
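For instance, the fields of the computing dataset (Table I, left) can be aggregated to estimate how often a configuration over-commits the shared platform. The column labels below follow Table I with underscore-separated names assumed, and the file path is illustrative (cf. Sec. V-A for the directory layout).

```python
import pandas as pd

def overload_rate(df: pd.DataFrame) -> float:
    """Fraction of 20-second experiments whose traffic demand was not
    served (column `explode` set to True), cf. Sec. V-A."""
    return float(df["explode"].mean())

def core_usage_vs_mcs(df: pd.DataFrame, core: int = 1) -> pd.Series:
    """Average utilization of one computing core versus the DL MCS of
    vBS 1, a proxy for the coding/decoding workload."""
    return df.groupby("mcs_dl_1")[f"cpu_{core}"].mean()

# Typical use (file name assumed):
# df = pd.read_csv("datasets_pinned/computing.csv")
# print(overload_rate(df))
```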

A. Computing Dataset Description

This dataset relates to the research activities published in [2] and considers the instantiation of a different number of vBSs over the same computing platform. With reference to Fig. 2, we adopted components (1), (2), (3), (4), (5), (6), (8), and (9). The vBSs are instantiated on specific CPU core sets with different time-sharing allocations. Each vBS has an associated context, composed of the traffic demands and statistics about the MCS used in both UL and DL. We remark that different network parameters (e.g., the MCS index) impact the CPU load, mainly due to coding/decoding workloads. We run a 20-second experiment for each row in the dataset with a specific context, and evaluate the impact (and cross-interference) of the processing workload across the running instances. The resulting per-core CPU utilization is the average of samples collected every 200 ms.

We collected two sets of data. The measurements in the datasets unpinned directory consider the default Linux CPU scheduler policy, which allocates CPU resources in an unrestricted manner; therefore, the workloads of different vBSs share CPU cores. We consider heterogeneous deployment cases, spawning from one to five concurrent vBS instances. The measurements in the datasets pinned directory are collected with a set of CPU cores dedicated to each vBS instance (i.e., CPU pinning), which provides isolation between vBS workloads. In particular, we first deploy two vBS instances and consider two different pinning options: (i) we pin one vBS to core 1 and the second vBS to core 2, and (ii) we change the pinning configuration of the second vBS to core 5. In this way, we can compare the computing utilization when the L1 and L2 caches are shared or not.

Second, we deploy four vBSs and carry out a similar experiment. In the first case, we pin the vBSs to cores 1, 2, 3, and 4, respectively. In this way, the vBSs have their L1 and L2 caches isolated. In the second experiment, we pin them to cores 1, 5, 2, and 6, respectively. In this scenario, there is L1 and L2 cache isolation between the vBS sets {1, 2} and {3, 4}, but no cache isolation between vBSs 1 and 2, nor between vBSs 3 and 4.

The dataset contains the following metrics. Columns mcs_dl_i, mcs_ul_i, dl_kbps_i, ul_kbps_i, and cpu_set_i define the context of a vBS i; they represent the instantaneous DL MCS index, UL MCS index, the traffic demand in downlink and uplink (in kbps), and the CPU core set configuration. The measurements for the i-th computing core are provided by the column cpu_i. Finally, when the column explode takes the value True, it indicates that the traffic demand has not been served correctly, which is correlated with a lack of computational resources. Conversely, when explode is set to False, the traffic is served successfully. Table I (left) summarizes the above.

B. Energy Dataset Description

This dataset, used in [4], [7], aims to characterize the power consumption of a vBS. These experiments adopted the same components presented in the previous scenario, with the addition of the power meter (7). The main configuration parameters, shown in Table I (middle), are related to the traffic load, SNR, MCS, and airtime in both DL and UL. Note that, to measure different SNR values, we modify the transmission gain of the USRPs.

The dataset comprises two files. The file dataset_ul.csv only considers UL traffic [7], while dataset_dlul.csv considers concurrent UL and DL traffic loads [4]. Each row corresponds to a 1-minute execution of a fixed configuration. The most important metrics in the dataset are shown in the bottom part of Table I. We measure the consumed power via software (RAPL) and hardware (digital power meter), as explained in Sec. IV. We also measure the block error rate (BLER) and the throughput. Other interesting metrics in the dataset are the average decoding time of the uplink transport blocks, the clock speed of the CPU in the computing platform, and the buffer state of the UE and vBS.

C. Application Dataset Description

In this dataset, used in [5], we consider the scenario of a mobile user accessing an AI service running in an edge server, and measure how the joint configuration of the vBS, the AI service, and the edge server settings impacts the power consumption and service performance. To launch these experiments, we deploy an AI/ML application in (6), shown in Fig. 2, and use components (1), (2), (3), (4), (5), (8), and (9).
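The GPU speed knob of this dataset is enforced as a power cap on the edge server's GPU. A sketch of how such a cap can be set and sampled through the standard nvidia-smi CLI follows; the wrapper functions and the 150 W value are illustrative assumptions, and the testbed's actual tooling may differ.

```python
import subprocess

def gpu_power_cap_cmd(watts: int) -> list[str]:
    """`nvidia-smi -pl` sets the maximum power the GPU may dissipate,
    i.e. the `gpu_power` configuration knob (Table I, right). Root needed."""
    return ["nvidia-smi", "-pl", str(watts)]

def gpu_power_draw_cmd() -> list[str]:
    """Sample the instantaneous GPU power draw, as collected for the
    gpu_av/gpu_var measurements (Sec. IV-D)."""
    return ["nvidia-smi", "--query-gpu=power.draw",
            "--format=csv,noheader,nounits"]

# Example (requires an NVIDIA GPU and driver):
# subprocess.run(gpu_power_cap_cmd(150), check=True)
# watts = float(subprocess.check_output(gpu_power_draw_cmd()))
```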
In this dataset, the configuration parameters include the airtime (airtime_ratio), the image resolution (img_resolution), which indicates the percentage of the original size of the image, and the GPU speed (gpu_power), which indicates the maximum power that the GPU is allowed to dissipate. Thus, the higher the GPU speed, the faster the processing.

For each row in the dataset, 150 images from the COCO dataset are processed by the edge server, which returns the bounding boxes and labels of the objects in each image. For each image, we measure: (i) the end-to-end delay, which includes the time incurred by a user request (an image) to be delivered to the service, the processing time (GPU delay), and the time incurred to reach the user with the reply; (ii) the image processing delay (imp_proc_delay), i.e., the time to load and resize the images at the user side; (iii) the GPU delay (gpu_delay), i.e., the delay incurred by the GPU at the edge server; (iv) the number of detected objects (num_obj); and (v) the average precision in the object recognition task (AP_per_image). Moreover, we also include in each row global measurements resulting from averaging across all the images. To measure the global performance of the object recognition service, we also provide several precision and recall values, namely AP1-AP6 and AR1-AR6 [5].

Finally, to measure the global power consumption, we provide the columns rapl_av and rapl_var, which report the CPU power consumed, measured using RAPL. Additionally, the power consumed by the GPU-enabled edge server is measured using the power meter (powermeter_av and powermeter_var) and software (gpu_av and gpu_var).

VI. NOVEL APPLICATIONS

In this section, we describe some examples of novel applications for the presented datasets. Using the computing dataset, we plot in Fig. 5 the CPU usage of a shared computing platform as a function of the number of vBS instances for three different channel quality configurations. Specifically, we observe that the CPU usage does not scale linearly with the number of vBSs due to the interference among processes (the so-called noisy neighbor problem), even when the processes are pinned. Moreover, we observe that the CPU usage also depends on other parameters, such as the channel quality. This motivates the need for predictive ML models that can anticipate the CPU demand given the context and the configuration of the vBS instances. This is of paramount importance, as a deficit of computing resources can lead to synchronization loss and drastic network throughput decay.

Fig. 5. Computing usage for different SNR configurations and different numbers of vBSs, compared to the expected linear increase under ideal isolation (y-axis: CPU usage in cores; panels: low, mid, and high SNR).

Concerning the energy dataset, this data can be used to fit the linear energy model proposed in [7], or a potentially extended model that also considers the DL. Similarly, the application dataset can be used to train models of the power consumed by an edge AI service. These energy models can be very useful for the research community, as they allow us to accurately predict the consumed power of a mobile network as a function of its configuration and can be used, for example, to derive novel energy-driven strategies for green networking.

VII. CONCLUSIONS

In this paper, we described the setup of an O-RAN compliant testbed, from its design principles to practical implementation and technical aspects, using off-the-shelf networking equipment and virtualization software. Additionally, this paper is accompanied by three datasets collected from our testbed, each one focusing on a different scenario: (i) the Computing dataset characterizes the computing usage of vBS instances on shared computing platforms; (ii) the Energy dataset measures the energy consumption of a vBS as a function of a wide range of parameters; and (iii) the Application dataset characterizes the joint impact of the network and edge service configuration on the energy consumption and performance of the system. We believe these datasets, together with our practical insights, can promote research in this field and foster the development of novel solutions for the efficient sharing and management of radio and computing resources in open radio environments.

REFERENCES

[1] M. Polese et al., "Understanding O-RAN: Architecture, interfaces, algorithms, security, and research challenges," IEEE Communications Surveys & Tutorials, 2023.
[2] J. A. Ayala-Romero et al., "vrAIn: Deep Learning Based Orchestration for Computing and Radio Resources in vRANs," IEEE Transactions on Mobile Computing, 2020.
[3] A. Manousis et al., "Contention-Aware Performance Prediction for Virtualized Network Functions," in ACM SIGCOMM, 2020.
[4] J. A. Ayala-Romero et al., "Orchestrating Energy-Efficient vRANs: Bayesian Learning and Experimental Results," IEEE Transactions on Mobile Computing, 2021.
[5] ——, "EdgeBOL: Automating Energy-Savings for Mobile Edge AI," in ACM CoNEXT, 2021.
[6] J. X. Salvat Lozano et al., "O-RAN experimental evaluation datasets," 2022, accessed on 09.03.2023. [Online]. Available: https://dx.doi.org/10.21227/64s5-q431
[7] J. A. Ayala-Romero et al., "Experimental Evaluation of Power Consumption in Virtualized Base Stations," in IEEE ICC, 2021.
[8] X. Foukas et al., "Concordia: Teaching the 5G vRAN to Share Compute," in ACM SIGCOMM, 2021, pp. 580-596.
[9] J. Ding et al., "Agora: Real-Time Massive MIMO Baseband Processing in Software," in International Conference on Emerging Networking Experiments and Technologies (CoNEXT), 2020, pp. 232-244.
[10] M. Polese et al., "ColO-RAN: Developing Machine Learning-Based xApps for Open RAN Closed-Loop Control on Programmable Experimental Platforms," IEEE Transactions on Mobile Computing, 2022.
[11] S. D'Oro et al., "OrchestRAN: Network Automation through Orchestrated Intelligence in the Open RAN," in IEEE INFOCOM, 2022, pp. 270-279.
[12] Open RAN Alliance, "O-RAN-WG1-O-RAN Architecture Description - v04.00.00," Tech. Spec., Mar. 2021.
[13] I. Gomez-Miguelez et al., "srsLTE: An Open-Source Platform for LTE Evolution and Experimentation," in ACM WiNTECH, 2016, pp. 25-32.
[14] O-RAN Alliance, "O-RAN Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles 2.0," Technical Specification (TS), 2022.
[15] L. Zanzi et al., "LACO: A Latency-Driven Network Slicing Orchestration in Beyond-5G Networks," IEEE Transactions on Wireless Communications, vol. 20, no. 1, pp. 667-682, 2021.

Josep Xavier Salvat received his Ph.D. from the Technical University of Kaiserslautern in 2022. He currently works as a senior research scientist in the 6G Network group at NEC Laboratories Europe, Heidelberg. His research interests lie in the application of machine learning to real-life computer communications systems, including resource allocation and energy efficiency problems.

Jose A. Ayala-Romero received his Ph.D. degree from the Technical University of Cartagena, Spain, in 2019. Currently, he is a senior researcher with the 6G Network group at NEC Laboratories Europe. His research interests include the application of machine learning and reinforcement learning to solve mobile network problems.

Lanfranco Zanzi received his Ph.D. degree from the Technical University of Kaiserslautern (Germany) in 2022. He works as a senior research scientist at NEC Laboratories Europe. His research interests include network virtualization, machine learning, blockchain, and their applicability to 5G and 6G mobile networks in the context of network slicing.

Andres Garcia-Saavedra received his Ph.D. degree from the University Carlos III of Madrid in 2013. Currently, he is a Principal Researcher at NEC Laboratories Europe. His research interests lie in the application of fundamental mathematics to real-life wireless communication systems.

Xavier Costa-Pérez (M'06-SM'18) is Head of 5G/6G R&D at NEC Laboratories Europe, Scientific Director at i2CAT, and Research Professor at ICREA. He received both his M.Sc. and Ph.D. degrees in Telecommunications from the Polytechnic University of Catalonia, Barcelona.
