Benchmarking O-RAN Performance
White Paper
Introduction to O-RAN
2020 was a good year for O-RAN. We saw a number of collaborations between telcos and vendors to develop
O-RAN technologies in 4G and 5G. The O-RAN ALLIANCE (in which VIAVI plays an active role), and the Open
RAN Policy Coalition both welcomed new members. The latter also rounded off the year with a roadmap
‘outlining the steps that governments and policymakers can take to accelerate adoption of open standards in
the RAN and promote a network ecosystem that is more diverse, competitive and secure’.
We at VIAVI also made great strides. We took part in the Joint European O-RAN and TIP Plugfest 2020, where
together with EANTC, we demonstrated automated end-to-end (E2E) O-RAN tests at the Open Test and
Integration Center (OTIC). This included testing multi-vendor pairings.
Also in 2020, a number of major vendors came out and publicly backed O-RAN, something that would have
been unheard of a few years ago. O-RAN may require an adaptation of vendors’ traditional business models,
but it won’t impact the fundamental relationship between vendors and CSPs. Vendors have always leveraged
the very tight integration of network components to deliver high performance. Now, O-RAN is creating new
paths for mixing and matching best-in-breed components to achieve each CSP’s network objectives, while
ensuring that traditional vendors remain a hugely valuable element of the ecosystem. It’s clear that the whole
telco industry has now begun to acknowledge the benefits of O-RAN.
These include enabling an open, multi-vendor, interoperable ecosystem and new opportunities for innovation;
helping to drive down the cost of RAN equipment; enabling automation which can reduce deployment and
management costs; and scale and agility, where network components implemented as software functions can
be scaled to meet network capability and capacity demands.
Finally, 2020 saw the rapid acceleration of the digitalization of all industries on a global scale – something
which will continue this year as the world settles into a new(ish) and unpredictable ‘normal’. There remains a
real need for organizations to be able to work together virtually, highlighting our continued reliance on digital
services. This includes the need for reliable and efficient mobile wireless networks. They must support an
increasingly diverse range of use cases, including those associated with critical communications infrastructure.
O-RAN may be capable of delivering all of this, but we’re not quite there yet. Because the technology is new and
multiple vendors are involved, testing and maintaining the resilience and performance of the entire network
chain is crucial.
O-RAN made substantial progress in 2020, and during 2021 has the potential to become a mainstream technology.
There’s already been a lot of positive chatter in the media and the telecoms community. In the first weeks of
January, for example, Deutsche Telekom, Orange, Telefonica, and Vodafone signed a memorandum of understanding
outlining their commitment to a framework for the creation of a European Open RAN ecosystem.
The move to O-RAN now seems inevitable, meaning a ‘do or die’ mentality is gaining momentum across the
industry. Business models will be re-defined and new opportunities will be unlocked – for CSPs and vendors. This
year, we’ll see O-RAN network developments and rollouts as well as a continuation of the great work achieved at
last year’s Plugfest.
VIAVI has been enabling NEMs and operators to test and validate new O-RAN equipment and networks, meeting
varied customer requirements. Our leading position in testing this new RAN architecture has been achieved
thanks to our history and expertise working with NEMs and operators, and in defining benchmarks for how high-
performance networks must operate. There is much scope to re-use benchmarking tests developed for 4G and 5G
RAN in O-RAN benchmarking scenarios, particularly those that involve 4G-5G interactions.
Now, the situation is changing. The market is opening up, architectures are being virtualized, and there’s an ongoing
need to incorporate new 3GPP features. It is understandable that many NEMs will find it challenging to transition
to this new and more open ecosystem.
Disaggregation and opening the market up to more vendors requires integrating elements and testing with
different players in the ecosystem. This is very different from the traditional RAN testing scenario, which involved
just one vendor that could quickly and easily run test cases through a carefully managed software delivery process.
With network components disaggregated and connected with open interfaces, it can now be a lot more difficult to
get everything tested and working together, end-to-end. The number of players involved in the process creates a
further difficulty: that of accountability. If an issue is identified during testing, whose fault is it? Playing the blame
game will not advance O-RAN and virtualization – quite the opposite.
It is therefore key for systems integrators (SIs), and all others involved in end-to-end testing, to provide visibility
into the disaggregated network and to devise means of resolving issues. This will help ensure performance KPIs are
met, and that individual network elements meet or exceed the standard set by pre-O-RAN architectures.
[Figure: VIAVI Lab To Field O-RAN test portfolio. Lab: end-user simulation (4G and 5G device and application emulation), O-RU (OFH) test, O-DU (OFH) test, RIC test, O-CU (F1) test, O-DU test, core test, and RAN test across the SMO/RIC, O1/O2/E2, E2, X2, Xn, and S1 interfaces. Field: 3Z RF Vision antenna alignment, Antenna Motion Processor (AMP) for RF beam and antenna motion analysis, CellAdvisor 5G network tester, T-BERD/MTS-5800 for X-haul, latency, and timing and sync testing, and OneAdvisor-800 all-in-one cell site installation and maintenance test tool for fiber, coax, and RF. Plus assurance.]

It’s one thing to ensure network interoperability, yet it’s another to obtain the performance and guarantee
throughput, latency and quality measures at the transport layer and the application layer, at scale.
For this, using pre-O-RAN testing as a reference point – a benchmark – can help. When we test O-RAN
components end-to-end, it is possible to prove that the throughput, latency, and other KPIs are the same as – or
better than – conventional aggregated RAN performance. With over 20 years of working with all the NEMs,
VIAVI has accumulated massive experience in this area. There’s no need to totally reinvent the wheel: instead,
we can re-use test cases, meaning new entrants don’t have to start from scratch, and they can learn from and
leverage established benchmarks.
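
As a minimal illustration of how an established benchmark can be re-used, the sketch below compares end-to-end KPIs measured on an O-RAN setup against values captured on a conventional aggregated RAN. The KPI names and figures are hypothetical placeholders, not outputs of any specific VIAVI tool.

```python
# Minimal sketch: compare measured O-RAN end-to-end KPIs against a pre-O-RAN
# baseline. All names and numbers are illustrative assumptions only.

baseline = {                       # KPIs captured on the traditional, aggregated RAN
    "dl_throughput_mbps": 850.0,   # higher is better
    "ul_throughput_mbps": 110.0,   # higher is better
    "user_plane_latency_ms": 8.0,  # lower is better
}
measured = {                       # the same KPIs on the disaggregated O-RAN setup
    "dl_throughput_mbps": 870.0,
    "ul_throughput_mbps": 105.0,
    "user_plane_latency_ms": 7.5,
}
lower_is_better = {"user_plane_latency_ms"}

def meets_benchmark(kpi, measured_value, baseline_value):
    """True if the O-RAN measurement is as good as or better than the baseline."""
    if kpi in lower_is_better:
        return measured_value <= baseline_value
    return measured_value >= baseline_value

for kpi, baseline_value in baseline.items():
    verdict = "PASS" if meets_benchmark(kpi, measured[kpi], baseline_value) else "FAIL"
    print(f"{kpi}: measured={measured[kpi]} baseline={baseline_value} -> {verdict}")
```

In this toy run the downlink throughput and latency pass while the uplink throughput fails, which is exactly the kind of per-KPI verdict that re-used benchmark test cases are meant to produce.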
Since the O-RU is now a distinct network entity, it is necessary to ensure that performance is acceptable at both the
transport and application layers. RF performance, and interoperability performance with the O-DU, should
remain aligned with the cost and performance requirements of the network. Synchronization and timing issues
at the O-RAN fronthaul layer should not affect any applications, especially latency-sensitive applications such
as voice over NR. Again, it’s understandable that NEMs may be very sensitive about performance: they have
invested heavily in optimizing RU performance, and that optimized performance serves as the baseline for evaluating the O-RU.
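
As a simple, hedged illustration of the kind of check involved, the sketch below compares captured fronthaul time-error samples against an assumed synchronization budget. The 1.5 µs figure and the sample values are illustrative assumptions, not limits taken from the O-RAN or ITU-T specifications.

```python
# Minimal sketch: check fronthaul time-error captures against an assumed budget.
time_error_budget_us = 1.5                            # assumed budget in microseconds
time_error_samples_us = [0.2, -0.4, 1.1, -1.7, 0.3]   # hypothetical captured samples

violations = [te for te in time_error_samples_us if abs(te) > time_error_budget_us]
worst = max(abs(te) for te in time_error_samples_us)

print(f"worst-case |time error| = {worst:.2f} us (budget {time_error_budget_us} us)")
if violations:
    # Out-of-budget timing can surface as jitter or dropped symbols and degrade
    # latency-sensitive services such as voice over NR.
    print(f"{len(violations)} sample(s) exceed the budget - investigate O-RU/O-DU sync")
else:
    print("fronthaul timing within the assumed budget")
```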
Another aspect that NEMs and operators must consider is conformance testing. This involves performing
specific tests according to O-RAN specifications, e.g. conformance testing of the O-RU at the transport layer.
However, conformance testing alone is not enough. Think of it like testing white goods to check that they
conform to industry standards. You do the necessary checks, the product passes, and you stamp it with a ‘CE’
mark. All good? Not quite. You then try and switch it on and although it meets the required specs, the product
doesn’t actually work.
It’s a similar story with conformance testing. Infrastructure might meet certain standards when elements are
tested independently, but when it comes to interoperability, there’s no way of knowing whether what you’re
testing will function as part of an end-to-end architecture, alongside other elements. Different NEMs and
operators will have different requirements in terms of the functionality of the O-DU, e.g. different frequency
bands, MIMO schemes, etc. So infrastructure must be tested to ensure interoperability with these requirements.
While conformance testing is important, NEMs and operators must go beyond that. Equipment needs to be
tested end-to-end, plugged into different O-DUs, and with different vendor equipment (and different vendor
requirements).
For NEMs, the greater scope of components means that it is important to select scalable test tools that support
not just conformance testing but can also be upgraded to perform tests at the higher layers, e.g. the 5G stack.
This will support a range of real-world test scenarios, including large numbers of UEs, mobility, mixed
traffic types, and maximum data rates.
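
To make the combinatorial nature of this testing concrete, the sketch below enumerates a simple interoperability test matrix across hypothetical vendor pairings, bands, MIMO schemes, and traffic profiles; all names are placeholders rather than real products or operator requirements.

```python
# Minimal sketch: build an interoperability test matrix over assumed dimensions.
from itertools import product

o_ru_vendors     = ["RU-vendor-A", "RU-vendor-B"]          # hypothetical O-RU suppliers
o_du_vendors     = ["DU-vendor-X", "DU-vendor-Y"]          # hypothetical O-DU suppliers
bands            = ["n78", "n41"]                           # example NR bands
mimo_schemes     = ["2x2", "4x4"]
traffic_profiles = ["max_dl_rate", "mixed_voice_data", "high_mobility"]

test_matrix = list(product(o_ru_vendors, o_du_vendors, bands,
                           mimo_schemes, traffic_profiles))

print(f"{len(test_matrix)} interoperability test cases, for example:")
for case in test_matrix[:3]:
    print("  ", case)
```

Even this small example yields 48 combinations, which is why scalable, automatable test tooling matters once multiple vendors and configurations are in play.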
End-to-end benchmark testing at scale ensures the overall throughput for different capacity and mobility use
cases is as good or better than with traditional RAN.
To get this efficiency at scale – and to benefit from reduced costs – it’s key that SIs and others involved
in end-to-end testing can identify the cause of issues resulting from software updates in any of
the network elements. Upgrading eNB and gNB software and providing bug fixes present operational
challenges – especially in such a dynamic, multi-vendor environment. As such, it is essential for
operators to be confident that they’re continuing to deliver the best QoS for their subscribers.
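
One hedged way to picture this is a simple regression check between end-to-end runs taken before and after a software upgrade of a single element; the element name, versions, figures, and tolerance below are hypothetical.

```python
# Minimal sketch: flag KPI regressions between runs before/after one element's upgrade.
before = {"element": "O-DU (vendor X)", "sw": "v1.4.2",
          "dl_throughput_mbps": 860.0, "call_setup_success_pct": 99.6}
after  = {"element": "O-DU (vendor X)", "sw": "v1.5.0",
          "dl_throughput_mbps": 790.0, "call_setup_success_pct": 99.5}

tolerance_pct = 2.0  # run-to-run variation allowed before flagging a regression

for kpi in ("dl_throughput_mbps", "call_setup_success_pct"):
    drop_pct = 100.0 * (before[kpi] - after[kpi]) / before[kpi]
    if drop_pct > tolerance_pct:
        print(f"REGRESSION in {kpi}: {before[kpi]} -> {after[kpi]} "
              f"({drop_pct:.1f}% drop) after {after['element']} upgrade to {after['sw']}")
    else:
        print(f"{kpi}: within tolerance ({drop_pct:+.1f}% change)")
```

Tying each KPI delta to the element and software version that changed between runs is what keeps the multi-vendor environment accountable without descending into a blame game.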
Network disaggregation
O-RAN is introducing standardized open interfaces, enabling disaggregated network elements and the use of
COTS hardware. As a result, you may have to benchmark and test each individual component independently.
The benefit of doing this individual element testing properly is that, when everything is joined up, it all works
without compromising performance. The alternative is discovering issues only later on,
when they are more difficult to isolate, and potentially more expensive to address.
Latency is one such example. It may be quite tedious to pinpoint latency issues originating in the O-CU, and these may
only become apparent when you ramp up the number of C-plane or U-plane messages. You may also find that,
for a 4x4 MIMO scheme, the throughput appears similar to a 2x2 MIMO scheme because of an increased
number of retransmissions, which may not be straightforward to isolate.
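
A rough, first-order model makes the retransmission effect easy to see: treating effective throughput as the peak rate divided by the average number of HARQ transmissions per transport block, a 4x4 link with heavy retransmissions can deliver roughly the same effective throughput as a clean 2x2 link. The figures below are purely illustrative.

```python
# Minimal sketch: first-order effect of retransmissions on effective throughput.
def effective_throughput(peak_mbps, avg_tx_per_block):
    # Each extra HARQ transmission of a transport block consumes air-interface
    # capacity without delivering new data, so divide the peak rate by it.
    return peak_mbps / avg_tx_per_block

# Hypothetical figures: 4x4 MIMO doubles the 2x2 peak rate, but a poor radio or
# beam configuration drives heavy retransmissions on the 4x4 link.
mimo_2x2 = effective_throughput(peak_mbps=500.0,  avg_tx_per_block=1.05)
mimo_4x4 = effective_throughput(peak_mbps=1000.0, avg_tx_per_block=2.0)

print(f"2x2 effective throughput ~ {mimo_2x2:.0f} Mbps")
print(f"4x4 effective throughput ~ {mimo_4x4:.0f} Mbps")
# Despite the doubled peak rate, the two configurations end up roughly level,
# which is why retransmission statistics matter when benchmarking components.
```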
Preserving or improving on KPIs
Since the network elements are from different vendors, it is important to ensure KPIs are preserved both at the
individual element level and at the system level when all the elements interoperate. The Open Test and Integration
Center (OTIC) labs will help this cause, but what’s really needed is experience in testing traditional base stations, as many of
the performance measures will still apply and therefore serve as important benchmarks. The test cases should be
designed using current KPI measures as a reference. Again, this is hardly re-inventing the wheel; instead, it ensures
efficiency by re-using existing test cases, particularly for end-to-end testing.
However, as they will be working together with other vendors, it’s unlikely that new entrants will be at a significant
disadvantage. On the contrary, they may bring with them new, innovative and more efficient software and hardware
architectures that promote greater flexibility and address the issues of cost and scale. Ensuring
that these innovations do not negatively impact key KPIs is central to successful benchmarking.
To support O-RAN equipment interoperability, the OTIC labs need to validate interactions between disaggregated
5G access infrastructure providers. However, it’s not just a case of testing the interoperability of distinct vendor
technology as part of an O-RAN set-up. It is in the best interest of operators to see how this technology interacts
with legacy 4G equipment in the network, and how it responds to different UE environments, e.g. handover between
4G and 5G.
Operators must also be able to identify, isolate and resolve network performance issues before network deployment.
They need a test portfolio that offers emulation of many components of the network including UE, RAN and Core,
and wrap-around of major functions over multiple interfaces. There are a number of ways to test in the lab, including:
•	Multi-vendor interoperability testing, which ensures that the mix of components works functionally, and tests
for performance, resilience and stability. When new features are added, benchmarking ensures that, even if they
require more processing power, the KPI measures are preserved or improved.
•	Subsystem (wrap-around) testing: this could be highly valuable for specialist vendors that are targeting a niche
market with their offering but don’t have end-to-end capabilities in-house.
•	Holistic evaluation of multiple RAN deployment options (RAN disaggregation topologies for different services
such as eMBB, URLLC, etc.).
•	Benchmarking to ensure consistent performance at the transport and application layers. Monitoring allows
you to see where the peaks are, where there are bottlenecks, and particularly where the KPIs being measured are
getting very close to their thresholds. This helps determine where to focus optimization, e.g. on the user plane
or the control plane, as sketched below.
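
As a minimal sketch of that last point, the snippet below ranks monitored KPIs by their remaining headroom to an assumed threshold, so the KPI closest to its limit (and the plane it belongs to) stands out as the first optimization target. All KPI names, thresholds, and readings are hypothetical.

```python
# Minimal sketch: rank monitored KPIs by remaining headroom to their thresholds.
kpis = [
    # (name, plane, current value, threshold, lower_is_better)
    ("user_plane_latency_ms",    "user",    9.2,   10.0,  True),
    ("registration_success_pct", "control", 99.1,  98.0,  False),
    ("dl_throughput_mbps",       "user",    640.0, 500.0, False),
]

def headroom_pct(value, threshold, lower_is_better):
    """Remaining margin to the threshold, as a percentage of the threshold."""
    margin = (threshold - value) if lower_is_better else (value - threshold)
    return 100.0 * margin / threshold

# KPIs with the least headroom are listed first, flagging where to optimize.
for name, plane, value, threshold, lower in sorted(
        kpis, key=lambda k: headroom_pct(k[2], k[3], k[4])):
    print(f"{name} ({plane} plane): {headroom_pct(value, threshold, lower):.1f}% headroom")
```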
TeraVM enables NEMs and service providers to efficiently test mobile RAN and core elements, validating that the
equipment works according to O-RAN and 3GPP standards, interoperates with other 5G elements, and performs
optimally when fully loaded with complex mobile traffic profiles. TeraVM is part of the VIAVI Lab To Field network
testing and assurance portfolio as well as its Test Suite for O-RAN Specifications.
[Figure: O-RAN test topology showing end-user, O-RU, O-DU, O-CU, RIC, core, and RAN emulation and test points across the SMO/RIC, O1/O2/E2, E2, X2, Xn, and S1 interfaces, from lab to field and assurance.]
In order to instil that all-important confidence in operators, VIAVI works to simplify the complexity of testing
different O-RAN interfaces across the network. Our end-to-end solutions span the full lifecycle and include fully
virtualized, first-to-market NFV tools that can run in the cloud or on commercial off-the-shelf hardware. We also
provide core test emulation, field-portable instruments, and assurance products that provide visibility to operators
once their network is live and starting to scale.
VIAVI plays an active role in contributing to the development of O-RAN standards. We were also part of the first
global Plugfest, an event conducted to foster adoption of open and interoperable 5G and 4G RAN. We participate
in a number of standards bodies and workgroups which are setting the agenda for the future of O-RAN, including
ITU-T, 3GPP, ONAP, and IEEE. Finally, we’re committed and will continue to invest in multi-vendor interoperability
test efforts, as part of O-RAN Alliance-led initiatives.
Ultimately, the best approach is to view O-RAN as a cultural change and a movement toward openness. We’re only at
the early stages of this evolution, and technology, attitudes and approaches are still to mature. In order to continue
on the O-RAN trajectory – and for operators to realize all those hoped-for benefits – confidence, commitment and
cooperation from and between operators are needed.
Cooperation is a crucial element here. Testing will involve integrating different components from different vendors.
Compatibility, as well as consistency and quality of service, must be assured, and while doing so we must avoid
slipping into a ‘blame game’ culture. The Plugfest was a great example of this. The event, which took place
in December and in which VIAVI played a key role, showcased “the power of the community to document and
develop truly open interface specifications with the potential to accelerate new 5G advanced services that will be
multi-operator, multi-vendor, and create opportunities for the entire ecosystem” (Andre Fuetsch, Chairman of the
O-RAN Alliance and Chief Technology Officer of AT&T).
O-RAN development and testing is an ongoing process that will change and develop in line with new features.
VIAVI has a history of supporting NEM and operator partners through decades of network and telecoms evolution.
By collaborating with leading O-RAN authorities and pioneering the most advanced test equipment, we’re well
suited to support operators in getting the most out of this next phase of network evolution.
viavisolutions.com