Synchronization Distribution in 5G Transport Networks
The world is moving to 5G, which offers a wide range of new services beyond the voice
and data combination that was the primary service offering in the first four generations
of mobile technology. This latest generation of mobile networks will expand service
offerings into highly reliable and low-latency services that will potentially revolutionize many areas of industry and our day-to-day lives. To deliver the higher performance that these new services require, all aspects of the mobile network will need modernization, including the DWDM-based mobile transport network that underpins the end-to-end mobile network.
Table of Contents

INTRODUCTION
  The Importance of Synchronization in 5G Networks

SECTION 1: Understanding Synchronization and Synchronization Distribution
  Synchronization Basics
  Evolution of Synchronization Requirements
  Synchronization Delivery Mechanisms
  Frequency Synchronization Standards
  Phase Synchronization Standards
  ITU-T G.8271.1 Network Limits
  ITU-T G.8273.2 PTP T-BC Classes
  ITU-T G.8275.1 Full On-path Support and G.8275.2 Partial On-path Support
  3GPP TS 38.104 Time Alignment Error
  Putting It All Together to Provide 5G-Quality Synchronization
  Getting Synchronization Right

SECTION 2: Infinera’s Sync Distribution Solution for End-to-End Synchronization Delivery
  Synchronization in the IP Layer
  Synchronization in Packet Optical Transport in Metro and Regional Networks
  Packet Optical Transport with Layer 2 Ethernet/eCPRI Switching
  DWDM Transport
  Synchronization in DWDM Transport over Regional, Long-haul, and Legacy Networks
  End-to-End Sync Planning and Management
  Summary
  Further Reading
The new Phase 2 5G services, especially ultra-reliable low-latency communications (uRLLC) services, will drive significant changes in the overall mobile network architecture, as well as in the mobile transport network that connects the cell tower to core processing resources. These architectural changes include lower latency through multi-access edge computing (MEC), new network slicing capabilities, and better synchronization performance to support new 5G RAN functionality like carrier aggregation (CA), as well as earlier 4G/LTE-A functionality that is now being rolled out in 4G/5G networks, such as coordinated multipoint (CoMP).
This e-book provides an overview of synchronization distribution and of Infinera’s approach to this challenging environment. It is split into two major sections to enable readers to quickly navigate to the most relevant content, or the complete e-book can be read sequentially if preferred. The first section covers the background to network synchronization in mobile networks: why synchronization is needed and how it works. The second section outlines Infinera’s end-to-end Sync Distribution Solution and the benefits that its breadth and enhanced performance are bringing to mobile operators across the globe as they build out 5G networks.
SECTION 1: Understanding Synchronization and Synchronization Distribution
Figure 1: Synchronization distribution across the mobile transport network – a grandmaster (T-GM) clock/primary reference timing clock (PRTC) delivering timing to RUs, DUs, and CUs over the fronthaul, midhaul, and backhaul networks

Figure 2: Sync signals compared – frequency sync (same time interval), phase sync (same time interval and pulses aligned), and time sync (pulses aligned and equal timestamps on each sync signal)
Evolution of Synchronization Requirements

2G, 3G, and initial releases of 4G all use frequency-division duplex (FDD) as the underlying transmission within the RAN. FDD uses two separate frequencies for upstream and downstream communication, and these networks require tight frequency synchronization to ensure the correct frequencies are used and that these frequencies can be tightly packed to achieve efficient use of the available spectrum. Tight alignment to the planned frequencies for a cell also ensures that regulatory commitments are met in terms of spectrum licenses and enables smooth handover of calls to adjacent cells.

Figure: FDD operation

TDD-based networks retain the frequency synchronization performance that was required in the FDD domain and add phase and time synchronization, as outlined earlier in Figure 2.

Phase synchronization quality is measured by the time difference between the timing signals and is typically represented in microseconds (µs) or even nanoseconds (ns). Initial use of TDD in 4G LTE networks drove a requirement for phase synchronization accuracy of 1.5 µs. As will be discussed in the following sections, 5G has tightened this requirement, especially for the relative difference between adjacent cell sites, where it can be as low as just 60 ns.
In theory it is possible to build networks with only 1588v2, using the G.8265.1 PTP frequency specification, but the vast majority of modern 1588v2-capable equipment also supports SyncE or eEEC, and most networks today will use a “SyncE assist” mode, which will improve PTP performance to varying levels. IEEE 1588v2 contains a range of standard definitions of differing classes of devices with varying capabilities and performance levels that together can build a synchronization delivery network. The most common of these are:

■ Grandmaster (GM), also called a telecom grandmaster (T-GM) in ITU-T specifications – A GM clock is typically located in the core of the mobile network. In the core, the T-GM is typically the PRTC, but in synchronization networks built over different synchronization domains, the T-GM is the master clock at the start of a PTP domain.

■ Boundary clock (BC), also called a telecom boundary clock (T-BC) in ITU-T specifications – A device with a built-in PTP clock client and PTP master interconnected with a local clock. This enables a network node (typically a router or Ethernet switch) to synchronize the local clock to the upstream T-GM/T-BC and act as a master to any downstream client clock. Many modern Ethernet switching devices now contain T-BC functionality, whereas earlier implementations often had T-BC capabilities via an external “sync box” that added this capability to the node.

■ Transparent clock (TC), also called a telecom transparent clock (T-TC) in ITU-T specifications – A device with the capability to measure any delay created internally within the device by switching or routing functions and correcting the timestamp in outgoing PTP packets to compensate for this internal delay. This gives the effect of reducing the impact of the node on the PTP stream at a lower cost than T-BC capabilities. However, it should be noted that T-TC performance is significantly lower than that achieved by T-BC devices, with a lower level of compensation, issues with longer chains of nodes, and a more restricted range of supported network architectures than T-BC-enabled networks.

■ Time slave clock (TSC), also called a telecom time slave clock (T-TSC) in ITU-T specifications – The end device that receives the clock information, typically a BBU in a 4G LTE network or a DU or RU in a 5G network. Also called a clock client.

To enable 1588v2 to be deployed in telecom networks, the ITU-T has defined a range of specifications that ensure that the mechanism defined in 1588v2 can meet the demanding requirements, especially those of TDD-based mobile networks. These specifications outline the available time error budget, or in other words, the maximum allowed phase error in µs or ns, and how this budget is allocated across the network elements and the performance specifications of specific devices. All these specifications are important, and the most significant of them are as follows:

ITU-T G.8271.1 Network Limits

As previously mentioned, TDD networks, either 4G LTE or 5G, require 1.5 µs maximum time error at the cell site to ensure compliant operation. The maximum absolute time error (Max|TE|) is subdivided into smaller error budgets for differing segments of the network, as shown in Figure 4 for an example 10-hop network.
Figure 4: G.8271.1 time error budget allocation between reference points A/B, C, and D – ±100 ns (PRTC/T-GM); ±550 ns cTE for node asymmetry (11 nodes at ±50 ns per node); ±250 ns cTE for link asymmetry compensation; ±200 ns dTE (random network variation); ±250 ns (short-term holdover); ±150 ns (end application)
This allocation of time error allows for a total of 1,000 ns for the transport network between the T-GM and the T-TSC at the cell site, as shown between reference points B and C in Figure 4. This time error budget is largely taken up by asymmetry in the nodes and the links (fibers). Managing this asymmetry is of paramount importance in building a 5G-quality mobile transport network. The remaining budget includes ±100 ns for the PRTC/T-GM; ±150 ns for the end application, which is essentially the base station in a mobile network; and ±250 ns for short-term holdover in the base station to allow for switching to an alternative PRTC/T-GM in failure scenarios, etc.

The primary reason that asymmetry management is so important is that 1588v2 fundamentally assumes that the network is symmetrical, with exactly the same delay in both directions. Understanding the transit time from the T-GM to the T-TSC is a critical part of 1588v2 operation, and this is determined by measuring and then halving the time for a PTP packet to go from the T-GM to the T-TSC and back again. In a totally symmetrical world, this method would give an accurate calculation, but in reality, as we will discuss at length later in the e-book, there are many sources of asymmetry in transmission networks that impact this measurement and need to be managed to enable 1588v2 operation in telecom networks. The situation is further complicated because these time/phase errors are not static over time. Therefore, Max|TE| is calculated from understanding both the constant time error (cTE) of a node, link, or network and the corresponding dynamic time error (dTE), as shown in Figure 5.

Time error (TE) at any given time is the sum of cTE and dTE, as shown in green in Figure 5. Max|TE| is the maximum observed absolute value of TE in the network measured from zero; it is represented as a time, usually in ns, and is always a positive figure. cTE, shown in orange, is constant time error, which again is represented as a time figure in ns and can be either a positive or negative figure. For network components with a static error, such as optical fiber, cTE is the same at any instant in time. For network components with a more dynamic nature, such as an IP router, the standards define that cTE is calculated using an average measurement of time error over a 1,000-second period. dTE is also represented in ns, although as it varies over time, it is usually specified as maximum time interval error (MTIE) over the observation period, as shown in blue in Figure 5. Max(dTE) is the maximum dTE measured from the cTE, and Min(dTE) is the minimum dTE, again measured from the cTE, giving a negative value.

Figure 5: Maximum time error relationship to constant and dynamic time errors – TE, cTE, dTE, Max|TE|, and MTIE = Max(dTE) – Min(dTE) plotted against time
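To make these relationships concrete, the short Python sketch below derives the same quantities from a set of time error samples. It is illustrative only: the sample values are invented, and the simple average here stands in for the 1,000-second averaging window that the standards specify for cTE.

```python
# Illustrative sketch: deriving cTE, dTE, Max|TE|, and MTIE from time error samples.
# The time error samples below are invented for the example (ns, vs. the reference clock).

te_samples_ns = [42.0, 47.5, 39.0, 51.0, 44.5, 40.5, 49.0, 43.0]

# cTE: the constant part of the time error, estimated as the average TE
# (the standards average over a 1,000-second window).
cte_ns = sum(te_samples_ns) / len(te_samples_ns)

# dTE: what remains once the constant part is removed.
dte_ns = [te - cte_ns for te in te_samples_ns]

max_abs_te_ns = max(abs(te) for te in te_samples_ns)   # Max|TE|, measured from zero
mtie_ns = max(dte_ns) - min(dte_ns)                    # MTIE = Max(dTE) - Min(dTE)

print(f"cTE     = {cte_ns:.1f} ns")
print(f"Max|TE| = {max_abs_te_ns:.1f} ns")
print(f"MTIE    = {mtie_ns:.1f} ns")
```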
Looking at the mobile transport network, the main consideration in synchronization-friendly network design is managing both the constant and dynamic time errors throughout all network components, paying particular attention to the asymmetry. The main contributors to time error in optical transport networks can be summarized as follows:

■ Fiber asymmetry within the network. DWDM is typically unidirectional, with each fiber being used for transmission in one direction only and a fiber pair being used for a bidirectional transmission channel. Differences in the lengths of the fibers over the route will create a constant time error. Differences occur in outside plant fiber, patch cable length, repair splicing, etc. Each meter of fiber length asymmetry creates 5 ns of additional latency with a corresponding 2.5 ns of cTE (a numeric sketch of this effect follows this list). This asymmetry is predominantly static but will change when fibers are repaired following fiber cuts or when patch cables are changed during network maintenance or reconfiguration.

■ Dispersion compensation for non-coherent DWDM. Many access networks either are not yet using coherent optics or mix coherent with 10 or 25 Gb/s on/off-keyed optics that require dispersion compensation. Dispersion compensation based on dispersion-compensating fiber (DCF) is most common and uses lengths of fiber cut to meet a dispersion requirement rather than of constant length. Variable length creates variable cTE issues in synchronization networks. Dispersion compensation modules (DCMs) based on fiber Bragg gratings rather than fiber remove this issue, but these are less common in brownfield networks due to their higher cost.

■ First-in first-out (FIFO) buffers in coherent optics. DWDM optics operating at 100 Gb/s and above use coherent optics that contain FIFO buffers within the digital signal processor (DSP). These buffers have a random latency/delay upon initial startup, which varies in each optical interface and therefore varies in each direction, creating asymmetry. This creates a random time error that is constant (cTE) over the shorter term but can sometimes be dynamic (dTE) over the longer term if there are restarts on a link due to intentional network maintenance or unintentional network events such as fiber cuts or power grid failures. These events are not a common occurrence on an individual link in an operational network, but the size of the random cTE that can be created on initial startup and in restarts can be significant.

■ DWDM transponders and muxponders based on OTN mapping. OTN mapping chips also utilize FIFO buffers, which have a latency that varies on initial startup and restarts. These deep FIFO buffers are used in OTN mapping to enable the devices to accommodate a wide range of service types and can cause an even larger latency/delay than those in coherent optics. As with the FIFO buffers in coherent optics, the figures here do not vary once the network is up and running, but the size of this error is random across a large range, created on initial startup and every restart, and differs in each direction.

■ Time error in IP routers and Ethernet switches. Asymmetry within the router/switch can be created through inaccuracies in timestamping. There are strict T-BC requirements on the specification for these devices for all aspects of time error, which are covered below in the ITU-T G.8273.2 section.
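As a simple illustration of why this asymmetry matters (the path lengths below are hypothetical, not taken from a real network), the sketch estimates the error that forward/reverse asymmetry introduces into the PTP offset calculation: PTP assumes the one-way delay is half the round trip, so an asymmetry of Δ between the two directions produces an error of Δ/2, which is why each meter of fiber length difference (about 5 ns of delay) contributes roughly 2.5 ns of cTE.

```python
# Illustrative sketch: cTE introduced by path asymmetry (all values are hypothetical).

NS_PER_METER_OF_FIBER = 5.0  # roughly 5 ns of propagation delay per meter of fiber

def offset_error_ns(forward_delay_ns: float, reverse_delay_ns: float) -> float:
    """PTP assumes one-way delay = round trip / 2, so the uncompensated
    error equals half of the forward/reverse asymmetry."""
    return (forward_delay_ns - reverse_delay_ns) / 2.0

# Example: a 100 m length difference between the two fibers of a DWDM fiber pair
# (e.g., a hypothetical repair splice or patch cable difference) on a ~100 km route.
length_diff_m = 100.0
forward_ns = 500_000.0                                   # nominal 100 km path at ~5 ns/m
reverse_ns = forward_ns + length_diff_m * NS_PER_METER_OF_FIBER

cte_ns = offset_error_ns(forward_ns, reverse_ns)
print(f"{length_diff_m:.0f} m of fiber asymmetry -> {abs(cte_ns):.0f} ns of cTE")
# 250 ns of cTE would alone consume the ±250 ns link asymmetry allowance shown in Figure 4.
```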
Overall, these elements can be summarized as follows:
Contributor | Source | Impact | Range
Fiber | Asymmetry in fiber lengths, jumper cables, etc.; cTE of 2.5 ns/m | Large but predominantly static | Fixed cTE of ±5 to 1,000+ ns
Dispersion compensation | Random asymmetry in the DCF used in each direction | Very large but predominantly static | Fixed cTE of ±5 to 20,000 ns
Coherent optics | FIFO buffers in each DSP; varies on restart | Varying and random | Random cTE per device/interface of ±20 to 130 ns on restart
OTN mapping | Deep FIFO buffers in OTN mapping; varies on restart | Large and random | Random cTE per device/interface of ±20 to 1,000 ns on restart
IP routing and Ethernet switching | Timestamping inaccuracy | Tight requirements to control impact | Class A/B/C specifications: Max|TE| of 30 to 100 ns; cTE of 10 to 50 ns; dTE (low-pass-filtered) noise generation (MTIE) of 10 to 40 ns
Figure: Time error sources on a DWDM link between two Class C T-BC Ethernet switches – fiber, dispersion compensation, coherent optics, and OTN mapping
Returning to the network limits outlined in G.8271.1 and the allocation within this for node and link asymmetry, it is clear
that careful design of the underlying DWDM-based transport network is required. The dTE elements of Max|TE| are largely
generated by switching/routing devices that can be managed through the use of G.8273.2-compliant devices. The cTE
elements of Max|TE| are either large static figures that can potentially be compensated for within boundary clocks or random
elements from coherent optics and OTN mapping. These random cTE elements can be managed through the careful
selection of optimized packet optical and DWDM devices with a significantly lower, and more acceptable, level of random
cTE, or through optical timing channel techniques that can bypass these elements totally. Without the careful management
of dTE and both static and random cTE across the complete end-to-end 5G transport network, these time error limits can be
costly and very hard, if not impossible, to achieve.
G.8271.1 defines ±200 ns of dTE for random network variation and ±800 ns of cTE asymmetry error, split between ±550 ns for
nodes and ±250 ns for the overall end-to-end link for a Type A network with Class A boundary clocks.
The table below compares the G.8273.2 T-BC Classes against various parameters. Due to the more dynamic nature of dTE,
multiple parameters are defined in G.8273.2 and multiple measurements are required to classify dTE performance. MTIE, as
outlined earlier, is the maximum error measured against the reference clock for the specified time interval. Time deviation
(TDEV) is a measurement of the phase stability of a signal over a given period of time. MTIE and TDEV are used together to
give a measurement of dTE requirements and performance.
The original G.8273.2 specification included Class A and B T-BC specifications, and later revisions added new Class C and Class D specifications to support tighter 5G requirements, especially in mobile fronthaul networks. Note that Classes A, B, and C have an unfiltered value for Max|TE|, whereas Class D uses a low-pass-filtered value. Class D also does not contain cTE and dTE specifications, as the overall low-pass-filtered Max|TE| of 5 ns is such a tight requirement that any combination of cTE and dTE is permissible as long as the overall Max|TE| specification is met.
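As an illustrative sketch of how these classes are applied, the snippet below checks a hypothetical set of measurements against per-class limits. The limit values are the commonly cited G.8273.2 figures for Classes A, B, and C; consult the standard for the authoritative numbers, and note that Class D is specified only through its 5 ns low-pass-filtered Max|TE|.

```python
# Illustrative sketch: checking a measured boundary clock against G.8273.2 T-BC classes.
# Limits (ns) are the commonly cited values; consult G.8273.2 for the authoritative figures.
# Class D is omitted because it is specified only via a 5 ns low-pass-filtered Max|TE|.

TBC_CLASS_LIMITS_NS = {
    "A": {"max_te": 100, "cte": 50, "dte_mtie": 40},
    "B": {"max_te": 70,  "cte": 20, "dte_mtie": 40},
    "C": {"max_te": 30,  "cte": 10, "dte_mtie": 10},
}

def meets_class(measured: dict, target_class: str) -> bool:
    limits = TBC_CLASS_LIMITS_NS[target_class]
    return (measured["max_te"] <= limits["max_te"]
            and abs(measured["cte"]) <= limits["cte"]
            and measured["dte_mtie"] <= limits["dte_mtie"])

# Hypothetical measurement of a boundary clock under test.
measured = {"max_te": 24.0, "cte": 6.5, "dte_mtie": 8.0}
for cls in "ABC":
    print(f"Class {cls}: {'pass' if meets_class(measured, cls) else 'fail'}")
```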
Figure: G.8271.1 time error budget by T-BC class – ±100 ns (PRTC/T-GM), ±200 ns dTE (random network variation), ±250 ns (short-term holdover), and ±150 ns (end application) in all cases, with node/link cTE allocations of ±550/±250 ns for Class A T-BCs (11 nodes at ±50 ns per node), ±420/±380 ns for Class B T-BCs (21 nodes at ±20 ns per node), and ±210/±590 ns for Class C T-BCs (21 nodes at ±10 ns per node, leaving ±590 ns for link asymmetry compensation)
As discussed in the previous section, G.8271.1 specifies ±550 ns for node asymmetry and ±250 ns for link asymmetry (±800 ns in total) in what it calls Type A networks. Type A networks can contain up to 11 nodes, so 10 links, with ±50 ns cTE per node, hence ±550 ns total node asymmetry. However, the G.8271.1 specification allows network operators to take advantage of T-BC network nodes with better cTE performance with Type B and Type C networks. Type B and Type C networks support 21 nodes, so 20 links, with ±20 ns and ±10 ns node cTE respectively, which reduces the total node asymmetry to ±420 ns and ±210 ns. In turn, this increases the possible link asymmetry to ±380 ns (Type B) and ±590 ns (Type C) while maintaining the overall ±800 ns total node and link cTE. dTE must still be managed within the available ±200 ns random network error budget. Therefore, it is highly desirable to take advantage of T-BC nodes with Class B or ideally Class C performance and lower cTE within a mobile transport network when they are available, so as to enable a Type C G.8271.1 network with its higher allocation of cTE to the link elements of the transport network, such as fiber asymmetry and asymmetry within DWDM devices.
ITU-T G.8275.1 Full On-path Support and G.8275.2 Partial On-path Support

The ITU-T has defined two phase profiles for PTP networks. The first is G.8275.1, which provides full on-path support for PTP with boundary clocks at each IP routing or Ethernet switching node in the network. G.8275.1 full on-path support uses Layer 2 multicast Ethernet as the main delivery mechanism and recommends the use of SyncE to assist in locking T-BC nodes to a stable frequency. This profile is intended for all networks where new hardware is being deployed, including both greenfield deployments and cases where new routing or switching hardware is being added to a network to support higher capacity or performance for 5G. Overall, this approach provides superior PTP performance and is recommended for any new network buildout where mobile traffic is planned.

To allow for the fact that not all upgrades involve complete upgrades to all routing and switching hardware in the network, the ITU-T also developed the G.8275.2 partial on-path support profile. Partial on-path support uses Layer 3 unicast IP as the main delivery mechanism. This profile uses T-BC functionality at intermediate nodes and, wherever possible, T-TC functionality at other nodes to reduce noise and improve performance over generic Ethernet clock-enabled devices. Some operators are unable to utilize G.8275.1, with its better performance, and therefore need to utilize G.8275.2 partial on-path support when they upgrade older networks. Generally speaking, G.8275.2 is not recommended for 5G synchronization distribution due to its limitations. It is possible to use assisted partial timing support (APTS) mode to help mitigate the impact of issues such as network islands without PTP support, but this is complicated and potentially unreliable. In order to minimize the impact of the poorer synchronization performance, these operators often need to introduce more T-GM clocks into the core network to reduce the distance between the T-GM and the cell tower.

The specifications of both G.8275.1 and G.8275.2 contain a range of features that are required to support the profile across a network domain. The level of support for these features by the products deployed within the network will determine the overall level of support for either of the profiles and the corresponding synchronization delivery performance.

3GPP TS 38.104 Time Alignment Error

The ITU-T specifications that have been described so far in this e-book provide the specifications for frequency and phase synchronization delivery through a transport network to meet the requirements for both 4G LTE and 5G TDD mobile networks, with frequency synchronization of 50 ppb at the air interface of the RAN and 16 ppb from the backhaul network, and phase synchronization of 1.5 µs at the cell site.

From a synchronization perspective, fronthaul adds another level of complexity, with the need not only to deliver high-quality frequency and phase synchronization but also to manage relative phase synchronization error between adjacent cell towers within the cluster of cells under the DU. The specific level of relative phase error budget is highly dependent on the functionality being utilized within the RAN and on the specifications defined in the 3GPP’s technical specification (TS) 38.104. TS 38.104 defines the capabilities of eCPRI-based fronthaul networks and includes the maximum relative phase error that is allowed when specific functions are used within the RAN.

The most demanding functionality, such as inter- and intra-band carrier aggregation and the use of MIMO antennas, has very demanding relative phase error budgets in 5G networks. This is specified as a relative |TE|, measured at the UNI of the RU, of as low as 190 ns down to just 60 ns. The corresponding time alignment error (TAE) specification, which is defined as the largest timing difference between any two signals, is 260 ns down to just 130 ns as measured at the antenna, as shown in Figure 8. Of course, this should not be confused with the 1.5 µs absolute phase error requirement, which is still required at every cell site; this is an additional requirement to control the relative phase error between all the cell sites within a 4G BBU or 5G DU cluster. These relative |TE| and TAE specifications also apply to cooperating cell site clusters that are subtended from multiple DUs.

Figure 8: Time alignment error (TAE) measured at the RU antenna
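A sketch of how such a relative budget might be checked is shown below (the per-RU time error values and the 190 ns limit chosen here are illustrative; TS 38.104 defines different limits, down to 60 ns, for different RAN features). The relative |TE| of a cooperating cluster is simply the largest pairwise difference between the absolute time errors of its RUs.

```python
# Illustrative sketch: relative |TE| across a cluster of cooperating RUs.
# The per-RU absolute time errors (vs. a common reference) are invented values.
from itertools import combinations

ru_te_ns = {"RU-1": 410.0, "RU-2": 465.0, "RU-3": 380.0}

relative_te_ns = max(abs(a - b) for a, b in combinations(ru_te_ns.values(), 2))

RELATIVE_TE_LIMIT_NS = 190   # example limit; TS 38.104 budgets range from 190 ns down to 60 ns
status = "within" if relative_te_ns <= RELATIVE_TE_LIMIT_NS else "exceeds"
print(f"Worst-case relative |TE| = {relative_te_ns:.0f} ns "
      f"({status} the {RELATIVE_TE_LIMIT_NS} ns budget)")
# Each site must still individually meet the 1.5 µs absolute phase error requirement.
```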
Putting It All Together to Provide 5G-Quality Synchronization

Providing 5G-quality synchronization is a complex problem that needs careful consideration early in the network design process. Many factors that impact synchronization quality are fundamentally linked to the operational performance of networking hardware and cannot simply be solved at a later stage without replacing substandard networking hardware.

Networking hardware needs to meet the strict requirements outlined in the preceding sections of this e-book and summarized below in Figure 9. Every aspect of the transport network needs to be considered from a synchronization point of view, and each domain within the mobile network must be optimized for synchronization performance as follows:

Midhaul and backhaul networks: IP-centric domains where any IP routers must support the necessary synchronization standards and meet T-BC Class B, or ideally Class C, to enable Type C networks with a larger allocation of cTE budget to support the links connecting the T-BC clocks. Most of the dTE within the network will come from these IP devices and any Layer 2 Ethernet devices in the network, and dTE must be managed and kept within budget across the complete network. cTE must also be managed and kept within budget for the worst-case links, which are often longer routing paths around networks in protection scenarios where the shortest path has failed. cTE is mainly created within the DWDM layer, and therefore careful consideration of the cTE performance of these devices is required. For long DWDM links that interconnect T-BC Class B/C routers or switches, it may well be the case that even Class D T-BC devices are needed at intermediate DWDM nodes in order to maintain the required synchronization performance within the underlying DWDM network.

Fronthaul networks: eCPRI/Ethernet-centric domains where Ethernet switches must support the necessary synchronization standards and meet T-BC Class C to support both the absolute and relative phase error budgets. dTE comes largely from the eCPRI/Ethernet switching devices, which will typically also use Time-Sensitive Networking (TSN) capabilities, such as preemption, to prioritize latency-sensitive packets such as eCPRI fronthaul traffic and PTP packets over other traffic within the network. Again, cTE comes largely from the underlying DWDM layer, and as cTE must be considered from the PRTC to the cell tower, fronthaul cTE must be managed end to end along with the midhaul and backhaul domains.
Figure 9: Summary of per-domain synchronization requirements – the fronthaul (Ethernet/eCPRI) domain requires 3GPP TS 38.104 time alignment error, ITU-T G.8273.2 PTP T-BC Class C, ITU-T G.8275.1 full on-path support, and ITU-T G.8262.1 eEEC synchronous Ethernet for inter-antenna relative phase alignment; the IP/MPLS access, metro, and core DWDM domains, built from IP routers or Ethernet switches with inbuilt T-BCs, require ITU-T G.8271.1 network limits, ITU-T G.8273.2 PTP T-BC Class B/C, ITU-T G.8275.1 full on-path support, and ITU-T G.8262.1 eEEC synchronous Ethernet to deliver TDD/LTE-A and 5G macro phase services traceable to UTC to the 4G/5G T-TSC
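For planning purposes, the Figure 9 summary can also be captured as a simple per-domain checklist; the sketch below is only an illustrative data structure (not an Infinera planning tool) listing which specifications apply where:

```python
# Illustrative checklist of the per-domain requirements summarized in Figure 9.

SYNC_REQUIREMENTS = {
    "Fronthaul (Ethernet/eCPRI)": [
        "3GPP TS 38.104 time alignment error (relative phase budgets)",
        "ITU-T G.8273.2 PTP T-BC Class C",
        "ITU-T G.8275.1 full on-path support",
        "ITU-T G.8262.1 eEEC synchronous Ethernet",
    ],
    "Midhaul/backhaul (IP/MPLS over DWDM)": [
        "ITU-T G.8271.1 network limits (1.5 µs at the cell site)",
        "ITU-T G.8273.2 PTP T-BC Class B/C",
        "ITU-T G.8275.1 full on-path support",
        "ITU-T G.8262.1 eEEC synchronous Ethernet",
    ],
}

for domain, requirements in SYNC_REQUIREMENTS.items():
    print(domain)
    for requirement in requirements:
        print(f"  - {requirement}")
```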
SECTION 2: Infinera’s Sync Distribution Solution for End-to-End Synchronization Delivery
Synchronization in Packet Optical Transport in Metro and Regional Networks

To interconnect the devices within the IP layer, DWDM is typically used for reach and fiber capacity or availability reasons. As outlined in part one of this e-book, the main challenge in delivering synchronization over fiber and DWDM is controlling asymmetry and the corresponding cTE, although any elements in the network that extend into Layer 2 Ethernet switching will introduce a dTE factor that also needs management. To understand how cTE and dTE can be managed across the network, the packet optical domain will be subdivided further into the Ethernet switching layer and the optical DWDM layer for access and aggregation networks and legacy/long-haul networks.

Packet Optical Transport with Layer 2 Ethernet/eCPRI Switching

Fronthaul networks and those that support a combination of front/mid/backhaul over an xHaul infrastructure utilize Ethernet switching capabilities that, from a synchronization perspective, need to be considered in a similar manner to the IP layer due to the predominantly dTE implications on synchronization. Due to the extremely tight relative phase error budgets within fronthaul networks, which can be as low as 60-190 ns, synchronization performance is critically important within these networks.

Infinera has expanded the EMXP range with the EMXP-XH800, a hardened 800 Gb/s device supporting a broad range of functions required in fronthaul networks and hybrid xHaul networks (which encompass fronthaul, midhaul, and potentially backhaul traffic flows over the same infrastructure), such as TSN.

Figure 12: XTM Series EMXP-XH800 for fronthaul and xHaul networks

From a synchronization perspective, the EMXP-XH800 brings the range of synchronization features needed to support 5G fronthaul and xHaul environments, such as fiber asymmetry compensation, SyncE/eEEC, and nanosecond-level timestamping for very accurate T-TC operation, coupled with T-BC Class C performance that significantly exceeds the required performance for Class C certification, as shown in Figure 13. Along with the rest of the EMXP range, the EMXP-XH800 also utilizes a hardware design with a highly accurate SyncE assist mode for 1588v2 PTP operation that is optimized for demanding 5G fronthaul applications.
The XTM Series provides a range of DWDM transponders and muxponders that are OTN-based and use the same commercial off-the-shelf (COTS) OTN chips as the rest of the industry. In addition, the XTM Series also contains devices that are optimized for applications such as mobile transport, with a very tight focus on optimal performance for a more limited set of services. These devices avoid the COTS OTN mapping chips and focus on providing a low-latency, low-power, and high-density offering with the very positive side effect of a very low cTE on restart.

To put this into perspective, Infinera has tested a wide range of transponders, muxponders, and packet optical switches from the EMXP range for random cTE and dTE performance, and the results are summarized below. The cTE figures quoted are maximum random cTE figures on restart of the device, and therefore on each restart there will be a random cTE within ± the quoted figure. Devices with larger random cTE may well initially start up with a lower, acceptable level of cTE, but in a restart situation this may change to a much larger and unsupportable level of cTE.

By careful network design, it is therefore possible to build a DWDM transport layer that is capable of supporting 1588v2 PTP in higher networking layers with a low enough cTE within DWDM links that the overall G.8271.1 network limits can be achieved.

This challenge is compounded by the fact that optical layer design is built around fiber availability and routing, and while the normal working path may be a relatively direct route between two T-BC-enabled routers or switches, the protection route may be substantially longer and involve many more DWDM components that will potentially have a substantial impact.
Table: XTM Series device synchronization performance summary – for each device: function, client interfaces, line interfaces, maximum random cTE, maximum dTE (low-pass-filtered) MTIE, and 5G phase sync support
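A design check of the kind described above can be sketched as follows: add up the worst-case random cTE of every element on the longest protection path and compare the total against the link cTE allowance left after the T-BC nodes. The device names and cTE values below are hypothetical placeholders, not measured XTM Series figures.

```python
# Illustrative design check: worst-case random cTE along a protection path
# (device names and per-device cTE values are hypothetical placeholders).

LINK_CTE_BUDGET_NS = 590   # e.g., the Type C link asymmetry allowance

protection_path_cte_ns = [
    ("muxponder, site A", 25),
    ("coherent line interface, site A", 80),
    ("coherent line interface, site B", 80),
    ("muxponder, site B", 25),
    ("fiber length asymmetry (~60 m at 2.5 ns/m)", 150),
]

worst_case_cte_ns = sum(cte for _, cte in protection_path_cte_ns)
margin_ns = LINK_CTE_BUDGET_NS - worst_case_cte_ns
print(f"Worst-case link cTE = {worst_case_cte_ns} ns; margin against budget = {margin_ns} ns")
```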
Synchronization in DWDM Transport over Regional, Long-haul, and Legacy Networks

Outside of the metro access, metro aggregation, and regional footprint that is addressed with the XTM Series, Infinera has developed a very high-performance OTC2.0 solution in conjunction with Microchip, a market leader in network synchronization technology.

The OTC2.0 solution builds on the combined synchronization and optical networking strengths of the two companies to provide network operators with highly optimized and highly reliable synchronization distribution solutions. OTC2.0 provides synchronization distribution over Infinera’s full portfolio of DWDM platforms, such as the 7100, 7300, FlexILS, and GX Series platforms, or even over third-party DWDM networks. The solution can also be deployed over the XTM Series when extreme performance and an enhanced synchronization/timing feature set are required. OTC2.0 essentially couples Microchip’s industry-leading TimeProvider® 4100 with a broad range of DWDM optical timing channel capabilities and a deep understanding of how the two systems can be optimized to meet the toughest synchronization requirements for mobile networks.

The TimeProvider 4100 supports the very broad range of synchronization features required for 5G synchronization and timing distribution, such as a high-performance boundary clock operational mode, GNSS and network inputs, multiple output options, and the full range of frequency and phase synchronization standards. The device also couples T-BC Class D performance with a range of rubidium and OCXO local oscillator options to provide a very high-quality timing source for downstream networking nodes. A summary of the TimeProvider 4100 features that are utilized in the OTC2.0 solution includes:

■ IEEE 1588v2 PTP grandmaster
■ Timing distribution over T-BC 1588/SyncE via overlay optical timing channel
■ GNSS (GPS, GLONASS, BeiDou, QZSS, and Galileo) and SBAS support
■ PRTC Class A and Class B
■ Enhanced PRTC (ePRTC) that meets 30 ns performance and uses a combination of cesium and GNSS time sources
■ Oscillator options – SuperOCXO (future support), OCXO, and rubidium (Rb)
■ Standard base unit with 8 Ethernet ports, 4 E1/T1 ports, 1 craft port, 2 × 1PPS/ToD ports, and 2 × 1PPS/10 MHz ports
■ Optional internal expansion module with 4 SFP and 4 SFP+ ports for 10G support, 100M Fast Ethernet, and 1G fanout
■ Support for multiple IEEE 1588v2 profiles per unit
■ Support for high-performance single-domain/multi-domain boundary clock with Class C and D accuracy
■ Fully supports ITU-T profiles for phase synchronization: G.8275.1 and G.8275.2
■ Fully supports ITU-T profiles for frequency synchronization: G.8265.1, Telecom 2008, and default
■ ITU-T G.8273.4 APTS with enhanced automatic asymmetry compensation over multiple network variations
■ Supports timing in DWDM networks with up to 6 DWDM degrees, with optional extension to 14 DWDM degrees
■ PTP timing path protection (bidirectional timing service for resiliency)
■ Monitoring and measurement capabilities
■ Multiple management options – Microchip TimePictra® synchronization management system support, Microchip Web GUI, CLI, SNMP, and planned Open API (NETCONF) support

Looking at the optical layer, OTC2.0 uses two very tightly spaced WDM channels, often bidirectionally over a single fiber, as the transmit and receive channels for PTP messages, to minimize network asymmetry and the corresponding impact on PTP operation. OTC2.0 provides a broad range of WDM options, such as O-, E-, and L-band timing channel options, to optimize these timing channels to the specific characteristics of the DWDM network. Furthermore, the solution utilizes both PTP T-BC and 3R DWDM regeneration options to ensure a high-performance, robust, and economical network.

From an overall solution perspective, OTC2.0 provides network operators with a high-performance timing solution that is also highly robust and very scalable. The solution is decoupled from the underlying DWDM layer, which enables timing resiliency during network upgrade and reconfiguration activities. The broad range of synchronization and optical timing channel features, coupled with support for any transport network topology, including meshed, ring, tree, and point-to-point architectures, enables network operators to bring 5G-quality synchronization to even the most demanding networks, including those with high levels of asymmetry.

Field deployments of OTC2.0 using TimeProvider 4100 have shown that networks can exceed G.8273.2 Class D cTE performance over long-distance DWDM networks. Figure 17 shows one week of live traffic test data for the OTC2.0 solution in action over a 500-km, 96-channel DWDM network. The network comprises six DWDM spans connecting a combination of ROADM and in-line amplifier (ILA) sites using a mix of EDFA-only and hybrid EDFA/Raman amplification options. With TimeProvider 4100-based T-BC timing at the end nodes and the five intermediate sites, this is therefore seven hops from a synchronization perspective. The results show an impressive end-to-end cTE performance of 11 ns throughout the one-week monitoring period. To put this performance into perspective, Class D performance over seven timing hops would result in a time error of 35 ns at 5 ns per hop.
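To illustrate why the tightly spaced, single-fiber timing channels described above keep asymmetry low, the sketch below estimates the residual delay difference caused purely by chromatic dispersion when the two directions use slightly different wavelengths. It assumes a typical C-band dispersion figure of roughly 17 ps/nm/km for standard single-mode fiber; the spacings and distance are hypothetical values and are not OTC2.0 specifications.

```python
# Illustrative sketch: residual delay asymmetry when the two directions of a
# bidirectional timing channel use slightly different wavelengths on one fiber.
# Assumes ~17 ps/nm/km chromatic dispersion (typical for standard single-mode
# fiber in the C-band); spacings and distance are hypothetical values.

DISPERSION_PS_PER_NM_KM = 17.0

def asymmetry_cte_ns(channel_spacing_nm: float, length_km: float) -> float:
    delay_difference_ps = DISPERSION_PS_PER_NM_KM * channel_spacing_nm * length_km
    return (delay_difference_ps / 1000.0) / 2.0   # half the delay difference becomes cTE

for spacing_nm in (0.4, 0.8, 1.6):                # roughly 50, 100, and 200 GHz spacings
    print(f"{spacing_nm} nm spacing over 500 km -> "
          f"{asymmetry_cte_ns(spacing_nm, 500):.1f} ns of cTE")
```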
Figure 18: OTC2.0-enabled vPRTC (timing cloud) regional/core network preserves G.8271.1 network limits for fronthaul and midhaul packet optical networks and other timing-critical applications (power distribution, scientific). Every DWDM node is capable of delivering timing/synchronization, providing a resilient vPRTC timing service within a 100 ns range, anchored by a redundant, GNSS-spoofing-hardened GM/cesium source.
Figure 20: Infinera’s Transcend 3D synchronization view with frequency and phase planes
It also removes the challenges of providing GNSS signals into hard-to-reach cell sites planned for 5G, such as those deep inside buildings or in underground metro railway stations.

The benefits that Infinera’s solution brings to operators that are building network-based synchronization strategies, compared with alternative approaches, include:

■ Better overall synchronization performance, leading to potentially better RAN performance and spectrum utilization
■ Better overall network economics, with optimized solutions for in-band synchronization delivery for metro access and aggregation networks, OTC2.0 for long-haul/core/legacy networks, and the ability to blend the two solutions within the same network
■ More resilient synchronization distribution
■ More stable synchronization environments requiring less ongoing maintenance and support

This e-book has focused on synchronization in mobile networks, as this is currently a large focus area within the telecom industry. But the benefits of high-performance synchronization are not limited to mobile networks. Network operators are also benefiting from Infinera’s high-quality synchronization in a broad range of applications such as power utility networks, financial trading networks, TDM circuit emulation, and video/DAB distribution networks.

Further Reading

Infinera has a range of more detailed product-specific synchronization documentation, such as product data sheets and sync performance testing documentation for the solutions outlined in this e-book. Please contact your Infinera sales representative for more details.

All Infinera and Microchip product feature lists and specifications referenced in this e-book are subject to change over time. Please refer to the appropriate product data sheets and detailed documentation for the most up-to-date feature lists and specifications.
© 2021 Infinera Corporation. All Rights Reserved. Infinera and logos that contain Infinera are trademarks or registered trademarks of Infinera Corporation in the United States and other
countries. All other trademarks are the property of their respective owners. Statements herein may contain projections regarding future products, features, or technology and resulting
commercial or technical benefits, which are subject to risk and may or may not occur. This publication is subject to change without notice and does not constitute legal obligation to deliver
any material, code, or functionality and is not intended to modify or supplement any product specifications or warranties. 0282-EB-RevA-0321