
Gharyan University, Faculty of Engineering, EECM515
Eng. Asma Abdurahman
Spring 2025

Lecture 3:

Time-Division Multiplexing (TDM):

TDM is a technique used for transmitting several analog message signals over a
communication channel by dividing the time frame into slots, one slot for each message
signal. The important features of TDM are illustrated in the figure below.

Figure: Block diagram of a four channel TDM system

Four input signals, all bandlimited to fx by the input filters, are sequentially sampled
at the transmitter by a rotary switch or commutator. The switch makes fs revolutions per
second and extracts one sample from each input during each revolution. The output of the
switch is a PAM or PCM waveform (the most popular types of TDM) containing samples of
the input signals periodically interlaced in time. The samples from adjacent input message
channels are separated by Ts/M, where M is the number of input channels. A set of M pulses
consisting of one sample from each of the M input channels is called a frame.
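To make the commutator idea concrete, the following Python sketch interleaves samples from M channels so that adjacent samples in the output stream are spaced Ts/M apart. The channel count and the toy sample values are assumptions made for illustration only; they are not part of the lecture.

```python
# Illustrative sketch of a TDM commutator: M input channels are sampled at
# fs samples/second and their samples are interleaved into one PAM stream.
fs = 8000          # sampling rate per channel (Hz), assumed
Ts = 1 / fs        # time between successive samples of the SAME channel
M = 4              # number of input channels, assumed

# One frame = one sample from each of the M channels, so samples from
# adjacent channels are spaced Ts / M apart.
slot_spacing = Ts / M
print(f"Sample period Ts = {Ts*1e6:.1f} us, slot spacing Ts/M = {slot_spacing*1e6:.2f} us")

# Toy "samples" for 3 frames of a 4-channel system (values are arbitrary).
channels = [
    [10, 11, 12],   # channel 1 samples
    [20, 21, 22],   # channel 2 samples
    [30, 31, 32],   # channel 3 samples
    [40, 41, 42],   # channel 4 samples
]

# Commutator: take one sample from each channel per revolution (per frame).
tdm_stream = []
for frame in range(len(channels[0])):
    for ch in range(M):
        tdm_stream.append(channels[ch][frame])

print("Interleaved TDM stream:", tdm_stream)
# -> [10, 20, 30, 40, 11, 21, 31, 41, 12, 22, 32, 42]
```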

At the receiver, the samples from individual channels are separated and distributed
by another rotary switch called a distributor or decommutator. The samples from each
channel are filtered to reproduce the original message signal. The rotary switches at the
transmitter and receiver are usually electronic circuits that are carefully synchronized.
Synchronization is perhaps the most critical aspect of TDM. There are two levels of
synchronization in TDM: frame synchronization and sample (or word) synchronization.
Frame synchronization is necessary to establish when each group of samples begins, and
word synchronization is necessary to properly separate the samples within each frame.

• Time-division multiplexing (TDM) is a digital process that allows several connections to
share the high bandwidth of a link.


• Instead of sharing a portion of the bandwidth as in FDM, time is shared.

• Each connection occupies a portion of time in the link.


In the figure, portions of signals 1, 2, 3, and 4 occupy the link sequentially.

The interlaced sequence of samples may be transmitted by direct PAM, or the sample
values may be quantized and transmitted using PCM. TDM-PCM is used in a variety of
applications, the most important being PCM telephone systems, where voice and other
signals are multiplexed and transmitted over a variety of transmission media including pairs
of wires, waveguides, and optical fibers.

• With a PCM-TDM system, two or more voice channels are sampled, converted to PCM
codes, and then time-division multiplexed onto a single metallic or optical fiber
cable.

The above figure shows a block diagram for a PCM carrier system comprising two
DS-0 channels that have been time-division multiplexed. Each channel's input is alternately
sampled at an 8-kHz rate and converted to an eight-bit PCM code. While the PCM code for
channel 1 is being transmitted, channel 2 is sampled and converted to a PCM code. When it
is channel 2's turn to be transmitted, the next sample is taken from channel 1 and converted
to a PCM code. This is a continuous process.


Sampling Rate:

There are two times to consider for a sampled signal: first, the duration of the pulse, and
second, the time between samples. It was found that the time duration of the sample is not
critical and can be reduced without degrading the information. However, the time between
samples must be chosen so that the sampling rate is at least twice the highest frequency in
the modulating signal; otherwise the spectrum of the sampled signal will show overlapping
of sidebands, and the original modulating signal cannot be recovered without distortion.

For a telephone channel, fsampling = 8000 Hz (= 2 × 4000 Hz), i.e. each telephone circuit is
sampled every 125 μs:

Ts = 1/8000 sec = 125 μs

The 125 μs between samples of one channel may be used by the other channels of the system.
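A quick numerical check of these figures (the 4 kHz voice bandwidth and 8 kHz sampling rate are the values given above; only the arithmetic is added):

```python
# Telephone channel sampling: the highest voice frequency is taken as 4 kHz,
# so the Nyquist criterion requires sampling at at least 2 x 4000 Hz.
f_max = 4000                 # highest frequency in the voice channel (Hz)
f_sampling = 2 * f_max       # = 8000 Hz
Ts = 1 / f_sampling          # time between samples of one channel

print(f"Sampling rate  = {f_sampling} Hz")
print(f"Sample period  = {Ts*1e6:.0f} microseconds")   # 125 us
```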

Bandwidth:

Generally, the transmission bandwidth required for a digital signal is larger than that
needed for an analog signal. As the number of channels increases, the transmission bandwidth
increases. This is because the pulse train spectrum consists of a fundamental (f0 = 8000 Hz for
speech) and harmonics, where all frequencies (fundamental and harmonics) have sidebands
resulting from the modulation process.

Multiplexer and Frame Concept:

The multiplexer is simply an electronically controlled digital switch with two inputs and
one output. One eight-bit PCM code from each channel is called a TDM frame, and the time it
takes to transmit one TDM frame is called the frame time; it is equal to the reciprocal of the
sample rate. The figure below shows the frame allocation for a two-channel PCM system. The
PCM code for each channel occupies a fixed time slot within the total TDM frame.

When sampling, quantizing and coding take place, a serial bit stream results which
requires some identification of where the scanning sequence begins.


Framing identifies to the far end (receiver) when each full sampling sequence starts and
ends.
Frame: a full cycle of samples is called a frame in PCM terminology.

Examples of Practical Frames:


a) The D1 system, 24-channel PCM (Bell system, USA), uses 7 digits (n = 7), i.e., 128
quantizing steps, and to each 7 bits, 1 bit is added for signaling.

Another bit is added after each full sampling sequence (one cycle), i.e., one sample from
each channel. This is called the framing bit.
Therefore, the number of bits per frame for 24 channels is:
= (7 + 1) × 24 + 1 = 193 bits.
Since sampling is at 8000 Hz = 8000 cycles/sec = 8000 frames/sec,
bit rate = 8000 × 193
= 1,544,000 bits/sec
= 1.544 Mbps
b) The CEPT (Conference of European Postal and Telecommunications
Administrations) system uses 32 channels instead of 24, where 30 channels are
for speech and 2 channels are reserved: one for signaling and one for synchronization.
Bit rate = 8 × 32 × 8000
= 2,048,000 bits/sec
= 2.048 Mbps.
In this 32-channel system, each frame is divided into time slots (TS) as shown in the
figure:

Time per frame = 125 𝜇 𝑠𝑒𝑐


Bits per frame = 256


Time slot = 125/32 = 3.9 μs
Number of bits per slot = 8
Time per bit = 3.9/8 = 488 ns.
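The frame arithmetic for both primary systems can be reproduced directly (a minimal sketch; the numbers are those given in the text):

```python
# (a) D1 / Bell 24-channel system: 7 PCM bits + 1 signaling bit per channel,
#     plus 1 framing bit per frame, at 8000 frames per second.
bits_per_frame_d1 = (7 + 1) * 24 + 1          # 193 bits
bit_rate_d1 = bits_per_frame_d1 * 8000        # 1,544,000 bps = 1.544 Mbps

# (b) CEPT 32-channel system: 8 bits per time slot, 32 slots, 8000 frames/s.
bits_per_frame_e1 = 8 * 32                    # 256 bits
bit_rate_e1 = bits_per_frame_e1 * 8000        # 2,048,000 bps = 2.048 Mbps

# Timing inside one 125 us CEPT frame.
frame_time_us = 125
slot_time_us = frame_time_us / 32             # ~3.9 us per time slot
bit_time_ns = slot_time_us / 8 * 1000         # ~488 ns per bit

print(f"D1 bit rate  : {bit_rate_d1/1e6:.3f} Mbps ({bits_per_frame_d1} bits/frame)")
print(f"E1 bit rate  : {bit_rate_e1/1e6:.3f} Mbps ({bits_per_frame_e1} bits/frame)")
print(f"E1 slot time : {slot_time_us:.2f} us, bit time : {bit_time_ns:.0f} ns")
```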
The synchronizing code is transmitted in TS0 of every second frame (odd frames),
occupying digits 2 to 8 as follows: 0011011

In even frames (frames without the synchronization word), the second bit of TS0 is frozen
to 1 so that the synchronizing word cannot be imitated. The remaining bits are then used for
the transmission of supervisory signals.
The most common technique for framing is known as added-digit framing. In this
scheme, typically, one control bit is added to each TDM frame. An identifiable pattern of bits,
from frame to frame, is used on this "control channel". A typical example is the alternating
bit pattern 101010…, which is unlikely to be sustained on a data channel. Thus, to
synchronize, a receiver compares the incoming bits of one frame position to the expected
pattern. If the pattern does not match, successive bit positions are searched until the pattern
persists over multiple frames. Once framing synchronization is established, the receiver
continues to monitor the framing-bit channel.
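The search procedure just described can be illustrated with a small sketch. The 5-bit frame size, the fixed data bits and the helper function below are assumptions made for demonstration; real channel banks implement this search in hardware.

```python
# Illustrative added-digit framing search: the receiver assumes a candidate
# framing-bit position, checks whether that position alternates 1,0,1,0,...
# over several frames, and moves on to the next position if it does not.

def find_framing_position(bits, frame_len, frames_to_check=6):
    """Return the bit position (0..frame_len-1) whose value alternates
    1,0,1,0,... over `frames_to_check` consecutive frames, or None."""
    for pos in range(frame_len):
        candidate = [bits[pos + k * frame_len] for k in range(frames_to_check)]
        expected = [(k + 1) % 2 for k in range(frames_to_check)]  # 1,0,1,0,...
        if candidate == expected:
            return pos
    return None

# Build a toy stream: 5-bit frames whose data bits are fixed (chosen so that
# no data position accidentally alternates), with the framing bit
# (alternating 1/0 from frame to frame) inserted at position 2 of every frame.
frame_len, n_frames = 5, 8
data_bits = [1, 1, 0, 0]          # same data in every frame -> cannot alternate
stream = []
for k in range(n_frames):
    framing_bit = (k + 1) % 2     # 1, 0, 1, 0, ...
    frame = data_bits[:2] + [framing_bit] + data_bits[2:]
    stream.extend(frame)

print("Framing bit found at position:",
      find_framing_position(stream, frame_len))   # -> 2
```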

Synchronization in TDM:
Synchronization is vital in TDM. The T1 system solved it by adding 1 bit per frame, which
implies 8000 bps for framing. With this high bit rate, if synchronization slips, the pattern is
searched for again and synchronization can be restored within a few milliseconds, provided
the channel bank remains intact (the 24 channels stay together). But if the bank were split
up into individual channels, it would be advantageous to have a synchronization bit sequence
for each channel. This implies 1 bit per word rather than 1 bit per frame, resulting in
8000 bps per channel, which is too much. The adopted solution was to share the framing bit
between signaling and synchronization, as in the CCITT 1.544 Mbps version 1 of signaling.
For a large switched network, the problem of synchronization can be solved by one of two
approaches.

1. Fully synchronous approach:

The clocks of different terminals are synchronized, and any drift in synchronization is
compensated for; this implies that clocks at different locations must operate at exactly the
same speed.

2. Quasi-synchronous approach:

Close but not perfect clock correlation is accepted.


Comparison of FDM & TDM:

The table below summarizes the advantages and disadvantages of both FDM and TDM.

Description            FDM                                          TDM

Advantages             • Narrow band required per channel.          • Regenerative repeaters can be used.
                       • Systems with a large number of channels    • Uses state-of-the-art solid-state
                         can be designed easily.                       technology such as ICs, MSI, LSI,
                                                                       microprocessors, etc.

Disadvantages          • Noise in repeaters.                        • Bandwidth is large.
(typical distortion)   • Linear (attenuation, group delay) and      • Quantization noise.
                         nonlinear distortion.                      • Jitter.
                       • Crosstalk.

Space Division Multiplexing (SDM):

SDM is simply the handling of many physically separate transmissions in a
common cable; e.g., a telephone cable consisting of hundreds or thousands of
twisted pairs constitutes an SDM system, since many conversations can be carried on
the single cable although each is assigned a unique pair in the cable.

The advantage of SDM is reduced transmission cost. But SDM requires:

• Combining of traffic into specific routes, to achieve the desirably large
channel cross section.
• Achieving true isolation between transmission media separated by
distances that can be 10^-8 times their length, which is impossible.

As a consequence, SDM is subject to interference resulting from coupling between
channels. Both FDM and TDM are SDM on parallel facilities sharing the same right of
way.

TDM Hierarchy and Standards

A block diagram of TDM for N channels is shown below, where sampling, quantizing
and coding are shown before the multiplexer. The output bit stream can then be multiplexed
again to produce higher-rate bit streams.


Digital Transmission Hierarchies in TDM Systems

The long-distance carrier system throughout the world was designed to transmit
voice signals over high-capacity transmission links, such as microwave, coaxial cable and
optical fiber.

The increasing demand for efficient communication, particularly for voice traffic, led to the
development of digital hierarchies in telecommunications. These hierarchies provide a standardized
and structured approach to multiplexing digital signals. The core idea is to combine multiple lower
data rate digital signals into a single, higher data rate signal for efficient transmission over long
distances. These higher-rate signals can then be demultiplexed at the receiving end to recover the
original lower-rate signals. By defining specific data rates and multiplexing levels, digital hierarchies
ensure interoperability between different network components and facilitate the scaling of
telecommunications networks to accommodate growing traffic volumes. This structured framework
became essential as telecommunications transitioned from analog to digital systems, allowing for
better management of bandwidth and improved quality of service.
Part of the evolution of these telecommunication networks has been the adoption of a
synchronous TDM transmission structure.
The widely used techniques are:
1- Plesiochronous digital hierarchy (PDH)
2- Synchronous digital hierarchy (SDH)
3- Synchronous optical network (SONET)

A brief description of these systems will be discussed.

1. Plesiochronous digital hierarchy (PDH)


Introduction to PDH (Preparing for Multiplexing)
• The Plesiochronous Digital Hierarchy (PDH) is a method of digital multiplexing that
allows for the efficient transmission of multiple data streams or voice channels over
a single communication channel, e.g., a single copper cable or optical fiber.


The introduction of newer digital multiplexers has been incremental, leading to
variations in the timing circuits used across different devices. Although these multiplexers
nominally operate at the same bit rates, small discrepancies in their actual bit rates
necessitate compensatory measures when multiplexing multiple lower-order streams.
• Compensation for Timing Differences: To address the timing differences between
multiplexed streams, the output bit rate is set slightly higher than the sum of the input
bit rates. This excess capacity allows for the inclusion of "justification bits," which fill
any unused bits in the output stream. This approach leads to the formation of higher-
order multiplexed rates within the plesiochronous digital hierarchy.
• Primary Multiplex Groups: The primary multiplex groups consist of two main
access circuits: 1.544 Mbps (DS1 or T1 circuit) and 2.048 Mbps (E1 circuit). These
circuits represent the lowest levels of their respective hierarchies, with higher-level
multiplexed groups being multiples of either 24 or 32 channels, each operating at 64
kbps.
• Byte Interleaving: Used for primary multiplex groups, where an 8-bit byte from
each channel is combined. This method is convenient because a PCM sample of a
speech signal is 8 bits.
• Bit Interleaving: Employed when multiplexing primary-rate groups together. In
this method, bits from each group are transmitted immediately as they arrive,
allowing for a more efficient use of the output stream.
• Framing and Maintenance Bits: Higher-level multiplexed streams require
additional bits for framing and maintenance. These bits help the receiving multiplexer
correctly interpret the bitstream and maintain synchronization. For instance, an E2
circuit, which combines four E1 circuits, has a calculated bit rate of 8.192 Mbps but
operates at an actual bit rate of 8.448 Mbps, with 0.256 Mbps allocated for framing
and maintenance.
• Outside North America and Japan, the ITU defines a frame into which 30 voice
channels can be multiplexed. Two additional channels in the frame are used for
control (framing, error checking and other control functions) and signaling (used to
carry the dialed digits, for example).
• The E1 (PDH) device therefore needs to clock out at a rate of 64,000 times 32 =
2,048,000 bits per second.
• Each PDH device will use its own internal synchronization source - rather like each
person uses their own watch. This means that no two PDH devices will keep exactly
the same synchronization - almost, but not exactly.
• Therefore, when E1s are multiplexed into an E2, the E2 device needs to take each
individual E1 signal and bring them all up to a common (internal) synchronization.
This is done by inserting additional bits into the incoming signals. This process is
called "stuffing". A record of this "stuffing" needs to be kept within the signal, so that
the original signal can be reproduced at the receiving end, when the E2 is
demultiplexed. This additional information ("stuffing" and formatting) means that
the line rate for an E2 must be more than 4 * 2,048,000 bits per second.
• The same process needs to be applied at all multiplexing stages.


• A PDH is a digital hierarchy: a family of multiplex equipment of different
levels, and network topologies, to accommodate the higher data rates and
protection required by high-speed transmission.
• It is well adapted for telephony.
• In PDH a cascade of mux/demux is needed to drop or insert 2Mb/s. If the network is
reconfigured then channel rearrangement is needed at the DDF (Digital
Distribution Frame).
• In a PDH system, the DDF plays a crucial role in the multiplexing process by
helping to manage the various lower-speed data streams that are combined
into higher-speed channels. It ensures that the multiplexed signals can be
efficiently distributed to different parts of the network.
• In PCM, the bytes for each channel are transmitted in sequence. For this reason PCM is
said to be a byte-interleaved system. However, PDH is bit-interleaved: 1 bit of each of the
four tributaries is transmitted by the multiplexer at a time. PDH is based on interleaving
bit by bit, therefore each bit in a given byte is part of a different time slot (TS) or channel.
In other words, the bits making up one conversation are distributed among all bytes.
§ To separate a particular conversation (channel) from a bit stream, it is necessary to
demux all signals.

There are three systems of PDH:


Figure: Plesiochronous digital hierarchies: (a) 1.544 Mbps derived multiplex hierarchy, the 24-channel PDH
TDM; (b) 2.048 Mbps derived multiplex hierarchy, the 32-channel PDH TDM.
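As a simple numerical illustration of the justification ("stuffing") overhead described in the bullets above (the E1 and E2 rates are those given in the text; only the arithmetic is added), the following sketch compares the nominal and actual E2 rates:

```python
# PDH justification overhead: the multiplexed rate is deliberately higher
# than the sum of the tributary rates, the excess carrying framing,
# maintenance and justification (stuffing) bits.
e1_rate = 2.048e6                  # bps, one E1 tributary
e2_nominal = 4 * e1_rate           # 8.192 Mbps, tributaries only
e2_actual = 8.448e6                # bps, actual E2 line rate
overhead = e2_actual - e2_nominal  # 0.256 Mbps for framing/maintenance/stuffing

print(f"Sum of 4 x E1     : {e2_nominal/1e6:.3f} Mbps")
print(f"E2 line rate      : {e2_actual/1e6:.3f} Mbps")
print(f"Overhead capacity : {overhead/1e6:.3f} Mbps")
```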

I. European E-carrier (CEPT) Hierarchy:

In TDM systems, the CEPT (Conference of European Postal and Telecommunications
Administrations) hierarchy, also known as the E-carrier system, defines a specific set of
digital transmission rates and frame structures used primarily outside of North America
and Japan. It is based on multiplexing 64 kbps voice channels (DS0 equivalents in the CEPT
world are often referred to as E0).

Here's a breakdown of the CEPT hierarchy levels and their frame structure at the primary
level (E1):

CEPT Hierarchy Levels:


The CEPT hierarchy defines a series of increasing bit rates achieved by multiplexing lower-
level channels. The main levels are:

Level Designation Bit Rate (Mbps) Number of 64 kbps Channels


0 E0 0.064 1
1 E1 2.048 32
2 E2 8.448 128 (4 x E1)
3 E3 34.368 512 (4 x E2)
4 E4 139.264 2048 (4 x E3)
5 E5 565.148 8192 (4 x E4)

E1 Frame Structure:

The E1 line is the fundamental building block of the CEPT PDH (Plesiochronous Digital
Hierarchy). Its frame structure is as follows:

• Frame Duration: 125 microseconds (µs). This corresponds to the 8 kHz sampling
rate used for digitizing voice.
• Number of Time Slots: 32 time slots.
• Bits per Time Slot: 8 bits.
• Total Bits per Frame: 32 time slots * 8 bits/time slot = 256 bits.
• Frame Rate: 8000 frames per second (1 / 125 µs).
• Bit Rate: 256 bits/frame * 8000 frames/second = 2,048,000 bits per second (2.048
Mbps).
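These frame figures follow directly from one another; a minimal check using the values above, including the split between user slots and the TS0/TS16 overhead slots described next:

```python
# E1 capacity check: 32 slots x 8 bits x 8000 frames/s = 2.048 Mbps total;
# in a channelized E1, TS0 (framing) and TS16 (signaling) are not user data.
slots, bits_per_slot, frame_rate = 32, 8, 8000

total_rate = slots * bits_per_slot * frame_rate          # 2,048,000 bps
per_slot_rate = bits_per_slot * frame_rate               # 64,000 bps
user_rate = (slots - 2) * per_slot_rate                  # 30 x 64 kbps = 1.920 Mbps

print(f"Total E1 rate     : {total_rate/1e6:.3f} Mbps")
print(f"Per-slot rate     : {per_slot_rate/1e3:.0f} kbps")
print(f"User (30-ch) rate : {user_rate/1e6:.3f} Mbps")
```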

Detailed E1 Time Slot Usage:

• Time Slot 0 (TS0): This time slot is primarily used for framing and
synchronization. It carries a specific fixed pattern that allows the receiver to identify
the beginning of each frame and align the time slots correctly. It can also carry alarm
signals and some bits reserved for national use.
• Time Slots 1-15 and 17-31 (TS1-TS15, TS17-TS31): These 30 time slots are
typically used to carry data traffic, which can be digitized voice channels (E0) or
other digital data. Each time slot can carry a 64 kbps channel.
• Time Slot 16 (TS16): This time slot is often used for signaling. It carries information
related to call setup, supervision, and tear-down. However, it can also be used for data
in some configurations.

Key Points about the E1 Frame:

• The E1 frame repeats 8,000 times every second.


• Each of the 32 time slots effectively provides a 64 kbps channel (8 bits/slot * 8000
frames/second).
• In a standard channelized E1, 30 of these 64 kbps channels are used for user data, one
for framing (TS0), and one for signaling (TS16).


• There are also "clear channel E1" or "unchannelized E1" configurations where the
entire 2.048 Mbps bandwidth is used for a single high-speed data stream, and the
framing information in TS0 is still present.

Understanding the E1 frame structure with its 32 time slots, 8 bits per time slot, and
the specific roles of TS0 and often TS16 is crucial for recognizing and working with systems
based on the CEPT hierarchy. Higher levels in the hierarchy (E2, E3, E4, E5) are formed by
multiplexing four lower-level signals together, along with the addition of overhead bits for
synchronization and management.

II. The North American (DS/T-carrier Hierarchy)

The Digital Signal (DS) hierarchy is a standardized system used in North America and Japan
to define the data rates and formats for digital communication channels in Time Division
Multiplexing (TDM) systems. It was originally developed by Bell Labs to efficiently transmit
multiple voice signals over digital lines.

In TDM, multiple lower-bandwidth signals are interleaved in time to share a higher-bandwidth
transmission medium. The DS hierarchy provides a structured way to multiplex
these signals into progressively higher-capacity digital streams.

Here's a detailed breakdown of the DS hierarchy levels:

1. DS-0 (Digital Signal Level 0):

• Bit Rate: 64 kbps (kilobits per second)


• Channels: 1
• Description: This is the fundamental building block of the DS hierarchy. It
represents a single, digitized voice channel using Pulse Code Modulation (PCM). The
analog voice signal (nominally 4 kHz bandwidth) is sampled 8,000 times per second,
and each sample is encoded into an 8-bit word, resulting in the 64 kbps data rate
(8,000 samples/second * 8 bits/sample = 64,000 bits/second).
• Significance: DS-0 is the basic unit for carrying one telephone conversation or its
equivalent in data.

2. DS-1 (Digital Signal Level 1):

• Bit Rate: 1.544 Mbps (megabits per second)


• Channels: 24 DS-0 channels
• Carrier System: T1
• Multiplexing: 24 DS-0 channels are time-division multiplexed. Each DS-0
contributes 8 bits per frame. An additional framing bit is added to each frame for
synchronization.


• Frame Structure: A DS-1 frame consists of 193 bits (24 channels * 8 bits/channel +
1 framing bit). These frames are transmitted at a rate of 8,000 frames per second (to
match the sampling rate of DS-0).
• Calculation: (24 channels * 64 kbps/channel) + 8 kbps (framing overhead) = 1536
kbps + 8 kbps = 1544 kbps.
• Significance: DS-1 (often referred to as T1 lines) became a primary standard for
digital transmission of voice and data. It can carry 24 simultaneous voice calls or a
high-speed data connection.

3. DS-1C (Digital Signal Level 1C):

• Bit Rate: 3.152 Mbps


• Channels: 48 DS-0 channels (2 DS-1 channels)
• Carrier System: T1C
• Multiplexing: Two DS-1 signals are multiplexed together with some additional
overhead bits for synchronization and control.
• Significance: While defined, DS-1C did not see widespread deployment compared
to other levels.

4. DS-2 (Digital Signal Level 2):

• Bit Rate: 6.312 Mbps


• Channels: 96 DS-0 channels (4 DS-1 channels)
• Carrier System: T2
• Multiplexing: Four DS-1 signals are multiplexed.
• Overhead: Additional bits are added for framing and synchronization at this level.
• Significance: DS-2 offered a higher capacity than DS-1 but was also less commonly
used than DS-1 or DS-3.

5. DS-3 (Digital Signal Level 3):

• Bit Rate: 44.736 Mbps


• Channels: 672 DS-0 channels (28 DS-1 channels or 7 DS-2 channels)
• Carrier System: T3
• Multiplexing: 28 DS-1 signals (or 7 DS-2 signals) are multiplexed.
• Overhead: Significant overhead is added for framing, synchronization, and error
detection.
• Significance: DS-3 (often referred to as T3 lines) provided a substantial increase in
bandwidth and became a key rate for backbone networks and high-capacity data
transmission.

6. DS-4 (Digital Signal Level 4):

• Bit Rate: 274.176 Mbps


• Channels: 4,032 DS-0 channels (168 DS-1 channels or 28 DS-3 channels)


• Carrier System: T4
• Multiplexing: 168 DS-1 signals (or 28 DS-3 signals) are multiplexed.
• Overhead: Even more overhead is required for management and synchronization at
this high rate.
• Significance: DS-4 represented a very high capacity for its time and was used in
major telecommunications infrastructure.

Level Data Rate (Mbps) Equivalent DS0 Channels


DS0 0.064 1
DS1 1.544 24
DS2 6.312 96
DS3 44.736 672
DS4 274.176 4032
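The table can be reproduced from the channel counts (the line rates are the standardized values listed above; note they are not simple multiples of 64 kbps because of the per-level overhead):

```python
# DS hierarchy levels: (name, line rate in Mbps, number of DS0 channels).
ds_levels = [
    ("DS0", 0.064,   1),
    ("DS1", 1.544,   24),
    ("DS2", 6.312,   96),
    ("DS3", 44.736,  672),
    ("DS4", 274.176, 4032),
]

for name, rate_mbps, ds0_count in ds_levels:
    payload = ds0_count * 0.064                  # raw 64 kbps payload, in Mbps
    overhead = rate_mbps - payload               # framing/stuffing overhead
    print(f"{name}: {rate_mbps:8.3f} Mbps, {ds0_count:4d} x DS0, "
          f"overhead {overhead:6.3f} Mbps")
```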

Higher Levels (Less Commonly Deployed):

The DS hierarchy originally extended to even higher levels, such as DS-4A (400.352 Mbps)
and DS-5 (565.148 Mbps), but these saw limited commercial deployment, especially with
the advent of newer technologies like SONET/SDH.

Key Concepts in the DS Hierarchy:

• Time Division Multiplexing (TDM): The fundamental technique used to combine
multiple signals by allocating them specific time slots within a frame.
• Framing: The addition of synchronization bits to delineate the start and end of a
frame, allowing the receiver to correctly demultiplex the signals.
• Overhead: Extra bits added at each multiplexing level for synchronization, control,
error detection, and other management functions. This overhead reduces the
effective payload capacity.
• Plesiochronous: The different levels in the PDH (Plesiochronous Digital Hierarchy,
which includes the DS hierarchy) operate with clocks that are nominally the same
but not perfectly synchronized. This necessitates the use of bit stuffing to
accommodate slight variations in clock rates during multiplexing.


Relationship with T-Carrier System:

The "DS" designation refers to the digital signal rate and format, while the "T" designation
(e.g., T1, T3) refers to the physical transmission system or line that carries the DS signal. In
practice, the terms are often used interchangeably (e.g., a DS1 is often called a T1 line).

Evolution and Legacy:

The DS hierarchy was a crucial development in the evolution of digital telecommunications.
While newer synchronous technologies like SONET/SDH have become
dominant for very high-speed transmission, the concepts and the lower levels of the DS
hierarchy (especially DS-1 and DS-3) still have relevance in telecommunications
infrastructure and terminology.

In summary, the DS hierarchy provides a structured and standardized approach to
multiplexing digital signals in TDM systems, enabling efficient transmission of multiple
communication channels over shared physical media. Each level in the hierarchy represents
a specific bit rate and a defined number of lower-level channels combined together with
the necessary overhead for synchronization and management.

24-Channel pulse-code modulation multiplex (USA DS1)

How can the DS-1 format be used to transmit data?

Figure below shows the frame structure for the 24-channel DS1 system.

DS-1 system which consists of 24 channels and transmit at a rate of 1.544 Mbps

Figure: The 24-channel pulse-code modulation multiplex structure


• All 24 timeslots within the 125 μs timeframe are normally used for speech channels.
• However, a single bit is placed at the front of the frame, before TS1, which is used
to carry the frame alignment pattern. The pattern is sent one bit at a time over odd
frames – that is, dispersed over eight odd frames, rather than bunched into one
frame.
• The frame comprises 24 eight-bit time slots plus one bit, totalling 193 bits. The
line rate is thus 193 bits per frame, at 8,000 frames/s, giving 1,544 kbit/s – which
is usually written as ‘1.5 Mbit/s’.
• There are two ways that signalling can be conveyed over the 24-ch. system.

o The earlier systems used a method known as ‘bit stealing’ in which the last bit
in each timeslot of every sixth frame was used to carry signalling related to that
channel. This periodic reduction in the size of the PCM word from 8 to 7 bits
introduced a slight, but acceptable, degradation to the quantisation noise.
However, it also prevented the timeslots from being used to carry data at the
full 64 kbit/s (i.e. 8 bits at 8,000 frames/s) and the reduced rate of 56 kbit/s
(i.e. 7 bits at 8,000 frames/s) was the maximum rate that could be supported.
o The recently introduced alternative of common-channel signalling avoids the
need for bit stealing, since the signalling messages are carried in a data stream
carried over the bit 1 at the start of even frames – that is outside of the speech
channels. However, this 4 kbit/s (i.e. 1 bit every other frame) of signalling
capacity was considered too slow and now the DS1 system has been upgraded
to carry common-channel signalling at 64 kbit/s in one of the time slots,
leaving 23 channels for speech.

The following figure shows a DS1/T1 carrier system that time-division multiplexes PCM-encoded
samples from 24 voiceband channels for transmission over a single metallic wire pair or a
fiber optic cable.


The multiplexer has 24 independent inputs and one time-division multiplexed output.
The 24 PCM output signals are sequentially selected and connected through the multiplexer
to the transmission line. To become a T1 carrier, the system has to be line encoded and
placed on specially conditioned cables called T1 lines.

The transmitting portion of a channel bank digitally encodes the 24 analog channels,
adds signalling information into each channel, and multiplexes the digital stream onto the
transmission medium. The receiving portion reverses the process. Each of the 24 channels
contains an eight-bit PCM code and is sampled 8000 times a second. Each channel is sampled
at the same rate, but not necessarily at the same time. The line speed is calculated as:

24 channels/frame × 8 bits/channel = 192 bits/frame
192 bits/frame × 8000 frames/sec = 1.536 Mbps

Later, an additional bit called the framing bit is added to each frame. The framing bit occurs
once per frame and is recovered at the receiver and its main purpose is to maintain frame
and sample synchronization between TDM transmitter and receiver.

As a result of this extra bit, each frame now contains 193 bits, and the line speed for a T1
digital carrier system is 1.544 Mbps (193 bits/frame × 8000 frames/sec = 1,544,000 bps).
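A short numeric check of the DS1 line-speed figures above, together with the 56 kbit/s limit quoted earlier for bit-stolen channels:

```python
# DS1 / T1 line speed: 24 channels x 8 bits + 1 framing bit, 8000 frames/s.
channels, bits_per_channel, frame_rate = 24, 8, 8000

payload_bits = channels * bits_per_channel        # 192 bits per frame
frame_bits = payload_bits + 1                     # 193 bits with framing bit
payload_rate = payload_bits * frame_rate          # 1.536 Mbps
line_rate = frame_bits * frame_rate               # 1.544 Mbps

# With 'bit stealing', the 8th bit of every 6th frame carries signaling,
# so a data channel can only rely on 7 bits per frame: 7 x 8000 = 56 kbps.
robbed_bit_data_rate = 7 * frame_rate             # 56,000 bps

print(f"Payload rate    : {payload_rate/1e6:.3f} Mbps")
print(f"T1 line rate    : {line_rate/1e6:.3f} Mbps")
print(f"Robbed-bit rate : {robbed_bit_data_rate/1e3:.0f} kbps per channel")
```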

III. The Japanese J-Carrier Hierarchy:

Introduction to the Japanese J-Carrier Hierarchy


The Japanese J-carrier hierarchy represents Japan's adaptation of the Plesiochronous
Digital Hierarchy (PDH) framework for its telecommunications infrastructure. PDH is
characterized as a quasi-synchronous system, meaning that while different parts of the
network strive to operate with clocks running at nearly the same speed, slight variations in
timing can occur. To manage these minor discrepancies in timing, PDH systems employ
specific techniques such as bit stuffing.
Japan's J-carrier hierarchy shares a close relationship with the North American T-carrier
system, particularly at its foundational level, exhibiting similarities in the primary


data rate and the number of channels accommodated. However, as one ascends the hierarchy
to higher levels, the Japanese system incorporates certain deviations, reflecting regional
adaptations and potentially distinct capacity planning strategies. This initial alignment with
the T-carrier system suggests a possible influence or a parallel evolution in response to
comparable demands in the telecommunications landscape.

Levels of the J-Carrier Hierarchy: Technical Specifications


The Japanese J-carrier hierarchy comprises five distinct levels, each with specific technical
characteristics:
• J1 Carrier
o The J1 carrier operates at a data rate of 1.544 Mbps. This primary rate is the
fundamental building block of the J-carrier system and is equivalent to the DS1
and T1 levels in the North American digital hierarchy.
o J1 is designed to carry 24 voice channels, or the equivalent in 64 kbps data
channels. This capacity is achieved by multiplexing 24 individual DS0 channels,
each operating at a rate of 64 kbps.
o The framing format of J1 is similar to that of T1, utilizing a 193-bit frame
structure. This frame consists of one framing bit, which is used for
synchronization purposes, and 24 8-bit time slots that carry the data. The J1
carrier can employ either the Superframe (D4) or the Extended Superframe (ESF)
formats, mirroring the options available in the T1 system.
• J2 Carrier
o The J2 carrier has a data rate of 6.312 Mbps. This is four times the data rate of a
J1 carrier.
o J2 can accommodate 96 voice channels, or the equivalent in 64 kbps data
channels, which is four times the capacity of the J1 level.
o The framing format at the J2 level involves the multiplexing of four J1 (or DS1)
frames. This process is similar to the North American DS1 to DS2 multiplexing,
which includes the addition of overhead bits and bit stuffing to ensure
synchronization between the asynchronous tributary signals.
• J3 Carrier
o The J3 carrier operates at a data rate of 32.064 Mbps. This is achieved by
multiplexing five J2 signals.
o J3 can carry 480 voice channels, or the equivalent in 64 kbps data channels,
which is five times the capacity of J2.
o The framing format at the J3 level involves the multiplexing of J2 frames, similar
to the process of multiplexing DS2 frames in the North American system,
including the addition of overhead and justification bits.
• J4 Carrier
o The J4 carrier operates at a data rate of 97.728 Mbps. This is achieved by
multiplexing three J3 signals.
o J4 can carry 1440 voice channels, or the equivalent in 64 kbps data channels,
which is three times the capacity of J3.


o The framing format at the J4 level involves the multiplexing of J3-equivalent
frames with the addition of necessary overhead for synchronization and
management.
• J5 Carrier
o The J5 carrier has a data rate of 397.200 Mbps. This is achieved by multiplexing
four J4 signals.
o J5 can carry 5760 voice channels, or the equivalent in 64 kbps data channels,
which is four times the capacity of J4.
o The framing format at the J5 level involves the multiplexing of J4-equivalent
frames along with the inclusion of overhead for proper transmission and
reception.

The Multiplexing Process in the J-Carrier Hierarchy


The J-carrier hierarchy employs Time Division Multiplexing (TDM) as the
fundamental process for combining lower-level signals into higher-level ones. This involves
a technique called bit-by-bit interleaving, where bits from each of the tributary signals are
combined sequentially to create a single, higher-rate bit stream. The multiplexing occurs in
a step-wise manner, typically combining four or five lower-level signals at each stage, with
the exception of the transition from J2 to J3, which involves a 5:1 multiplexing ratio, and from
J3 to J4, which uses a 3:1 ratio.
Given that PDH systems like the J-carrier hierarchy operate in a plesiochronous manner,
where the timing of different network elements is nearly but not perfectly synchronized, a
crucial aspect of the multiplexing process is bit stuffing, also known as justification. Bit
stuffing is employed to compensate for the slight variations in clock rates that can exist
between independent networks. During multiplexing, extra bits are strategically inserted
into the data stream to ensure proper alignment and synchronization at the receiving end.
Additionally, framing bits are incorporated at each level of the hierarchy to enable the
receiving equipment to identify the beginning of frames and the boundaries of individual
time slots, thus facilitating the demultiplexing process.

Japanese Telecommunications Standards Defining the J-Carrier Hierarchy


The Japanese J-carrier hierarchy is primarily defined and governed by standards
developed by the Telecommunication Technology Committee (TTC). The TTC, authorized by
Japan's Ministry of Internal Affairs and Communications, plays a central role in creating and
promoting telecommunications standards within Japan. While the Japanese Industrial
Standards (JIS) cover a broad range of industrial activities, including potential aspects of
telecommunications equipment, the specific technical specifications for the J-carrier
hierarchy are mainly found within TTC standards.
Key TTC standards likely to define the J-carrier system include JT-G.703, which likely
outlines the physical and electrical characteristics of the hierarchical digital interfaces, and
JT-G.704, which probably specifies the synchronous frame structures used at the various
hierarchical levels relevant to the J-carrier system. Additionally, other TTC standards within
the JJ-series may offer more granular details regarding the J-carrier hierarchy's specific


implementation and parameters within Japan. Consulting these specific TTC documents is
essential for a thorough technical understanding of the J-carrier hierarchy.

Comparison with the North American DS Hierarchy


The Japanese J-carrier hierarchy exhibits both similarities and differences when
compared to the North American Digital Signal (DS) hierarchy. The following table
provides a comparison of the levels, data rates, and channel capacities of these two
systems:
Level   J-Carrier Data Rate (Mbps)   J-Carrier Voice Channels   DS Data Rate (Mbps)   DS Voice Channels
1       J1: 1.544                    24                         DS1: 1.544            24
2       J2: 6.312                    96                         DS2: 6.312            96
3       J3: 32.064                   480                        DS3: 44.736           672
4       J4: 97.728                   1440                       DS4: 274.176          4032
5       J5: 397.200                  5760                       DS5: 400.352          5760

As the table illustrates, the primary rate (Level 1) is identical in both hierarchies, with J1 and
DS1 operating at 1.544 Mbps and carrying 24 voice channels. The second level also shows
convergence, with J2 and DS2 both at approximately 6.3 Mbps and accommodating 96 voice
channels. However, a significant difference emerges at the third level, where J3 operates at
32.064 Mbps with 480 channels, while DS3 has a higher data rate of 44.736 Mbps and
supports 672 channels. The fourth and fifth levels also show variations in data rates and
channel capacities. Notably, the North American system includes an intermediate level, DS1C
(3.152 Mbps, 48 channels), which is not present in the J-carrier hierarchy.
Historically, both hierarchies evolved within the PDH framework, with the J-carrier system
likely influenced by the earlier development of the T-carrier system in North America by
AT&T's Bell Labs.

Comparison with the European E-Carrier Hierarchy


Comparing the Japanese J-carrier hierarchy with the European E-carrier hierarchy reveals
more fundamental differences, particularly at the primary rate:
Level   J-Carrier Data Rate (Mbps)   J-Carrier Voice Channels   E-Carrier Data Rate (Mbps)   E-Carrier Voice Channels
1       J1: 1.544                    24                         E1: 2.048                    32
2       J2: 6.312                    96                         E2: 8.448                    128
3       J3: 32.064                   480                        E3: 34.368                   512
4       J4: 97.728                   1440                       E4: 139.264                  2048
5       J5: 397.200                  5760                       E5: 565.148                  8192

The primary rate for the E-carrier system (E1) is 2.048 Mbps, which is higher than J1's 1.544
Mbps, and it accommodates 32 voice channels compared to J1's 24. This difference at the
foundational level propagates through the higher levels of the hierarchy, with the E-carrier


system generally exhibiting higher data rates and channel capacities at each corresponding
level. The E-carrier system was standardized by CEPT as a refinement and improvement of
the earlier T-carrier technology.

Frame Synchronization

With TDM systems, it is important not only that a frame be identified, but also that
individual timeslots within the frame be identified. There are several methods used to
establish frame synchronization, including added-digit, robbed-digit, added-channel,
statistical and unique-code framing. A considerable amount of overhead is added to the
transmission to achieve frame synchronization.

1. Added-Digit Framing: T1 carriers using D1, D2 or D3 channel banks use added-digit
framing. A special framing digit (framing pulse) is added to each frame. The maximum
average synchronization time is given by: synchronization time = 2NT = 2N²tb,
where N is the number of bits per frame, T is the frame period (T = Ntb), and tb is the bit time.
2. Robbed-Digit Framing: Added-digit framing is inefficient when a short frame is used,
as in the case of single-channel PCM systems. As an alternative, the least significant bit
of every nth frame is replaced with a framing bit. This process is called robbed-digit
framing; it does not interrupt transmission, but instead periodically replaces
information bits with forced data errors to maintain frame synchronization.
3. Added-Channel Framing: This is essentially the same as added-digit framing except that
digits are added in groups or words instead of as individual bits. The average number
of bits to acquire frame synchronization using added-channel framing is N²/[2(2^K − 1)],
where N is the number of bits per frame and K is the number of bits in the synchronizing
word. (A numerical evaluation of the expressions in items 1 and 3 is sketched after this list.)
4. Statistical Framing: Here no robbing or adding of digits is done. Since a signal that has a
centrally peaked amplitude distribution generates a high probability of logic 1 in the
second digit, the second digit of a given channel can be used as the framing bit.
5. Unique-Line Code Framing: Some property of the framing bit is made different from the data
bits. The framing bit is either made higher or lower in amplitude or given a different
time duration. The advantage is that synchronization is immediate and automatic.
The disadvantage is the additional processing required to generate and
recognize the unique bit.
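The following sketch evaluates the two synchronization-time expressions above. The frame length, synchronizing-word length and bit rate below are illustrative example values (a DS1-sized frame is assumed), not figures from the text:

```python
# Frame-synchronization estimates for added-digit and added-channel framing.
# N, K and the bit rate are assumed example values for illustration.
N = 193            # bits per frame (DS1-sized frame, as an example)
K = 8              # bits in the synchronizing word (added-channel framing)
bit_rate = 1.544e6 # bps
t_b = 1 / bit_rate # bit time

# Added-digit framing: maximum average synchronization time = 2*N*T = 2*N^2*t_b.
t_sync_added_digit = 2 * N**2 * t_b

# Added-channel framing: average number of bits to acquire synchronization is
# N^2 / (2 * (2^K - 1)); dividing by the bit rate expresses it as a time.
bits_added_channel = N**2 / (2 * (2**K - 1))
t_sync_added_channel = bits_added_channel * t_b

print(f"Added-digit   : {t_sync_added_digit*1e3:.2f} ms")
print(f"Added-channel : {bits_added_channel:.1f} bits "
      f"(~{t_sync_added_channel*1e6:.1f} us)")
```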

PDH Disadvantages & Key Points

• Inability to identify individual channels in a higher-order bit stream.


• Accessing lower tributary requires the whole system to be de-multiplexed.
• Insufficient capacity for network management
• Most PDH network management is proprietary
• There's no standardized definition of PDH bit rates greater than 140 Mbit/s
• There are different hierarchies in use around the world


• 2 Mbit/s service signals are multiplexed to 140 Mbit/s for transmission over optical fiber
or radio
• In PDH, a different frame is used for transmission and in the data layer; hence multiplexing
and demultiplexing are very complex.
• The maximum capacity for PDH is 566 Mbps, which is limited in bandwidth.
• Tolerance is allowed in bit rates.
• PDH allows only Point-to-Point configuration.

Every manufacturer has its own standards; PDH also has different multiplexing hierarchies
making it difficult to integrate interconnecting networks together.

2. Synchronous Digital Hierarchy (SDH):

SDH is a digital hierarchy which is flexible, of high quality and offers a high level of protection,
with the capability of very high rates. Its transmission rates arose from an international
agreement adopted by the ITU. The CCITT produced the international standards, including
the initial specifications for SDH.

CCITT Standards:
a) An international equivalent to Bell system level 1, with a bit rate of 1.544 Mbps and 24
channels, but not identical to Bell's T1 carrier.
b) The 30 voice-channel level-1 CEPT standard at 2.048 Mbps.

In its first standard (1.544 Mbps, 24 channels), it differs from the Bell system T1 carrier
in the way framing and signaling are arranged for the block of 24 channels.
• 193 bits/frame with 8 bits per time slot for speech, and the alignment
bit (framing bit) is the first bit, not the 193rd.

• Channels are grouped in multiframes, each multiframe consisting of 12 frames,
and there are two versions of signaling in these multiframes.

Version 1:
A common signaling channel is associated with the block of 24 channels. This is
arranged by having the alignment bit of odd frames take the form 101010… in successive
frames (i.e., in frame 1 the bit is 1, in frame 3 the bit is 0, in frame 5 the bit is 1, and so on).
In the even frames of the multiframe, the bit is used for signaling as a common signaling
channel at a bit rate of 4000 bps.
Version 2:
Signaling is associated with each channel, and here there are two signaling bit streams
associated with each channel.
• Bit 8 of each channel in frame 6 is reserved for signaling channel A.
• Bit 8 of each channel in frame 12 is reserved for signaling channel B.
The figure below shows version 2 of signaling in the multiframe CCITT standard of 24 channels
at 1.544 Mbps.


In the first CCITT scheme (1.544 Mbps), each channel has a word length of 8 bits at
64,000 bps per channel, whereas in the second scheme (12-frame multiframe) the word is
8 bits but every 6th word in a channel has only 7 usable bits. For the CCITT 2.048 Mbps,
30 voice-channel standard, signaling is carried by assigning one time slot per frame to
signaling. With a multiframe of 16 frames and 256 bits per frame, this results in 500 bps of
signaling for each channel; the overall signaling channel (TS16) is sub-multiplexed to give
4 × 500 bps per channel, the 4 × 500 bps coming from assigning 4 bits of each signaling
word to a channel.

SDH Specifications:
§ It can be interfaced with the PDH network via interfaces operating at 1.5 Mb/s, 2 Mb/s,
6 Mb/s, 34 Mb/s, 45 Mb/s, and 140 Mb/s.
§ ATM (Asynchronous Transfer Mode).
§ It is based on byte interleaving, i.e. all bits in a byte belong to the same (Ts) channel.
§ Channels are added or dropped from each network at all rates down to 2 Mb/s as
shown in the figure below.
§ To drop 2 Mb/s no need to drop all other levels but it can be dropped directly from
the STM-1.
§ Reconfiguration can be made from a remote network management with no manual
changes.

The main advantages of SDH and synchronous networks are:

• An international hierarchy shared worldwide.
• A reduced quantity of equipment.
• Easy network management.
• High reliability.
• Flexibility and upgradability.
• Interfaces compatible with the PDH network, as shown below.
• Ability to transport ATM cells.


Figure: Advantages of SDH

Figure: Synchronous Digital Hierarchy

The hierarchy levels are designated by the abbreviation STM-N (Synchronous
Transport Module, level N), thus:

Level-N    Bit rate

STM-1      155.52 Mbps
STM-4      622.08 Mbps
STM-16     2488.32 Mbps
STM-64     9953.28 Mbps
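Each STM-N rate is an exact multiple of the STM-1 rate, as a quick check shows:

```python
# STM-N line rates: each level is N times the STM-1 rate of 155.52 Mbps.
stm1_rate = 155.52  # Mbps

for n in (1, 4, 16, 64):
    print(f"STM-{n:<2d}: {n * stm1_rate:8.2f} Mbps")
# STM-1: 155.52, STM-4: 622.08, STM-16: 2488.32, STM-64: 9953.28 Mbps
```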


SDH Frame Format

The general format for STM-N is shown below.

Figure: STM-N frame format

The frame can be logically viewed as a matrix of 9 rows of 270×N octets, with
transmission being one row at a time, from left to right and top to bottom.

The first 9×N columns of the frame are devoted to the Section Overhead (SOH). N is the level
of the STM (N = 1, 4, 16, 64).

The information payload, also known as the Virtual Container level 4 (VC-4), is used to
transport low-speed tributary signals. It contains the low-rate signals and the Path Overhead
(POH). Location: rows #1–#9, columns #10–#270.

The frame format of STM-1 : 155.52 Mbps is shown in the figure below.


Figure: STM-1 frame chopped up into 9 segments, stacked on top of each other as shown

Thus, the number of bits in each row = 270 × 8 = 2160 bits,

and the total number of bits in the 9 rows of the frame = 270 × 8 × 9 = 19,440 bits.

With 8000 frames per second,

the total number of bits/sec = 270 × 8 × 9 × 8000 = 155.52 Mbps.

The advantage of the SDH multiplexer is that it can insert or drop 2 Mbps, 1.5 Mbps, 6 Mbps,
34 Mbps, 45 Mbps, or 140 Mbps tributaries directly on the STM-1, as shown.

To find the payload rate, we compute the bit rate of the data carried in the 261 payload
columns (270 columns minus the 9 section-overhead columns):

• Number of payload bits per frame = 261 × 8 × 9 = 18,792 bits in the 125 μs frame, giving
a payload rate of 18,792 × 8000 = 150.336 Mbps.
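Putting the frame dimensions together (a minimal check; the 9 overhead columns are per the STM-1 structure described above):

```python
# STM-1 frame: 9 rows x 270 columns of octets, repeated 8000 times a second.
rows, columns, frame_rate = 9, 270, 8000
soh_columns = 9                                   # section overhead columns

gross_rate = rows * columns * 8 * frame_rate                     # 155.52 Mbps
payload_rate = rows * (columns - soh_columns) * 8 * frame_rate   # 150.336 Mbps

print(f"STM-1 gross rate   : {gross_rate/1e6:.2f} Mbps")
print(f"STM-1 payload rate : {payload_rate/1e6:.3f} Mbps")
```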

Interface and Frame Structure of SDH

Figure (a) below illustrates the relationship between various multiplexing elements of SDH
and shows generic multiplexing structures. Figure (b) illustrates one multiplexing example
for SDH, where there is direct multiplexing from container-1 using AU-3.


Figure (a): Basic generalized SDH multiplexing structure. (From Figure 1/G.709, ITU-T Rec. G.709.)

Figure (b): SDH multiplexing method directly from container-1 using AU-3. (From, Figure 2-3/ G.708, ITU-T Rec.
G.708.)


Terms of SDH:

Synchronous Transport Module (STM): An STM is the information structure used to
support section-layer connections in the SDH. It is analogous to the STS in the SONET regime.
An STM consists of information payload and section overhead (SOH) information fields
organized in a block frame structure that repeats every 125 μs. The information is suitably
conditioned for serial transmission on selected media at a rate that is synchronized to the
network. A basic STM (STM-1) is defined at 155,520 kbps. Higher-capacity STMs are formed
at rates equivalent to N times this basic rate. STM capacities for N = 4 and N = 16 are defined,
and higher values are under consideration by ITU-T. An STM comprises a single
administrative unit group (AUG) together with the SOH. An STM-N contains N AUGs
together with SOH.

Container, C-n (n = 1 to n = 4): This element is a defined unit of payload capacity, which is
dimensioned to carry any of the bit rates currently defined in Table 19.2, and may also
provide capacity for transport of broadband signals that are not yet defined by the CCITT
(ITU-T). A container is the information structure that forms the network-synchronous
information payload for a virtual container. For each of the defined virtual containers there
is a corresponding container. Adaptation functions have been defined for many common
network rates into a limited number of standard containers. These include standard
E-1/DS-1 rates defined in ITU-T Rec. G.702.

Index of the container (n): n = 11, 12, 2, 3, 4. The rate of the container depends on the
signal which is being transported.

The mapping of a signal in the corresponding container is specified in ITU-T Rec. G.707.

Virtual Container-n (VC-n): A virtual container is the information structure used to support
path layer connection in the SDH. It consists of information payload and POH information
fields organized in a block frame that repeats every 125 μsec or 500 μsec. Alignment
information to identify VC-n frame start is provided by the server network layer. Two types
of virtual container have been identified:

1. Lower-Order Virtual Container-n, VC-n (n = 1, 2). This element comprises a single C-n (n =
1, 2), plus the basic virtual container POH appropriate to that level.

2. Higher-Order Virtual Container-n, to VC-n (n = 3, 4). This element comprises a single C-n (n
= 3, 4), an assembly of tributary unit groups (TUG-2s), or an assembly of TU-3s, together with
virtual container POH appropriate to that level.

So, a VC-n is made up of a C-n and POH (Path Overhead). The POH is additional
transport capacity designed for the container (for management, supervision and
monitoring); it carries details on, e.g., the payload contents.

VC-n = C-n + POH, n= 11, 12, 2, 3, 4.


Administrative Unit-n (AU-n): An administrative unit is the information structure that
provides adaptation between the higher-order path layer and the multiplex section. It
consists of an information payload (the higher-order virtual container) and an
administrative unit pointer, which indicates the offset of the payload frame start relative to
the multiplex section frame start. Two administrative units are defined. The AU-4 consists of
a VC-4 plus an administrative unit pointer, which indicates the phase alignment of the VC-4
with respect to the STM-N frame. The AU-3 consists of a VC-3 plus an administrative unit
pointer, which indicates the phase alignment of the VC-3 with respect to the STM-N frame.
In each case the administrative unit pointer location is fixed with respect to the STM-N frame
(Ref. 6). One or more administrative units occupying fixed, defined positions in an STM
payload is termed an Administrative Unit Group (AUG). An AUG consists of a homogeneous
assembly of AU-3s or an AU-4.


Two types of Virtual Container:

• Low Order VC-n (n = 11, 12, 2, 3)
• High Order VC-n (n = 3, 4)

Tributary Unit-n (TU-n): A tributary unit is an information structure that provides
adaptation between the lower-order path layer and the higher-order path layer. It consists
of an information payload (the lower-order virtual container) and a tributary unit pointer
(alignment), which indicates the offset of the payload frame start relative to the higher-order
virtual container frame start. The TU-n (n = 1, 2, 3) consists of a VC-n together with a
tributary unit pointer. One or more tributary units occupying fixed, defined positions in a
higher-order VC-n payload is termed a tributary unit group (TUG). TUGs are defined in such
a way that mixed-capacity payloads made up of different-size tributary units can be
constructed to increase flexibility of the transport network. A TUG-2 consists of a
homogeneous assembly of identical TU-1s or a TU-2. A TUG-3 consists of a homogeneous
assembly of TUG-2s or a TU-3.

TU-n= VC-n + PTR, n= 11, 12, 2, 3, (PTR: indicates the beginning of VC-12)


Figure: SDH Hierarchy (ITU-G)


From the figure, the input rates to each box are equivalent, e.g. 1 × 34 Mbps = 7 × 6 Mbps =
21 × 2 Mbps = 28 × 1.5 Mbps, and any three of these are equivalent to 140 Mbps, which forms
the STM-1. Thus one ADM (Add-Drop Multiplexer) can be used to handle 155 Mbps with the
possibility of adding or dropping any of the bit streams without adding any other lower-level
multiplexers, as shown in the figure below.

Whereas, for PDH, dropping a 2 Mbps stream requires additional lower-level multiplexers, as
shown below.

Thus, the figures below show the line transmission systems for SDH and PDH.


3. SONET

SONET (Synchronous Optical Network) is a transmission interface standardized by ANSI.


It is compatible with SDH, which is adopted by ITU-T. SONET is intended to provide a specification for taking advantage of the high-speed digital transmission capability of optical fiber.

Signal Hierarchy
The SONET specification defines a hierarchy of standardized digital data rates, as shown in the table below. The lowest level, referred to as STS-1 (Synchronous Transport Signal level 1) or OC-1 (Optical Carrier level 1), is 51.84 Mb/s. This rate can carry a single T3 or DS-3 signal (i.e. 44.736 Mb/s), a 34 Mb/s (E3) signal, or a group of lower-rate signals such as T1, T2, E1 or 4×E1. Multiple STS-1 signals can be combined to form an STS-N (or OC-N) signal, created by byte-interleaving N STS-1 signals that are mutually synchronized. From the following table we see that STM-1 corresponds to STS-3.

Table: SONET / SDH signal Hierarchy

SONET designation   ITU-T designation   Data rate (Mbps)   Payload rate (Mbps)
STS-1 / OC-1        -                   51.84              50.112
STS-3 / OC-3        STM-1               155.52             150.336
STS-9 / OC-9        STM-3               466.56             451.008
STS-12 / OC-12      STM-4               622.08             601.344
STS-18 / OC-18      STM-6               933.12             902.016
STS-24 / OC-24      STM-8               1244.16            1202.688
STS-36 / OC-36      STM-12              1866.24            1804.032
STS-48 / OC-48      STM-16              2488.32            2405.376
NOTE: OC: Optical Carrier and STS: Synchronous Transport Signal
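As a quick check of the table (an illustrative sketch, not part of the original notes), the line rate and payload rate of STS-N scale linearly with N from the STS-1 values of 51.84 and 50.112 Mb/s:

```python
# Sketch: reproduce the SONET/SDH rate table from the STS-1 base rates.
STS1_LINE_RATE_MBPS = 51.84    # 810 bytes x 8 bits x 8000 frames/s
STS1_PAYLOAD_MBPS = 50.112     # 783 payload bytes x 8 bits x 8000 frames/s

def sts_rates(n):
    """Return (line rate, payload rate) in Mbps for STS-n / OC-n."""
    return n * STS1_LINE_RATE_MBPS, n * STS1_PAYLOAD_MBPS

for n in (1, 3, 9, 12, 18, 24, 36, 48):
    line, payload = sts_rates(n)
    stm = f"STM-{n // 3}" if n >= 3 else "-"
    print(f"STS-{n:<3} {stm:<7} {line:10.2f} {payload:10.3f}")
```

For example, sts_rates(48) gives 2488.32 and 2405.376 Mbps, matching the STS-48 / STM-16 row of the table.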

Synchronous Optical Network SONET/SDH


• The word SONET stands for Synchronous Optical Network; the system is called SONET in the USA, Canada, and Japan, and Synchronous Digital Hierarchy (SDH) elsewhere.
• The synchronous optical network SONET is a multiplexing system similar to
conventional time-division multiplexing except SONET was developed to be used
with optical fibers. SONET is the name for a standard family of interfaces for high
speed optical links. These start at 51.84 Mbps, which is referred to as Synchronous Transport Signal level 1 (STS-1). It comprises 28 DS-1 signals; each DS-1 signal is equivalent to a single 24-channel T1 digital carrier system. With STS-1, it is possible to extract or add individual DS-1 signals without completely disassembling the entire frame. OC-48 is a higher level of SONET multiplexing; it has a transmission bit rate of about 2.488 Gbps.
• For example, in Libya SDH is used.
• In such carrier networks, down time is absolutely not acceptable.


Broad Features of SONET/SDH

Some of the broad features of SONET/SDH:

• SONET was first standardized by ANSI/ECSA; SDH by ITU-T.


• SONET is a pure TDM system.
• SONET encompasses both optical and electrical specifications. Usually at the user end things start at the electrical level and the rates are low, but as you move towards the backbone of the network the required rates become higher and higher, and at the real backbone only a very high-speed network will do; such high-speed networks are only possible through optical communication and optical networking. The SONET specification spans both the electrical side and the optical side, and that is a very good feature of SONET.
• SONET uses octet multiplexing (an octet is the same thing as a byte, i.e. 8 bits); the tributaries are multiplexed byte by byte.
• SONET provides support for Operation, Administration and Maintenance (OAM).
• Better transport performance. When signals from different sources are not synchronous but only almost synchronous, handling that "almost" part means incurring extra overhead and extra complexity. That was the difficulty with PDH; in SDH or SONET this is eliminated, and we get better transport performance.
• The ability to identify sub-streams. This is another advantage of SONET over PDH. A particular user may be using a relatively small bandwidth, but as more and more of these data or communication streams converge towards the backbone of the network, the pipes get fatter (bigger in size) and the links faster. Between two points in the backbone there may be a very fast aggregate flow, which after some more hops diverges again. SONET allows different streams to get together, travel for some time, and then diverge again, so individual sub-streams can be identified and extracted; this was much more difficult in the PDH system.
• International connectivity.
• Enhanced control and administrative functions, which are also very attractive from the point of view of service providers.

Some Multiplexing Standards

These are some of the multiplexing standards:


• DS0 is a 64 kbps channel, and 24 of them constitute a T1 line. So the T1 rate is approximately 1.5 Mbps (1.544 Mbps including framing);


• 4 T1 give T2, and
• 7 T2 give T3 (about 45 Mbps), and so on.
• Similarly, 30 DS0 channels – this is the European system – give an E1 line.
• So E1 is about 2 Mbps (2.048 Mbps);
• 4 E1 give E2;
• E3 is a 34 Mbps line.
• And then we jump right up to OC-3 (Optical Carrier 3). Its 155 Mbps is 3 times the basic STS-1 rate mentioned earlier; a small rate check is sketched below.
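A minimal sketch (illustrative only, not from the lecture) deriving the T1/E1 rates quoted above from the 64 kbps DS0 channel; the 8 kbps T1 framing overhead and the 32-slot E1 frame are the standard values:

```python
# Sketch: derive the PDH rates quoted above from the 64 kbps DS0 channel.
DS0 = 64_000  # bps

t1 = 24 * DS0 + 8_000   # 24 DS0 channels + 8 kbps framing = 1.544 Mbps
e1 = 32 * DS0           # 32 time slots (30 voice + 2 overhead) = 2.048 Mbps

print(f"T1   = {t1 / 1e6:.3f} Mbps")     # 1.544 Mbps
print(f"E1   = {e1 / 1e6:.3f} Mbps")     # 2.048 Mbps
print(f"OC-3 = {3 * 51.84:.2f} Mbps")    # 3 x STS-1 = 155.52 Mbps
```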

Compatibility
It is worth noting that interworking between SDH and SONET systems is possible at matched bit rates, for example STM-4 and OC-12. A slight modification to the overhead is required, since the two are structured a little differently, so there is always a little something to adjust; but that is not very serious, and they do interoperate.

SONET Terms
We will talk about some SONET terms now; for example,
• Envelope: The envelope carries the payload. Whatever the layer-2 protocol encapsulates and hands over to SONET at the physical level is the payload; the payload plus some end-system (path) overhead together form what is known as the envelope; this is a SONET term.
• Overhead: The other bits and bytes used for management, i.e. the OAM&P portion, go as the overhead of SONET.
• Concatenation: An unchannelized envelope can carry super-rate data payloads, for example ATM (e.g. OC-3c); the method of concatenation is different from that of the T-carrier hierarchy.

Some Nonstandard Functional Names


Then there are some nonstandard functional names in SONET, like
• TM: Terminal Multiplexer, also known as Line Terminating Equipment or LTE. These
are ends of point-to-point links.
• ADM: Add/Drop Multiplexer; we have mentioned this.
• DCC: Digital Cross Connect (wideband and broadband);
• MN: Matched Nodes and,
• D + R: means Drop and Repeat.
Anyway, these are just some terms. Now let us come to some important concepts in SONET
namely: section, line, and path.

Section, Line and Path


What are a section, a line and a path? The following figure shows the main parts of SONET.


• From repeater to repeater, we call it a section. From multiplexer to repeater is also a section. So multiplexer to repeater and repeater to repeater are both called sections.
• From multiplexer to multiplexer, we call it a line. At the repeater nothing happens except that the signal is cleaned up: the signal may be boosted, or other cleaning and synchronizing operations may be performed, but the same set of signals that enters the repeater leaves it. At the multiplexer, some signals may go off in one direction and some in another, so there may be convergence or divergence at the multiplexer, depending on which way the signal is flowing.
• And from end-user point to end-user point, we call it a path.
• The portion from a multiplexer to a repeater is a section; it could also be from a repeater to a repeater.
• The portion from a multiplexer to another multiplexer is a line.
• The portion from the source to the destination multiplexer is a path. Below path, line and section is the photonic sub-layer, i.e. whatever happens in the optical domain.
• Sections are bounded by repeaters or by the multiplexers that terminate the line; lines may carry several tributary signals and are bounded by multiplexers; a path goes end to end between terminating multiplexers.

STS-1 Frame:
• Each STS frame lasts 125 microseconds. As mentioned, this 125 microsecond time period is sort of sacred in this whole domain, because 125 microseconds is the sampling period of a DS0 channel (8000 samples per second). Since this is time-division multiplexing, if one byte travels in each 125 microsecond frame, that is enough for one DS0 channel. In SONET we have very sophisticated and very fast equipment, so within these 125 microseconds not just 1 byte but many other bytes can go; that means many channels can travel together in each 125 microsecond frame. So each STS frame lasts 125 microseconds, and how many bytes go in it depends on whether it is STS-1, STS-3, STS-N, etc. A frame every 125 microseconds corresponds to 8000 frames/s.
• An STS-1 frame has 6480 bits = 810 octets (bytes). That means in each 125 microsecond frame we are putting in 810 bytes. Theoretically it could therefore carry 810 DS0 or voice signals; actually it is fewer than that, because a number of these bytes are used for different types of overhead. The 810 octets are understood in terms of a table of 9 rows and 90 columns; so let us look at the figure below. We have a SONET frame or an SDH frame which has 9 rows and 90 columns. Out of these 90 columns, 3 columns are used for overhead and the remaining 87 columns are used for the payload or envelope. The envelope contains the payload as well as a little bit of overhead, which we will come to later on. After every 90 bytes (one row), we come back again to another 3 bytes of overhead. This is how it is to be understood. (A small numerical check is sketched after the figure.)

Figure: SONET Frame
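A small Python sketch (for illustration only) tying together the frame numbers quoted above:

```python
# Sketch: basic STS-1 frame arithmetic.
ROWS, COLS = 9, 90          # STS-1 frame: 9 rows x 90 columns of bytes
FRAME_PERIOD = 125e-6       # seconds, i.e. 8000 frames per second
TOH_COLS = 3                # transport overhead columns

frame_bytes = ROWS * COLS                 # 810 octets per frame
frame_bits = frame_bytes * 8              # 6480 bits per frame
line_rate = frame_bits / FRAME_PERIOD     # 51.84 Mb/s
payload_bytes = ROWS * (COLS - TOH_COLS)  # 783 octets left for the envelope

print(frame_bytes, frame_bits, line_rate / 1e6, payload_bytes)
# 810 6480 51.84 783
```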


STS-1 Frame - Columns
• The first 3 columns contain the transport overhead (TOH).
• TOH = 9 rows by 3 columns (27 bytes), which is subdivided into:
o Section Overhead (SOH): 9 bytes (3 rows of 3 columns);
o Line Overhead (LOH): 18 bytes (6 rows of 3 columns).
So we have a section overhead and a line overhead – remember we have these three concepts: section, line, and path. We have not yet talked about the path overhead; there is some path overhead and it goes into the envelope. As far as line and section are concerned, these are their overhead bytes. Just to clarify why we require the overhead bytes: the multiplexers or the repeaters have to have some communication between them in the control plane so as to provide the OAM capability. For that, some information needs to be sent or exchanged between the two points. Wherever there is a section, the section overhead carries those things which are relevant to the section, such as signal quality; the line overhead may contain something else; and


similarly path overhead would contain something else. But these are required for
these OAM capabilities that we have in SONET.

Overhead Streams
Let us look at these overheads separately;
• Section Overhead (SOH): defines and identifies frames, monitors section errors, and carries communication between section terminating equipment (e.g. two repeaters, or a repeater and a multiplexer).
• Line Overhead (LOH): locates the first octet of the SPE, monitors line errors, and carries communication between line terminating equipment. We will come back to this locating of the first octet of the synchronous payload envelope (SPE); it is a very interesting feature and we will talk about it separately. Apart from that, the line overhead contains the pointer which points to the first byte of the SPE.
• Path Overhead (POH): as mentioned earlier, the path overhead is really inside the envelope; remember path means end to end, i.e. from the terminating multiplexer at one end to the terminating multiplexer at the other. The POH verifies the connection path (whether the connection has been established or not), monitors path errors and receiver status, and carries communication between path terminating equipment.

STS-1 Frame - SPE


• We talked about the synchronous payload envelope or SPE: the other 87 columns hold the SPE (Synchronous Payload Envelope). So the SPE has 9 rows by 87 columns, which are divided into path overhead and payload; the path overhead travels along with the envelope, i.e. inside the SPE, whereas the other overheads have separate bytes or separate columns associated with them, as shown.
• The SPE does not necessarily start in column 4, and the SPE does not necessarily stay within one frame; these are two very important points in SONET. Although you have these 87 columns, the data may actually start at some arbitrary point inside those 87 columns. Why would you leave something and only start from the middle? If everything in the world – all activities and all equipment – were absolutely synchronous, you could have started from the beginning. But that is not the case: there are small mismatches of rate, and this is where we absorb that kind of variation, which gives SONET a great flexibility that was not there earlier. The other interesting point is that the SPE does not necessarily stay within one frame, which means that the SPE may start in one frame and end in another.
• We will just look at a diagram of this: in the figure, the SPE in the middle really starts from somewhere within the frame, after leaving some of the rows, together with its path overhead, and two frames are shown, so the SPE is really spanning both frames.


Figure: SONET Frame


Floating Payload: SONET LOH Pointers
• The SPE is not frame aligned: it may overlap multiple frames.
• This avoids buffer-management complexity and artificial delays. Whenever there is something to send, you can just send it in the envelope; the pointer is simply set to the first byte of the SPE.
• It allows direct access to byte-synchronous lower-level signals, for example DS1, with just one frame recovery procedure. (A simplified pointer sketch follows.)
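As a simplified illustration (my own sketch, not the standard's exact byte-numbering rules), the H1/H2 pointer can be viewed as an offset into the 9 × 87 payload area; converting the offset to a row and column locates the first byte of the SPE:

```python
# Simplified sketch: interpret an SPE pointer offset as a (row, column)
# position inside the 9-row x 87-column payload area of an STS-1 frame.
# The real H1/H2 rules (offsets counted after H3, negative/positive byte
# stuffing, etc.) are more involved; this only shows the floating idea.
PAYLOAD_ROWS, PAYLOAD_COLS = 9, 87

def spe_start(offset):
    """Map a pointer offset (0..782) to a 0-based (row, column)."""
    if not 0 <= offset < PAYLOAD_ROWS * PAYLOAD_COLS:
        raise ValueError("offset out of range")
    return divmod(offset, PAYLOAD_COLS)

print(spe_start(0))    # (0, 0): SPE starts at the top of the payload area
print(spe_start(522))  # (6, 0): SPE starts lower down and spills into the next frame
```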

Headers: Path Overhead (POH)


Now, of course, where is the path overhead?
• There are two fields, H1 and H2, in the LOH (Line Overhead), which together point to the beginning of the path overhead, i.e. the first byte of the SPE.
• The POH beginning floats within the frame;
• its 9 bytes (one column) may span frames along with the SPE;
• it is originated and terminated by all path devices; and
• this gives you end-to-end support.
These are the features of the path overhead. The point is that the path is end to end, close to the end users; just as the end-user data may start somewhere arbitrary in between, the path overhead also goes along with the SPE and starts where the SPE starts, and in the LOH we keep a pointer to this path overhead.

Add/Drop Multiplexers
Among the equipment that we use in SONET, one of the most important pieces is the add/drop multiplexer (ADM). ADMs are important because at certain points in the network there are sources which want to send into the network and upload onto the passing SONET stream, while at the same time that point may be the destination for other signals which originated elsewhere and have to be dropped there. So some signals have to be dropped and some signals have to be added, while the main stream itself keeps flowing at a tremendous rate; this multiplexer can handle exactly that, and that is why it is called an add/drop multiplexer.
• SONET/SDH is a synchronous system with a master clock accuracy of about 1 part in 10⁹, which is highly accurate. A master reference clock is provided somewhere in the network, and there is a protocol for distributing and maintaining this clock over the entire network.
• Frames are sent byte by byte.
• ADMs can add and drop smaller tributaries into and out of the main SONET/SDH stream, and it was mentioned earlier how that is done: within a frame you can send many bytes, and you can take out some of the bytes and add some others. That is how smaller tributaries are dropped and added.

Digital Cross Connects (DCS)

• The DCS, which is an optical-layer equipment, is also very important.
• It cross-connects thousands of streams under software control, so it replaces the manual patch panel; that is a good thing about the digital cross-connect, and the software control comes from the control plane of the switches. You can connect streams from one fiber to another; it handles performance monitoring and PDH/SONET streams, and it also provides ADM (add/drop multiplexing) functions.

Grooming
Grooming means that we group the traffic in some format. You may want to keep a group arranged in one particular way; it could be that there is one group of streams to which you want to give higher priority or a higher quality of service (QoS), so you have to group them together, and similarly there may be multiple groups;
• so grooming enables grouping traffic with a similar destination, QoS, etc.;
• it also enables multiplexing or extracting streams;
• narrowband, wideband, broadband and optical cross-connects may be used for grooming.
If you look at the figure, you have the narrowband part, the SONET layer and the optical layer. In the narrowband part we have DS0 grooming, then DS1 grooming in the wideband part, and then broadband DS3 grooming – the rates are going up, starting from 64 kbps. By the time you are at STS-48 you are in the optical domain; STS-48 is STM-16, so that is a high rate, and at that rate you are most probably well into the optical domain. Finally, you can go to the all-optical domain, i.e. wavelength, waveband and fiber grooming – there are different levels of grooming, depending on what you want to do.


Figure: Grooming

Virtual Tributaries (Containers)


We have already talked a little bit about it.
• The opposite of STS-N multiplexing: sub-multiplexing. STS-N multiplexing is different streams coming together to form one very fat, very fast stream; virtual tributaries go the other way and let us differentiate the sub-streams within it, which is why this is called sub-multiplexing.
• STS-1 is divided into 7 virtual tributary groups (12 columns each), which can be subdivided further. SDH uses the term virtual containers (VCs); in SONET lingo these are called VTs or virtual tributaries. With 7 virtual tributary groups of 12 columns each we get 84 columns, and these 84 columns are out of the 87 available in STS-1.
• VT groups are byte interleaved to create a basic SONET SPE, and they may again be extracted from one another.
• VT1.5 is the most popular: a quickly accessed T1 line within the STS-1 frame. The idea is that you have a T1 line, approximately 1.5 Mbps, coming out of your small business; that is your bandwidth requirement, and you want to connect it to a distant location somewhere without your traffic getting mixed up with others. At the same time, as a small business you cannot have your own infrastructure connecting to another location that is far apart.


So you go with the public infrastructure, the public switched telephone network, or whoever maintains this communication equipment – usually the telecom operators in most places. They have fiber going from one place to another, carrying very high-speed links. What you want is for your T1 line to join them, get transported over the distance, and then feed into another T1 line at the destination. You want your T1 line to keep a separate existence – just as in a railway compartment we have different passengers: each passenger keeps an individual identity, but together they are packed into one compartment and travel together. Similarly, your T1 line rides on this very fast stream and travels to the destination.
• VT1.5: the most popular, a quickly accessed T1 line within the STS-1 frame.
• VT payload (a.k.a. VT SPE): how do you tell the tributaries apart, and how do you separate them within the SPE? The point is,
• you require one more level of pointer to access it.
o You can access a T1 with just a 2-pointer operation: first from the LOH you go to the SPE, and then you go to the individual tributary or container using just one more level of pointer. This flexibility was not there earlier.
• It was very complex to do the same function in DS-3. For example, accessing a DS-0 within a DS-3 requires full demultiplexing, stacked multiplexing, etc. That full demultiplexing is not required in SONET: the other streams keep travelling as they are, and you just extract the bytes belonging to the stream, container or tributary you are interested in. You do not demultiplex the whole thing, and that gives the great advantage of add/drop multiplexing.

Virtual Tributaries: Pointers


The following figure shows that you can have various types of lines, all feeding into the same infrastructure: DS1 at 1.544 Mbps, E1 at 2.048 Mbps, DS1C, DS2, DS3, ATM at 48.384 Mbps, E4 at 139.264 Mbps, and ATM at about 150 Mbps. They all travel together, carried in different containers.
Different tributary sizes (VT1.5, VT2, VT3, VT6) form a VT group and ride on a higher-speed stream. They are identified through pointers: in the transport overhead we put the STS payload pointer, then there is a VT pointer (Virtual Tributary pointer), and that much is the VT SPE within the overall STS-1 SPE, which is the payload. Even now SONET is the most widely used technology in wide-area networking. Of course, as technology grows, maybe we will move away from SONET; people are already talking about it, because one disadvantage of SONET is that its equipment tends to be expensive, and what we think is cheap today may sound very expensive tomorrow – that is how technology grows. So people are talking about direct transport over the optical layer; maybe we will touch on those aspects later. But all that is still at a somewhat experimental stage, and in the field SDH/SONET equipment is almost everywhere; all types of telephone


companies are connected through that and major service providers use this as a means of
transport.

Figure: SONET Hierarchy


Types of TDM:

• The two basic forms of TDM are: Synchronous TDM (STDM) and Asynchronous (or)
Statistical TDM (STATDM)
• In synchronous TDM, a given time slot is assigned to one user alone and cannot be used by any other user or device. T-1 and ISDN telephone lines are common examples of synchronous time-division multiplexing. Asynchronous TDM (or statistical TDM, STATDM) networks assign time slots only when they are to be used and delete them when they are idle. STATDM is used in high-density, high-traffic applications.

I. Synchronous TDM:

Multiple digital signals can be carried on a single transmission path by interleaving portions of each signal in time. In synchronous TDM, each input connection has an allotment in the output even if it is not sending data.

Interleaving:

The interleaving can be at the bit level or in blocks of bytes or larger quantities.

• TDM can be visualized as two fast-rotating switches, one on the multiplexing side
and the other on the demultiplexing side.
• The switches are synchronized and rotate at the same speed, but in opposite
directions.
• On the multiplexing side, as the switch opens in front of a connection, that
connection has the opportunity to send a unit onto the path. This process is called
interleaving.
• On the demultiplexing side, as the switch opens in front of a connection, that
connection has the opportunity to receive a unit from the path.


For example, a multiplexer with six inputs, each of 9.6 kb/s, produces a multiplexed signal that must be carried on a single line with a capacity of at least 57.6 kb/s (6 × 9.6 kb/s); a small round-robin interleaving sketch is shown below.
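A toy Python sketch (illustrative only) of this round-robin idea: the multiplexer takes one unit from each input per rotation, and the demultiplexer reverses the process:

```python
# Toy sketch of synchronous TDM interleaving: one unit per input per rotation.
def tdm_mux(inputs):
    """inputs: equal-length sequences of units (bits, bytes or characters)."""
    return [unit for frame in zip(*inputs) for unit in frame]

def tdm_demux(stream, n_inputs):
    """Distribute the interleaved units back to n_inputs channels."""
    return [stream[i::n_inputs] for i in range(n_inputs)]

a, b, c = list("AAAA"), list("BBBB"), list("CCCC")
line = tdm_mux([a, b, c])
print("".join(line))                               # ABCABCABCABC
print(["".join(ch) for ch in tdm_demux(line, 3)])  # ['AAAA', 'BBBB', 'CCCC']
```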

• The bit interleaving technique is used with synchronous sources and may also be used with asynchronous sources.
• The character interleaving technique is used with asynchronous sources. Each time slot contains one character of data. Typically, the start and stop bits of each character are eliminated before transmission and reinserted by the receiver, thus improving efficiency.
The figure below shows a number of signals mi(t), i = 1, ..., N, multiplexed onto the same transmission medium. The incoming data from each source are briefly buffered; each buffer is typically one bit or one character in length. The buffers are scanned sequentially to form a composite digital signal stream mc(t). The scan operation is sufficiently rapid that each buffer is emptied before more data can arrive. Thus, the data rate of mc(t) must at least equal the sum of the data rates of the mi(t). mc(t) may be transmitted directly or passed through a modem; in either case, transmission is typically synchronous.

At the receiver, the interleaved data are demultiplexed and routed to the appropriate destination buffer.
• Synchronous TDM is called synchronous because the time slots are preassigned to the sources and fixed. The time slots for each source are transmitted whether or not the source has data to send. Synchronous TDM can handle sources of different data rates: for example, the slowest input device could be assigned one time slot per frame, while faster devices are assigned more than one time slot per frame.


Empty Slots:

Synchronous TDM is not as efficient as it could be. If a source does not have data to
send, the corresponding slot in the output frame is empty. Following Figure shows a case in
which one of the input lines has no data to send and one slot in another input line has
discontinuous data.

Data Rate Management:

One problem with TDM is how to handle a disparity in the input data rates. We
assumed that the data rates of all input lines were the same. However, if data rates are not
the same, three strategies, or a combination of them, can be used. We call these three
strategies multilevel multiplexing, multiple-slot allocation, and pulse stuffing.

a) Multilevel Multiplexing: Multilevel multiplexing is a technique used when


the data rate of an input line is a multiple of others. For example, in Following
Figure, we have two inputs of 20 kbps and three inputs of 40 kbps. The first
two input lines can be multiplexed together to provide a data rate equal to
the last three. A second level of multiplexing can create an output of 160
kbps.

b) Multiple-Slot Allocation: Sometimes it is more efficient to allot more than one slot in a frame to a single input line. For example, we might have an input line whose data rate is a multiple of another input's. In the following figure, the input line with a 50-kbps data rate can be given two slots in the output. We insert a serial-to-parallel converter in the line to make two inputs out of one (a small allocation sketch follows).
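A small sketch (my own illustration, not from the notes) of the multiple-slot idea: each input gets a number of slots per frame proportional to its data rate, with the greatest common divisor of the rates acting as the basic per-slot rate:

```python
# Sketch: allocate slots per frame in proportion to each input's data rate.
from math import gcd
from functools import reduce

def slot_allocation(rates_bps):
    base = reduce(gcd, rates_bps)          # data rate carried by one slot
    return [r // base for r in rates_bps]  # slots per frame for each input

rates = [25_000, 25_000, 50_000]           # e.g. two 25 kbps inputs and one 50 kbps input
print(slot_allocation(rates))              # [1, 1, 2] -> the 50 kbps line gets two slots
```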


c) Pulse Stuffing: Sometimes the bit rates of the sources are not integer multiples of each other, so neither of the above two techniques can be applied. One solution is to make the highest input data rate the dominant data rate and then add dummy bits to the input lines with lower rates, which raises their rates. This technique is called pulse stuffing, bit padding, or bit stuffing. The idea is shown in the following figure: the input with a data rate of 46 kbps is pulse-stuffed to increase the rate to 50 kbps. Now multiplexing can take place.

The extra capacity is used by stuffing extra dummy bits or pulses into each incoming signal until its rate is raised to that of a locally generated clock signal.

The stuffed pulses are inserted at fixed locations in the multiplexer frame format so that they may be identified and removed at the demultiplexer.

• If the frame consists of 2 bits (one bit from each of two 8 kb/s tributary slots), then:

T = 2Tb

1/T0 = 2/T = 2 × 8 = 16 kb/s

T = 2Tb = 125 µs

• If the frame consists of 4 bits, the frame time is T = 4Tb:

Tb = 1/(16×10³) = (125/2) × 10⁻⁶ = 62.5 µs

T = 4 × 62.5 = 250 µs

The stuffed bits of source 2 amount to 8×10³ − 7.2×10³ = 0.8 kb/s.

Thus in 1 second, 800 bits are stuffed.

That is, 1 bit is stuffed every 1/800 s = 0.125 × 10⁻² s = 1250 µs.

From the figure:

rb2 > rb1 or Tb2 < Tb1

For a given duration T we want to have n pulses at the output, and thus we have (n − a) pulses at the input, where a is the number of stuff bits,

i.e. nTb2 = (n − a)Tb1 = T

Tb2 is known, so n = T/Tb2, and then a can be obtained from:

(n − a)Tb1 = T or (n − a) = T/Tb1


Then, a = n − T/Tb1

If T = 1250 µs, which is the repetition period of the stuffing pulse, then,

a = n − 1250/139 = n − 8.99

n = T/Tb2 = 1250/125 = 10

a = 10 − 8.99 ≈ 10 − 9 ≈ 1

Thus we have 1 stuffing pulse every 1250 µs,

or we have 1 stuffing pulse after every 9th pulse of the signal (i.e. 1250/139 ≈ 9).

If T = 250 µs:

n = T/Tb2 = 250/125 = 2

T/Tb1 = 250/139 = 1.8

a = n − T/Tb1 = 2 − 1.8 = 0.2 pulse

Thus, within a duration T = 250 µs there is 0.2 stuff pulse, which means we have 1 stuff pulse every T/0.2 = 5T = 5 × 250 = 1250 µs.
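A short Python sketch (illustrative, reusing the same numbers as above) of the stuffing-rate calculation:

```python
# Sketch: pulse-stuffing arithmetic for a 7.2 kb/s tributary in an 8 kb/s slot.
source_rate = 7_200    # bps (rb1, Tb1 ~ 139 us)
channel_rate = 8_000   # bps (rb2, Tb2 = 125 us)

stuff_rate = channel_rate - source_rate   # 800 stuff bits per second
stuff_period_us = 1e6 / stuff_rate        # one stuff bit every 1250 us

T = 1250e-6                               # observation window, seconds
n = T * channel_rate                      # output pulses in T: 10
a = n - T * source_rate                   # stuff pulses in T: ~1

print(stuff_rate, stuff_period_us, n, round(a, 2))  # 800 1250.0 10.0 1.0
```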

Example:
A synchronous TDM is used to multiplex 3 analog sources and 8 digital sources (11 sources in total) on a single link. The analog signals are sampled and PCM encoded with 4 bits per sample. Design this multiplexer. The sources are:
Source 1: analog, 2 kHz bandwidth
Source 2: analog, 4 kHz bandwidth
Source 3: analog, 2 kHz bandwidth
Sources 4-11: digital, each 7200 bps synchronous.
The sampling frequency of source 1 is 4 kHz.
The sampling frequency of source 2 is 8 kHz.
The sampling frequency of source 3 is 4 kHz.
These sources can be multiplexed using a commutator with a rotation rate of 4000 revolutions per second.


Another approach is to multiplex the 8 digital signals (each pulse-stuffed from 7.2 kb/s to 8 kb/s) to obtain a digital data rate of 8 × 8 kbps = 64 kbps, and then multiplex this with the multiplexed analog signals of 64 kbps.

The buffers can be 1 bit or 2 bits long.

The frame length is 32 bits if 2-bit buffers are used.

The frame length is 16 bits if 1-bit buffers are used.


For the output of both to have the same rate then,

Tb=T1/32, rb=32/T1, frame rate 1 =1/T1

Tb=T2/16, rb=16/T2 , frame rate 2 =1/T2

Frame rate 1 = 1/T1 = rb/32 = 128×10³/32 = 4000 frames/sec

Frame rate 2 = 1/T2 = rb/16 = 128×10³/16 = 8000 frames/sec

If the buffer is 2 bits, then,

fs= 4000 frame/sec
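A short Python check (my own sketch, using the numbers of this example) of the aggregate rate and the two frame rates:

```python
# Sketch: verify the aggregate rate and frame rates for the example above.
analog_bw_khz = [2, 4, 2]                                      # sources 1-3
samples_per_sec = sum(2 * bw * 1000 for bw in analog_bw_khz)   # Nyquist: 16 000
analog_bps = samples_per_sec * 4                               # 4 bits/sample -> 64 kbps

digital_bps = 8 * 8_000                 # 8 sources stuffed up to 8 kb/s each -> 64 kbps
total_bps = analog_bps + digital_bps    # 128 kbps

print(total_bps)                                   # 128000
print(total_bps // 32, "frames/s, 32-bit frames")  # 4000 (2-bit buffers)
print(total_bps // 16, "frames/s, 16-bit frames")  # 8000 (1-bit buffers)
```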

II. Statistical Time-Division Multiplexing:

• In statistical time-division multiplexing, slots are dynamically allocated to improve


bandwidth efficiency.
• Only when an input line has a slot's worth of data to send is it given a slot in the
output frame.
• In statistical multiplexing, the number of slots in each frame is less than the number
of input lines.
• The multiplexer checks each input line in round robin fashion; it allocates a slot for
an input line if the line has data to send; otherwise, it skips the line and checks the
next line. Following Figure shows a synchronous and a statistical TDM example. In


the former, some slots are empty because the corresponding line does not have data
to send. In the latter, however, no slot is left empty as long as there are data to be
sent by any input line.

• Statistical TDM allocates time slots dynamically, on demand. As with synchronous TDM, the statistical multiplexer has a number of I/O lines on one side and a higher-speed multiplexed line on the other.
• Each I/O line has a buffer associated with it. There are n I/O lines but only k (where k < n) time slots available in the TDM frame. For input, the function of the multiplexer is to scan the input buffers, collecting data until a frame is filled, and then send the frame.
• On the output side, the multiplexer receives a frame and distributes the slots of data to the appropriate output buffers.
• The data rate on the statistical TDM multiplexed line is less than the sum of the data rates of the attached devices. This is because statistical TDM takes advantage of the fact that the attached devices are not all transmitting all of the time.
• With Statistical multiplexing, control bits must be included in the frame. The
following figure shows the overall frame format for a statistical TDM multiplexer.

• The frame includes a beginning flag and ending flag to indicate the start and end of
frame, an address field that indicates the transmitting device, a control field, a
statistical TDM subframe, and a Frame Check Sequence field (FCS), which provides
error detection.


The above figure shows the frame when only one data source is transmitting. The transmitting device is identified in the address field. The data field is of variable length; this scheme works well under light loads, but it is inefficient for heavy loads.

The above figure shows a way to improve efficiency by allowing more than one data source
to be included within a single frame.

Addressing:

• Above Figure also shows a major difference between slots in synchronous TDM and
statistical TDM.
• An output slot in synchronous TDM is totally occupied by data; in statistical TDM, a
slot needs to carry data as well as the address of the destination.
• In synchronous TDM, there is no need for addressing; synchronization and
preassigned relationships between the inputs and outputs serve as an address.
• In statistical multiplexing, there is no fixed relationship between the inputs and
outputs because there are no preassigned or reserved slots. We need to include the
address of the receiver inside each slot to show where it is to be delivered. The addressing in its simplest form can be n bits, defining N different output lines with n = log2 N (rounded up to an integer). For example, for eight different output lines, we need a 3-bit address (a one-line check is sketched below).
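A one-line check (illustrative) of this address-size rule:

```python
# Sketch: minimum number of address bits needed for N output lines.
from math import ceil, log2

def address_bits(n_lines):
    return ceil(log2(n_lines))

print(address_bits(8), address_bits(100))  # 3 bits for 8 lines, 7 bits for 100 lines
```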

Slot Size:

Since a slot carries both data and an address in statistical TDM, the ratio of the data size
to address size must be reasonable to make transmission efficient. For example, it would be
inefficient to send 1 bit per slot as data when the address is 3 bits. This would mean an
overhead of 300 percent. In statistical TDM, a block of data is usually many bytes while the
address is just a few bytes.

No Synchronization Bit:

There is another difference between synchronous and statistical TDM, but this time it is at
the frame level. The frames in statistical TDM need not be synchronized, so we do not need
synchronization bits.


Bandwidth:
In statistical TDM, the capacity of the link is normally less than the sum of the capacities of
each channel. The designers of statistical TDM define the capacity of the link based on the
statistics of the load for each channel. If on average only x percent of the input slots are
filled, the capacity of the link reflects this. Of course, during peak times, some slots need to
wait.

Performance:
In a statistical multiplexer the data rate of the output is less than the sum of the data rates of the inputs, because it is anticipated that the average amount of input is less than the capacity of the multiplexed line. In general, however, cases may arise where during peak periods the input exceeds the capacity. The solution to this problem is to include a buffer in the multiplexer to hold the temporary excess input.
There is a trade-off between the size of the buffer used and the data rate of the line. We would like to use the smallest possible buffer and the smallest possible data rate, but a reduction in one requires an increase in the other. Note that we are not so much concerned with the cost of the buffer as with the fact that the more buffering there is, the longer the delay. Thus, the trade-off is really between system response time and the speed of the multiplexed line.
To examine this trade-off, consider the following parameters:
N = number of input sources
R = data rate of each source bps
M = effective capacity of multiplexed line (bps) (taking into account the
overhead bits introduced by the multiplexer)
i.e. M represents the maximum rate at which data bits can be transmitted.
𝛼 = mean fraction of time each source is transmitting
0< 𝛼 <1
The compression ratio of the multiplexer is K = M/NR,
which is the ratio of the multiplexed line capacity to the total maximum input.

Thus, for a given data rate M, K = 0.25 means that 4 times as many devices are being handled as by a synchronous TDM using the same link capacity.

If K = 0.25, this means that N1/N2 = 0.25, or N2 = 4N1;

that is:

M = N1R and M= KN2R

N1R= KN2R -> N1 = KN2


N2 = N1/K or K=N1/N2

The value of K can be bounded by,

𝛼<𝐾<1

The value of K = 1 corresponds to synchronous TDM, and if K < α the input will exceed the multiplexer capacity.

The parameter 𝜌 can be defined as the utilization or fraction of total link capacity being
used. Thus,

ρ = αNR/M = α/K = λS = λ/M

Where,

λ = αNR = average arrival rate in b/s

S= 1/M= service time for one bit

Viewing Multiplexer as a Single Server Queue:

Let

λ = mean number of arrivals per second

ρ = utilization, fraction of time the server is busy

q = mean number of items in the system (waiting and being served)

tq = mean time an item spends in the system

σq = standard deviation of q

Thus, for this single-server queue the following equations hold:

𝜌 = 𝜆𝑆


q = ρ²/(2(1 − ρ)) + ρ

tq = S(2 − ρ)/(2(1 − ρ))

σq = (1/(1 − ρ)) · sqrt(ρ − 3ρ²/2 + 5ρ³/6 − ρ⁴/12)

• The service time is assumed constant.


• The delay incurred by a customer is the time spent waiting in the queue plus the
time for service.
• The delay depends on the pattern of arriving traffic and characteristic of the server.

This model is easily related to the statistical multiplexer by letting:

𝜆 = 𝛼𝑁𝑅

𝑆 = 1/𝑀

• The average arrival rate 𝜆 b/s is the total input NR times the fraction of time 𝛼 that
each source is transmitting.
• The service time S in seconds is the time it takes to transmit one bit which is 1/M.
• Note that,
ρ = λS = αNR/M = α/K = λ/M

The parameter 𝜌 is the utilization or fraction of total link capacity being used.

• For example, if the capacity M is 50 kb/s and 𝜌 = 0.5, the load on the system is 25
kb/s.
• The parameter q is a measure of the amount of buffer space being used in the
multiplexer.
• tq is a measure of the average delay encountered by an input source. The figure below gives mean buffer size versus utilization.
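A small Python sketch (illustrative, assuming the constant-service-time formulas above) that evaluates the multiplexer performance for a given N, R, α and M:

```python
# Sketch: statistical-multiplexer performance from the single-server-queue
# formulas above (constant service time assumed).
from math import sqrt

def stat_mux_performance(N, R, alpha, M):
    lam = alpha * N * R   # average arrival rate, bits/s
    S = 1.0 / M           # service time for one bit, s
    rho = lam * S         # utilization, must be < 1
    q = rho**2 / (2 * (1 - rho)) + rho              # mean number of bits in system
    tq = S * (2 - rho) / (2 * (1 - rho))            # mean time in system per bit, s
    sigma_q = sqrt(rho - 1.5 * rho**2 + (5 / 6) * rho**3 - rho**4 / 12) / (1 - rho)
    return rho, q, tq, sigma_q

# Example: 10 sources of 100 b/s, each active 40% of the time, 500 b/s link.
rho, q, tq, sigma_q = stat_mux_performance(N=10, R=100, alpha=0.4, M=500)
print(f"rho={rho:.2f}  q={q:.2f}  tq={tq*1e3:.1f} ms  sigma_q={sigma_q:.2f}")
```

With these numbers ρ = 0.8 and q ≈ 2.4, consistent with the 80% utilization case discussed below.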


The above figures assume that data are being transmitted in 1000-bit frames. Figure (a) shows the average number of frames that must be buffered as a function of the average utilization of the multiplexed line. Thus, as the utilization of the line increases, the buffer size increases, and consequently the delay increases, as shown in Fig. (b). The utilization is expressed as a percentage of the total line capacity.
• If the average input load is 5000 b/s and the line capacity is 5000 b/s, then the utilization is 100%.
• But if the line capacity is 7000 b/s, then the utilization is 5000/7000 × 100 ≈ 71%.
• From the two figures we conclude that utilization above 80% is clearly undesirable. In this case (i.e., when ρ = 80%) the mean buffer size is 2.4.
• Note that the average buffer size being used depends only on ρ and not directly on M. For example, consider the following two cases.

            Case 1        Case 2
N           10            100
R           100 b/s       100 b/s
α           0.4           0.4
M           500 b/s       5000 b/s

We see that the utilization ρ = 0.8 for both cases.

So the mean buffer size is 2.4 frames (from fig. (a))

Thus proportionally, a smaller amount of buffer space per source is needed for
multiplexers that handle a larger number of sources.

Also, Fig. (b) shows that the average delay becomes smaller as the link capacity increases, for constant utilization.


So far we have been considering the average queue length, and hence the average amount of buffer capacity needed. In practice there will be some fixed upper bound on the available buffer size. The variance of the queue size grows with utilization; thus, at a higher level of utilization, a larger buffer is needed to hold the backlog. The figure shows the strong dependence of the overflow probability on utilization.

Transmultiplexers:

The interface between FDM and TDM systems can be done by two methods:

A. Use of back-to-back analog and digital channel banks with the individual voice channels connected between them, as shown:


B. Use of transmultiplexers, which directly translate FDM signals into PCM signals and vice versa.

The primary application of Transmultiplexers is the interface of analog FDM


transmission systems with TDM digital transmission systems, and the interface of digital
switches with analog transmission facilities.
The advantage of transmultiplexers is the use of a single piece of equipment instead of two channel banks; in addition, the system performance equals or exceeds that of the tandem connection of PCM and FDM channel banks mentioned in (A) above.
Three standard configurations of transmultiplexers have been adopted:
1. Translation between a 60-channel supergroup and two 30-channel CEPT standard PCM multiplexes.
2. Translation between two 12-channel groups and the 24-channel American standard PCM multiplex.
3. Translation between two 60-channel supergroups and five 24-channel American standard PCM multiplexes.

The first two configurations have been recognized by ITU-T Recommendations G.793 and G.794, respectively.
The algorithms used in FDM-TDM translation are in general based on digital signal processing, as illustrated below. The FDM signal is digitized, and processing such as filtering, modulation, and amplification is performed on the digital representation of the FDM signal to produce a PCM signal.
In the reverse direction, the PCM signal is digitally processed to produce a digital version of an FDM signal; a digital-to-analog converter is then used to produce the conventional FDM signal.
Many design approaches have been used, and no single technique is considered standard practice. The following figure illustrates the block diagram of a 60-channel transmultiplexer.


Figure: Block diagram of a 60-channel transmultiplexer

