
Fundamentals of Digital Communications and Data Transmission

Digital communication systems convert analog signals to digital signals for transmission. This involves three main steps: 1. Sampling - Converting the continuous time signal to a discrete time signal by taking samples at regular intervals. 2. Quantization - Approximating the continuous range of signal values into a smaller set of discrete values by dividing the signal dynamic range into amplitude levels. 3. Coding - Assigning a binary code to each discrete amplitude level. Together, these steps allow analog signals like voice, video and images to be transmitted reliably in digital form over communication channels.


Fundamentals of Digital Communications and Data Transmission

14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
Outline
• What is communication?
• Analog to Digital Conversion (A/D)
• Source Coding
• Channel Encoding
• Modulation Techniques
• Modulation Techniques (Part II)
What is Communication?
• Communication is transferring data reliably
from one point to another
– Data could be: voice, video, codes etc…
• It is important to receive the same
information that was sent from the
transmitter.
• Communication system
– A system that allows transfer of information
reliably
Information Source → Transmitter → Channel → Receiver → Information Sink

Block Diagram of a typical communication system


• Information Source
– The source of data
• Data could be: human voice, data storage device CD,
video etc..
– Data types:
• Discrete: Finite set of outcomes “Digital”
• Continuous : Infinite set of outcomes “Analog”
• Transmitter
– Converts the source data into a suitable form for
transmission through signal processing
– Data form depends on the channel
• Channel:
– The physical medium used to send the signal
– The medium where the signal propagates till
arriving to the receiver
– Physical Mediums (Channels):
• Wired : twisted pairs, coaxial cable, fiber optics
• Wireless: Air, vacuum and water
– Each physical channel has a certain limited range
of frequencies, from fmin to fmax, which is called the
channel bandwidth
– Physical channels have another important
limitation which is the NOISE
• Channel:
• Noise is an undesired random signal that corrupts the original
signal and degrades it
• Noise sources:
» Electronic equipment in the communication system
» Thermal noise
» Atmospheric electromagnetic noise (interference from
other signals that are being transmitted over the same
channel)
– Another important limitation is attenuation
• Weakens the signal strength as it travels over the
transmission medium
• Attenuation increases as frequency increases
– One last important limitation is delay distortion
• Mainly in wired transmission
• Delays the transmitted signals → violates the reliability of
the communication system
• Receiver
– Extracting the message/code in the received signal
• Example
– Speech signal at transmitter is converted into electromagnetic
waves to travel over the channel
– Once the electromagnetic waves are received properly, the
receiver converts them back into speech form
– Information Sink
• The final stage
• The user
Effect of Noise On a transmitted signal
Digital Communication System
• Data in a digital format, i.e. binary numbers

Transmit path:  Information Source → A/D Converter → Source Encoder → Channel Encoder → Modulator → Channel

Receive path:   Channel → Demodulator → Channel Decoder → Source Decoder → D/A Converter → Information Sink
• Information source
– Analog Data: Microphone, speech signal, image,
video etc…
– Discrete (Digital) Data: keyboard, binary numbers,
hex numbers, etc…
• Analog to Digital Converter (A/D)
– Sampling:
• Converting the continuous time signal to a discrete
time signal
– Quantization:
• Converting the amplitude of the analog signal to a
discrete value
– Coding:
• Assigning a binary code to each finite amplitude level in
the quantized signal
• Source encoder
– Represent the transmitted data more efficiently
and remove redundant information
• How? e.g. "write" vs. "rite" sound the same, so spelling carries redundant information
• Speech signal frequencies the human ear cannot hear (above about 20 kHz) can be removed
– Two types of encoding:
– Lossless data compression (encoding)
• Data can be recovered without any missing information
– Lossy data compression (encoding)
• Smaller size of data
• Data removed in encoding cannot be recovered again
• Channel encoder:
– To control the noise and to detect and correct the
errors that can occur in the transmitted data due
to the noise.
• Modulator:
– Represent the data in a form to make it
compatible with the channel
• Carrier signal “high frequency signal”
• Demodulator:
– Removes the carrier signal and reverses the
process of the Modulator
• Channel decoder:
– Detects and corrects the errors in the signal
received from the channel
• Source decoder:
– Decompresses the data into its original format.
• Digital to Analog Converter:
– Reverses the operation of the A/D
– Needs techniques and knowledge about sampling,
quantization, and coding methods.
• Information Sink
– The User
Why should we use digital communication?
• Ease of regeneration
– Pulses “ 0 , 1”
– Easy to use repeaters
• Noise immunity
– Better noise handling when using repeaters that regenerate
the original signal
– Easy to differentiate between the values “either 0 or 1”
• Ease of Transmission
– Less errors
– Faster !
– Better productivity
Why should we use digital communication?

• Ease of multiplexing
– Transmitting several signals simultaneously
• Use of modern technology
– Less cost !
• Ease of encryption
– Security and privacy guarantee
– Handles most of the encryption techniques
Disadvantage !
• The major disadvantage of digital transmission
is that it requires a greater transmission
bandwidth or channel bandwidth to
communicate the same information in digital
format as compared to analog format.
• Another disadvantage of digital transmission is
that digital detection requires system
synchronization, whereas analog signals
generally have no such requirement.
Chapter 2: Analog to Digital
Conversion (A/D)
14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
Digital Communication System

Transmit path:  Information Source → A/D Converter → Source Encoder → Channel Encoder → Modulator → Channel

Receive path:   Channel → Demodulator → Channel Decoder → Source Decoder → D/A Converter → Information Sink
2.1 Basic Concepts in Signals
• A/D is the process of converting an analog
signal to digital signal, in order to transmit it
through a digital communication system.
• Electric Signals can be represented either in
Time domain or frequency domain.
– Time domain, e.g. v(t) = 2 sin(2π·1000t + 45°)
– We can get the value of that signal at any time (t)
by substituting in the v(t) equation.
[Figure: the same signal shown in the time domain (amplitude vs. time in seconds) and in the frequency domain (amplitude vs. frequency in Hz)]


Converting an Analog Signal to a Discrete
Signal (A/D)

• Can be done through three basic steps:

1- Sampling

2- Quantization

3- Coding
Sampling
• Process of converting the continuous time
signal to a discrete time signal.
• Sampling is done by taking “Samples” at
specific times spaced regularly.
– V(t) is an analog signal
– V(nTs) is the sampled signal
• Ts = a positive real number that represents the sampling
period (the spacing between samples)
• n = the sample index (an integer)
Sampling

[Figure: original analog signal (before sampling) and the sampled signal (after sampling)]
Sampling
• The smaller the value of Ts, the more closely the sampled
signal resembles the original signal.
• Note that we have lost some values of the
original signal, the parts between each
successive samples.

• Can we recover these values? And How?


• Can we go back from the discrete signal to
the original continuous signal?
Sampling Theorem
• A bandlimited signal having no spectral components
above fmax (Hz), can be determined uniquely by values
sampled at uniform intervals of Ts seconds, where Ts ≤ 1 / (2·fmax)

• An analog signal can be reconstructed from a sampled
signal without any loss of information if and only if:
– The signal is band limited
– The sampling frequency is at least twice the maximum signal
frequency: fs = 1/Ts ≥ 2·fmax
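Below is a minimal Python sketch (not part of the original slides) of the Nyquist criterion just stated; the 1 kHz test signal and the two sampling rates are arbitrary illustrative choices.

import numpy as np

# Illustrative sketch: sample a 1 kHz sinusoid at two rates and compare
# each rate against the Nyquist criterion fs >= 2*fmax.
f_max = 1000.0                      # highest frequency in the signal (Hz)
nyquist_rate = 2 * f_max            # minimum allowed sampling frequency (Hz)

def sample_signal(fs, duration=0.005):
    """Return the samples of v(t) = 2*sin(2*pi*1000*t + pi/4) taken every Ts = 1/fs."""
    Ts = 1.0 / fs                   # sampling period
    n = np.arange(0, duration, Ts)  # uniform sampling instants n*Ts
    return 2 * np.sin(2 * np.pi * f_max * n + np.pi / 4)

for fs in (8000.0, 1500.0):         # one rate above, one below the Nyquist rate
    v = sample_signal(fs)
    verdict = "satisfies" if fs >= nyquist_rate else "violates (aliasing expected)"
    print(f"fs = {fs:.0f} Hz, {len(v)} samples, {verdict} fs >= 2*fmax")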
Quantization
• Quantization is the process of approximating a
continuous range of values (a very large, effectively
infinite set of possible values) by a relatively small,
finite set of discrete values.

• Continuous range → infinite set of values

• Discrete range → finite set of values


Quantization
• Dynamic range of a signal
– The difference between the highest and lowest
values the signal can take.
Quantization
• In the Quantization process, the dynamic range of a
signal is divided into L amplitude levels denoted by mk,
where k = 1, 2, 3, .. L
• L is an integer power of 2: L = 2^K
• K is the number of bits needed to represent each amplitude
level.
• For example:
– If we divide the dynamic range into 8 levels,
• L = 8 = 2^3
– We need 3 bits to represent each level.
Quantization

• Example:
– Suppose we have an analog signal with values
between [0, 10]. If we divide the signal into four
levels, we have:
• m1 = [0, 2.5]
• m2 = [2.5, 5]
• m3 = [5, 7.5]
• m4 = [7.5, 10]
Quantization
• For every level, we assign a single representative value
to the signal whenever it falls within that level:

Q[ v(t) ] = M1 = 1.25 if v(t) falls in m1
            M2 = 3.75 if v(t) falls in m2
            M3 = 6.25 if v(t) falls in m3
            M4 = 8.75 if v(t) falls in m4


Quantization

[Figure: original analog signal (before quantization) and quantized analog signal (after quantization)]
Quantization

[Figure: original discrete signal (before quantization) and quantized discrete signal (after quantization)]
Quantization
• The more quantization levels we take the
smaller the error between the original and
quantized signal.
• Quantization step:

Δ = Dynamic Range / No. of quantization levels = (Smax − Smin) / L

• The smaller the Δ the smaller the error.


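Below is a minimal Python sketch (not part of the original slides) of a uniform quantizer built from the step size Δ; the [0, 10] range and L = 4 levels follow the earlier example, and the function name is only for illustration.

import numpy as np

# Illustrative sketch: uniform quantizer for a signal with dynamic range
# [s_min, s_max] divided into L levels of step size delta.
def quantize(samples, s_min=0.0, s_max=10.0, L=4):
    delta = (s_max - s_min) / L                        # quantization step
    # index of the level each sample falls into (0 .. L-1)
    k = np.clip(((samples - s_min) // delta).astype(int), 0, L - 1)
    # represent each level by its midpoint, e.g. 1.25, 3.75, 6.25, 8.75 for [0,10], L=4
    midpoints = s_min + (k + 0.5) * delta
    return k, midpoints, delta

samples = np.array([0.7, 3.2, 5.1, 9.8])
levels, values, delta = quantize(samples)
print("step size:", delta)           # 2.5 for the slide example
print("levels:", levels)             # [0 1 2 3]
print("quantized values:", values)   # [1.25 3.75 6.25 8.75]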
Coding
• Assigning a binary code to each quantization
level.
• For example, if we have quantized a signal into
16 levels, the coding process is done as
follows:

Step  Code    Step  Code    Step  Code    Step  Code
  0   0000      4   0100      8   1000     12   1100
  1   0001      5   0101      9   1001     13   1101
  2   0010      6   0110     10   1010     14   1110
  3   0011      7   0111     11   1011     15   1111


Coding
• The binary codes are represented as pulses

• Pulse means 1
• No pulse means 0

• After the coding process, the signal is ready to be
transmitted through the channel, thereby completing
the A/D conversion of the analog signal.
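Below is a minimal Python sketch (not from the slides) of the coding step: mapping quantization-level indices to fixed-length binary codewords, consistent with the 16-level table above. The function name is illustrative.

# Illustrative sketch: assign a K-bit binary code to each of L = 2**K levels.
def encode_levels(level_indices, L=16):
    K = L.bit_length() - 1                     # number of bits per level, since L = 2**K
    return ["{:0{width}b}".format(idx, width=K) for idx in level_indices]

print(encode_levels([0, 5, 10, 15]))           # ['0000', '0101', '1010', '1111']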
Chapter 3: Source Coding

14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
3.1 Measure of Information
• What is the definition of “Information” ?
• News, text data, images, videos, sound etc..
• In Information Theory
– Information is linked with the element of surprise or
uncertainty
– In terms of probability
– Information
• The more probable an event is, the less information
its occurrence conveys.
• The less probable an event is, the more information
we gain when it occurs.
Example1:
• The rush hour in Kuwait is between 7.00 am –
8.00 am
– A person leaving home for work at 7.30 will
NOT be surprised by the traffic jam → almost
no information is gained here
– A person leaving home for work at 7.30 WILL BE
surprised if there is NO traffic jam:
– He will start asking people / family / friends
– Unusual experience
– Gaining more information
Example 2

• The weather temperature in Kuwait at summer


season is usually above 30°
• From the historical weather data, it is known that
rain during summer is very rare.
– A person who lives in Kuwait will not be surprised by
this fact about the weather
– A person who lives in Kuwait WILL BE SURPRISED if it
rains during summer, and will therefore ask about the
phenomenon, thereby gaining more knowledge
"information"
How can we measure information?
• Measure of Information
– Given a digital source with N possible outcomes
“messages”, the information sent from the digital
source when the jth message is transmitted is
given by the following equation:

Ij = log2(1 / pj)   [bits]
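A minimal Python sketch (not from the slides) of this measure; the function name is illustrative.

import math

# Illustrative sketch: information content of a message with probability p,
# I = log2(1/p) bits.
def info_bits(p):
    return math.log2(1.0 / p)

print(info_bits(0.25))   # 2.0 bits (four equally likely outcomes, Example 1)
print(info_bits(0.75))   # about 0.42 bits (the binary "1" in Example 2)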
Example 1
• Find the information content of a message
that takes on one of four equally likely
outcomes.
• Solution
The probability of each outcome is P = 1/4 = 0.25
Therefore,

I = log2(1 / 0.25) = log(1/0.25) / log(2) = 2 bits
Example 2

• Suppose we have a digital source that


generates binary bits. The probability that it
generates “0” is 0.25, while the probability
that it generates “1” is 0.75. Calculate the
amount of information conveyed by every bit.
Example 2 (Solution)
• For the binary "0":
I = log2(1 / 0.25) = 2 bits

• For the binary "1":
I = log2(1 / 0.75) ≈ 0.42 bits

• The information conveyed by the "0" is more than the
information conveyed by the "1"
Example 3:
• A discrete source generates a sequence of ( n )
bits. How many possible messages can we
receive from this source?
• Assuming all the messages are equally likely to
occur, how much information is conveyed by
each message?
Example 3 (solution):
• The source generates a sequence of n bits,
each bit takes one of two possible values
– a discrete source generates either “0” or “1”
• Therefore:
– We have 2^n possible outcomes, each with probability pj = 1/2^n

• The information conveyed by each outcome:

I = log2(2^n) = n·log(2) / log(2) = n bits
3.3 Entropy
• The entropy of a discrete source S is the
average amount of information ( or
uncertainty ) associated with that source.

H(s) = Σ (j = 1 to m)  pj · log2(1 / pj)   [bits]
• m = number of possible outcomes
• Pj = probability of the jth message
Importance of Entropy
• Entropy is considered one of the most
important quantities in information theory.

• There are two types of source coding:


– Lossless coding “lossless data compression”
– Lossy coding “lossy data compression”
• Entropy is the threshold quantity that
separates lossy from lossless data
compression.
Example 4
• Consider an experiment of selecting a card at
random from a deck of 52 cards. Suppose
we're interested in the following events:
– Getting a picture card, with probability 12/52
– Getting a number less than 3, with probability 8/52
– Getting a number between 3 and 10, with probability 32/52

• Calculate the entropy of this random experiment.


Example 4 (solution) :
• The entropy is given by:

H(s) = Σ (j = 1 to 3)  pj · log2(1 / pj)   [bits]

• Therefore,

H(s) = (12/52)·log2(52/12) + (8/52)·log2(52/8) + (32/52)·log2(52/32) = 1.335 bits
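A minimal Python sketch (not from the slides) that evaluates the entropy formula and reproduces the 1.335-bit result of Example 4.

import math

# Illustrative sketch: entropy of a discrete source, H = sum_j p_j * log2(1/p_j).
def entropy(probabilities):
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

H = entropy([12 / 52, 8 / 52, 32 / 52])
print(round(H, 3))   # 1.335 bits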
Source Coding Theorem
• First discovered by Claude Shannon.
• Source coding theorem
“A discrete source with entropy rate H can be
encoded with arbitrarily small error probability at
any rate L bits per source output as long as L > H”
Where
H = Entropy rate
L = codeword length
If we encode the source with L > H → only a trivial amount of errors
If we encode the source with L < H → we're certain that errors will
occur
3.4 Lossless data compression

• Data compression
– Encoding information in a relatively smaller size than their original size
• Like ZIP files (WinZIP), RAR files (WinRAR),TAR files etc..
• Data compression:
– Lossless: the compressed data are an exact copy of the original data
– Lossy: the compressed data may be different than the original data
• Lossless data compression techniques:
– Huffman coding algorithm
– Lempel-Ziv Source coding algorithm
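As a rough illustration (not from the slides), here is a minimal Huffman encoder built on Python's heapq; the function and variable names are illustrative, and practical implementations add canonical code ordering and other refinements.

import heapq
from collections import Counter

# Illustrative sketch: build Huffman codewords from symbol counts.
# Symbols with higher probability receive shorter codewords (lossless compression).
def huffman_codes(freq):
    """freq: dict symbol -> count. Returns dict symbol -> binary codeword string."""
    # heap entries: (weight, tie_breaker, {symbol: partial_code})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol source
        return {sym: "0" for sym in freq}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # two least probable groups
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes(Counter("this is an example of huffman coding"))
print(codes[" "], codes["x"])   # frequent symbols get shorter codes than rare ones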
Chapter 4: Channel Encoding

14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
Overview
• Channel encoding definition and importance

• Error Handling techniques

• Error Detection techniques

• Error Correction techniques


Channel Encoding - Definition
• In digital communication systems an optimum
system might be defined as one that
minimizes the probability of bit error.

• Error occurs in the transmitted signal due to


the transmission in a non-ideal channel
– Noise exists in channels
– Noise signals corrupt the transmitted data
Channel Encoding - Importance

• Channel encoding
– Techniques used to protect the transmitted signal
from the noise effect
• Two basic approaches of channel encoding
– Automatic Repeat Request (ARQ)
– Forward Error Correction (FEC)
Automatic Repeat Request (ARQ)
• Whenever the receiver detects an error in the
transmitted block of data, it requests the
transmitter to send the block again to
overcome the error.
• The request is repeated until the block
is received correctly
• ARQ is used in two-way communication
systems
– Transmitter ↔ Receiver
Automatic Repeat Request (ARQ)
• Advantages:
– Error detection is simple and requires much
simpler decoding equipment than the other
techniques
• Disadvantages:
– If we have a channel with a high error rate, the
information must be retransmitted very frequently.
– This results in sending less new information, thus
producing a less efficient system
Forward Error Correction (FEC)
• The transmitted data are encoded so that the
receiver can detect AND correct any errors.
• Commonly known as Channel Encoding
• Can be used in both two-way and one-way
transmission.
• FEC is the most common technique used in
the digital communication because of its
improved performance in correcting the
errors.
Forward Error Correction (FEC)

• Improved performance because:

– It introduces redundancy in the transmitted data


in a controlled way

– Noise averaging: the receiver can average out the

noise over long periods of time.
Error Control Coding
• There are two basic categories for error
control coding
– Block codes
– Tree Codes
• Block Codes:
– A block of k bits is mapped into a block of n bits

Block of k bits  →  Block of n bits

Error Control Coding
• Tree codes are also known as codes with
memory; in this type of code the encoder
operates on the incoming message sequence
continuously, in a serial manner.

• Protecting data from noise can be done


through:
– Error Detection
– Error Correction
Error Control Coding

• Error Detection
– We basically check if we have an error in the
received data or not.
• There are many techniques for the detection
stage
• Parity Check
• Cyclic Redundancy Check (CRC)
Error Control Coding
• Error Correction
– If we have detected one or more errors in the
received data and we can correct them, then we
proceed to the correction phase
• There are many techniques for error
correction as well:
• Repetition Code
• Hamming Code
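As a rough illustration (not from the slides), here is a minimal rate-1/3 repetition code with majority-vote decoding; Hamming codes achieve the same single-error correction with far less redundancy. The function names are illustrative.

from collections import Counter

# Illustrative sketch: each bit is sent three times; the receiver takes a
# majority vote, so any single bit error within a group of three is corrected.
def repetition_encode(bits, n=3):
    return [b for b in bits for _ in range(n)]

def repetition_decode(coded, n=3):
    groups = [coded[i:i + n] for i in range(0, len(coded), n)]
    return [Counter(g).most_common(1)[0][0] for g in groups]

encoded = repetition_encode([1, 0, 1])           # [1,1,1, 0,0,0, 1,1,1]
received = encoded[:]
received[4] = 1                                  # single error in the middle group
print(repetition_decode(received))               # [1, 0, 1] - error corrected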
Error Detection Techniques

• Parity Check
– Very simple technique used to detect errors
• In Parity check, a parity bit is added to the
data block
– Assume a data block of size k bits
– Adding a parity bit will result in a block of size k+1
bits
• The value of the parity bit depends on the
number of “1”s in the k bits data block
Parity Check
• Suppose we want to make the number of 1's in the
transmitted data block even; in this case the value of the parity
bit depends on the number of 1's in the original data
– if we transmit a message = 1010111
• k = 7 bits (five 1's)
– Adding a parity bit of 1 so that the number of 1's is even
• The transmitted message would be: 10101111
• k+1 = 8 bits
• At the receiver, if one bit changes its value, then an error can
be detected
Example - 1
• At the transmitter, we need to send the
message M= 1011100.
– We need to make the number of one’s odd
• Transmitter:
– k=7 bits , M =1011100
– k+1=8 bits , M’=10111001
• Receiver:
– If we receive M' = 10111001 → no error is
detected
– If we receive M' = 10111000 → an error is detected
Parity Check
• If an odd number of errors occurred, then the
error still can be detected “assuming a parity
bit that makes an odd number of 1’s”

• Disadvantage:
– If an even number of errors occurred, then the
error can NOT be detected “assuming a parity bit
that makes an odd number of 1’s”
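A minimal Python sketch (not from the slides) of odd-parity encoding and single-error detection, matching Example 1 above; the function names are illustrative.

# Illustrative sketch: the parity bit makes the total number of 1's odd;
# the receiver flags an error if the received block has an even number of 1's.
def add_odd_parity(bits):
    parity = (sum(bits) + 1) % 2           # 1 only if the count of 1's is even
    return bits + [parity]

def check_odd_parity(block):
    return sum(block) % 2 == 1             # True -> no error detected

M = [1, 0, 1, 1, 1, 0, 0]                  # M = 1011100 from Example 1 (four 1's)
sent = add_odd_parity(M)                   # M' = 10111001 (five 1's, odd)
print(check_odd_parity(sent))              # True: no error detected

corrupted = sent[:]
corrupted[-1] ^= 1                         # flip one bit -> 10111000
print(check_odd_parity(corrupted))         # False: single-bit error detected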
Chapter 5:
Modulation Techniques
14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
Introduction
• After encoding the binary data, the data is
now ready to be transmitted through the
physical channel
• In order to transmit the data in the physical
channel we must convert the data back to an
electrical signal
– Convert it back to an analog form
• This process is called modulation
Modulation - Definition
• Modulation is the process of changing a
parameter of a signal using another signal.
• The most commonly used signal type is the
sinusoidal signal that has the form of :
• v(t) = A sin(ωt + θ)
• A : amplitude of the signal
• ω : radian frequency
• θ : phase shift
Modulation
• In modulation process, we need to use two
types of signals:
– Information, message or transmitted signal
– Carrier signal
• Let's assume the carrier signal is a sinusoid
of the form x(t) = A sin(ωt + θ)
• Modulation lets the message signal
change one of the carrier signal's parameters
Modulation

• If we let the carrier signal amplitude change
in accordance with the message signal, then
we call the process amplitude modulation

• If we let the carrier signal frequency change
in accordance with the message signal, then
we call the process frequency modulation
Modulation Types AM, FM, PAM
Modulation Types AM, FM, PAM
Digital Data Transmission
• There are two types of Digital Data
Transmission:

1) Base-Band data transmission


– Uses low frequency carrier signal to transmit the data
2) Band-Pass data transmission
– Uses high frequency carrier signal to transmit the data
Base-Band Data Transmission
• Base-Band data transmission = Line coding
• The binary data is converted into an electrical
signal in order to transmit them in the channel
• Binary data are represented using amplitudes
for the 1’s and 0’s
• We will present some of the common base-
band signaling techniques used to transmit
the information
Line Coding Techniques
• Non-Return to Zero (NRZ)

• Unipolar Return to Zero (Unipolar-RZ)

• Bi-Polar Return to Zero (Bi-polar RZ)

• Return to Zero Alternate Mark Inversion (RZ-AMI)

• Non-Return to Zero – Mark (NRZ-Mark)

• Manchester coding (Biphase)


Non-Return to Zero (NRZ)
• The “1” is represented by some level
• The “0” is represented by the opposite
• The term non-return to zero means the signal
switches from one level to the other without
taking the zero value at any time during
transmission.
NRZ - Example
• We want to transmit m=1011010
Unipolar Return to Zero (Unipolar RZ)

• Binary "1" is represented by a pulse of some level
lasting half the bit width

• Binary "0" is represented by the absence of
a pulse
Unipolar RZ - Example
• We want to transmit m=1011010
Bipolar Return to Zero (Bipolar RZ)

• Binary "1" is represented by a pulse of some level
lasting half the bit width

• Binary "0" is represented by a pulse of half the
bit width but with the opposite sign
Bipolar RZ - Example
• We want to transmit m=1011010
Return to Zero Alternate Mark
Inversion (RZ-AMI)

• Binary “1” is represented by a pulse


alternating in sign

• Binary "0" is represented by the absence of
a pulse
RZ-AMI - Example
• We want to transmit m=1011010
Non-Return to Zero – Mark (NRZ-Mark)

• Also known as differential encoding

• Binary "1" is represented by a change in the
level
– High to low
– Low to high

• Binary "0" is represented by no change in the level


NRZ-Mark - Example
• We want to transmit m=1011010
Manchester coding (Biphase)

• Binary "1" is represented by a positive pulse
for half the bit width followed by a negative
pulse

• Binary "0" is represented by a negative pulse
for half the bit width followed by a positive
pulse
Manchester coding - Example
• We want to transmit m=1011010
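A minimal Python sketch (not from the slides) that writes two of the line codes above (NRZ and Manchester) as sequences of amplitude levels, two half-bit intervals per bit; the ±1 levels are an arbitrary illustrative choice.

# Illustrative sketch: represent each bit by two half-bit amplitude levels.
def nrz_level(bits):
    # NRZ: "1" -> +1 for the whole bit, "0" -> -1 for the whole bit
    return [level for b in bits for level in ((+1, +1) if b else (-1, -1))]

def manchester(bits):
    # Manchester: "1" -> +1 then -1, "0" -> -1 then +1
    return [level for b in bits for level in ((+1, -1) if b else (-1, +1))]

m = [1, 0, 1, 1, 0, 1, 0]                  # m = 1011010 from the examples
print(nrz_level(m))
print(manchester(m))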
Transmission

• Transmission bandwidth: the transmission
bandwidth of a communication system is the
band of frequencies allowed for signal
transmission; in other words, it is the band of
frequencies we are allowed to use to
transmit the data.
Bit Rate

• Bit Rate : is the number of bits transferred


between devices per second

• If each bit is represented by a pulse of width


Tb, then the bit rate

Rb = 1 / Tb   bits/sec
Example – Bit rate calculation
• Suppose that we have a binary data source
that generates bits. Each bit is represented by
a pulse of width Tb = 0.1 mSec
• Calculate the bit rate for the source

• Solution

Rb = 1 / Tb = 1 / (0.1 × 10^-3) = 10000 bits/sec
Example – Bit rate calculation
• Suppose we have an image frame of size
200x200 pixels. Each pixel is represented by
three primary colors red, green and blue
(RGB). Each one of these colors is represented
by 8 bits, if we transmit 1000 frames in 5
seconds what is the bit rate for this image?
Example – Bit rate calculation
• We have a total size of 200x200 = 40000 pixels
• Each pixel has three colors, RGB that each of them has 8
bits.
– 3 x 8 = 24 bits ( for each pixel with RGB)
• Therefore, for the whole image we have a total size of 24
x 40000 = 960000 bits
• Since we have 1000 frames in 5 seconds, then the total
number of bits transmitted will be 1000 x 960000 =
960000000 bits in 5 seconds
• Bit rate = 960000000 / 5 = 192000000 bits/second
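A minimal Python sketch (not from the slides) reproducing the image bit-rate calculation above.

# Illustrative sketch: bit rate of the RGB image example.
pixels = 200 * 200                             # pixels per frame
bits_per_pixel = 3 * 8                         # three colors (RGB), 8 bits each
bits_per_frame = pixels * bits_per_pixel       # 960000 bits
total_bits = 1000 * bits_per_frame             # 1000 frames
bit_rate = total_bits / 5                      # transmitted in 5 seconds
print(bits_per_frame, total_bits, int(bit_rate))   # 960000 960000000 192000000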
Baud rate (Symbol rate)
• The number of symbols transmitted per second
through the communication channel.
• The symbol rate is related to the bit rate by the
following equation:

Rs = Rb / N

• Rb = bit rate
• Rs = symbol rate
• N = Number of bits per symbol
Baud rate (Symbol rate)
• We usually use symbols to transmit data when the
transmission bandwidth is limited
• For example, we need to transmit data at a high rate and the
bit duration Tb is very small; to overcome this problem we
take a group of more than one bit, say 2, therefore:

bit duration Tb       →  f0 = 1/Tb
symbol duration 2Tb   →  f = 1/(2Tb) = f0/2
symbol duration 4Tb   →  f = 1/(4Tb) = f0/4
Baud rate (Symbol rate)
• We notice that by transmitting symbols rather
than bits we can reduce the spectrum of the
transmitted signal.
• Hence, we can use symbol transmission rather
than bit transmission when the transmission
bandwidth is limited
Example
• A binary data source transmits binary data,
the bit duration is 1µsec, Suppose we want to
transmit symbols rather than bits, if each
symbol is represented by four bits. what is the
symbol rate?
• Each bit is represented by a pulse of duration
1µ second, hence the bit rate

Rb = 1 / Tb = 1 / (1 × 10^-6) = 1000000 bits/sec
Example (Continue)
• Therefore, the symbol rate will be

Rs = Rb / N = 1000000 / 4 = 250000 symbols/sec
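A minimal Python sketch (not from the slides) reproducing the bit-rate and symbol-rate calculation above.

# Illustrative sketch: bit rate and symbol rate for Tb = 1 microsecond, 4 bits/symbol.
Tb = 1e-6                  # bit duration in seconds
N = 4                      # bits per symbol
Rb = 1 / Tb                # bit rate: 1,000,000 bits/sec
Rs = Rb / N                # symbol rate: 250,000 symbols/sec
print(int(Rb), int(Rs))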
Chapter 6:
Modulation Techniques (Part II)
14th March 2014

Prof. Asodariya Bhavesh


SSASIT, Surat
Introduction
• Bandpass data transmission

• Amplitude Shift Keying (ASK)

• Phase Shift Keying (PSK)

• Frequency Shift Keying (FSK)

• Multilevel Signaling (M-ary Modulation)


Bandpass Data Transmission
• In communication, we use modulation for several
reasons in particular:
– To transmit the message signal through the
communication channel efficiently.
– To transmit several signals at the same time over a
communication link through the process of
multiplexing or multiple access.
– To simplify the design of the electronic systems used
to transmit the message.
– By using modulation we can easily transmit data with
low loss
Bandpass Digital Transmission
• Digital modulation is the process by which
digital symbols are transformed into waveforms
that are compatible with the
characteristics of the channel.
• The following are the general steps used by
the modulator to transmit data
– 1. Accept incoming digital data
– 2. Group the data into symbols
– 3. Use these symbols to set or change the phase, frequency or
amplitude of the reference carrier signal appropriately.
Bandpass Modulation Techniques
• Amplitude Shift Keying (ASK)
• Phase Shift Keying (PSK)
• Frequency Shift Keying (FSK)
• Multilevel Signaling (M-ary Modulation)
• M-ary Amplitude Modulation
• M-ary Phase Shift Keying (M-ary PSK)
• M-ary Frequency Shift Keying (M-ary FSK)
• Quadrature Amplitude Modulation (QAM)
Amplitude Shift Keying (ASK)
• In ASK the binary data modulates the
amplitude of the carrier signal
Phase Shift Keying (PSK)
• In PSK the binary data modulates the phase of
the carrier signal
Frequency Shift Keying (FSK)
• In FSK the binary data modulates the
frequency of the carrier signal
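As a rough illustration (not from the slides), here is a NumPy sketch that generates binary ASK, PSK and FSK waveforms; the carrier frequency, bit duration and sample rate are arbitrary illustrative choices.

import numpy as np

# Illustrative sketch: one carrier-modulated segment per bit for each keying scheme.
fc, Tb, fs = 5000.0, 1e-3, 100000.0        # carrier (Hz), bit duration (s), sample rate (Hz)
t = np.arange(0, Tb, 1 / fs)               # time axis for one bit interval

def modulate(bits, scheme):
    chunks = []
    for b in bits:
        if scheme == "ASK":                # amplitude carries the bit (on/off keying)
            chunks.append((1.0 if b else 0.0) * np.sin(2 * np.pi * fc * t))
        elif scheme == "PSK":              # phase carries the bit (0 or 180 degrees)
            chunks.append(np.sin(2 * np.pi * fc * t + (0 if b else np.pi)))
        elif scheme == "FSK":              # frequency carries the bit (fc or 2*fc)
            f = fc if b else 2 * fc
            chunks.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(chunks)

bits = [1, 0, 1, 1, 0]
for scheme in ("ASK", "PSK", "FSK"):
    print(scheme, modulate(bits, scheme).shape)   # each waveform: 5 bits x 100 samples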
Modulation Types – 4 Level ASK,
FSK, PSK
Constellation Diagram
BPSK
Constellation Diagram
QPSK
Constellation Diagram
16-QAM
