
Electrical Engineering Technical College

Chapter Two

1- Channel:
In telecommunications and computer networking, a communication channel, or simply a channel, refers either to a physical transmission medium, such as a wire, or to a logical connection over a multiplexed medium, such as a radio channel. A channel is used to convey an information signal, for example a digital bit stream, from one or several senders (transmitters) to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
2- Symmetric channel:

A symmetric channel satisfies the following conditions:

a- X and Y have an equal number of symbols, i.e. P(Y∣X) is a square matrix.

b- Every row of the P(Y∣X) matrix is a permutation of the other rows.

For example, consider the conditional probability matrices of the following channel types (rows correspond to inputs xi, columns to outputs yj; matrix rows are separated by semicolons):

a- P(Y∣X) = [0.9 0.1; 0.1 0.9] is a BSC, because it is a square matrix and the 1st row is a permutation of the 2nd row.

b- P(Y∣X) = [0.9 0.05 0.05; 0.05 0.9 0.05; 0.05 0.05 0.9] is a TSC, because it is a square matrix and each row is a permutation of the others.

c- P(Y∣X) = [0.8 0.1 0.1; 0.1 0.8 0.1] is non-symmetric, since it is not square although each row is a permutation of the others.

d- P(Y∣X) = [0.8 0.1 0.1; 0.1 0.7 0.2; 0.1 0.1 0.8] is non-symmetric although it is square, since the 2nd row is not a permutation of the other rows.


2.1- Binary symmetric channel (BSC)


It is a common communications channel model used in coding theory and information
theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver
receives a bit. It is assumed that the bit is usually transmitted correctly, but that it will
be "flipped" with a small probability (the "crossover probability").
[Channel diagram: 0→0 and 1→1 with probability 1−P; 0→1 and 1→0 with probability P.]

A binary symmetric channel with crossover probability P, denoted BSC_P, is a channel with binary input and binary output and probability of error P; that is, if X is the transmitted random variable and Y the received variable, then the channel is characterized by the conditional probabilities:

Pr( 𝑌 = 0 ∣ 𝑋 = 0 ) = 1 − 𝑃

Pr( 𝑌 = 0 ∣ 𝑋 = 1 ) = 𝑃

Pr( 𝑌 = 1 ∣ 𝑋 = 0 ) = 𝑃

Pr( 𝑌 = 1 ∣ 𝑋 = 1 ) = 1 − 𝑃
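As an aside (not part of the original notes), a minimal Python sketch of the BSC: it builds the 2×2 transition matrix for a given crossover probability and simulates passing a random bit stream through the channel; the function names are illustrative only.

import numpy as np

def bsc_matrix(p):
    # Rows are inputs X = 0, 1; columns are outputs Y = 0, 1.
    return np.array([[1 - p, p],
                     [p, 1 - p]])

def bsc_transmit(bits, p, rng):
    # Flip each bit independently with crossover probability p.
    flips = rng.random(len(bits)) < p
    return np.bitwise_xor(bits, flips.astype(int))

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)
received = bsc_transmit(bits, 0.1, rng)
print(bsc_matrix(0.1))
print("empirical error rate:", np.mean(bits != received))  # close to 0.1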

2.2- Ternary symmetric channel (TSC):

The transition probability matrix of the TSC is:

P(Y∣X) = [1−2Pe  Pe  Pe;  Pe  1−2Pe  Pe;  Pe  Pe  1−2Pe]

where rows correspond to x1, x2, x3 and columns to y1, y2, y3.

The TSC is symmetric but not very practical, since in practice x1 and x3 are not affected as much as x2: the interference between x1 and x3 is much less than the interference between x1 and x2 or between x2 and x3.

[Channel diagram: each xi→yi with probability 1−2Pe; each xi→yj, j≠i, with probability Pe.]

Hence the more practical, but nonsymmetric, channel has the transition probability matrix:

P(Y∣X) = [1−Pe  Pe  0;  Pe  1−2Pe  Pe;  0  Pe  1−Pe]

where x1 interferes with x2 exactly as x2 interferes with x3, while x1 and x3 do not interfere with each other.

[Channel diagram: x1→y1 and x3→y3 with probability 1−Pe; x2→y2 with probability 1−2Pe; adjacent crossovers with probability Pe.]

3- Binary Erasure Channel (BEC):

The Binary Erasure Channel (BEC) model is widely used to represent channels or links that "lose" data. Prime examples of such channels are Internet links and routes. A BEC has a binary input X and a ternary output Y.
[Channel diagram: X1→Y1 and X2→Y2 with probability 1−P; X1→erasure and X2→erasure with probability P.]


Note that for the BEC, the probability of “bit error” is zero. In other words, the
following conditional probabilities hold for any BEC model:

Pr( 𝑌 = "𝑒𝑟𝑎𝑠𝑢𝑟𝑒" ∣ 𝑋 = 0 ) = 𝑃

Pr( 𝑌 = "𝑒𝑟𝑎𝑠𝑢𝑟𝑒" ∣ 𝑋 = 1 ) = 𝑃

Pr( 𝑌 = 0 ∣ 𝑋 = 0 ) = 1 − 𝑃

Pr( 𝑌 = 1 ∣ 𝑋 = 1 ) = 1 − 𝑃

Pr( 𝑌 = 0 ∣ 𝑋 = 1 ) = 0

Pr( 𝑌 = 1 ∣ 𝑋 = 0 ) = 0
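A similar hedged Python sketch (illustrative, not from the notes) builds the 2×3 BEC transition matrix, with columns assumed ordered Y = 0, erasure, 1, and checks that every row sums to 1:

import numpy as np

def bec_matrix(p):
    # Rows: X = 0, 1. Columns: Y = 0, "erasure", 1 (assumed ordering).
    return np.array([[1 - p, p, 0.0],
                     [0.0, p, 1 - p]])

M = bec_matrix(0.2)
print(M)
print("rows sum to 1:", np.allclose(M.sum(axis=1), 1.0))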

4- Special Channels:

a- Lossless channel: It has only one nonzero element in each column of the transition matrix P(Y∣X), for example:

P(Y∣X) = [3/4 1/4 0 0 0;  0 0 1/3 2/3 0;  0 0 0 0 1]

with rows x1, x2, x3 and columns y1, ..., y5. This channel has H(X∣Y)=0 and I(X,Y)=H(X), i.e. zero losses entropy.
b- Deterministic channel: It has only one nonzero element in each row of the transition matrix P(Y∣X), as an example:

P(Y∣X) = [1 0 0;  1 0 0;  0 0 1;  0 1 0;  0 1 0]

with rows x1, ..., x5 and columns y1, y2, y3. This channel has H(Y∣X)=0 and I(X,Y)=H(Y), i.e. zero noise entropy.
c- Ideal channel: It has only one nonzero element in each row and column of the transition matrix P(Y∣X), i.e. it is an identity matrix, as an example:


P(Y∣X) = [1 0 0;  0 1 0;  0 0 1]

with rows x1, x2, x3 and columns y1, y2, y3.
This channel has H(Y∣X) = H(X∣Y) = 0 and I(X,Y) = H(X) = H(Y).
d- Noisy channel: There is no relation between input and output (X and Y are independent):
H(X∣Y) = H(X), H(Y∣X) = H(Y)
I(X,Y) = 0, C = 0
H(X,Y) = H(X) + H(Y)
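To make these defining properties concrete, a small illustrative Python check (not from the notes; the function name is assumed) classifies a transition matrix by counting nonzero entries per row and per column:

import numpy as np

def classify(P):
    # P is a transition matrix P(Y|X); rows are inputs, columns are outputs.
    one_per_row = np.all((P > 0).sum(axis=1) == 1)
    one_per_col = np.all((P > 0).sum(axis=0) == 1)
    if one_per_row and one_per_col:
        return "ideal"
    if one_per_col:
        return "lossless"
    if one_per_row:
        return "deterministic"
    return "general"

lossless = np.array([[3/4, 1/4, 0, 0, 0],
                     [0, 0, 1/3, 2/3, 0],
                     [0, 0, 0, 0, 1]])
print(classify(lossless))   # lossless
print(classify(np.eye(3)))  # ideal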

5- Shannon’s theorem:

a- A given communication system has a maximum rate of information C, known as the channel capacity.
b- If the information rate R is less than C, then one can approach arbitrarily small
error probabilities by using intelligent coding techniques.
c- To get lower error probabilities, the encoder has to work on longer blocks of
signal data. This entails longer delays and higher computational requirements.

Thus, if R ≤ C, then transmission may be accomplished without error even in the presence of noise. The converse is also true: if R > C, then errors cannot be avoided regardless of the coding technique used.

6- Additive White Gaussian Noise Channel (AWGN):

Consider a bandlimited channel operating in the presence of additive white Gaussian noise.

The Shannon-Hartley theorem states that the channel capacity is given by:


C = B log2(1 + S/N)

Where C is the capacity in bits per second, B is the bandwidth of the channel in Hertz,
and S/N is the signal-to-noise ratio.
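An illustrative Python helper (assumed name, not from the notes) that evaluates the Shannon-Hartley formula and reproduces the worked examples below:

import math

def shannon_capacity(bandwidth_hz, snr):
    # C = B * log2(1 + S/N), capacity in bits per second.
    return bandwidth_hz * math.log2(1 + snr)

print(shannon_capacity(1e3, 12 / 4))  # Example 1: 2000.0 bps
print(shannon_capacity(3e3, 10))      # Example 4a: ~10378 bps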

Example 1: A channel has a bandwidth of 1 kHz, the signal power is 12 W and the noise power is 4 W. Determine the channel capacity.

Solution:

C = B log2(1 + S/N) = 1×10³ log2(1 + 12/4) = 1×10³ log2 4 = 2000 bps

Equivalently: a channel has a bandwidth of 1 kHz and a Signal-to-Noise Ratio (SNR) of 3; determine the channel capacity.
Example 2: If the channel capacity is 2000 bps, find its bandwidth if the SNR is 3.
Solution:

C = B log2(1 + S/N)
2000 = B log2(1 + 3) = B log2 4
2000 = B × 2 → B = 1000 Hz = 1 kHz
Example 3: Determine the SNR required to achieve a channel capacity of 1585 bps, if the bandwidth is 1 kHz.
Solution:

C = B log2(1 + SNR)
1585 = 1000 log2(1 + SNR)
1.585 = log2(1 + SNR)
1 + SNR = 2^1.585 = 3
SNR = 3 − 1 = 2


Example 4: A computer terminal used to enter alphabetic data is connected to the computer through a voice-grade telephone line having a usable bandwidth of 3 kHz and SNR = 10. Assume the terminal has 128 characters. Determine:
a- The capacity of the channel.
b- The maximum rate of transmission without error.

Solution:

a- C = B log2(1 + S/N) = 3000 log2(1 + 10) = 10.378 kbps

b- Assuming all 128 characters are equally likely:

H(X) = log2 n = log2 128 = 7 bits/symbol

For error-free transmission we need R ≤ C. Since R = rs H, we require rs H ≤ C, or:

7 rs ≤ 10378

The maximum rate without error is rs = 10378/7 ≈ 1482 symbols/sec.

7- Channel Capacity (Discrete channel)

This is defined as the maximum of I(X,Y) over all possible input distributions P(X):

C = channel capacity = max[I(X,Y)]  bits/symbol

Physically, it is the maximum amount of information each symbol can carry to the receiver. This capacity can also be expressed in bits/sec when related to the rate r of producing symbols:

R(X,Y) = r × I(X,Y) bits/sec, or R(X,Y) = I(X,Y)/τ̄

where τ̄ is the average symbol duration.

Channel capacity of Symmetric channels:

The channel capacity is defined as max [𝐼(𝑋, 𝑌)]:


𝐼(𝑋, 𝑌) = 𝐻(𝑌) − 𝐻( 𝑌 ∣ 𝑋 )

I(X,Y) = H(Y) + ∑_{j=1}^{m} ∑_{i=1}^{n} P(xi, yj) log2 P(yj∣xi)

But we have P(xi, yj) = P(xi) P(yj∣xi); substituting in the above equation yields:

I(X,Y) = H(Y) + ∑_{j=1}^{m} ∑_{i=1}^{n} P(xi) P(yj∣xi) log2 P(yj∣xi)

If the channel is symmetric, the quantity:

K = ∑_{j=1}^{m} P(yj∣xi) log2 P(yj∣xi)

is a constant, independent of the row number i, so the equation becomes:

I(X,Y) = H(Y) + K ∑_{i=1}^{n} P(xi)

Hence I(X,Y) = H(Y) + K for symmetric channels, and:

max I(X,Y) = max[H(Y) + K] = max[H(Y)] + K

When Y has equiprobable symbols, max[H(Y)] = log2 m. Then:

C = log2 m + K
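A short illustrative Python check of this formula (assumed helper name, not from the notes), computing K from any one row of a symmetric transition matrix:

import numpy as np

def symmetric_capacity(P):
    # C = log2(m) + K, where K = sum_j P(yj|xi) log2 P(yj|xi), taken from any
    # row i of a symmetric transition matrix P (rows are permutations of each
    # other); here the first row is used.
    row = P[0]
    K = np.sum(row[row > 0] * np.log2(row[row > 0]))
    return np.log2(P.shape[1]) + K

bsc = np.array([[0.7, 0.3], [0.3, 0.7]])
print(symmetric_capacity(bsc))  # ~0.1187 bits/symbol, as in Example 1 below

Pe = 0.05
tsc = np.array([[1 - 2*Pe, Pe, Pe],
                [Pe, 1 - 2*Pe, Pe],
                [Pe, Pe, 1 - 2*Pe]])
print(symmetric_capacity(tsc))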
Example 1:
For the BSC shown:
[BSC diagram: x1→y1 and x2→y2 with probability 0.7; x1→y2 and x2→y1 with probability 0.3.]


Find the channel capacity and efficiency if I(x1) = 2 bits.


Solution:
The conditional probability matrix is P(Y∣X) = [0.7 0.3; 0.3 0.7].
Since the channel is symmetric:
C = log2 m + K, with n = m
where n and m are the numbers of rows and columns respectively.
K = ∑_{j=1}^{m} P(yj∣xi) log2 P(yj∣xi)

K = 0.7 log2 0.7 + 0.3 log2 0.3 = −0.88129

C = log2 2 + K = 1 − 0.88129 = 0.1187 bits/symbol
The channel efficiency is η = I(X,Y)/C, where:

I(X,Y) = H(Y) + K
H(Y) = −∑_{j=1}^{m} P(yj) log2 P(yj)  bits/symbol

We need P(Y), which is obtained from the joint probability matrix, so we first convert the conditional probability matrix to the joint one. Since P(xi, yj) = P(xi) P(yj∣xi), we must find P(xi).
I(x1) = −log2 P(x1) = 2
log2 P(x1) = −2
P(x1) = 2⁻² = 0.25 and P(x2) = 1 − 0.25 = 0.75

then P(X) = [0.25 0.75]ᵀ

P(X,Y) = [0.7×0.25  0.3×0.25;  0.3×0.75  0.7×0.75] = [0.175 0.075; 0.225 0.525]


Summing the columns of P(X,Y) yields:

P(Y) = [0.4 0.6]
H(Y) = −[0.4 log2(0.4) + 0.6 log2(0.6)] = 0.97095 bits/symbol
I(X,Y) = H(Y) + K = 0.97095 − 0.88129 = 0.0896 bits/symbol

Then η = 0.0896/0.1187 ≈ 75.5%

The channel redundancy is:

R = 1 − η = 1 − 0.755 = 0.245, or 24.5%
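A hedged numeric cross-check of this example in Python (illustrative only):

import numpy as np

P_cond = np.array([[0.7, 0.3], [0.3, 0.7]])  # P(Y|X)
P_x = np.array([0.25, 0.75])                 # from I(x1) = 2 bits

P_joint = P_cond * P_x[:, None]              # P(xi, yj) = P(xi) P(yj|xi)
P_y = P_joint.sum(axis=0)                    # column sums give P(Y) = [0.4, 0.6]

H_y = -np.sum(P_y * np.log2(P_y))
K = np.sum(P_cond[0] * np.log2(P_cond[0]))
I_xy = H_y + K                               # I(X,Y) = H(Y) + K
C = np.log2(P_cond.shape[1]) + K             # C = log2(m) + K
print(I_xy, C, I_xy / C)                     # ~0.0897, ~0.1187, ~0.755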

8- Review questions:
A binary source sends x1 with a probability of 0.4 and x2 with a probability of 0.6 through a channel with error probabilities of 0.1 for x1 and 0.2 for x2. Determine:
1- Source entropy.
2- Marginal entropy.
3- Joint entropy.
4- Conditional entropy H(Y∣X).
5- Losses entropy H(X∣Y).
6- Transinformation.
Solution:
1- The channel diagram:

[Channel diagram: x1 (P = 0.4) → y1 with probability 0.9, → y2 with probability 0.1; x2 (P = 0.6) → y2 with probability 0.8, → y1 with probability 0.2.]

Or P(Y∣X) = [0.9 0.1; 0.2 0.8]
H(X) = −∑_{i=1}^{n} p(xi) log2 p(xi)

H(X) = −[0.4 ln(0.4) + 0.6 ln(0.6)]/ln 2 = 0.971 bits/symbol
2- P(X,Y) = P(Y∣X) × P(X), i.e. each row i of P(Y∣X) is multiplied by P(xi):

∴ P(X,Y) = [0.9×0.4  0.1×0.4;  0.2×0.6  0.8×0.6] = [0.36 0.04; 0.12 0.48]

Summing the columns: ∴ P(Y) = [0.48 0.52]
H(Y) = −∑_{j=1}^{m} p(yj) log2 p(yj)

H(Y) = −[0.48 ln(0.48) + 0.52 ln(0.52)]/ln 2 = 0.999 bits/symbol

3- H(X,Y):

H(X,Y) = −∑_{j=1}^{m} ∑_{i=1}^{n} P(xi, yj) log2 P(xi, yj)

H(X,Y) = −[0.36 ln(0.36) + 0.04 ln(0.04) + 0.12 ln(0.12) + 0.48 ln(0.48)]/ln 2 = 1.592 bits/symbol
4- 𝐻(𝑌X)
𝑚 𝑛

𝐻(𝑌X) == − ∑ ∑ 𝑃(𝑥𝑖 , 𝑦𝑗 ) log 2 𝑃(𝑦𝑗 𝑥𝑖 )


𝑗=1 𝑖=1
[0.36 ln(0.9) + 0.12 ln(0.2) + 0.04 ln(0.1) + 0.48 ln(0.8)]
𝐻(𝑌X) = −
ln(2)
𝑏𝑖𝑡𝑠
= 0.621
𝑠𝑦𝑚𝑏𝑜𝑙
𝑏𝑖𝑡𝑠
Or 𝐻(𝑌X) = 𝐻(𝑋, 𝑌) − 𝐻(𝑋) = 1.592 − 0.971 = 0.621
𝑠𝑦𝑚𝑏𝑜𝑙
5- 𝐻(𝑋Y) = 𝐻(𝑋, 𝑌) − 𝐻(𝑌) = 1.592 − 0.999 = 0.593 𝑏𝑖𝑡𝑠/𝑠𝑦𝑚𝑏𝑜𝑙
6- 𝐼(𝑋, 𝑌) = 𝐻(𝑋) − 𝐻(𝑋Y) = 0.971 − 0.593 = 0.378 bits/symbol
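A hedged Python cross-check of these six quantities (illustrative only):

import numpy as np

P_cond = np.array([[0.9, 0.1], [0.2, 0.8]])  # P(Y|X)
P_x = np.array([0.4, 0.6])
P_joint = P_cond * P_x[:, None]              # [[0.36, 0.04], [0.12, 0.48]]
P_y = P_joint.sum(axis=0)                    # [0.48, 0.52]

def H(p):
    # Entropy in bits; zero entries are skipped.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_x, H_y, H_xy = H(P_x), H(P_y), H(P_joint.ravel())
print(H_x, H_y, H_xy)          # ~0.971, 0.999, 1.592
print(H_xy - H_x, H_xy - H_y)  # H(Y|X) ~0.621, H(X|Y) ~0.593
print(H_x - (H_xy - H_y))      # I(X,Y) ~0.378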

9- Cascading of Channels
If two channels are cascaded, the overall transition matrix is the product of the two transition matrices:

P(Z∣X) = P(Y∣X) · P(Z∣Y)

where the matrices have dimensions (n×k), (n×m) and (m×k) respectively.

[Block diagram: an n-symbol input X enters Channel 1, whose m-symbol output Y feeds Channel 2, producing the k-symbol output Z.]

For channels in series, the overall mutual information cannot exceed that of any individual channel:
I(X,Z) ≤ I(X,Y) and I(X,Z) ≤ I(Y,Z)
Example:
Find the transition matrix P(Z∣X) for the cascaded channel shown.

[Diagram: Channel 1 has 2 inputs and 3 outputs, Channel 2 has 3 inputs and 2 outputs; the transition probabilities are those in the matrices below.]
0.7 0.3
 0. 8 0 . 2 0  ,
p( y / x) =   p( z / y ) =  1 0
 0. 3 0 0 . 7   
 1 0 

0.7 0.3
0.8 0.2 0    = 0.76 0.24
p ( z / x) =   1 0
0.3 0 0.7    0.91 0.09
 1 0 

