
15/11/02

Capacity of multiple-input multiple-output (MIMO) systems in wireless communications

Bengt Holter

Department of Telecommunications
Norwegian University of Science and Technology

Outline

• Introduction

• Channel capacity
– Single-Input Single-Output (SISO)
– Single-Input Multiple-Output (SIMO)
– Multiple-Input Multiple-Output (MIMO)
– MIMO capacity employing space-time block coding (STBC)

• Outage capacity
– SISO
– SIMO
– MIMO employing STBC

• Summary

Introduction

MIMO = Multiple-Input Multiple-Output

• Initial MIMO papers

  – I. Telatar, "Capacity of multi-antenna Gaussian channels," AT&T Technical Memorandum, June 1995

  – G. J. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Technical Journal, 1996

• MIMO systems are used to (dramatically) increase the capacity and quality of a wireless transmission.

• Increased capacity is obtained with spatial multiplexing of the transmitted data.

• Increased quality is obtained by using space-time coding at the transmitter.

Entropy

• For a discrete random variable X with alphabet \mathcal{X} and distributed according to the probability mass function p(x), the entropy is defined as

  H(X) = \sum_{x \in \mathcal{X}} p(x) \log_2 \frac{1}{p(x)} = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x) = E\left\{ \log_2 \frac{1}{p(X)} \right\}.   (1)

• The entropy of a random variable is a measure of the uncertainty of the random variable; it is a measure of the amount of information required on the average to describe the random variable.

• With a base 2 logarithm, entropy is measured in bits.
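As a quick numerical illustration of (1) — a minimal sketch, not part of the original slides, assuming NumPy is available — the entropy of a discrete PMF can be evaluated directly:

```python
import numpy as np

def entropy_bits(p):
    """Entropy in bits of a probability mass function given as an array."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # the convention 0*log2(0) = 0
    return float(-np.sum(p * np.log2(p)))

print(entropy_bits([0.5, 0.5]))       # fair coin: 1.0 bit
print(entropy_bits([0.9, 0.1]))       # biased coin: about 0.47 bit
```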

Gamma distribution

• X follows a gamma distribution with shape parameter α > 0 and scale parameter β > 0 when the probability density function (PDF) of X is given by

  f_X(x) = \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\,\Gamma(\alpha)},   (2)

  where Γ(·) is the gamma function, \Gamma(\alpha) = \int_0^{\infty} e^{-t}\, t^{\alpha-1}\, dt, \operatorname{Re}(\alpha) > 0.

• The shorthand notation X ∼ G(α, β) is used to denote that X follows a gamma distribution with shape parameter α and scale parameter β.

• Mean: E\{X\} = \mu_x = \alpha\beta.

• Variance: \sigma_x^2 = E\{X^2\} - \mu_x^2 = \alpha\beta^2.
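The moments above can be checked by simulation; the sketch below is an illustration only, assuming NumPy and an arbitrary choice of α and β, sampling X ∼ G(α, β) with the shape/scale convention of (2):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 2.5, 1.2                       # arbitrary shape and scale
x = rng.gamma(shape=alpha, scale=beta, size=200_000)

print(x.mean(), alpha * beta)                # sample mean vs. alpha*beta
print(x.var(), alpha * beta**2)              # sample variance vs. alpha*beta^2
```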

SISO

SISO = Single-input Single-output

• Representing the input and output of a memoryless wireless channel with the random variables X and Y respectively, the channel capacity is defined as

  C = \max_{p(x)} I(X;Y).   (3)

• I(X;Y) denotes the mutual information between X and Y; it is a measure of the amount of information that one random variable contains about another random variable.

• According to the definition in (3), the mutual information is maximized with respect to all possible transmitter statistical distributions p(x).

SISO cont'd

• The mutual information between X and Y can also be written as


I(X; Y ) = H(Y ) − H(Y |X). (4)

• From the equation above, it can be seen that mutual information


can be described as the reduction in the uncertainty of one random
variable due to the knowledge of the other.

• The mutual information between X and Y will depend on the prop-


erties of the wireless channel used to convey information from the
transmitter to the receiver.

• For a SISO flat fading wireless channel, the input/output relation (per channel use) can be modelled by the complex baseband notation

  y = hx + n.   (5)

SISO cont'd

• y represents a single realization of the random variable Y (per channel


use).

• h represents the complex channel between the transmitter and the


receiver.

• x represents the transmitted complex symbol.

• n represents complex additive white gaussian noise (AWGN).

• Note that in previous lectures by Prof. Alouini, the channel gain |h|
was denoted α. In this presentation, α is used as the shape parameter
of a gamma distributed random variable.

• Based on different communication scenarios, |h| may be modelled by


various statistical distributions.

• Common multipath fading models are Rayleigh, Nakagami-q (Hoyt),


Nakagami-n (Rice), and Nakagami-m.
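A minimal sketch (not from the original slides, assuming NumPy and arbitrary parameters) of how such fading amplitudes can be generated: Rayleigh gains come from a complex Gaussian, while Nakagami-m amplitudes can be drawn as the square root of a gamma variate, |h|² ∼ G(m, Ω/m); m = 1 reduces to Rayleigh fading.

```python
import numpy as np

rng = np.random.default_rng(1)

def rayleigh_gains(size):
    """Rayleigh fading: |h| of a zero-mean complex Gaussian with E{|h|^2} = 1."""
    return np.abs(np.sqrt(0.5) * (rng.standard_normal(size) + 1j * rng.standard_normal(size)))

def nakagami_gains(m, omega, size):
    """Nakagami-m fading amplitudes with E{|h|^2} = omega."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

print(np.mean(rayleigh_gains(100_000) ** 2))        # ~1.0
print(np.mean(nakagami_gains(2.0, 1.0, 100_000) ** 2))   # ~1.0
```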

SISO cont'd

Capacity with a transmit power constraint

• With an average transmit power constraint P_T, the channel capacity is defined as

  C = \max_{p(x):\, P \le P_T} I(X;Y).   (6)

• If each symbol per channel use at the transmitter is denoted by x, the average power constraint can be expressed as P = E\{|x|^2\} \le P_T.

• Compared to the original definition in (3), the capacity of the channel


is now defined as the maximum of the mutual information between the
input random variable X and the output random variable Y over all
statistical distributions on the input that satisfy the power constraint.

• Since both x and y are continuous upon transmission and reception,


the channel is modelled as an amplitude continuous but time discrete
channel.

SISO cont'd

Assumptions

• Perfect channel knowledge at the receiver.

• X is independent of N .

• N ∼ N (0, σn2)

Mutual information: With h_d(·) denoting differential entropy (the entropy of a continuous random variable), the mutual information may be expressed as

  I(X;Y) = h_d(Y) - h_d(Y|X)        (7)
         = h_d(Y) - h_d(hX + N|X)   (8)
         = h_d(Y) - h_d(N|X)        (9)
         = h_d(Y) - h_d(N)          (10)

• (9) follows from the fact that since h is assumed perfectly known by
the receiver, there is no uncertainty in hX conditioned on X.

• (10) follows from the fact that N is assumed independent of X, i.e.,


there is no information in X which reduces the uncertainty of N .

SISO cont'd

Noise differential entropy

• N is already assumed to be a circularly symmetric complex Gaussian random variable, i.e., the noise PDF is given by

  f_N(n) = \frac{1}{\pi\sigma_n^2}\, e^{-|n|^2/\sigma_n^2}.   (11)

• Differential entropy:

  h_d(N) = -\int f_N(n) \log_2 f_N(n)\, dn.   (12)

SISO cont'd

• Inserting the noise PDF into (12):

  h_d(N) = -\int f_N(n)\left(-\frac{|n|^2 \log_2 e}{\sigma_n^2} - \log_2(\pi\sigma_n^2)\right) dn
         = \frac{\log_2 e}{\sigma_n^2}\int |n|^2 f_N(n)\, dn + \log_2(\pi\sigma_n^2)\int f_N(n)\, dn
         = \frac{E\{|N|^2\}}{\sigma_n^2}\log_2 e + \log_2(\pi\sigma_n^2)
         = \log_2 e + \log_2(\pi\sigma_n^2)
         = \log_2(\pi e \sigma_n^2),

  where E\{|N|^2\} = \sigma_n^2.
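A Monte Carlo check of this result — a sketch only, assuming NumPy and an arbitrary noise variance — estimates h_d(N) = E{−log₂ f_N(N)} and compares it with log₂(πeσ_n²):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 2.0                                           # noise variance E{|N|^2}
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(500_000)
                           + 1j * rng.standard_normal(500_000))
pdf = np.exp(-np.abs(n) ** 2 / sigma2) / (np.pi * sigma2)   # f_N(n) from (11)
print(np.mean(-np.log2(pdf)))                          # Monte Carlo estimate of h_d(N)
print(np.log2(np.pi * np.e * sigma2))                  # closed form log2(pi*e*sigma_n^2)
```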

SISO cont'd

Received signal power

• Since hd (N ) is given, the mutual information I(X; Y ) = hd (Y ) − hd (N )


is maximized by maximizing hd (Y ).

• Since the normal distribution maximizes the entropy over all distributions with the same covariance, I(X;Y) is maximized when Y is assumed Gaussian, i.e., h_d(Y) = \log_2(\pi e \sigma_y^2), where E\{|Y|^2\} = \sigma_y^2.

• Assuming the optimal Gaussian distribution for X, the received average signal power \sigma_y^2 may be expressed as

  E\{|Y|^2\} = E\{(hX + N)(h^*X^* + N^*)\}   (13)
             = \sigma_x^2 |h|^2 + \sigma_n^2.   (14)

SISO cont'd

SISO fading channel capacity


  C = h_d(Y) - h_d(N)                                                           (15)
    = \log_2(\pi e(\sigma_x^2 |h|^2 + \sigma_n^2)) - \log_2(\pi e \sigma_n^2)   (16)
    = \log_2\left(1 + \frac{\sigma_x^2}{\sigma_n^2}|h|^2\right)                 (17)
    = \log_2\left(1 + \frac{P_T}{\sigma_n^2}|h|^2\right),                       (18)

  where it is assumed that \sigma_x^2 = P_T.

• Denoting the total received signal-to-noise ratio (SNR) by \gamma_t = \frac{P_T}{\sigma_n^2}|h|^2, the SISO fading channel capacity is given by

  C = \log_2(1 + \gamma_t).

• Note that since \gamma_t is a random variable, the capacity also becomes a random variable.
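Because C is random, its average (the ergodic capacity) can be estimated by simulation. The sketch below is illustrative only, assuming NumPy, Rayleigh fading (E{|h|²} = 1) and an arbitrary 10 dB average SNR, and compares it with the AWGN capacity at the same SNR:

```python
import numpy as np

rng = np.random.default_rng(3)
snr = 10 ** (10.0 / 10)                                 # 10 dB, P_T/sigma_n^2
h = np.sqrt(0.5) * (rng.standard_normal(200_000)
                    + 1j * rng.standard_normal(200_000))  # Rayleigh, E{|h|^2} = 1
gamma_t = snr * np.abs(h) ** 2
print(np.mean(np.log2(1 + gamma_t)))                    # ergodic capacity, ~2.9 bit/s/Hz
print(np.log2(1 + snr))                                 # AWGN (Shannon) capacity, ~3.46 bit/s/Hz
```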

SISO cont'd

Nakagami-m fading ⇒ Gamma distributed SNR

• With the assumption that the fading amplitude |h| is a Nakagami-m distributed random variable, its PDF is given by

  f_{|h|}(|h|) = \frac{2 m^m |h|^{2m-1}}{\Omega^m \Gamma(m)} \exp\left(-\frac{m|h|^2}{\Omega}\right),   (19)

  where \Omega = E\{|h|^2\} and m is the Nakagami-m fading parameter, which ranges from 1/2 (half-Gaussian model) to \infty (AWGN channel).

• Using transformation of random variables, it can be shown that the overall received SNR \gamma_t is a gamma distributed random variable G(\alpha, \beta),

  f_{\gamma_t}(\gamma_t) = \frac{\gamma_t^{m-1} e^{-\gamma_t/\beta}}{\beta^m \Gamma(m)},   (20)

  where \alpha = m and \beta = \bar{\gamma}_t/m. In short, \gamma_t \sim G(m, \bar{\gamma}_t/m), where \bar{\gamma}_t = \frac{P_T \Omega}{\sigma_n^2}.
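The transformation can be checked numerically. The sketch below — an illustration under arbitrary parameters, assuming NumPy — draws Nakagami-m amplitudes as |h| = sqrt(G(m, Ω/m)) and compares the resulting SNR samples with the moments of G(m, γ̄_t/m):

```python
import numpy as np

rng = np.random.default_rng(4)
m, omega, snr = 2.0, 1.0, 10.0                  # Nakagami parameter, E{|h|^2}, P_T/sigma_n^2
h = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=200_000))   # Nakagami-m amplitudes
gamma_t = snr * h ** 2
gamma_bar = snr * omega                         # average SNR
print(gamma_t.mean(), m * (gamma_bar / m))      # mean of G(m, gamma_bar/m)
print(gamma_t.var(), m * (gamma_bar / m) ** 2)  # variance of G(m, gamma_bar/m)
```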

SISO cont'd

Ergodic channel capacity of SISO channel with Rayleigh fading

[Figure: ergodic capacity in bit/s/Hz versus SNR in dB]

Ergodic capacity of a Rayleigh fading SISO channel (dotted line) compared to the
Shannon capacity of a SISO channel (solid line)

3dB increase in SNR ⇒ 1 bit/s/Hz capacity increase

SIMO

SIMO = Single-Input Multiple-Output

• For a SIMO flat fading wireless channel, the input/output relation (per channel use) can be modelled by the complex baseband notation

  y = h x + n.   (21)

• y represents a single realization of the multivariate random variable Y (array response per channel use).

• h represents the complex channel vector between a single transmit antenna and the n_R receive antennas, i.e., h = [h_{11}, h_{21}, \ldots, h_{n_R 1}]^T.

• x represents the transmitted complex symbol per channel use.

• n represents a complex additive white gaussian noise (AWGN) vector.

SIMO

Mutual information

• With h_d(·) denoting differential entropy (the entropy of a continuous random variable), the mutual information may be expressed as

  I(X;Y) = h_d(Y) - h_d(Y|X)        (22)
         = h_d(Y) - h_d(hX + N|X)   (23)
         = h_d(Y) - h_d(N|X)        (24)
         = h_d(Y) - h_d(N)          (25)

• It will be assumed that N \sim \mathcal{N}(0, K_n), where K_n = E\{N N^H\} is the noise covariance matrix.

• Since the normal distribution maximizes the entropy over all distributions with the same covariance (i.e., under the power constraint), the mutual information is maximized when Y represents a multivariate Gaussian random variable, i.e., Y \sim \mathcal{N}(0, K_y), where K_y = E\{Y Y^H\} is the covariance matrix of the desired signal.

SIMO cont'd

Desired signal covariance matrix

• For a complex gaussian vector Y, the differential entropy is less than


or equal to log2 det(πeKy ), with equality if and only if y is a circularly
symmetric complex Gaussian with E{YYH} = Ky .

• With the assumption that the signal X is uncorrelated with all elements
in N, the received covariance matrix Ky may be expressed as
E{YYH} = E{(hX + N)(hX + N)H} (26)
= σx2hhH + Kn (27)

where σx2 = E{X 2 }.

SIMO cont'd

SIMO fading channel capacity


  C = h_d(Y) - h_d(N)                                                                  (28)
    = \log_2[\det(\pi e(\sigma_x^2 h h^H + K_n))] - \log_2[\det(\pi e K_n)]            (29)
    = \log_2[\det(\sigma_x^2 h h^H + K_n)] - \log_2[\det K_n]                          (30)
    = \log_2[\det((\sigma_x^2 h h^H + K_n) K_n^{-1})]                                  (31)
    = \log_2[\det(\sigma_x^2 h h^H K_n^{-1} + I_{n_R})]                                (32)
    = \log_2[\det(I_{n_R} + \sigma_x^2 K_n^{-1} h h^H)]                                (33)
    = \log_2\left[\left(1 + \frac{P_T}{\sigma_n^2}\|h\|^2\right) \cdot \det(I_{n_R})\right]   (34)
    = \log_2\left(1 + \frac{P_T}{\sigma_n^2}\|h\|^2\right),                            (35)

  where it is assumed that K_n = \sigma_n^2 I_{n_R} and \sigma_x^2 = P_T.

• Note that for the SISO fading channel, K_n = \sigma_n^2.
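The collapse of the determinant expression into a scalar SNR can be verified numerically — a sketch only, assuming NumPy and an arbitrary Rayleigh channel vector:

```python
import numpy as np

rng = np.random.default_rng(5)
n_r, p_t, sigma2 = 4, 1.0, 0.5
h = np.sqrt(0.5) * (rng.standard_normal((n_r, 1)) + 1j * rng.standard_normal((n_r, 1)))

c_det = np.log2(np.linalg.det(np.eye(n_r) + (p_t / sigma2) * (h @ h.conj().T)).real)
c_mrc = np.log2(1 + (p_t / sigma2) * np.linalg.norm(h) ** 2)    # scalar form of (35)/(38)
print(c_det, c_mrc)                                             # identical up to round-off
```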

SIMO cont'd

• The capacity formula for the SIMO fading channel could also have
been found by assuming maximum ratio combining at the receiver.

• With perfect channel knowledge at the receiver, the optimal weights are given by

  w_{opt} = K_n^{-1} h.   (36)

• Using these weights together with the assumption that K_n = \sigma_n^2 I_{n_R}, the overall (instantaneous) SNR \gamma_t for the currently observed channel h is equal to

  \gamma_t = \frac{P_T}{\sigma_n^2}\|h\|^2.   (37)

• Thus, since \gamma_t in this case represents the maximum available SNR, the capacity can be written as

  C = \log_2(1 + \gamma_t) = \log_2\left(1 + \frac{P_T}{\sigma_n^2}\|h\|^2\right).   (38)

SIMO cont'd

Nakagami-m fading ⇒ Gamma distributed SNR

• With the assumption that all channel gains in the channel vector h are independent and identically distributed (i.i.d.) Nakagami-m random variables (i.e., m_l = m), the overall SNR \gamma_t is a gamma distributed random variable with shape parameter \alpha = n_R \cdot m and scale parameter \beta = \bar{\gamma}_l/m.

• In short, \gamma_t \sim G(n_R \cdot m, \bar{\gamma}_l/m).

• \bar{\gamma}_l represents the average SNR per receiver branch (assumed equal for all branches in this case).

• Coefficient of variation: \tau = \frac{\sigma_{\gamma_t}}{\mu_{\gamma_t}} = \frac{1}{\sqrt{n_R \cdot m}}.

• Effective diversity order [Nabar,02]: N_{div} = \frac{1}{\tau^2} = n_R \cdot m.

MIMO

MIMO = Multiple-Input Multiple-Output

• For a MIMO flat fading wireless channel, the input/output relation (per channel use) can be modelled by the complex baseband notation

  y = H x + n.   (39)

• x is the (nT × 1) transmit vector.

• y is the (nR × 1) (array response) receive vector.

• H is the (nR × nT ) channel matrix.

• n is the (nR × 1) additive white Gaussian noise (AWGN) vector.

 
  H = \begin{bmatrix} h_{11} & \cdots & h_{1 n_T} \\ h_{21} & \cdots & h_{2 n_T} \\ \vdots & \ddots & \vdots \\ h_{n_R 1} & \cdots & h_{n_R n_T} \end{bmatrix}

MIMO cont'd

Mutual information

• With h_d(·) denoting differential entropy (the entropy of a continuous random variable), the mutual information may be expressed as

  I(X;Y) = h_d(Y) - h_d(Y|X)        (40)
         = h_d(Y) - h_d(HX + N|X)   (41)
         = h_d(Y) - h_d(N|X)        (42)
         = h_d(Y) - h_d(N)          (43)

• It is assumed that N \sim \mathcal{N}(0, K_n).

• Since the normal distribution maximizes the entropy over all distributions with the same covariance (i.e., under the power constraint), the mutual information is maximized when Y represents a multivariate Gaussian random variable.

MIMO cont'd

Desired signal covariance matrix

• With the assumption that X and N are uncorrelated, the received


covariance matrix Ky may be expressed as
E{YYH } = E{(HX + N)(HX + N)H} (44)
= HKx HH + Kn (45)

where Kx = E{XXH}.

MIMO cont'd

MIMO fading channel capacity


  C = h_d(Y) - h_d(N)                                                        (46)
    = \log_2[\det(\pi e(H K_x H^H + K_n))] - \log_2[\det(\pi e K_n)]         (47)
    = \log_2[\det(H K_x H^H + K_n)] - \log_2[\det K_n]                       (48)
    = \log_2[\det((H K_x H^H + K_n) K_n^{-1})]                               (49)
    = \log_2[\det(H K_x H^H K_n^{-1} + I_{n_R})]                             (50)
    = \log_2[\det(I_{n_R} + K_n^{-1} H K_x H^H)]                             (51)

• When the transmitter has no knowledge of the channel, it is optimal to evenly distribute the available power P_T among the transmit antennas, i.e., K_x = \frac{P_T}{n_T} I_{n_T}.

• Assuming that the noise is uncorrelated between branches, the noise covariance matrix is K_n = \sigma_n^2 I_{n_R}.

• The MIMO fading channel capacity can then be written as

  C = \log_2 \det\left(I_{n_R} + \frac{P_T}{n_T \sigma_n^2} H H^H\right).   (52)
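Equation (52) is straightforward to evaluate numerically. The sketch below is illustrative only, assuming NumPy, an i.i.d. Rayleigh channel and an arbitrary 4×4 configuration at 10 dB, and averages it over channel realizations to estimate the ergodic capacity:

```python
import numpy as np

rng = np.random.default_rng(6)
n_t = n_r = 4
snr = 10.0                                              # P_T / sigma_n^2

def mimo_capacity(H, snr, n_t):
    """Evaluate (52) for one channel realization."""
    M = np.eye(H.shape[0]) + (snr / n_t) * (H @ H.conj().T)
    return np.log2(np.linalg.det(M).real)

caps = [mimo_capacity(np.sqrt(0.5) * (rng.standard_normal((n_r, n_t))
                                      + 1j * rng.standard_normal((n_r, n_t))), snr, n_t)
        for _ in range(2000)]
print(np.mean(caps))                                    # ergodic capacity in bit/s/Hz
```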

MIMO cont'd

• By the law of large numbers, the term \frac{1}{n_T} H H^H \to I_{n_R} as n_T gets large and n_R is fixed. Thus the capacity in the limit of large n_T is

  C = n_R \cdot \underbrace{\log_2\left(1 + \frac{P_T}{\sigma_n^2}\right)}_{\text{SISO capacity}}.   (53)

MIMO cont'd

• Further analysis of the MIMO channel capacity is possible by diagonalizing the product matrix H H^H, either by eigenvalue decomposition or singular value decomposition.

• Eigenvalue decomposition of the matrix product H H^H = E \Lambda E^H:

  C = \log_2 \det\left(I_{n_R} + \frac{P_T}{\sigma_n^2 n_T} E \Lambda E^H\right),   (54)

  where E is the eigenvector matrix with orthonormal columns and \Lambda is a diagonal matrix with the eigenvalues on the main diagonal.

• Singular value decomposition of the channel matrix H = U \Sigma V^H:

  C = \log_2 \det\left(I_{n_R} + \frac{P_T}{\sigma_n^2 n_T} U \Sigma \Sigma^H U^H\right),   (55)

  where U and V are unitary matrices of left and right singular vectors respectively, and \Sigma is a diagonal matrix with the singular values on the main diagonal.

MIMO cont'd

• Using the singular value decomposition approach, the capacity can now be expressed as

  C = \log_2 \det\left(I_{n_R} + \frac{P_T}{\sigma_n^2 n_T} U \Sigma \Sigma^H U^H\right)   (56)
    = \log_2 \det\left(I_{n_T} + \frac{P_T}{\sigma_n^2 n_T} U^H U \Sigma^2\right)          (57)
    = \log_2 \det\left(I_{n_T} + \frac{P_T}{\sigma_n^2 n_T} \Sigma^2\right)                (58)
    = \log_2\left[\left(1 + \frac{P_T}{\sigma_n^2 n_T}\sigma_1^2\right)\left(1 + \frac{P_T}{\sigma_n^2 n_T}\sigma_2^2\right) \cdots \left(1 + \frac{P_T}{\sigma_n^2 n_T}\sigma_k^2\right)\right]   (59)
    = \sum_{i=1}^{k} \log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\sigma_i^2\right),          (60)

  where k = \operatorname{rank}(H) \le \min(n_T, n_R), \Sigma is a real matrix, and the identity \det(I_{AB} + AB) = \det(I_{BA} + BA) has been used.
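The equivalence of (56) and (60) is easy to confirm numerically — a minimal sketch, assuming NumPy and an arbitrary channel realization:

```python
import numpy as np

rng = np.random.default_rng(7)
n_r, n_t, c = 3, 2, 2.5                                 # c plays the role of P_T/(sigma_n^2 n_T)
H = np.sqrt(0.5) * (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t)))

lhs = np.log2(np.linalg.det(np.eye(n_r) + c * (H @ H.conj().T)).real)   # (56)
sv = np.linalg.svd(H, compute_uv=False)                                 # singular values sigma_i
rhs = np.sum(np.log2(1 + c * sv ** 2))                                  # (60)
print(lhs, rhs)                                                         # equal up to round-off
```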

MIMO cont'd

• Using the same approach with an eigenvalue decomposition of the matrix product H H^H, the capacity can also be expressed as

  C = \sum_{i=1}^{k} \log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\lambda_i\right),   (61)

  where \lambda_i are the eigenvalues of the matrix \Lambda.

MIMO cont'd

Ergodic channel capacity of a MIMO fading channel

[Figure: capacity in bit/s/Hz versus SNR in dB]

The Shannon capacity of a SISO channel (dotted line) compared to the ergodic
capacity of a Rayleigh fading MIMO channel (solid line) with nT = nR = 6

3dB increase in SNR ⇒ 6 bits/s/Hz capacity increase!

MIMO with STBC

Transmit diversity

• Antenna diversity techniques are commonly utilized at the base stations due to fewer constraints on both antenna space and power. In addition, it is more economical to add more complex equipment to the base stations rather than to the remote units.

• To increase the quality of the transmission and reduce multipath fading at the remote unit, it would be beneficial if space diversity could also be utilized at the remote units.

• In 1998, S. M. Alamouti published a paper entitled "A simple transmit diversity technique for wireless communications". This paper showed that it was possible to obtain with a Multiple-Input Single-Output (MISO) system the same diversity order traditionally obtained with a SIMO system.

• The generalized transmission scheme introduced by Alamouti has later become known as Space-Time Block Codes (STBC).

MIMO with STBC

Alamouti STBC

• With the Alamouti space-time code [Alamouti,1998], two consecutive symbols {s_1, s_2} are mapped into a matrix codeword S according to the following mapping:

  S = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix}.   (62)

• The individual rows represent time diversity and the individual columns space (antenna) diversity.
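A small sketch of the mapping in (62) — not from the original slides, assuming NumPy — together with a check of the orthogonality property of the codeword:

```python
import numpy as np

def alamouti_codeword(s1, s2):
    """Map two complex symbols onto the 2x2 Alamouti codeword of (62)."""
    return np.array([[s1,            s2],
                     [-np.conj(s2),  np.conj(s1)]])

S = alamouti_codeword(1 + 1j, -1 + 1j)        # e.g. two QPSK symbols
print(S)
print(S.conj().T @ S)                         # (|s1|^2 + |s2|^2) * I_2: an orthogonal design
```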

• Assuming a block fading model, i.e., the channel remains constant for at least T channel uses, the received signal vector x (array response per channel use) may be expressed as

  x_k = H s_k + n_k,   k = 1, \ldots, T.   (63)

[Alamouti,1998] S. M. Alamouti, ”A simple transmit diversity technique for wireless com-


munications,” IEEE J. Select. Areas Comm., Vol.16, No.8, October 1998

MIMO with STBC

xk = Hsk + nk , k = 1, . . . , T.

• x_k \in \mathbb{C}^{n_R} denotes the received signal vector per channel use.

• s_k \in \mathbb{C}^{n_T} denotes the transmitted signal vector (a single row from the matrix codeword S transposed into a column vector).

• H \in \mathbb{C}^{n_R \times n_T} denotes the channel matrix with (possibly correlated) zero-mean complex Gaussian random variable entries.

• n_k \in \mathbb{C}^{n_R} denotes the additive white Gaussian noise, where each entry of the vector is a zero-mean complex Gaussian random variable.

MIMO with STBC

• For T consecutive uses of the channel, the received signal may be


expressed as
X = HS + N, (64)
where X = [x1, x2, · · · , xT ] (T consecutive array responses ⇒ time re-
sponses in nR branches), S = [s1, s2, · · · , sT ], and N = [n1, n2, · · · , nT ].

• For notational simplicity [Hassibi,2001], the already introduced matri-


ces X, S, and N may be redefined as X = [x1, x2 , · · · , xT ]T, S =
[s1, s2, · · · , sT ]T, and N = [n1, n2, · · · , nT ]T.

• With this new definition of the matrices X, S, and N, time runs verti-
cally and space runs horizontally and the received signal for T channel
uses may now be expressed as
X = SHT + N. (65)

[Hassibi,2001] B. Hassibi, B. M. Hochwald, ”High-rate codes that are linear in space and
time,” 2001

MIMO with STBC

• In [Hassibi,2001], the transpose notation on H is omitted and H is simply redefined to have dimension n_T \times n_R.

• For a 2 \times 2 MIMO channel, equation (65) becomes [Hassibi,2001]

  \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix} = \begin{bmatrix} s_1 & s_2 \\ -s_2^* & s_1^* \end{bmatrix} \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix} + \begin{bmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{bmatrix}   (66)

• x_{11} and x_{12} represent the received symbols at antenna elements no. 1 and 2 at time index t, and likewise x_{21} and x_{22} represent the received symbols at antenna elements no. 1 and 2 at time index t + T_s.

• This can be reorganized [Alamouti,1998] and written as

  \underbrace{\begin{bmatrix} x_{11} \\ x_{21}^* \\ x_{12} \\ x_{22}^* \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} h_{11} & h_{21} \\ h_{21}^* & -h_{11}^* \\ h_{12} & h_{22} \\ h_{22}^* & -h_{12}^* \end{bmatrix}}_{H} \underbrace{\begin{bmatrix} s_1 \\ s_2 \end{bmatrix}}_{s} + \underbrace{\begin{bmatrix} n_{11} \\ n_{21}^* \\ n_{12} \\ n_{22}^* \end{bmatrix}}_{n}   (67)

MIMO with STBC

• With matched filtering at the receiver (perfect channel knowledge):

  y = H^H x
    = H^H H s + H^H n
    = \|H\|_F^2\, s + H^H n,   (68)

  where \|H\|_F^2 represents the squared Frobenius norm of the matrix H.

  H^H H = \begin{bmatrix} h_{11}^* & h_{21} & h_{12}^* & h_{22} \\ h_{21}^* & -h_{11} & h_{22}^* & -h_{12} \end{bmatrix} \begin{bmatrix} h_{11} & h_{21} \\ h_{21}^* & -h_{11}^* \\ h_{12} & h_{22} \\ h_{22}^* & -h_{12}^* \end{bmatrix}
        = \begin{bmatrix} |h_{11}|^2 + |h_{12}|^2 + |h_{21}|^2 + |h_{22}|^2 & 0 \\ 0 & |h_{11}|^2 + |h_{12}|^2 + |h_{21}|^2 + |h_{22}|^2 \end{bmatrix}
        = \|H\|_F^2 \cdot I_2.
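The decoupling property H^H H = ||H||_F² I₂ can be verified for a random channel — a sketch only, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(8)
h11, h12, h21, h22 = np.sqrt(0.5) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))

H_eff = np.array([[h11,           h21],
                  [np.conj(h21), -np.conj(h11)],
                  [h12,           h22],
                  [np.conj(h22), -np.conj(h12)]])     # reorganized channel of (67)

print(np.round(H_eff.conj().T @ H_eff, 12))           # diagonal matrix
print(abs(h11)**2 + abs(h12)**2 + abs(h21)**2 + abs(h22)**2)   # squared Frobenius norm
```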

MIMO with STBC

• This means that the received signals after matched filtering are decoupled, and they can be written individually as

  y_1 = \|H\|_F^2\, s_1 + H^H n   (69)
  y_2 = \|H\|_F^2\, s_2 + H^H n   (70)

• In general, the effective channel induced by space-time block coding of complex symbols (before detection) can be represented as [Sandhu,2000]

  y_k = \|H\|_F^2\, s_k + H^H n.   (71)

[Sandhu,2000] S. Sandhu, A. Paulraj, "Space-Time Block Codes: A Capacity Perspective," IEEE Communications Letters, Vol. 4, No. 12, December 2000.

MIMO with STBC

• The overall SNR before detection of each symbol is equal to

  \gamma_t^{mimo} = \frac{\|H\|_F^4\, |s_k|^2}{E\{|H^H n|^2\}} = \frac{\|H\|_F^4\, \frac{P_T}{n_T}}{\|H\|_F^2\, \sigma_n^2} = P\, \|H\|_F^2,   (72)

  where P = \frac{P_T}{\sigma_n^2 n_T}.

• For each transmitted symbol, the effective channel is a scaled AWGN channel with SNR = P \|H\|_F^2.

• The capacity of a MIMO fading channel using STBC can then be written as

  C = \frac{K}{T} \cdot \log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\|H\|_F^2\right),   (73)

  where the factor \frac{K}{T} in front denotes the rate of the STBC.

• With the Alamouti STBC, two symbols (K = 2) are transmitted in two time slots (T = 2), i.e., the Alamouti code is a full-rate STBC.

MIMO with STBC

• Assuming uncorrelated channels and that all channel envelopes are i.i.d. Nakagami-m distributed random variables with equal average power E\{|h_{ij}|^2\} = \Omega, the overall SNR may be expressed as a gamma distributed random variable:

  \gamma_t^{mimo} = \frac{P_T}{n_T \sigma_n^2} \cdot \|H\|_F^2   (74)

  \|H\|_F^2 \sim G(n_T \cdot n_R, \Omega)   (75)

  \gamma_t^{mimo} \sim G(N \cdot m, \bar{\gamma}_l/m)   (76)

  where N = n_T \cdot n_R and \bar{\gamma}_l = \frac{P_T \Omega}{\sigma_n^2 n_T}.

• Effective diversity order: N_{div} = \frac{1}{\tau^2} = N \cdot m.

MIMO with STBC

Capacity summary

• Note that the capacity formulas given below are obtained with the assumption of an average power constraint P_T at the transmitter, uncorrelated noise of equal power \sigma_n^2 in all branches, perfect channel knowledge at the receiver, and no channel knowledge at the transmitter.

• SISO: C = \log_2\left(1 + \frac{P_T}{\sigma_n^2}|h|^2\right).

• SIMO: C = \log_2\left(1 + \frac{P_T}{\sigma_n^2}\|h\|^2\right).

• MIMO: C = \log_2 \det\left(I_{n_R} + \frac{P_T}{\sigma_n^2 n_T} H H^H\right).

• MIMO with STBC: C = \log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\|H\|_F^2\right).

MIMO with STBC

STBC - a capacity perspective

• STBC are useful since they are able to provide full diversity over the coherent, flat-fading channel.

• In addition, they require simple encoding and decoding.

• Although STBC provide full diversity at a low computational cost, it can be shown that they incur a loss in capacity because they convert the matrix channel into a scalar AWGN channel whose capacity is smaller than the true channel capacity.

S. Sandhu, A. Paulraj, "Space-time block codes: A capacity perspective," IEEE Communications Letters, Vol. 4, No. 12, December 2000.

MIMO with STBC



  C = \log_2 \det\left(I_{n_R} + \frac{P_T}{\sigma_n^2 n_T} H H^H\right)
    = \sum_{i=1}^{k} \log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\sigma_i^2\right)
    = \log_2\left(1 + P\sum_{i=1}^{k}\sigma_i^2 + P^2\sum_{i_1 < i_2}\sigma_{i_1}^2\sigma_{i_2}^2 + P^3\sum_{i_1 < i_2 < i_3}\sigma_{i_1}^2\sigma_{i_2}^2\sigma_{i_3}^2 + \cdots + P^k\prod_{i=1}^{k}\sigma_i^2\right)
    = \log_2\left(1 + P\|H\|_F^2 + P^2\sum_{i_1 < i_2}\sigma_{i_1}^2\sigma_{i_2}^2 + P^3\sum_{i_1 < i_2 < i_3}\sigma_{i_1}^2\sigma_{i_2}^2\sigma_{i_3}^2 + \cdots + P^k\prod_{i=1}^{k}\sigma_i^2\right)
    \ge \log_2\left(1 + P\|H\|_F^2\right)
    \ge \frac{K}{T}\cdot\log_2\left(1 + P\|H\|_F^2\right)

• The capacity difference is a function of the channel singular values. This can be used to determine under which conditions STBC is optimal in terms of capacity.
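The capacity loss can be seen directly by evaluating both expressions for a random channel — a sketch only, assuming NumPy, a rate-one STBC (K = T) and an arbitrary 2×2 i.i.d. Rayleigh channel at 10 dB:

```python
import numpy as np

rng = np.random.default_rng(9)
n_t = n_r = 2
snr = 10.0                                              # P_T / sigma_n^2

H = np.sqrt(0.5) * (rng.standard_normal((n_r, n_t)) + 1j * rng.standard_normal((n_r, n_t)))
c_mimo = np.log2(np.linalg.det(np.eye(n_r) + (snr / n_t) * (H @ H.conj().T)).real)
c_stbc = np.log2(1 + (snr / n_t) * np.linalg.norm(H, 'fro') ** 2)   # K/T = 1
print(c_mimo, c_stbc)               # c_stbc <= c_mimo, equality only for a rank-one channel
```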

MIMO with STBC

• When the channel matrix is a rank-one matrix, there is only a single non-zero singular value, i.e., a space-time block code is optimal (with respect to capacity) when it is rate one (K = T) and it is used over a channel of rank one [Sandhu,2000].

• For the i.i.d. Rayleigh channel with n_R > 1, the rank of the channel matrix is greater than one; thus a space-time block code of any rate used over the i.i.d. Rayleigh channel with multiple receive antennas always incurs a loss in capacity.

• A full-rate space-time block code used over any channel with one receive antenna is always optimal with respect to capacity.

• Essentially, STBC trades off capacity benefits for low-complexity encoding and decoding.

• Note that with spatial multiplexing, the trade-off is the opposite of STBC: it trades off diversity benefits for lower complexity.

Outage capacity

Outage capacity

• Defined as the probability that the instantaneous capacity falls below a certain threshold or target capacity C_{th}:

  P_{out}(C_{th}) = \operatorname{Prob}[C \le C_{th}] = \int_0^{C_{th}} f_C(C)\, dC = P_C(C_{th})   (77)

Outage capacity - SISO

SISO capacity


  C = \log_2\left(1 + \frac{P_T}{\sigma_n^2}\cdot|h|^2\right) = \log_2\left(1 + \gamma_t^{siso}\right).   (78)

• Assuming that |h| is a Nakagami-m distributed random variable, \gamma_t^{siso} is a gamma distributed random variable with shape parameter \alpha = m and scale parameter \beta = \bar{\gamma}_l/m.

• \bar{\gamma}_l = E\{\gamma_t^{siso}\} = E\left\{\frac{P_T |h|^2}{\sigma_n^2}\right\} = \frac{P_T \Omega}{\sigma_n^2}.

• E\{|h|^2\} = \Omega.

• \gamma_t^{siso} \sim G(m, \bar{\gamma}_l/m).

Outage capacity - SISO

Transformation of random variables

• Let X and Y be continuous random variables with Y = g(X). Suppose g is one-to-one, and both g and its inverse function g^{-1} are continuously differentiable. Then

  f_Y(y) = f_X[g^{-1}(y)]\left|\frac{dg^{-1}(y)}{dy}\right|.   (79)

• Let C = g(\gamma_t^{siso}) = \log_2(1 + \gamma_t^{siso}).

• Then \gamma_t^{siso} = g^{-1}(C) = 2^C - 1.

• Capacity PDF:

  f_C(C) = f_{\gamma_t^{siso}}(2^C - 1)\cdot 2^C \ln 2 = \frac{(2^C - 1)^{m-1}\, e^{-(2^C - 1)/\beta}}{\beta^m\, \Gamma(m)}\cdot 2^C \ln 2   (80)

Outage capacity - SISO

• The SISO outage capacity can be obtained by solving the integral

  P_{out}(C_{th}) = \int_0^{C_{th}} \frac{(2^C - 1)^{m-1}\, e^{-(2^C - 1)/\beta}}{\beta^m\, \Gamma(m)}\cdot 2^C \ln 2\, dC   (81)
                  = 1 - Q\left(m, \frac{(2^{C_{th}} - 1)\, m}{\bar{\gamma}_l}\right)   (82)

• Q(·,·) is the normalized complementary incomplete gamma function, defined as

  Q(a, b) = \frac{\Gamma(a, b)}{\Gamma(a)},   (83)

  where \Gamma(a, b) = \int_b^{\infty} e^{-t}\, t^{a-1}\, dt.
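Expression (82) can be evaluated with the regularized upper incomplete gamma function and cross-checked by simulation — a sketch only, assuming NumPy/SciPy and arbitrary values of m, γ̄_l and C_th:

```python
import numpy as np
from scipy.special import gammaincc            # gammaincc(a, b) = Gamma(a, b)/Gamma(a) = Q(a, b)

m, gamma_bar, c_th = 2.0, 10.0, 2.0            # Nakagami m, average SNR, target capacity
p_out_analytic = 1 - gammaincc(m, (2 ** c_th - 1) * m / gamma_bar)   # equation (82)

rng = np.random.default_rng(10)
gamma_t = rng.gamma(shape=m, scale=gamma_bar / m, size=500_000)      # gamma_t ~ G(m, gamma_bar/m)
p_out_mc = np.mean(np.log2(1 + gamma_t) < c_th)
print(p_out_analytic, p_out_mc)                # should agree closely
```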

Outage capacity - SIMO

SIMO capacity


  C = \log_2\left(1 + \frac{P_T}{\sigma_n^2}\cdot\|h\|^2\right) = \log_2\left(1 + \gamma_t^{simo}\right).   (84)

• Assuming that every channel gain |h_l| in the vector h is a Nakagami-m distributed random variable with the same m parameter, \gamma_t^{simo} is a gamma distributed random variable with shape parameter \alpha = n_R \cdot m and scale parameter \beta = \bar{\gamma}_l/m.

• \gamma_t^{simo} \sim G(n_R \cdot m, \bar{\gamma}_l/m).
Outage capacity - SIMO

Transformation of random variables

• Let C = g(\gamma_t^{simo}) = \log_2(1 + \gamma_t^{simo}).

• Then \gamma_t^{simo} = g^{-1}(C) = 2^C - 1.

• Capacity PDF:

  f_C(C) = f_{\gamma_t^{simo}}(2^C - 1)\cdot 2^C \ln 2 = \frac{(2^C - 1)^{n_R m - 1}\, e^{-(2^C - 1)/\beta}}{\beta^{n_R m}\, \Gamma(n_R \cdot m)}\cdot 2^C \ln 2   (85)

• The SIMO outage capacity can be obtained by solving the integral

  P_{out}(C_{th}) = \int_0^{C_{th}} \frac{(2^C - 1)^{n_R m - 1}\, e^{-(2^C - 1)/\beta}}{\beta^{n_R m}\, \Gamma(n_R \cdot m)}\cdot 2^C \ln 2\, dC   (86)
                  = 1 - Q\left(n_R \cdot m, \frac{(2^{C_{th}} - 1)\, m}{\bar{\gamma}_l}\right)   (87)

Outage capacity - MIMO with STBC

MIMO with STBC




  C = \frac{K}{T}\log_2\left(1 + \frac{P_T}{\sigma_n^2 n_T}\cdot\|H\|_F^2\right) = \frac{K}{T}\log_2\left(1 + \gamma_t^{mimo}\right).   (88)

• Assuming that every channel gain |h_{ij}| in the matrix H is a Nakagami-m distributed random variable with the same m parameter, \gamma_t^{mimo} is a gamma distributed random variable with shape parameter \alpha = N \cdot m (N = n_T \cdot n_R) and scale parameter \beta = \bar{\gamma}_l/(n_T m).

• \gamma_t^{mimo} \sim G(N \cdot m, \bar{\gamma}_l/(n_T m)).

Outage capacity - MIMO with STBC

Transformation of random variables

• Let C = g(\gamma_t^{mimo}) = \frac{K}{T}\log_2(1 + \gamma_t^{mimo}).

• Then \gamma_t^{mimo} = g^{-1}(C) = 2^{(C \cdot T)/K} - 1.

• Capacity PDF:

  f_C(C) = f_{\gamma_t^{mimo}}\left(2^{(C \cdot T)/K} - 1\right)\cdot \frac{T}{K}\, 2^{(C \cdot T)/K} \ln 2   (89)

• The MIMO outage capacity can be obtained by solving the integral

  P_{out}(C_{th}) = \int_0^{C_{th}} \frac{\left(2^{(C \cdot T)/K} - 1\right)^{N m - 1} e^{-(2^{(C \cdot T)/K} - 1)/\beta}}{\beta^{N m}\, \Gamma(N \cdot m)}\cdot \frac{T}{K}\, 2^{(C \cdot T)/K} \ln 2\, dC
                  = 1 - Q\left(N \cdot m, \frac{(2^{(C_{th} \cdot T)/K} - 1)\, m\, n_T}{\bar{\gamma}_l}\right)   (90)

Outage capacity - MIMO

MIMO capacity
• Recall that C = \sum_{i=1}^{k} \log_2\left(1 + \frac{P_T}{\sigma_n^2}\lambda_i\right).

• With the assumption that all eigenvalues are i.i.d. random variables and n_T = n_R, the maximum capacity can be expressed as C = n_T \cdot \log_2\left(1 + \frac{P_T}{\sigma_n^2}\lambda\right).

• Let C = g(\lambda) = n_T \cdot \log_2\left(1 + \frac{P_T}{\sigma_n^2}\lambda\right).

• Then \lambda = g^{-1}(C) = \frac{2^{C/n_T} - 1}{P_T/\sigma_n^2}.

• Capacity PDF:

  f_C(C) = f_\lambda\left(\frac{2^{C/n_T} - 1}{P_T/\sigma_n^2}\right)\cdot \frac{2^{C/n_T} \ln 2}{n_T\, (P_T/\sigma_n^2)}.   (91)

• Need to know the PDF of \lambda to obtain the capacity PDF.

Outage capacity

Capacity CDF at 10dB SNR


[Figure: capacity CDF curves for 1x1, 3x3, 1x8, and 10x10 antenna configurations; vertical axis: Prob. capacity ≤ abscissa; horizontal axis: capacity in bits/s/Hz]

Outage capacity of i.i.d. Rayleigh fading channels at 10dB branch SNR

Outage capacity

Capacity CDF at 1dB SNR


[Figure: capacity CDF curves for 2x2 and 2x2 (STBC); vertical axis: Prob. capacity ≤ abscissa; horizontal axis: capacity in bits/s/Hz]

Outage capacity of a 2x2 MIMO Rayleigh fading channel using the Alamouti STBC at
the transmitter at 1dB branch SNR

Summary

• The capacity formulas of SISO, SIMO and MIMO fading channels have
been derived based on maximizing the mutual information between the
transmitted and received signal.

• The Alamouti space-time block code has been presented. Although


capable of increasing the diversity benefits, the use of STBC trades
off capacity for low complexity encoding and decoding.

• By using transformation of random variables, closed-form expressions


for the outage capacity for SISO, SIMO and MIMO (STBC at the
transmitter) i.i.d. Nakagami-m fading channels were derived.
