[Front cover: SHU LIN / DANIEL J. COSTELLO, JR., Error Control Coding: Fundamentals and Applications. Prentice-Hall Series in Computer Applications in Electrical Engineering.]

ERROR CONTROL CODING
Fundamentals and Applications

PRENTICE-HALL COMPUTER APPLICATIONS IN ELECTRICAL ENGINEERING SERIES
FRANKLIN F. KUO, editor

Abramson and Kuo, Computer-Communication Networks
Bowers and Sedore, SCEPTRE: A Computer Program for Circuit and Systems Analysis
Cadzow, Discrete Time Systems: An Introduction with Interdisciplinary Applications
Cadzow and Martens, Discrete-Time and Computer Control Systems
Davis, Computer Data Displays
Friedman and Menon, Fault Detection in Digital Circuits
Huelsman, Basic Circuit Theory
Jensen and Lieberman, IBM Circuit Analysis Program: Techniques and Applications
Jensen and Watkins, Network Analysis: Theory and Computer Methods
Kline, Digital Computer Design
Kochenburger, Computer Simulation of Dynamic Systems
Kuo (ed.), Protocols and Techniques for Data Communication Networks
Kuo and Magnuson, Computer Oriented Circuit Design
Lin, An Introduction to Error-Correcting Codes
Lin and Costello, Error Control Coding: Fundamentals and Applications
Nagle, Carroll, and Irwin, An Introduction to Computer Logic
Rhyne, Fundamentals of Digital Systems Design
Sifferlen and Vartanian, Digital Electronics with Engineering Applications
Staudhammer, Circuit Analysis by Digital Computer
Stoutemyer, PL/I Programming for Engineering and Science

ERROR CONTROL CODING
Fundamentals and Applications

SHU LIN
University of Hawaii
Texas A&M University

DANIEL J. COSTELLO, JR.
Illinois Institute of Technology

Prentice-Hall, Inc.
Englewood Cliffs, New Jersey 07632

Library of Congress Cataloging in Publication Data
(Prentice-Hall computer applications in electrical engineering series)
Includes bibliographical references and index.
1. Error-correcting codes (Information theory)
QA268.L55
ISBN 0-13-283796-X

Editorial/production supervision and interior design by Anne Simpson
Cover design by Marvin Warshaw
Manufacturing buyer: Joyce Levatino

© 1985 by Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632

All rights reserved. No part of this book may be reproduced in any form or by any means without permission in writing from the publisher.

Printed in the United States of America
10 9 8 7

ISBN 0-13-283796-X

PRENTICE-HALL INTERNATIONAL, INC., London
PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney
EDITORA PRENTICE-HALL DO BRASIL, LTDA., Rio de Janeiro
PRENTICE-HALL CANADA INC., Toronto
PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
PRENTICE-HALL OF JAPAN, INC., Tokyo
PRENTICE-HALL OF SOUTHEAST ASIA PTE. LTD., Singapore
WHITEHALL BOOKS LIMITED, Wellington, New Zealand

With Love and Affection for
Ivy, Julian, Patrick, and Michelle Lin
and
Lucretia, Kevin, Nick, Daniel, and Anthony Costello

Contents

PREFACE xiii

CHAPTER 1  CODING FOR RELIABLE DIGITAL TRANSMISSION AND STORAGE 1
1.1 Introduction 1
1.2 Types of Codes 3
1.3 Modulation and Demodulation 5
1.4 Maximum Likelihood Decoding 8
1.5 Types of Errors 10
1.6 Error Control Strategies 12
References 14

CHAPTER 2  INTRODUCTION TO ALGEBRA 15
2.1 Groups 15
2.2 Fields 19
2.3 Binary Field Arithmetic 24
2.4 Construction of Galois Field GF(2^m) 29
2.5 Basic Properties of Galois Field GF(2^m) 34
2.6 Computations Using Galois Field GF(2^m) Arithmetic 39
2.7 Vector Spaces 40
2.8 Matrices 46
Problems 48
References 50

CHAPTER 3  LINEAR BLOCK CODES 51
3.1 Introduction to Linear Block Codes 51
3.2 Syndrome and Error Detection 58
3.3 Minimum Distance of a Block Code 63
3.4 Error-Detecting and Error-Correcting Capabilities of a Block Code 65
3.5 Standard Array and Syndrome Decoding 68
3.6 Probability of an Undetected Error for Linear Codes over a BSC 76
3.7 Hamming Codes 79
Problems 82
References 84

CHAPTER 4  CYCLIC CODES 85
4.1 Description of Cyclic Codes 85
4.2 Generator and Parity-Check Matrices of Cyclic Codes 92
4.3 Encoding of Cyclic Codes 95
4.4 Syndrome Computation and Error Detection 98
4.5 Decoding of Cyclic Codes 103
4.6 Cyclic Hamming Codes 111
4.7 Shortened Cyclic Codes 116
Problems 121
References 124

CHAPTER 5  ERROR-TRAPPING DECODING FOR CYCLIC CODES 125
5.1 Error-Trapping Decoding 125
5.2 Improved Error-Trapping Decoding 131
5.3 The Golay Code 134
Problems 139
References 139

CHAPTER 6  BCH CODES 141
6.1 Description of the Codes 142
6.2 Decoding of the BCH Codes 151
6.3 Implementation of Galois Field Arithmetic 161
6.4 Implementation of Error Correction 167
6.5 Nonbinary BCH Codes and Reed-Solomon Codes 170
6.6 Weight Distribution and Error Detection of Binary BCH Codes 177
Problems 180
References 182

CHAPTER 7  MAJORITY-LOGIC DECODING FOR CYCLIC CODES 184
7.1 One-Step Majority-Logic Decoding 184
7.2 Class of One-Step Majority-Logic Decodable Codes 194
7.3 Other One-Step Majority-Logic Decodable Codes 201
7.4 Multiple-Step Majority-Logic Decoding 209
Problems 219
References 221

CHAPTER 8  FINITE GEOMETRY CODES 223
8.1 Euclidean Geometry 223
8.2 Majority-Logic Decodable Cyclic Codes Based on Euclidean Geometry 227
8.3 Projective Geometry and Projective Geometry Codes 240
8.4 Modifications of Majority-Logic Decoding 245
Problems 253
References 255

CHAPTER 9  BURST-ERROR-CORRECTING CODES 257
9.1 Introduction 257
9.2 Decoding of Single-Burst-Error-Correcting Cyclic Codes 259
9.3 Single-Burst-Error-Correcting Codes 267
9.4 Interleaved Codes 271
9.5 Phased-Burst-Error-Correcting Codes 272
9.6 Burst-and-Random-Error-Correcting Codes 274
9.7 Modified Fire Codes for Simultaneous Correction of Burst and Random Errors 280
Problems 282
References 284

CHAPTER 10  CONVOLUTIONAL CODES 287
10.1 Encoding of Convolutional Codes 288
10.2 Structural Properties of Convolutional Codes 295
10.3 Distance Properties of Convolutional Codes 308
Problems 312
References 313

CHAPTER 11  MAXIMUM LIKELIHOOD DECODING OF CONVOLUTIONAL CODES 315
11.1 The Viterbi Algorithm 315
11.2 Performance Bounds for Convolutional Codes 322
11.3 Construction of Good Convolutional Codes 329
11.4 Implementation of the Viterbi Algorithm 337
11.5 Modifications of the Viterbi Algorithm 345
Problems 346
References 348

CHAPTER 12  SEQUENTIAL DECODING OF CONVOLUTIONAL CODES 350
12.1 The Stack Algorithm 351
12.2 The Fano Algorithm 360
12.3 Performance Characteristics of Sequential Decoding 364
12.4 Code Construction for Sequential Decoding 374
12.5 Other Approaches to Sequential Decoding 380
Problems 384
References 386

CHAPTER 13  MAJORITY-LOGIC DECODING OF CONVOLUTIONAL CODES 388
13.1 Feedback Decoding 389
13.2 Error Propagation and Definite Decoding 406
13.3 Distance Properties and Code Performance 408
13.4 Code Construction for Majority-Logic Decoding 414
13.5 Comparison with Probabilistic Decoding 424
Problems 426
References 428

CHAPTER 14  BURST-ERROR-CORRECTING CONVOLUTIONAL CODES 429
14.1 Bounds on Burst-Error-Correcting Capability 430
14.2 Burst-Error-Correcting Convolutional Codes 430
14.3 Interleaved Convolutional Codes 441
14.4 Burst-and-Random-Error-Correcting Convolutional Codes 442
Problems
455
References 456

CHAPTER 15  AUTOMATIC-REPEAT-REQUEST STRATEGIES 458
15.1 Basic ARQ Schemes 459
15.2 Selective-Repeat ARQ System with Finite Receiver Buffer 465
15.3 ARQ Schemes with Mixed Modes of Retransmission 474
15.4 Hybrid ARQ Schemes 477
15.5 Class of Half-Rate Invertible Codes 481
15.6 Type II Hybrid Selective-Repeat ARQ with Finite Receiver Buffer 483
Problems 494
References 495

CHAPTER 16  APPLICATIONS OF BLOCK CODES FOR ERROR CONTROL IN DATA STORAGE SYSTEMS 498
16.1 Error Control for Computer Main Processor and Control Storages 498
16.2 Error Control for Magnetic Tapes 503
16.3 Error Control in IBM 3850 Mass Storage System 516
16.4 Error Control for Magnetic Disks 525
16.5 Error Control in Other Data Storage Systems 527
Problems 529
References 532

CHAPTER 17  PRACTICAL APPLICATIONS OF CONVOLUTIONAL CODES 533
17.1 Applications of Viterbi Decoding 533
17.2 Applications of Sequential Decoding 539
17.3 Applications of Majority-Logic Decoding 543
17.4 Applications to Burst-Error Correction 547
17.5 Applications of Convolutional Codes in ARQ Systems 551
Problems 556
References 557

Appendix A  Tables of Galois Fields 561
Appendix B  Minimal Polynomials of Elements in GF(2^m) 579
Appendix C  Generator Polynomials of Binary Primitive BCH Codes of Length up to 2^10 − 1 583

INDEX 599

Preface

This book owes its beginnings to the pioneering work of Claude Shannon in 1948 on achieving reliable communication over a noisy transmission channel. Shannon's central theme was that if the signaling rate of the system is less than the channel capacity, reliable communication can be achieved if one chooses proper encoding and decoding techniques. The design of good codes and of efficient decoding methods, initiated by Hamming, Slepian, and others in the early 1950s, has occupied the energies of many researchers since then. Much of this work is highly mathematical in nature, and requires an extensive background in modern algebra and probability theory to understand.
This has acted as an impediment to many practicing engineers and computer scientists, who are interested in applying these techniques to real systems. One of the purposes of this book is to present the essentials of this highly complex material in such a manner that it can be understood and applied with only a minimum of mathematical background.

Work on coding in the 1950s and 1960s was devoted primarily to developing the theory of efficient encoders and decoders. In 1970, the first author published a book entitled An Introduction to Error-Correcting Codes, which presented the fundamentals of the previous two decades of work covering both block and convolutional codes. The approach was to explain the material in an easily understood manner, with a minimum of mathematical rigor. The present book takes the same approach to covering the fundamentals of coding. However, the entire manuscript has been rewritten and much new material has been added. In particular, during the 1970s the emphasis in coding research shifted from theory to practical applications. Consequently, three completely new chapters on the applications of coding to digital transmission and storage systems have been added. Other major additions include a comprehensive treatment of the error-detecting capabilities of block codes, and an emphasis on probabilistic decoding methods for convolutional codes. A brief description of each chapter follows.

Chapter 1 presents an overview of coding for error control in data transmission and
The fundamentals of linear codes are presented in Chapter 3. Also included is an extensive section on error detection with linear codes, an important topic which is discussed only briefly in most other books on coding. Most linear codes used in practice are cyclic codes. The basic structure and properties of eyclic codes are pre- sented in Chapter 4. A simple way of decoding some cyclic codes, known as error- trapping decoding, is covered in Chapter 5. The important class of BCH codes for multiple-error correction is presented in detail in Chapter 6. A discussion of hardware and software implementation of BCH decoders is included, as well as the use of BCH codes for error detection. Chapters 7 and 8 provide detailed coverage of majority- logic decoding and majority-logic decodable codes. The material on fundamentals of block codes concludes with Chapter 9 on bursterror correction, This discussion includes codes for correcting a combination of burst and random errors. Chapters 10 through 14 are devoted to the presentation of the fundamentals of convolutional codes. Convolutional codes are introduced in Chapter 10, with the encoder state diagram serving as the basis for studying code structure and distance Properties. The Viterbi decoding algorithm for both hard and soft demodulator deci- sions is covered in Chapter 11. A detailed performance analysis based on code dis- tance properties is also included. Chapter 12 presents the basics of sequential decoding using both the stack and Fano algorithms. The difficult problem of the computational performance of sequential decoding is discussed without including detailed proofs. Chapter 13 covers majority-logic decoding of convolutional codes. The chapter con- cludes with a comparison of the three primary decoding methods for convolutional codes, Burst-error-correcting convolutional codes are presented in Chapter 14. A section is included on convolutional codes that correct a combination of burst and random errors. 
Burst-trapping codes, which embed a block code in a convolutional code, are also covered here.

Chapters 15 through 17 cover a variety of applications of coding to modern-day data communication and storage systems. Although they are not intended to be comprehensive, they are representative of the many different ways in which coding is used as a method of error control. This emphasis on practical applications makes the book unique in the coding literature. Chapter 15 is devoted to automatic-repeat-request (ARQ) error control schemes used for data communications. Both pure ARQ (error detection with retransmission) and hybrid ARQ (a combination of error correction and error detection with retransmission) are discussed. Chapter 16 covers the application of block codes for error control in data storage systems. Coding techniques for computer memories, magnetic tape, magnetic disk, and optical storage systems are included. Finally, Chapter 17 presents a wide range of applications of convolutional codes to digital communication systems. Codes actually used on many space and satellite systems are included, as well as a section on using convolutional codes in a hybrid ARQ system.

Several additional features are included to make the book useful both as a classroom text and as a comprehensive reference for engineers and computer scientists involved in the design of error control systems. Three appendices are given which include details of algebraic structure used in the construction of block codes. Many tables of the best known codes for a given decoding structure are presented throughout the book. These should prove valuable to designers looking for the best code for a particular application. A set of problems is given at the end of each chapter. Most of the problems are relatively straightforward applications of material covered in the text, although some more advanced problems are also included. There are a total of over 250 problems.
A solutions manual will be made available to instructors using the text. Over 300 references are also included. Although no attempt was made to compile a complete bibliography on coding, the references listed serve to provide additional detail on topics covered in the book.

The book can be used as a text for an introductory course on error-correcting codes and their applications at the senior or beginning graduate level. It can also be used as a self-study guide for engineers and computer scientists in industry who want to learn the fundamentals of coding and how they can be applied to the design of error control systems.

As a text, the book can be used as the basis for a two-semester sequence in coding theory and applications, with Chapters 1 through 9 on block codes covered in one semester and Chapters 10 through 17 on convolutional codes and applications in a second semester. Alternatively, portions of the book can be covered in a one-semester course. One possibility is to cover Chapters 1 through 6 and 10 through 12, which include the basic fundamentals of both block and convolutional codes. A course on block codes and applications can be comprised of Chapters 1 through 6, 9, 15, and 16, whereas Chapters 1 through 3, 10 through 14, and 17 include convolutional codes and applications as well as the rudiments of block codes. Preliminary versions of the notes on which the book is based have been classroom tested by both authors for university courses and for short courses in industry, with very gratifying results.

It is difficult to identify the many individuals who have influenced this work over the years. Naturally, we both owe a great deal of thanks to our thesis advisors, Professors Paul E. Pfeiffer and James L. Massey. Without their stimulating our interest in this exciting field and their constant encouragement and guidance through the early years of our research, this book would not have been possible.
Much of the material in the first half of the book on block codes owes a great deal to Professors W. Wesley Peterson and Tadao Kasami. Their pioneering work in algebraic coding and their valuable discussions and suggestions had a major impact on the writing of this material. The second half of the book on convolutional codes was greatly influenced by Professor James L. Massey. His style of clarifying the basic elements in highly complex subject matter was instrumental throughout the preparation of this material. In particular, most of Chapter 14 was based on a set of notes that he prepared.

We are grateful to the National Science Foundation, and to Mr. Elias Schutzman, for their continuing support of our research in the coding field. Without this assistance, our interest in coding could never have developed to the point of writing this book. We thank the University of Hawaii and Illinois Institute of Technology for their support of our efforts in writing this book and for providing facilities. We also owe thanks to Professor Franklin F. Kuo for suggesting that we write this book, and for his constant encouragement and guidance during the preparation of the manuscript.

Another major source of stimulation for this effort came from our graduate students, who have provided a continuing stream of new ideas and insights. Those who have made contributions directly reflected in this book include Drs. Pierre Chevillat, Farhad Hemmati, Alexander Drukarev, and Michael J. Miller. We would like to express our special appreciation to Professors Tadao Kasami, Michael J. Miller, and Yu-ming Wang, who read the first draft very carefully and made numerous corrections and suggestions for improvements. We also wish to thank our secretaries for their dedication and patience in typing this manuscript. Deborah Waddy and Michelle Masumoto deserve much credit for their perseverance in preparing drafts and redrafts of this work.
Finally, we would like to give special thanks to our parents, wives, and children for their continuing love and affection throughout this project.

Shu Lin
Daniel J. Costello, Jr.

ERROR CONTROL CODING
Fundamentals and Applications

Coding for Reliable Digital Transmission and Storage

1.1 INTRODUCTION

In recent years, there has been an increasing demand for efficient and reliable digital data transmission and storage systems. This demand has been accelerated by the emergence of large-scale, high-speed data networks for the exchange, processing, and storage of digital information in the military, governmental, and private spheres. A merging of communications and computer technology is required in the design of these systems. A major concern of the designer is the control of errors so that reliable reproduction of data can be obtained.

In 1948, Shannon [1] demonstrated in a landmark paper that, by proper encoding of the information, errors induced by a noisy channel or storage medium can be reduced to any desired level without sacrificing the rate of information transmission or storage. Since Shannon's work, a great deal of effort has been expended on the problem of devising efficient encoding and decoding methods for error control in a noisy environment. Recent developments have contributed toward achieving the reliability required by today's high-speed digital systems, and the use of coding for error control has become an integral part in the design of modern communication systems and digital computers.

The transmission and storage of digital information have much in common. They both transfer data from an information source to a destination (or user). A typical transmission (or storage) system may be represented by the block diagram shown in Figure 1.1. The information source can be either a person or a machine (e.g., a digital computer).
The source output, which is to be communicated to the destination, can be either a continuous waveform or a sequence of discrete symbols.

[Figure 1.1 Block diagram of a typical data transmission or storage system: information source → source encoder → channel encoder → modulator (writing unit) → channel (storage medium), with noise entering the channel → demodulator (reading unit) → channel decoder → source decoder → destination.]

The source encoder transforms the source output into a sequence of binary digits (bits) called the information sequence u. In the case of a continuous source, this involves analog-to-digital (A/D) conversion. The source encoder is ideally designed so that (1) the number of bits per unit time required to represent the source output is minimized, and (2) the source output can be reconstructed from the information sequence u without ambiguity. The subject of source coding is not discussed in this book. For a thorough treatment of this important topic, see References 2 and 3.

The channel encoder transforms the information sequence u into a discrete encoded sequence v called a code word. In most instances v is also a binary sequence, although in some applications nonbinary codes have been used. The design and implementation of channel encoders to combat the noisy environment in which code words must be transmitted or stored is one of the major topics of this book.

Discrete symbols are not suitable for transmission over a physical channel or recording on a digital storage medium. The modulator (or writing unit) transforms each output symbol of the channel encoder into a waveform of duration T seconds which is suitable for transmission (or recording). This waveform enters the channel (or storage medium) and is corrupted by noise. Typical transmission channels include telephone lines, high-frequency radio links, telemetry links, microwave links, satellite links, and so on.
Typical storage media include core and semiconductor memories, magnetic tapes, drums, disk files, optical memory units, and so on. Each of these examples is subject to various types of noise disturbances. On a telephone line, the disturbance may come from switching impulse noise, thermal noise, crosstalk from other lines, or lightning. On magnetic tape, surface defects are regarded as a noise disturbance. The demodulator (or reading unit) processes each received waveform of duration T and produces an output that may be discrete (quantized) or continuous (unquantized). The sequence of demodulator outputs corresponding to the encoded sequence v is called the received sequence r.

The channel decoder transforms the received sequence r into a binary sequence û called the estimated sequence. The decoding strategy is based on the rules of channel encoding and the noise characteristics of the channel (or storage medium). Ideally, û
will be a replica of the information sequence u, although the noise may cause some decoding errors. Another major topic of this book is the design and implementation of channel decoders that minimize the probability of decoding error. The source decoder transforms the estimated sequence û into an estimate of the source output and delivers this estimate to the destination. When the source is continuous, this involves digital-to-analog (D/A) conversion. In a well-designed system, the estimate will be a faithful reproduction of the source output except when the channel (or storage medium) is very noisy.

To focus attention on the channel encoder and channel decoder, (1) the information source and source encoder are combined into a digital source with output u; (2) the modulator (or writing unit), the channel (or storage medium), and the demodulator (or reading unit) are combined into a coding channel with input v and output r; and (3) the source decoder and destination are combined into a digital sink with input û. A simplified block diagram is shown in Figure 1.2.

[Figure 1.2 Simplified model of a coded system: digital source → encoder → coding channel (with noise) → decoder → digital sink.]

The major engineering problem that is addressed in this book is to design and implement the channel encoder/decoder pair such that (1) information can be transmitted (or recorded) in a noisy environment as fast as possible, (2) reliable reproduction of the information can be obtained at the output of the channel decoder, and (3) the cost of implementing the encoder and decoder falls within acceptable limits.

1.2 TYPES OF CODES

There are two different types of codes in common use today, block codes and convolutional codes. The encoder for a block code divides the information sequence into message blocks of k information bits each. A message block is represented by the binary k-tuple u = (u_1, u_2, ..., u_k) called a message. (In block coding, the symbol u is used to denote a k-bit message rather than the entire information sequence.) There are a total of 2^k different possible messages. The encoder transforms each message u independently into an n-tuple v = (v_1, v_2, ..., v_n) of discrete symbols called a code word. (In block coding, the symbol v is used to denote an n-symbol block rather than the entire encoded sequence.) Therefore, corresponding to the 2^k different possible messages, there are 2^k different possible code words at the encoder output. This set of 2^k code words of length n is called an (n, k) block code.
The ratio R = k/n is called the code rate, and can be interpreted as the number of information bits entering the encoder per transmitted symbol. Since the n-symbol output code word depends only on the corresponding k-bit input message, the encoder is memoryless, and can be implemented with a combinational logic circuit.

In a binary code, each code word v is also binary. Hence, for a binary code to be useful (i.e., to have a different code word assigned to each message), k ≤ n.
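The block encoding operation just described can be sketched in a few lines of code. This is a minimal illustration only: the lookup-table encoder, the function name, and the toy (3, 1) repetition codebook are all choices made here for the sketch, not constructions taken from the text (systematic code constructions come later in the book).

```python
# Memoryless (n, k) block encoder as a lookup table: each of the 2^k
# possible k-bit messages is assigned its own n-symbol code word.
# The (3, 1) repetition codebook below is a toy illustration.
def block_encode(info_bits, codebook, k):
    # Split the information sequence into k-bit messages and encode
    # each message independently (the encoder has no memory).
    out = []
    for i in range(0, len(info_bits), k):
        message = tuple(info_bits[i:i + k])
        out.extend(codebook[message])
    return out

k, n = 1, 3
codebook = {(0,): (0, 0, 0), (1,): (1, 1, 1)}   # 2^k = 2 code words
rate = k / n                                     # code rate R = k/n
encoded = block_encode([1, 0, 1], codebook, k)
print(rate)      # 0.333...: one information bit per three transmitted symbols
print(encoded)   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
```

Because the encoder is a pure function of the current k-bit message, it corresponds to the combinational (memoryless) logic circuit described above.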
… 0. (1.5)

When binary coding is used, the modulator has only binary inputs (M = 2). Similarly, when binary demodulator output quantization is used (Q = 2), the decoder has only binary inputs. In this case, the demodulator is said to make hard decisions. Most coded digital communication systems, whether block or convolutional, use binary coding with hard-decision decoding, owing to the resulting simplicity of implementation compared to nonbinary systems. However, some binary coded systems do not use hard decisions at the demodulator output. When Q > 2 (or the output is left unquantized) the demodulator is said to make soft decisions. In this case the decoder must accept multilevel (or analog) inputs. Although this makes the decoder more difficult to implement, soft-decision decoding offers significant performance improvement over hard-decision decoding, as discussed in Chapter 11. A transition probability diagram for a soft-decision DMC with M = 2 and Q > 2 is shown in Figure 1.5(b). This is the appropriate model for a binary-input AWGN channel with finite output quantization. The transition probabilities can be calculated from a knowledge of the signals used, the probability distribution of the noise, and the output quantization thresholds of the demodulator in a manner similar to the calculation of the BSC transition probability p. For a more thorough treatment of the calculation of DMC transition probabilities, see References 4 and 5.

If the detector output in a given interval depends on the transmitted signal in previous intervals as well as the transmitted signal in the present interval, the channel is said to have memory.
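The hard-decision versus soft-decision distinction above amounts to how finely the demodulator output is quantized. A small sketch follows; the threshold values are arbitrary illustrative choices, not an optimized quantizer design from the text.

```python
import bisect

# Demodulator output quantization for a binary-input channel.
# Q = 2 (a single threshold at zero) gives hard decisions; a larger Q
# keeps extra reliability information for a soft-decision decoder.
def quantize(detector_output, thresholds):
    # Returns the region index 0 .. len(thresholds);
    # Q = len(thresholds) + 1 output levels.
    return bisect.bisect(thresholds, detector_output)

hard = [0.0]                                     # Q = 2: hard decision
soft = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]    # Q = 8: soft decision

print(quantize(0.3, hard))   # 1: decided as "1", reliability discarded
print(quantize(0.3, soft))   # 4: barely on the "1" side; weakness is kept
```

With Q = 2 the decoder learns only which side of the threshold the detector output fell on; with Q = 8 it also learns how close to the threshold it was, which is what a soft-decision decoder exploits.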
A fading channel is a good example of a channel with memory, since the multipath transmission destroys the independence from interval to interval. Appropriate models for channels with memory are difficult to construct, and coding for these channels is normally done on an ad hoc basis.

Two important and related parameters in any digital communication system are the speed of information transmission and the bandwidth of the channel. Since one encoded symbol is transmitted every T seconds, the symbol transmission rate (baud rate) is 1/T. In a coded system, if the code rate is R = k/n, k information bits correspond to the transmission of n symbols, and the information transmission rate (data rate) is R/T bits per second (bps). In addition to signal modification due to the effects of noise, all communication channels are subject to signal distortion due to bandwidth limitations. To minimize the effect of this distortion on the detection process, the channel should have a bandwidth W of roughly 1/2T hertz (Hz).¹ In an uncoded system (R = 1), the data rate is 1/T = 2W, and is limited by the channel bandwidth. In a binary-coded system, with a code rate R < 1, the data rate is R/T = 2RW, and is reduced by the factor R compared to an uncoded system. Hence, to maintain the same data rate as the uncoded system, the coded system requires a bandwidth expansion by a factor of 1/R. This is characteristic of binary-coded systems: they require some bandwidth expansion to maintain a constant data rate. If no additional bandwidth is available without undergoing severe signal distortion, binary coding is not feasible, and other means of reliable communication must be sought.²

¹The exact bandwidth required depends on the shape of the signal waveform, the acceptable limits of distortion, and the definition of bandwidth.
²This does not preclude the use of coding, but requires only that a larger set of signals be found. See References 4 to 6.

1.4 MAXIMUM LIKELIHOOD DECODING

A block diagram of a coded system on an AWGN channel with finite output quantization is shown in Figure 1.6.

[Figure 1.6 Coded system on an additive white Gaussian noise channel: digital source → encoder → modulator → AWGN channel → matched filter detector → quantizer (demodulator) → decoder → digital sink.]

In a block-coded system, the source output u represents a k-bit message, the encoder output v represents an n-symbol code word, the demodulator output r represents the corresponding Q-ary received n-tuple, and the decoder output û represents the k-bit estimate of the encoded message. In a convolutional coded system, u represents a sequence of kL information bits and v represents a code word containing N ≜ nL + nm = n(L + m) symbols, where kL is the length of the information sequence and N is the length of the code word. The additional nm encoded symbols are produced after the last block of information bits has entered the encoder. This is due to the m time unit memory of the encoder, and is discussed more fully in Chapter 10. The demodulator output r is a Q-ary received N-tuple, and the decoder output û is a kL-bit estimate of the information sequence.

The decoder must produce an estimate û of the information sequence u based on the received sequence r. Equivalently, since there is a one-to-one correspondence between the information sequence u and the code word v, the decoder can produce an estimate v̂ of the code word v. Clearly, û = u if and only if v̂ = v. A decoding rule is a strategy for choosing an estimated code word v̂ for each possible received sequence r. If the code word v was transmitted, a decoding error occurs if and only if v̂ ≠ v. Given that r is received, the conditional error probability of the decoder is defined as

    P(E|r) ≜ P(v̂ ≠ v|r).    (1.6)
a6 The error probability of the decoder is then given by PE)= EPEWPO. an P(9) is independent of the decoding rule used since r is produced prior to decoding. Hence, an optimum decoding rule [i.e., one that minimizes P(E)] must minimize P(E|x) = P(@ # vit) for all, Since minimizing P(9 # v|r) is equivalent to maximiz- Sec. 1.4 Maximum Likelihood Decoding 9ing P(@ = v|x), P(E|r) is minimized for a given r by choosing # as the code word ¥ Which maximizes _ PIP) Pe” that is, is chosen as the most likely code word given that ris received. If all infor- mation sequences, and hence all code words, are equally likely [.¢., P(s) is the same for aly}, maximizing (1.8) is equivalent to maximizing P(e|¥). For a DMC Mules. Pel) =P (le. ) since for a memoryless channel each received symbol depends only on the corre- sponding transmitted symbol. A decoder that chooses its estimate to maximize (1.9) is called a maximum likelihood decoder (MLD). Since log x is a monotone increasing, function of x, maximizing (1.9) is equivalent to maximizing the log-likelihood function 2p log P(elv) = 5 log P(r|2). (1.10) ‘An MLD for a DMC then chooses ¢ as the code word v that maximizes the sum in (1.10). If the code words are not equally likely, an MLD is not necessarily optimum, since the conditional probabilities P(r|v) must be weighted by the code word proba- bilities P(e) to determine which code word maximizes P(y|x). However, in many sys- tems, the code word probabilities are not known exactly at the receiver, making optimum decoding impossible, and an MLD then becomes the best feasible decoding rule Now consider specializing the MLD decoding rule to the BSC. In this case r is a binary sequence which may differ from the transmitted code word v in some positions ‘because of the channel noise. When r, # 0, P(r|0,) = p,and when r, 1p. Let d(r, v), be the distance between r and v (i.e., the number of positions in which r and v differ). 
For a block code of length n, (1.10) becomes

    log P(r | v) = d(r, v) log p + [n − d(r, v)] log (1 − p)
                 = d(r, v) log [p/(1 − p)] + n log (1 − p).    (1.11)

[For a convolutional code, n in (1.11) is replaced by N.] Since log [p/(1 − p)] < 0 for p < 1/2 and n log (1 − p) is a constant for all v, the MLD decoding rule for the BSC chooses v̂ as the code word v which minimizes the distance d(r, v) between r and v; that is, it chooses the code word that differs from the received sequence in the fewest number of positions. Hence, an MLD for the BSC is sometimes called a minimum distance decoder.

The capability of a noisy channel to transmit information reliably was determined by Shannon [1] in his original work. This result, called the noisy channel coding theorem, states that every channel has a channel capacity C, and that for any rate R < C, there exist codes of rate R which, with maximum likelihood decoding, have an arbitrarily small decoding error probability P(E). In particular, for any R < C, there exist block codes of length n such that

    P(E) < 2^(−n E_b(R)),    (1.12)

and there exist convolutional codes of memory order m such that

    P(E) < 2^(−(m+1)n E_c(R)),    (1.13)

where n_A ≜ (m + 1)n is called the code constraint length. E_b(R) and E_c(R) are positive functions of R for R < C and are completely determined by the channel characteristics. The bound of (1.12) implies that arbitrarily small error probabilities are achievable with block coding for any fixed R < C by increasing the block length n while holding the ratio k/n constant. The bound of (1.13) implies that arbitrarily small error probabilities are achievable with convolutional coding for any fixed R < C by increasing the constraint length n_A (i.e., by increasing the memory order m while holding k and n constant).

The noisy channel coding theorem is based on an argument called random coding. The bound obtained is actually on the average error probability of the ensemble of all codes.
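Before continuing, the two equivalent BSC decoding rules above can be sketched concretely. The repetition code and crossover probability below are assumed example values, not from the text; for p < 1/2, maximizing the log-likelihood of (1.11) and minimizing the Hamming distance d(r, v) select the same code word:

```python
import math

# Hypothetical illustration: MLD on a BSC via the log-likelihood of (1.11)
# versus minimum-distance decoding. Code book and p are assumed examples.
p = 0.1  # BSC crossover probability, p < 1/2 (assumed)

def hamming(r, v):
    """d(r, v): number of positions in which r and v differ."""
    return sum(ri != vi for ri, vi in zip(r, v))

def log_likelihood(r, v):
    """log P(r|v) = d(r,v) log p + [n - d(r,v)] log(1 - p), per (1.11)."""
    d, n = hamming(r, v), len(r)
    return d * math.log(p) + (n - d) * math.log(1 - p)

code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]  # (5,1) repetition code
r = (1, 1, 0, 1, 0)                        # received word with two bit errors

v_mld = max(code, key=lambda v: log_likelihood(r, v))  # maximize (1.11)
v_md = min(code, key=lambda v: hamming(r, v))          # minimize d(r, v)
assert v_mld == v_md == (1, 1, 1, 1, 1)                # the two rules agree
```

Here d(r, v) is 2 for the all-ones code word and 3 for the all-zeros code word, so both rules decode to (1, 1, 1, 1, 1).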
Since some codes must perform better than the average, the noisy channel coding theorem guarantees the existence of codes satisfying (1.12) and (1.13), but does not indicate how to construct these codes. Furthermore, to achieve very low error probabilities for block codes of fixed rate R < C, long block lengths are needed. This requires that the number of code words 2^k = 2^(nR) must be very large. Since an MLD must compute log P(r | v) for each code word, and then choose the code word that gives the maximum, the number of computations that must be performed by an MLD becomes excessively large. For convolutional codes, low error probabilities require a large memory order m. As will be seen in Chapter 11, an MLD for convolutional codes requires approximately 2^(km) computations to decode each block of k information bits. This, too, becomes excessively large as m increases. Hence, it is impractical to achieve very low error probabilities with maximum likelihood decoding. Therefore, two major problems are encountered when designing a coded system to achieve low error probabilities: (1) to construct good long codes whose performance with maximum likelihood decoding would satisfy (1.12) and (1.13), and (2) to find easily implementable methods of encoding and decoding these codes such that their actual performance is close to what could be achieved with maximum likelihood decoding. The remainder of this book is devoted to finding solutions to these two problems.

1.5 TYPES OF ERRORS

On memoryless channels, the noise affects each transmitted symbol independently. As an example, consider the BSC whose transition diagram is shown in Figure 1.5(a). Each transmitted bit has a probability p of being received incorrectly and a probability 1 − p of being received correctly, independently of other transmitted bits. Hence transmission errors occur randomly in the received sequence, and memoryless channels are called random-error channels.
Good examples of random-error channels are the deep-space channel and many satellite channels. Most line-of-sight transmission facilities, as well, are affected primarily by random errors. The codes devised for correcting random errors are called random-error-correcting codes. Most of the codes presented in this book are random-error-correcting codes. In particular, Chapters 3 through 8 and 10 through 13 are devoted to codes of this type.

On channels with memory, the noise is not independent from transmission to transmission. A simplified model of a channel with memory is shown in Figure 1.7. This model contains two states, a "good state," in which transmission errors occur infrequently, p₁ ≈ 0, and a "bad state," in which transmission errors are highly probable.

[Figure 1.7: Simplified model of a channel with memory.]
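The two-state model just described can be sketched in simulation; the following Gilbert-type sketch uses assumed (hypothetical) state-transition and error probabilities, and shows how the channel's memory produces clustered, bursty errors rather than independent ones:

```python
import random

# Hedged sketch of a two-state channel with memory (a Gilbert-type model).
# All probabilities below are assumed example values, not from the text.
def simulate_errors(n, p_good=0.001, p_bad=0.3, q_gb=0.02, q_bg=0.2, seed=1):
    """Return an error pattern: 1 where the channel flips the transmitted bit."""
    rng = random.Random(seed)
    state, errors = "good", []
    for _ in range(n):
        # error probability depends on the current state
        p = p_good if state == "good" else p_bad
        errors.append(1 if rng.random() < p else 0)
        # state transitions give the channel its memory (error bursts)
        if state == "good" and rng.random() < q_gb:
            state = "bad"
        elif state == "bad" and rng.random() < q_bg:
            state = "good"
    return errors

pattern = simulate_errors(1000)
```

Because errors are far more likely in the bad state, the ones in `pattern` tend to cluster into bursts, in contrast to the independent errors of a memoryless channel.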