BOOKPRICE.co.kr
Book price comparison site

[eBook Code] Error Correction Coding (eBook Code, 2nd)

(Mathematical Methods and Algorithms)

Todd K. Moon (Author)
Wiley
List price: 223,120원

New Books

Lowest offer found: sale price 178,490원 (-20%), shipping 0원, benefits/extras 0원, effective lowest price 178,490원.

Note: the search results may include other books.
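For reference (my arithmetic, not data from the page): the 223,120원 list price less 20% is 223,120 × 0.80 = 178,496원, consistent with the quoted 178,490원 once rounded down to the nearest 10원, which appears to be the store's rounding convention.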


Book Information

· Title: [eBook Code] Error Correction Coding (eBook Code, 2nd) (Mathematical Methods and Algorithms)
· Category: Foreign Books > Computers > Software Development/Engineering > General
· ISBN: 9781119567493
· Pages: 992
· Publication date: 2020-12-15

Table of Contents

Preface v

List of Program Files xxxiii

List of Laboratory Exercises xxxiv

List of Algorithms xxxvi

List of Figures xliv

List of Tables xlvi

List of Boxes xlvii

Part I Introduction and Foundations 1

1 A Context for Error Correction Coding 3

1.1 Purpose of This Book 3

1.2 Introduction: Where Are Codes? 3

1.3 The Communications System 5

1.4 Basic Digital Communications 11

1.4.1 Binary Phase-Shift Keying 11

1.4.2 More General Digital Modulation 12

1.5 Signal Detection 15

1.5.1 The Gaussian Channel 15

1.5.2 MAP and ML Detection 17

1.5.3 Special Case: Binary Detection 19

1.5.4 Probability of Error for Binary Detection 21

1.5.5 Bounds on Performance: The Union Bound 22

1.5.6 The Binary Symmetric Channel 24

1.5.7 The BSC and the Gaussian Channel Model 26

1.6 Memoryless Channels 26

1.7 Simulation and Energy Considerations for Coded Signals 26

1.8 Some Important Definitions 29

1.8.1 Detection of Repetition Codes Over a BSC 30

1.8.2 Soft-Decision Decoding of Repetition Codes Over the AWGN 33

1.8.3 Simulation of Results 34

1.8.4 Summary 35

1.9 Hamming Codes 35

1.9.1 Hard-Input Decoding Hamming Codes 36

1.9.2 Other Representations of the Hamming Code 38

1.9.2.1 An Algebraic Representation 39

1.9.2.2 A Polynomial Representation 39

1.9.2.3 A Trellis Representation 40

1.9.2.4 The Tanner Graph Representation 41

1.10 The Basic Questions 41

1.11 Historical Milestones of Coding Theory 42

1.12 A Bit of Information Theory 42

1.12.1 Definitions for Discrete Random Variables 43

1.12.1.1 Entropy and Conditional Entropy 43

1.12.1.2 Relative Entropy, Mutual Information, and Channel Capacity 44

1.12.2 Data Processing Inequality 46

1.12.3 Channels 47

1.12.3.1 Binary Symmetric Channel 47

1.12.3.2 Binary Erasure Channel 48

1.12.3.3 Noisy Typewriter 49

1.12.3.4 Symmetric Channels 49

1.12.4 Channel Capacity 49

1.12.5 Definitions for Continuous Random Variables 50

1.12.6 The Channel Coding Theorem 52

1.12.7 “Proof” of the Channel Coding Theorem 52

1.12.8 Capacity for the Continuous-Time AWGN Channel 57

1.12.9 Transmission at Capacity with Errors 58

1.12.10 The Implication of the Channel Coding Theorem 59

1.12.11 Non-Asymptotic Information Theory 59

1.12.11.1 Discrete Channels 61

1.12.11.2 The AWGN Channel 62

1.12.11.3 Comparison of Codes 63

Lab 1 Simulating a Communications Channel 63

Objective 63

Background 63

Use of Coding in Conjunction with the BSC 64

Assignment 65

Programming Part 65

Resources and Implementation Suggestions 66

1.13 Exercises 67

1.14 References 71

Part II Block Codes 73

2 Groups and Vector Spaces 74

2.1 Introduction 74

2.2 Groups 74

2.2.1 Subgroups 78

2.2.2 Cyclic Groups and the Order of an Element 79

2.2.3 Cosets 80

2.2.4 Lagrange’s Theorem 81

2.2.5 Induced Operations; Isomorphism 82

2.2.6 Homomorphism 86

2.3 Fields: A Prelude 87

2.4 Review of Linear Algebra 89

2.5 Exercises 95

2.6 References 96

3 Linear Block Codes 97

3.1 Basic Definitions 97

3.2 The Generator Matrix Description of Linear Block Codes 98

3.2.1 Rudimentary Implementation 100

3.3 The Parity Check Matrix and Dual Codes 100

3.3.1 Some Simple Bounds on Block Codes 103

3.4 Error Detection and Correction over Hard-Output Channels 104

3.4.1 Error Detection 104

3.4.2 Error Correction: The Standard Array 105

3.5 Weight Distributions of Codes and Their Duals 109

3.6 Binary Hamming Codes and Their Duals 112

3.7 Performance of Linear Codes 113

3.7.1 Error Detection Performance 114

3.7.2 Error Correction Performance 115

3.7.3 Performance for Soft-Decision Decoding 118

3.8 Erasure Decoding 119

3.8.1 Binary Erasure Decoding 120

3.9 Modifications to Linear Codes 121

3.10 Best Known Linear Block Codes 123

3.11 Exercises 123

3.12 References 128

4 Cyclic Codes, Rings, and Polynomials 129

4.1 Introduction 129

4.2 Basic Definitions 129

4.3 Rings 130

4.3.1 Rings of Polynomials 132

4.4 Quotient Rings 133

4.5 Ideals in Rings 135

4.6 Algebraic Description of Cyclic Codes 137

4.7 Nonsystematic Encoding and Parity Check 138

4.8 Systematic Encoding 141

4.9 Some Hardware Background 143

4.9.1 Computational Building Blocks 143

4.9.2 Sequences and Power Series 144

4.9.3 Polynomial Multiplication 146

4.9.3.1 Last-Element-First Processing 146

4.9.3.2 First-Element-First Processing 146

4.9.4 Polynomial Division 147

4.9.4.1 Last-Element-First Processing 147

4.9.5 Simultaneous Polynomial Division and Multiplication 149

4.9.5.1 First-Element-First Processing 149

4.10 Cyclic Encoding 152

4.11 Syndrome Decoding 155

4.12 Shortened Cyclic Codes 160

4.12.0.1 Method 1: Simulating the Extra Clock Shifts 161

4.12.0.2 Method 2: Changing the Error Pattern Detection Circuit 162

4.13 Binary CRC Codes 165

4.13.1 Byte-Oriented Encoding and Decoding Algorithms 167

4.13.2 CRC Protecting Data Files or Data Packets 171

Appendix 4.A Linear Feedback Shift Registers 172

Appendix 4.A.1 Basic Concepts 172

Appendix 4.A.2 Connection With Polynomial Division 175

Appendix 4.A.3 Some Algebraic Properties of Shift Sequences 178

Lab 2 Polynomial Division and Linear Feedback Shift Registers 179

Objective 179

Preliminary Exercises 179

Programming Part: BinLFSR 179

Resources and Implementation Suggestions 180

Programming Part: BinPolyDiv 180

Follow-On Ideas and Problems 180

Lab 3 CRC Encoding and Decoding 181

Objective 181

Preliminary 181

Programming Part 181

Resources and Implementation Suggestions 183

4.14 Exercises 184

4.15 References 189

5 Rudiments of Number Theory and Algebra 191

5.1 Motivation 191

5.2 Number Theoretic Preliminaries 195

5.2.1 Divisibility 195

5.2.2 The Euclidean Algorithm and Euclidean Domains 198

5.2.3 The Sugiyama Algorithm 203

5.2.4 Congruence 206

5.2.5 The φ Function 207

5.2.6 Some Cryptographic Payoff 208

5.2.6.1 Fermat’s Little Theorem 208

5.2.6.2 RSA Encryption 209

5.3 The Chinese Remainder Theorem 210

5.3.1 The CRT and Interpolation 213

5.3.1.1 The Evaluation Homomorphism 213

5.3.1.2 The Interpolation Problem 213

5.4 Fields 215

5.4.1 An Examination of R and C 216

5.4.2 Galois Field Construction: An Example 219

5.4.3 Connection with Linear Feedback Shift Registers 222

5.5 Galois Fields: Mathematical Facts 223

5.6 Implementing Galois Field Arithmetic 227

5.6.1 Zech Logarithms 227

5.6.2 Hardware Implementations 228

5.7 Subfields of Galois Fields 229

5.8 Irreducible and Primitive Polynomials 230

5.9 Conjugate Elements and Minimal Polynomials 232

5.9.1 Minimal Polynomials 235

5.10 Factoring x^n - 1 238

5.11 Cyclotomic Cosets 241

Appendix 5.A How Many Irreducible Polynomials Are There? 241

Appendix 5.A.1 Solving for I_m Explicitly: The Moebius Function 245

Lab 4 Programming the Euclidean Algorithm 246

Objective 246

Preliminary Exercises 246

Background 246

Programming Part 246

Lab 5 Programming Galois Field Arithmetic 247

Objective 247

Preliminary Exercises 247

Programming Part 247

5.12 Exercises 249

5.13 References 258

6 BCH and Reed-Solomon Codes: Designer Cyclic Codes 259

6.1 BCH Codes 259

6.1.1 Designing BCH Codes 259

6.1.2 The BCH Bound 262

6.1.3 Weight Distributions for Some Binary BCH Codes 265

6.1.4 Asymptotic Results for BCH Codes 266

6.2 Reed-Solomon Codes 267

6.2.1 Reed-Solomon Construction 1 267

6.2.2 Reed-Solomon Construction 2 268

6.2.3 Encoding Reed-Solomon Codes 269

6.2.4 MDS Codes and Weight Distributions for RS Codes 270

6.3 Decoding BCH and RS Codes: The General Outline 272

6.3.1 Computation of the Syndrome 273

6.3.2 The Error Locator Polynomial 274

6.3.3 Chien Search 274

6.4 Finding the Error Locator Polynomial 276

6.4.1 Simplifications for Narrow-Sense Binary Codes; Peterson’s Algorithm 277

6.4.2 Berlekamp-Massey Algorithm 280

6.4.3 Characterization of LFSR Length in Massey’s Algorithm 28

6.4.4 Simplifications for Binary Codes 286

6.5 Non-Binary BCH and RS Decoding 288

6.5.1 Forney’s Algorithm 288

6.6 Euclidean Algorithm for the Error Locator Polynomial 293

6.7 Erasure Decoding for Nonbinary BCH or RS Codes 294

6.8 Galois Field Fourier Transform Methods 297

6.8.1 Equivalence of the Two Reed-Solomon Code Constructions 302

6.8.2 Frequency-Domain Decoding 302

6.9 Variations and Extensions of Reed-Solomon Codes 303

6.9.1 Simple Modifications 303

6.9.2 Generalized Reed-Solomon Codes and Alternant Codes 304

6.9.3 Goppa Codes 306

6.9.4 Decoding Alternant Codes 307

6.9.5 The McEliece Public Key Cryptosystem 308

Lab 6 Programming the Berlekamp-Massey Algorithm 308

Background 308

Assignment 309

Preliminary Exercises 309

Programming Part 309

Resources and Implementation Suggestions 309

Lab 7 Programming the BCH Decoder 310

Objective 310

Preliminary Exercises 310

Programming Part 310

Resources and Implementation Suggestions 311

Follow-On Ideas and Problems 311

Lab 8 Reed-Solomon Encoding and Decoding 311

Objective 311

Background 311

Programming Part 311

Appendix 6.A Proof of Newton’s Identities 313

6.10 Exercises 314

6.11 References 319

7 Alternate Decoding Algorithms for Reed-Solomon Codes 320

7.1 Introduction: Workload for Reed-Solomon Decoding 320

7.2 Derivations of Welch-Berlekamp Key Equation 321

7.2.1 The Welch-Berlekamp Derivation of the WB Key Equation 321

7.2.2 Derivation From the Conventional Key Equation 325

7.3 Finding the Error Values 327

7.4 Methods of Solving the WB Key Equation 329

7.4.1 Background: Modules 329

7.4.2 The Welch-Berlekamp Algorithm 330

7.4.3 Modular Solution of the WB Key Equation 337

7.5 Erasure Decoding with the Welch-Berlekamp Key Equation 348

7.6 The Guruswami-Sudan Decoding Algorithm and Soft RS Decoding 349

7.6.1 Bounded Distance, ML, and List Decoding 349

7.6.2 Error Correction by Interpolation 350

7.6.3 Polynomials in Two Variables 351

7.6.3.1 Degree and Monomial Order 351

7.6.3.2 Zeros and Multiple Zeros 355

7.6.4 The GS Decoder: The Main Theorems 357

7.6.4.1 The Interpolation Theorem 358

7.6.4.2 The Factorization Theorem 358

7.6.4.3 The Correction Distance 359

7.6.4.4 The Number of Polynomials in the Decoding List 362

7.6.5 Algorithms for Computing the Interpolation Step 364

7.6.5.1 Finding Linearly Dependent Columns: The Feng-Tzeng Algorithm 365

7.6.5.2 Finding the Intersection of Kernels: The Kötter Algorithm 369

7.6.6 A Special Case: m = 1 and L = 1 375

7.6.7 The Roth-Ruckenstein Algorithm 377

7.6.7.1 What to Do with Lists of Factors? 384

7.6.8 Soft-Decision Decoding of Reed-Solomon Codes 385

7.6.8.1 Notation 386

7.6.8.2 A Factorization Theorem 388

7.6.8.3 Mapping from Reliability to Multiplicity 388

7.6.8.4 The Geometry of the Decoding Regions 390

7.6.8.5 Computing the Reliability Matrix 391

7.7 Exercises 392

7.8 References 395

8 Other Important Block Codes 397

8.1 Introduction 397

8.2 Hadamard Matrices, Codes, and Transforms 397

8.2.1 Introduction to Hadamard Matrices 397

8.2.2 The Paley Construction of Hadamard Matrices 399

8.2.3 Hadamard Codes 402

8.3 Reed-Muller Codes 403

8.3.1 Boolean Functions 403

8.3.2 Definition of the Reed-Muller Codes 404

8.3.3 Encoding and Decoding Algorithms for First-Order RM Codes 407

8.3.3.1 Encoding RM(1, m) Codes 407

8.3.3.2 Decoding RM(1, m) Codes 408

8.3.3.3 Expediting Decoding Using the Fast Hadamard Transform 410

8.3.4 The Reed Decoding Algorithm for RM(r, m) Codes, r ≥ 1 412

8.3.4.1 Details for an RM(2, 4) Code 412

8.3.4.2 A Geometric Viewpoint 415

8.3.5 Other Constructions of Reed-Muller Codes 419

8.4 Building Long Codes from Short Codes: The Squaring Construction 420

8.5 Quadratic Residue Codes 424

8.6 Golay Codes 426

8.6.1 Decoding the Golay Code 427

8.6.1.1 Algebraic Decoding of the G23 Golay Code 428

8.6.1.2 Arithmetic Decoding of the G24 Code 429

8.7 Exercises 430

8.8 References 432

9 Bounds on Codes 434

9.1 The Gilbert-Varshamov Bound 437

9.2 The Plotkin Bound 438

9.3 The Griesmer Bound 439

9.4 The Linear Programming and Related Bounds 441

9.4.1 Krawtchouk Polynomials 443

9.4.2 Character 443

9.4.3 Krawtchouk Polynomials and Characters 444

9.5 The McEliece-Rodemich-Rumsey-Welch Bound 446

9.6 Exercises 448

9.7 References 452

10 Bursty Channels, Interleavers, and Concatenation 453

10.1 Introduction to Bursty Channels 453

10.2 Interleavers 453

10.3 An Application of Interleaved RS Codes: Compact Discs 455

10.4 Product Codes 458

10.5 Reed-Solomon Codes 459

10.6 Concatenated Codes 460

10.7 Fire Codes 461

10.7.1 Fire Code Definition 461

10.7.2 Decoding Fire Codes: Error Trapping Decoding 463

10.8 Exercises 465

10.9 References 466

11 Soft-Decision Decoding Algorithms 467

11.1 Introduction and General Notation 467

11.2 Generalized Minimum Distance Decoding 469

11.2.1 Distance Measures and Properties 470

11.3 The Chase Decoding Algorithms 473

11.4 Halting the Search: An Optimality Condition 473

11.5 Ordered Statistic Decoding 475

11.6 The Hartmann Rudolph Algorithm 477

11.7 Exercises 479

11.8 References 480

Part III Codes on Graphs 482

12 Convolutional Codes 483

12.1 Introduction and Basic Notation 483

12.1.1 The State 487

12.2 Definition of Codes and Equivalent Codes 490

12.2.1 Catastrophic Encoders 493

12.2.2 Polynomial and Rational Encoders 495

12.2.3 Constraint Length and Minimal Encoders 497

12.2.4 Systematic Encoders 500

12.3 Decoding Convolutional Codes 500

12.3.1 Introduction and Notation 500

12.3.2 The Viterbi Algorithm 504

12.3.3 Some Implementation Issues 514

12.3.3.1 The Basic Operation: Add-Compare-Select 514

12.3.3.2 Decoding Streams of Data: Windows on the Trellis 514

12.3.3.3 Output Decisions 515

12.3.3.4 Hard and Soft Decoding; Quantization 517

12.3.3.5 Synchronization Issues 519

12.4 Some Performance Results 520

12.5 Error Analysis for Convolutional Codes 524

12.5.1 Enumerating Paths Through the Trellis 526

12.5.1.1 Enumerating on More Complicated Graphs: Mason’s Rule 529

12.5.2 Characterizing Node Error Probability Pe and Bit Error Rate Pb 532

12.5.3 A Bound on Pd for Discrete Channels 534

12.5.3.1 Performance Bound on the BSC 536

12.5.4 A Bound on Pd for BPSK Signaling Over the AWGN Channel 536

12.5.5 Asymptotic Coding Gain 538

12.6 Tables of Good Codes 539

12.7 Puncturing 539

12.7.1 Puncturing to Achieve Variable Rate 542

12.8 Suboptimal Decoding Algorithms for Convolutional Codes 543

12.8.1 Tree Representations 544

12.8.2 The Fano Metric 545

12.8.3 The Stack Algorithm 548

12.8.4 The Fano Algorithm 549

12.8.5 Other Issues for Sequential Decoding 554

12.8.6 A Variation on the Viterbi Algorithm: The M Algorithm 555

12.9 Convolutional Codes as Block Codes and Tailbiting Codes 556

12.10 A Modified Expression for the Path Metric 557

12.11 Soft Output Viterbi Algorithm (SOVA) 559

12.12 Trellis Representations of Block and Cyclic Codes 564

12.12.1 Block Codes 565

12.12.2 Cyclic Codes 565

12.12.3 Trellis Decoding of Block Codes 566

Lab 9 Programming Convolutional Encoders 567

Objective 567

Background 567

Programming Part 568

Lab 10 Convolutional Decoders: The Viterbi Algorithm 569

Objective 570

Background 570

Programming Part 570

12.13 Exercises 572

12.14 References 576

13 Trellis Coded Modulation 578

13.1 Adding Redundancy by Adding Signals 578

13.2 Background on Signal Constellations 578

13.3 TCM Example 580

13.3.1 The General Ungerboeck Coding Framework 587

13.3.2 The Set Partitioning Idea 587

13.4 Some Error Analysis for TCM Codes 589

13.4.1 General Considerations 589

13.4.2 A Description of the Error Events 590

13.4.3 Known Good TCM Codes 594

13.5 Decoding TCM Codes 596

13.6 Rotational Invariance 598

13.6.0.1 Differential Encoding 601

13.6.0.2 Constellation Labels and Partitions 602

13.7 Multidimensional TCM 604

13.7.1 Some Advantages of Multidimensional TCM 604

13.7.2 Lattices and Sublattices 605

13.7.2.1 Basic Definitions 605

13.7.2.2 Common Lattices 608

13.7.2.3 Sublattices and Cosets 609

13.7.2.4 The Lattice Code Idea 610

13.7.2.5 Sources of Coding Gain in Lattice Codes 610

13.7.2.6 Some Good Lattice Codes 613

13.8 The V.34 Modem Standard 613

13.9 Exercises 621

Lab 11 Trellis-Coded Modulation Encoding and Decoding 622

Objective 622

Background 622

Programming Part 623

13.10 References 623

Part IV Iteratively Decoded Codes 624

14 Turbo Codes 625

14.1 Introduction 625

14.2 Encoding Parallel Concatenated Codes 627

14.3 Turbo Decoding Algorithms 629

14.3.1 The MAP Decoding Algorithm 631

14.3.2 Notation 631

14.3.3 Posterior Probability 633

14.3.4 Computing α_t and β_t 635

14.3.5 Computing γ_t 636

14.3.6 Normalization 637

14.3.7 Summary of the BCJR Algorithm 638

14.3.8 A Matrix/Vector Formulation 640

14.3.9 Comparison of the Viterbi and BCJR Algorithms 641

14.3.10 The BCJR Algorithm for Systematic Codes 641

14.3.11 Turbo Decoding Using the BCJR Algorithm 643

14.3.11.1 The Terminal State of the Encoders 645

14.3.12 Likelihood Ratio Decoding 645

14.3.12.1 Log Prior Ratio λ_{p,t} 646

14.3.12.2 Log Posterior λ^{(0)}_{s,t} 648

14.3.13 Statement of the Turbo Decoding Algorithm 648

14.3.14 Turbo Decoding Stopping Criteria 648

14.3.14.1 The Cross Entropy Stopping Criterion 649

14.3.14.2 The Sign Change Ratio (SCR) Criterion 650

14.3.14.3 The Hard Decision Aided (HDA) Criterion 651

14.3.15 Modifications of the MAP Algorithm 651

14.3.15.1 The Max-Log-MAP Algorithm 651

14.3.16 Corrections to the Max-Log-MAP Algorithm 652

14.3.17 The Soft Output Viterbi Algorithm 653

14.4 On the Error Floor and Weight Distributions 653

14.4.1 The Error Floor 653

14.4.2 Spectral Thinning and Random Permuters 655

14.4.3 On Permuters 658

14.5 EXIT Chart Analysis 660

14.5.1 The EXIT Chart 662

14.6 Block Turbo Coding 664

14.7 Turbo Equalization 667

14.7.1 Introduction to Turbo Equalization 667

14.7.2 The Framework for Turbo Equalization 668

Lab 12 Turbo Code Decoding 670

Objective 670

Background 670

Programming Part 670

14.8 Exercises 670

14.9 References 673

15 Low-Density Parity-Check Codes: Introduction, Decoding, and Analysis 675

15.1 Introduction 675

15.2 LDPC Codes: Construction and Notation 676

15.3 Tanner Graphs 679

15.4 Decoding LDPC Codes 681

15.4.1 Decoding Using Log Likelihood Ratios 681

15.4.1.1 Log-Likelihood Ratios 681

15.4.1.2 Log Likelihood Ratio of the Sum of Bits 682

15.4.1.3 Decoding: Message from a Check Node to a Variable Node 683

15.4.1.4 Log Likelihood of Repeated Observations About a Bit 685

15.4.1.5 Decoding: Message from a Variable Node to a Check Node 685

15.4.1.6 Inputs to the LDPC Decoding Algorithm 686

15.4.1.7 Terminating the Decoding Algorithm 688

15.4.1.8 Summary of Decoding: The Belief Propagation Algorithm 688

15.4.1.9 Messages on the Tanner Graph 691

15.4.2 Decoding Using Probabilities 692

15.4.2.1 Probability of Even Parity: Decoding at the Check Nodes 692

15.4.2.2 Probability of Independent Observations: Decoding at a Variable Node 694

15.4.2.3 Computing the Leave-One-Out Product 697

15.4.2.4 Sparse Memory Organization 698

15.4.3 Variations on Decoding Algorithms: The Min-Sum Decoder 698

15.4.3.1 The ⊞ Operation and the φ(x) Function 699

15.4.3.2 Attenuated and Offset Min-Sum Decoders 700

15.4.4 Variations on Decoding Algorithms: Min-Sum with Correction 701

15.4.4.1 Approximate min Decoder 704

15.4.4.2 The Reduced Complexity Box-Plus Decoder 705

15.4.4.3 The Richardson-Novichkov Decoder 707

15.4.5 Hard Decision Decoding 710

15.4.5.1 Bit Flipping 710

15.4.5.2 Gallager’s Algorithms A and B 711

15.4.5.3 Weighted Bit Flipping 712

15.4.5.4 Gradient Descent Bit Flipping 713

15.4.6 Divide and Concur Decoding 715

15.4.6.1 Summary of the Divide and Concur Algorithm 715

15.4.6.2 DC Applied to LDPC Decoding 716

15.4.6.3 The Divide Projections 719

15.4.6.4 The Concur Projection 721

15.4.6.5 A Message Passing Viewpoint of DC Decoding 722

15.4.7 Difference Map Belief Propagation Decoding 722

15.4.8 Linear Programming Decoding 723

15.4.8.1 Background on Linear Programming 723

15.4.8.2 Formulation of the Basic LP Decoding Algorithm 724

15.4.8.3 LP Relaxation 727

15.4.8.4 Examples and Discussion 731

15.4.9 Decoding on the Binary Erasure Channel 732

15.4.10 BEC Channels and Stopping Sets 734

15.5 Why Low-Density Parity-Check Codes? 736

15.6 The Iterative Decoder on General Block Codes 737

15.7 Density Evolution 738

15.8 EXIT Charts for LDPC Codes 742

15.9 Irregular LDPC Codes 744

15.9.1 Degree Distribution Pairs 745

15.9.2 Density Evolution for Irregular Codes 747

15.9.3 Computation and Optimization of Density Evolution 750

15.9.4 Using Irregular Codes 750

15.10 More on LDPC Code Construction 750

15.11 Encoding LDPC Codes 751

15.12 A Variation: Low-Density Generator Matrix Codes 753

Lab 13 Programming an LDPC Decoder 753

Objective 753

Background 753

Assignment 754

Numerical Considerations 755

15.13 Exercises 756

15.14 References 759

16 Low-Density Parity-Check Codes: Designs and Variations 760

16.1 Introduction 760

16.2 Repeat-Accumulate Codes 760

16.2.1 Decoding RA Codes 763

16.2.2 Irregular RA Codes 764

16.2.3 RA Codes with Multiple Accumulators 766

16.3 LDPC Convolutional Codes 766

16.4 Quasi-Cyclic Codes 769

16.4.1 QC Generator Matrices 769

16.4.2 Constructing QC Generator Matrices from QC Parity Check Matrices 771

16.4.2.1 Full Rank Case 772

16.4.2.2 Non-Full Rank Case 773

16.5 Construction of LDPC Codes Based on Finite Fields 774

16.5.1 I. Construction of QC-LDPC Codes Based on the Minimum-Weight Codewords of a Reed-Solomon Code with Two Information Symbols 776

16.5.2 II. Construction of QC-LDPC Codes Based on a Special Subclass of RS Codes 777

16.5.3 III. Construction of QC-LDPC Codes Based on Subgroups of a Finite Field 779

16.5.4 IV. Construction of QC-LDPC Codes Based on Subgroups of the Multiplicative Group of a Finite Field 780

16.5.5 V. Construction of QC-LDPC Codes Based on Primitive Elements of a Field 781

16.6 Code Design Based on Finite Geometries 781

16.6.1 Rudiments of Euclidean Geometry 781

16.6.1.1 Points in EG(m, q) 781

16.6.1.2 Lines in EG(m, q) 783

16.6.1.3 Incidence Vectors in EG(m, q) 784

16.6.2 A Family of Cyclic EG-LDPC Codes 785

16.6.3 Construction of LDPC Codes Based on Parallel Bundles of Lines 787

16.6.4 Constructions Based on Other Combinatoric Objects 787

16.7 Ensembles of LDPC Codes 787

16.7.1 Regular Ensembles 788

16.7.2 Irregular Ensembles 789

16.7.3 Multi-Edge Type Ensembles 791

16.8 Constructing LDPC Codes by Progressive Edge Growth (PEG) 792

16.9 Protograph and Multi-Edge Type LDPC Codes 795

16.10 Error Floors and Trapping Sets 797

16.11 Importance Sampling 798

16.11.1 Importance Sampling: General Principles 799

16.11.2 Importance Sampling for Estimating Error Probability 801

16.11.3 Importance Sampling for Tanner Trees 813

16.11.3.1 Single Parity Check Codes 813

16.11.3.2 Symmetric Tanner Trees 816

16.11.3.3 General Trees 818

16.11.3.4 Importance Sampling for LDPC Codes 819

16.12 Fountain Codes 819

16.12.1 Conventional Erasure Correction Codes 820

16.12.2 Tornado Codes 820

16.12.3 Luby Transform Codes 821

16.12.4 Raptor Codes 821

16.13 References 823

Part V Polar Codes 825

17 Polar Codes 826

17.1 Introduction and Preview 826

17.2 Notation and Channels 828

17.3 Channel Polarization, N = 2 Channel 830

17.3.1 Encoding 830

17.3.2 Synthetic Channels and Mutual Information 831

17.3.3 Synthetic Channel Transition Probabilities 834

17.3.4 An Example with N = 2 Using the Binary Erasure Channel 834

17.3.5 Successive Cancellation Decoding 835

17.3.5.1 Log Likelihood Ratio Computations 837

17.3.5.2 Computing the Log Likelihood Function with Lower Complexity 839

17.3.6 Tree Representation 840

17.3.7 The Polar Coding Idea 841

17.4 Channel Polarization, N > 2 Channels 842

17.4.1 Channel Combining: Extension from N = 2 to N Channels 842

17.4.2 Pseudocode for Encoder for Arbitrary N 846

17.4.3 Transition Probabilities and Channel Splitting 846

17.4.4 An Example Using the BEC for N > 2 850

17.4.5 Polar Coding 856

17.4.6 Tree Representation 857

17.4.7 Successive Cancellation Decoding 857

17.4.8 SC Decoding from a Message Passing Point of View on the Tree 861

17.4.9 A Decoding Example with N = 4 861

17.4.10 A Decoding Example with N = 8 865

17.4.11 Pseudo-Code Description of Successive Cancellation Algorithm 874

17.5 Some Theorems of Polar Coding Theory 877

17.5.1 I(W) and Z(W) for General B-DMCs 877

17.5.2 Channel Polarization 879

17.5.3 The Polarization Theorem 881

17.5.4 A Few Facts About Martingales 881

17.5.5 Proof of the Polarization Theorem 882

17.5.6 Another Polarization Theorem 883

17.5.7 Rate of Polarization 887

17.5.8 Probability of Error Performance 887

17.6 Designing Polar Codes 888

17.6.1 Code Design by the Bhattacharyya Bound 889

17.6.2 Monte Carlo Estimation of Z(W_N^(i)) 890

17.7 Perspective: The Channel Coding Theorem 891

17.8 Systematic Encoding of Polar Codes 891

17.8.1 Polar Systematic Encoding via the Encoder Graph 892

17.8.2 Polar Systematic Encoding via Arıkan’s Method 895

17.8.3 Systematic Encoding: The Bit Reverse Permutation 895

17.8.4 Decoding Systematically Encoded Codes 895

17.8.5 Flexible Systematic Encoding 895

17.8.6 Involutions and Domination Contiguity 898

17.8.7 Polar Codes and Domination Contiguity 900

17.8.8 Modifications for Polar Codes with Bit-Reverse Permutation 903

17.9 List Decoding of Polar Codes 903

17.9.1 The Likelihood Data Structure P 905

17.9.2 Normalization 907

17.9.3 Code to Compute P 907

17.9.4 The Bits Data Structure C 908

17.9.5 Code to Compute C 910

17.9.6 Supporting Data Structures 911

17.9.7 Code for Polar List Decoding 912

17.9.8 An Example of List Decoding 917

17.9.9 Computational Complexity 919

17.9.10 Modified Polar Codes 920

17.10 LLR-Based Successive Cancellation List Decoding 921

17.10.1 Implementation Considerations 922

17.11 Simplified Successive Cancellation Decoding 922

17.11.1 Modified SC Decoding 926

17.12 Relations with Reed-Muller Codes 926

17.13 Hardware and Throughput Performance Results 926

17.14 Generalizations, Extensions, and Variations 927

Appendix 17.A BN is a Bit-Reverse Permutation 927

Appendix 17.B The Relationship of the Bhattacharyya Parameter to Channel Capacity 928

Appendix 17.B.1 Error Probability for Two Codewords 928

Appendix 17.B.2 Proof of Inequality (17.59) 930

Appendix 17.B.3 Proof of Inequality (17.60) [17] 934

Appendix 17.C Proof of Theorem 17.12 936

17.15 Exercises 938

17.16 References 938

Part VI Applications 939

18 Some Applications of Error Correction in Modern Communication Systems 940

18.1 Introduction 940

18.2 Digital Video Broadcast T2 (DVB-T2) 940

18.2.1 BCH Outer Encoding 940

18.2.2 LDPC Inner Encoding 942

18.3 Digital Cable Television 944

18.4 E-UTRA and Long Term Evolution 948

18.4.1 LTE Rate 1/3 Convolutional Encoder 949

18.4.2 LTE Turbo Code 949

18.5 References 951

Part VII Space-Time Coding 953

19 Fading Channels and Space-Time Codes 954

19.1 Introduction 954

19.2 Fading Channels 954

19.2.1 Rayleigh Fading 957

19.3 Diversity Transmission and Reception: The MIMO Channel 958

19.3.1 The Narrowband MIMO Channel 960

19.3.2 Diversity Performance with Maximal-Ratio Combining 961

19.4 Space-Time Block Codes 962

19.4.1 The Alamouti Code 963

19.4.2 A More General Formulation 964

19.4.3 Performance Calculation 965

19.4.3.1 Real Orthogonal Designs 967

19.4.3.2 Encoding and Decoding Based on Orthogonal Designs 968

19.4.3.3 Generalized Real Orthogonal Designs 969

19.4.4 Complex Orthogonal Designs 970

19.4.4.1 Future Work 971

19.5 Space-Time Trellis Codes 972

19.5.1 Concatenation 973

19.6 How Many Antennas? 973

19.7 Estimating Channel Information 977

19.8 Exercises 977

19.9 References 978

References 979

Index 999

Book DB provided by Aladin (www.aladin.co.kr)