BOOKPRICE.co.kr
Book price comparison site
[eBook Code] Matrix Algebra for Linear Models (eBook Code, 1st)

Marvin H. J. Gruber (Author)
Wiley
List price: 174,190 KRW

New Books

Lowest price found: 139,350 KRW (20% off list price, free shipping).
Note: search results may include other books.



Book Information

· Title: [eBook Code] Matrix Algebra for Linear Models (eBook Code, 1st)
· Category: Foreign Books > Science/Math/Ecology > Mathematics > Probability and Statistics > General
· ISBN: 9781118800416
· Pages: 392
· Publication date: 2013-12-02

Table of Contents

Preface xiii

Acknowledgments xv

Part I Basic Ideas about Matrices and Systems of Linear Equations 1

Section 1 What Matrices are and Some Basic Operations with Them 3

1.1 Introduction 3

1.2 What are Matrices and why are they Interesting to a Statistician? 3

1.3 Matrix Notation, Addition, and Multiplication 6

1.4 Summary 10

Exercises 10

Section 2 Determinants and Solving a System of Equations 14

2.1 Introduction 14

2.2 Definition of and Formulae for Expanding Determinants 14

2.3 Some Computational Tricks for the Evaluation of Determinants 16

2.4 Solution to Linear Equations Using Determinants 18

2.5 Gauss Elimination 22

2.6 Summary 27

Exercises 27
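
Section 2.4's solution of linear equations via determinants is Cramer's rule. As a minimal sketch of the idea (the data here are my own illustration, not the book's example), in Python with NumPy:

```python
import numpy as np

# Cramer's rule for Ax = b: x_i = det(A_i) / det(A), where A_i is A
# with column i replaced by b. Illustrative 2x2 system, not from the book.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

det_A = np.linalg.det(A)          # nonzero, so the solution is unique
x = np.empty(2)
for i in range(A.shape[1]):
    A_i = A.copy()
    A_i[:, i] = b                 # replace column i by b
    x[i] = np.linalg.det(A_i) / det_A
print(x)                          # x = (1, 3): 2*1 + 3 = 5 and 1 + 3*3 = 10
```

Determinant-based solving becomes far more expensive than elimination as the system grows, which is what motivates the move to Gauss elimination in Section 2.5.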

Section 3 The Inverse of a Matrix 30

3.1 Introduction 30

3.2 The Adjoint Method of Finding the Inverse of a Matrix 30

3.3 Using Elementary Row Operations 31

3.4 Using the Matrix Inverse to Solve a System of Equations 33

3.5 Partitioned Matrices and Their Inverses 34

3.6 Finding the Least Square Estimator 38

3.7 Summary 44

Exercises 44

Section 4 Special Matrices and Facts about Matrices that will be used in the Sequel 47

4.1 Introduction 47

4.2 Matrices of the Form aI_n + bJ_n 47

4.3 Orthogonal Matrices 49

4.4 Direct Product of Matrices 52

4.5 An Important Property of Determinants 53

4.6 The Trace of a Matrix 56

4.7 Matrix Differentiation 57

4.8 The Least Square Estimator Again 62

4.9 Summary 62

Exercises 63

Section 5 Vector Spaces 66

5.1 Introduction 66

5.2 What is a Vector Space? 66

5.3 The Dimension of a Vector Space 68

5.4 Inner Product Spaces 70

5.5 Linear Transformations 73

5.6 Summary 76

Exercises 76

Section 6 The Rank of a Matrix and Solutions to Systems of Equations 79

6.1 Introduction 79

6.2 The Rank of a Matrix 79

6.3 Solving Systems of Equations with Coefficient Matrix of Less than Full Rank 84

6.4 Summary 87

Exercises 87
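
Part I builds up to the least-squares estimator (Sections 3.6 and 4.8). A minimal sketch of the full-rank case, where the normal equations (X'X)β = X'y have a unique solution (data invented for illustration):

```python
import numpy as np

# Noise-free data from y = 2 + 3x, so least squares recovers beta exactly.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])       # design matrix: intercept column, then x
y = np.array([2.0, 5.0, 8.0])

# Normal equations (X'X) beta = X'y; solve rather than invert explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)                  # (2.0, 3.0)
```

When X'X is singular, as in the less-than-full-rank models of Section 6.3, this system has no unique solution; that is where the generalized inverses of Part III enter.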

Part II Eigenvalues, the Singular Value Decomposition, and Principal Components 91

Section 7 Finding the Eigenvalues of a Matrix 93

7.1 Introduction 93

7.2 Eigenvalues and Eigenvectors of a Matrix 93

7.3 Nonnegative Definite Matrices 101

7.4 Summary 104

Exercises 105

Section 8 The Eigenvalues and Eigenvectors of Special Matrices 108

8.1 Introduction 108

8.2 Orthogonal, Nonsingular, and Idempotent Matrices 109

8.3 The Cayley–Hamilton Theorem 112

8.4 The Relationship between the Trace, the Determinant, and the Eigenvalues of a Matrix 114

8.5 The Eigenvalues and Eigenvectors of the Kronecker Product of Two Matrices 116

8.6 The Eigenvalues and the Eigenvectors of a Matrix of the Form aI + bJ 117

8.7 The Loewner Ordering 119

8.8 Summary 121

Exercises 122

Section 9 The Singular Value Decomposition (SVD) 124

9.1 Introduction 124

9.2 The Existence of the SVD 125

9.3 Uses and Examples of the SVD 127

9.4 Summary 134

Exercises 134
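
Section 9 establishes the factorization A = UΣV'. A quick numerical check (my example, using NumPy's thin-SVD convention):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Thin SVD: columns of U and rows of Vt are orthonormal; the singular
# values in s are nonnegative and sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

A_rebuilt = (U * s) @ Vt          # same as U @ diag(s) @ Vt
print(np.allclose(A, A_rebuilt))  # True: the factorization reproduces A
print(s)                          # singular values: sqrt of eigenvalues of A'A
```

For a square matrix the product of the singular values equals |det A|, a handy sanity check on the decomposition.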

Section 10 Applications of the Singular Value Decomposition 137

10.1 Introduction 137

10.2 Reparameterization of a Non-full-Rank Model to a Full-Rank Model 137

10.3 Principal Components 141

10.4 The Multicollinearity Problem 143

10.5 Summary 144

Exercises 145

Section 11 Relative Eigenvalues and Generalizations of the Singular Value Decomposition 146

11.1 Introduction 146

11.2 Relative Eigenvalues and Eigenvectors 146

11.3 Generalizations of the Singular Value Decomposition: Overview 151

11.4 The First Generalization 152

11.5 The Second Generalization 157

11.6 Summary 160

Exercises 160

Part III Generalized Inverses 163

Section 12 Basic Ideas about Generalized Inverses 165

12.1 Introduction 165

12.2 What is a Generalized Inverse and how is One Obtained? 165

12.3 The Moore–Penrose Inverse 170

12.4 Summary 173

Exercises 173
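
Section 12.3's Moore–Penrose inverse is the unique matrix G satisfying four conditions, and NumPy computes it from the SVD. A sketch on a rank-deficient matrix (example mine, not the book's):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # rank 1: no ordinary inverse exists

G = np.linalg.pinv(A)             # Moore-Penrose inverse, computed via the SVD

# The four Penrose conditions that characterize G uniquely:
print(np.allclose(A @ G @ A, A))          # AGA = A
print(np.allclose(G @ A @ G, G))          # GAG = G
print(np.allclose((A @ G).T, A @ G))      # AG is symmetric
print(np.allclose((G @ A).T, G @ A))      # GA is symmetric
```

Matrices satisfying only a subset of these conditions are the weaker generalized inverses characterized in Sections 13 and 14.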

Section 13 Characterizations of Generalized Inverses Using the Singular Value Decomposition 175

13.1 Introduction 175

13.2 Characterization of the Moore–Penrose Inverse 175

13.3 Generalized Inverses in Terms of the Moore–Penrose Inverse 177

13.4 Summary 185

Exercises 186

Section 14 Least Square and Minimum Norm Generalized Inverses 188

14.1 Introduction 188

14.2 Minimum Norm Generalized Inverses 189

14.3 Least Square Generalized Inverses 193

14.4 An Extension of Theorem 7.3 to Positive-Semidefinite Matrices 196

14.5 Summary 197

Exercises 197

Section 15 More Representations of Generalized Inverses 200

15.1 Introduction 200

15.2 Another Characterization of the Moore–Penrose Inverse 200

15.3 Still another Representation of the Generalized Inverse 204

15.4 The Generalized Inverse of a Partitioned Matrix 207

15.5 Summary 211

Exercises 211

Section 16 Least Square Estimators for Less than Full-Rank Models 213

16.1 Introduction 213

16.2 Some Preliminaries 213

16.3 Obtaining the LS Estimator 214

16.4 Summary 221

Exercises 221

Part IV Quadratic Forms and the Analysis of Variance 223

Section 17 Quadratic Forms and their Probability Distributions 225

17.1 Introduction 225

17.2 Examples of Quadratic Forms 225

17.3 The Chi-Square Distribution 228

17.4 When does the Quadratic Form of a Random Variable have a Chi-Square Distribution? 230

17.5 When are Two Quadratic Forms with the Chi-Square Distribution Independent? 231

17.6 Summary 234

Exercises 235
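
Section 17.4's criterion states that for y ~ N(0, I), the quadratic form y'Ay has a chi-square distribution exactly when A is idempotent, with degrees of freedom tr(A). A deterministic check of that criterion for the centering matrix behind the sample variance (my example):

```python
import numpy as np

n = 5
J = np.ones((n, n))
C = np.eye(n) - J / n        # centering matrix I - J/n

# C is idempotent, so y'Cy (the sum of squared deviations from the mean
# of a standard normal sample) is chi-square with tr(C) = n - 1 df.
print(np.allclose(C @ C, C))  # True: idempotent
print(np.trace(C))            # 4.0 = n - 1 degrees of freedom
```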

Section 18 Analysis of Variance: Regression Models and the One- and Two-Way Classification 237

18.1 Introduction 237

18.2 The Full-Rank General Linear Regression Model 237

18.3 Analysis of Variance: One-Way Classification 241

18.4 Analysis of Variance: Two-Way Classification 244

18.5 Summary 249

Exercises 249

Section 19 More ANOVA 253

19.1 Introduction 253

19.2 The Two-Way Classification with Interaction 254

19.3 The Two-Way Classification with One Factor Nested 258

19.4 Summary 262

Exercises 262

Section 20 The General Linear Hypothesis 264

20.1 Introduction 264

20.2 The Full-Rank Case 264

20.3 The Non-full-Rank Case 267

20.4 Contrasts 270

20.5 Summary 273

Exercises 273

Part V Matrix Optimization Problems 275

Section 21 Unconstrained Optimization Problems 277

21.1 Introduction 277

21.2 Unconstrained Optimization Problems 277

21.3 The Least Square Estimator Again 281

21.4 Summary 283

Exercises 283

Section 22 Constrained Minimization Problems with Linear Constraints 287

22.1 Introduction 287

22.2 An Overview of Lagrange Multipliers 287

22.3 Minimizing a Second-Degree Form with Respect to a Linear Constraint 293

22.4 The Constrained Least Square Estimator 295

22.5 Canonical Correlation 299

22.6 Summary 302

Exercises 302

Section 23 The Gauss–Markov Theorem 304

23.1 Introduction 304

23.2 The Gauss–Markov Theorem and the Least Square Estimator 304

23.3 The Modified Gauss–Markov Theorem and the Linear Bayes Estimator 306

23.4 Summary 311

Exercises 311

Section 24 Ridge Regression-Type Estimators 314

24.1 Introduction 314

24.2 Minimizing a Second-Degree Form with Respect to a Quadratic Constraint 314

24.3 The Generalized Ridge Regression Estimators 315

24.4 The Mean Square Error of the Generalized Ridge Estimator without Averaging over the Prior Distribution 317

24.5 The Mean Square Error Averaging over the Prior Distribution 321

24.6 Summary 321

Exercises 321
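
The generalized ridge estimator of Section 24.3 reduces, when the quadratic constraint matrix is a scalar multiple of the identity, to the ordinary ridge estimator β̂(k) = (X'X + kI)⁻¹X'y. A minimal sketch (function name and data are my own illustration):

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge estimator (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# Nearly collinear columns make X'X ill-conditioned (Section 10.4's
# multicollinearity problem); ridge shrinks the estimate toward zero.
X = np.array([[1.0, 1.00],
              [1.0, 1.01],
              [1.0, 0.99]])
y = np.array([2.0, 2.1, 1.9])

b_ls = ridge(X, y, 0.0)          # k = 0 recovers ordinary least squares
b_rr = ridge(X, y, 1.0)
print(np.linalg.norm(b_rr) < np.linalg.norm(b_ls))  # True: shrinkage
```

The norm of β̂(k) is non-increasing in k, trading bias for variance; Sections 24.4 and 24.5 quantify that trade-off through the mean square error.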

Answers to Selected Exercises 324

References 366

Index 368

Book database provided by Aladin (www.aladin.co.kr)