Book Information
· Category: Foreign Books > Science/Math/Ecology > Mathematics > Probability and Statistics > General
· ISBN: 9781119583028
· Pages: 576
Table of Contents
Preface
1 Preliminary Considerations
1.1 The Philosophical Bases of Knowledge: Rationalistic versus Empiricist Pursuits
1.2 What Is a “Model”?
1.3 Social Sciences versus Hard Sciences
1.4 Is Complexity a Good Depiction of Reality? Are Multivariate Methods Useful?
1.5 Causality
1.6 The Nature of Mathematics: Mathematics as a Representation of Concepts
1.7 As a Scientist, How Much Mathematics Do You Need to Know?
1.8 Statistics and Relativity
1.9 Experimental versus Statistical Control
1.10 Statistical versus Physical Effects
1.11 Understanding What “Applied Statistics” Means
Review Exercises
2 Introductory Statistics
2.1 Densities and Distributions
2.1.2 Binomial Distributions
2.1.3 Normal Approximation
2.1.4 Joint Probability Densities: Bivariate and Multivariate Distributions
2.2 Chi-Square Distributions and Goodness-of-Fit Test
2.2.1 Power for Chi-Square Test of Independence
2.3 Sensitivity and Specificity
2.4 Scales of Measurement: Nominal, Ordinal, Interval, and Ratio
2.4.1 Nominal Scale
2.4.2 Ordinal Scale
2.4.3 Interval Scale
2.4.4 Ratio Scale
2.5 Mathematical Variables versus Random Variables
2.6 Moments and Expectations
2.7 Estimation and Estimators
2.8 Variance
2.9 Degrees of Freedom
2.10 Skewness and Kurtosis
2.11 Sampling Distributions
2.11.1 Sampling Distribution of the Mean
2.12 Central Limit Theorem
2.13 Confidence Intervals
2.14 Maximum Likelihood
2.15 Akaike’s Information Criteria
2.16 Covariance and Correlation
2.17 Psychometric Validity, Reliability: A Common Use of Correlation Coefficients
2.18 Covariance and Correlation Matrices
2.19 Other Correlation Coefficients
2.20 Student’s t Distribution
2.20.1 t-Tests for One Sample
2.20.2 t-Tests for Two Samples
2.21 Statistical Power
2.21.1 Power Estimation Using R and G*Power
2.21.2 Estimating Sample Size and Power for Independent Samples t-Test
2.22 Paired Samples t-Test: Statistical Test for Matched Pairs (Elementary Blocking) Designs
2.23 Blocking with Several Conditions
2.24 Composite Variables: Linear Combinations
2.25 Models in Matrix Form
2.26 Graphical Approaches
2.26.1 Box-and-Whisker Plots
2.27 What Makes a p-Value Small? A Critical Overview and Simple Demonstration of Null Hypothesis Significance Testing
2.27.1 Null Hypothesis Significance Testing: A History of Criticism
2.27.2 The Makeup of a p-Value: A Brief Recap and Summary
2.27.3 The Issue of Standardized Testing: Are Students in Your School Achieving More Than the National Average?
2.27.4 Other Test Statistics
2.27.5 The Solution
2.27.6 Statistical Distance: Cohen’s d
2.27.7 Why and Where the Significance Test Still Makes Sense
2.28 Chapter Summary and Highlights
Review Exercises
3 Analysis of Variance: Fixed Effects Models
3.1 What Is Analysis of Variance? Fixed versus Random Effects
3.1.1 Small Sample Example: Achievement as a Function of Teacher
3.2 How Analysis of Variance Works: A Big Picture Overview
3.2.1 Is the Observed Difference Likely? ANOVA as a Comparison (Ratio) of Variances
3.3 Logic and Theory of ANOVA: A Deeper Look
3.3.1 Independent Samples t-tests versus Analysis of Variance
3.3.2 The ANOVA Model: Explaining Variation
3.3.3 Breaking Down a Deviation
3.3.4 Naming the Deviations
3.3.5 The Sums of Squares of ANOVA
3.4 From Sums of Squares to Unbiased Variance Estimators: Dividing by Degrees of Freedom
3.5 Expected Mean Squares for One-Way Fixed Effects Model: Deriving the F-Ratio
3.6 The Null Hypothesis in ANOVA
3.7 Fixed Effects ANOVA: Model Assumptions
3.8 A Word on Experimental Design and Randomization
3.9 A Preview of the Concept of Nesting
3.10 Balanced versus Unbalanced Data in ANOVA Models
3.11 Measures of Association and Effect Size in ANOVA: Measures of Variance Explained
3.11.1 Eta-Squared
3.11.2 Omega-Squared
3.12 The F-Test and the Independent Samples t-Test
3.13 Contrasts and Post-Hocs
3.13.1 Independence of Contrasts
3.13.2 Independent Samples t-Test as a Linear Contrast
3.14 Post-Hoc Tests
3.14.1 Newman–Keuls and Tukey HSD
3.14.2 Tukey HSD
3.14.3 Scheffé Test
3.14.4 Contrast versus Post-Hoc? Which Should I Be Doing?
3.15 Sample Size and Power for ANOVA: Estimation with R and G*Power
3.15.1 Power for ANOVA in R and G*Power
3.16 Fixed Effects One-Way Analysis of Variance in R: Mathematics Achievement as a Function of Teacher
3.17 Analysis of Variance Via R’s lm
3.18 Kruskal–Wallis Test in R and the Motivation Behind Nonparametric Tests
3.19 ANOVA in SPSS: Achievement as a Function of Teacher
3.20 Chapter Summary and Highlights
Review Exercises
4 Factorial Analysis of Variance: Modeling Interactions
4.1 What Is Factorial Analysis of Variance?
4.2 Theory of Factorial ANOVA: A Deeper Look
4.2.1 Deriving the Model for Two-Way Factorial ANOVA
4.2.2 Cell Effects
4.2.3 Interaction Effects
4.2.4 A Model for the Two-Way Fixed Effects ANOVA
4.3 Comparing One-Way ANOVA to Two-Way ANOVA: Cell Effects in Factorial ANOVA versus Sample Effects in One-Way ANOVA
4.4 Partitioning the Sums of Squares for Factorial ANOVA: The Case of Two Factors
4.4.1 SS Total: A Measure of Total Variation
4.4.2 Model Assumptions: Two-Way Factorial Model
4.4.3 Expected Mean Squares for Factorial Design
4.5 Interpreting Main Effects in the Presence of Interactions
4.6 Effect Size Measures
4.7 Three-Way, Four-Way, and Higher-Order Models
4.8 Simple Main Effects
4.9 Nested Designs
4.9.1 Varieties of Nesting: Nesting of Levels versus Subjects
4.10 Achievement as a Function of Teacher and Textbook: Example of Factorial ANOVA in R
4.10.1 Simple Main Effects for Achievement Data: Breaking Down Interaction Effects
4.11 Interaction Contrasts
4.12 Chapter Summary and Highlights
Review Exercises
5 Introduction to Random Effects and Mixed Models
5.1 What Is Random Effects Analysis of Variance?
5.2 Theory of Random Effects Models
5.3 Estimation in Random Effects Models
5.3.1 Transitioning from Fixed Effects to Random Effects
5.3.2 Expected Mean Squares for MS Between and MS Within
5.4 Defining Null Hypotheses in Random Effects Models
5.4.1 F-Ratio for Testing
5.5 Comparing Null Hypotheses in Fixed versus Random Effects Models: The Importance of Assumptions
5.6 Estimating Variance Components in Random Effects Models: ANOVA, ML, REML Estimators
5.6.1 ANOVA Estimators of Variance Components
5.6.2 Maximum Likelihood and Restricted Maximum Likelihood
5.7 Is Achievement a Function of Teacher? One-Way Random Effects Model in R
5.7.1 Proportion of Variance Accounted for by Teacher
5.8 R Analysis Using REML
5.9 Analysis in SPSS: Obtaining Variance Components
5.10 Factorial Random Effects: A Two-Way Model
5.11 Fixed Effects versus Random Effects: A Way of Conceptualizing Their Differences
5.12 Conceptualizing the Two-Way Random Effects Model: The Makeup of a Randomly Chosen Observation
5.13 Sums of Squares and Expected Mean Squares for Random Effects: The Contaminating Influence of Interaction Effects
5.13.1 Testing Null Hypotheses
5.14 You Get What You Go In With: The Importance of Model Assumptions and Model Selection
5.15 Mixed Model Analysis of Variance: Incorporating Fixed and Random Effects
5.15.1 Mixed Model in R
5.16 Mixed Models in Matrices
5.17 Multilevel Modeling as a Special Case of the Mixed Model: Incorporating Nesting and Clustering
5.18 Chapter Summary and Highlights
Review Exercises
6 Randomized Blocks and Repeated Measures
6.1 What Is a Randomized Block Design?
6.2 Randomized Block Designs: Subjects Nested Within Blocks
6.3 Theory of Randomized Block Designs
6.3.1 Nonadditive Randomized Block Design
6.3.2 Additive Randomized Block Design
6.4 Tukey Test for Nonadditivity
6.5 Assumptions for the Variance–Covariance Matrix
6.6 Intraclass Correlation
6.7 Repeated Measures Models: A Special Case of Randomized Block Designs
6.8 Independent versus Paired Samples t-Test
6.9 The Subject Factor: Fixed or Random Effect?
6.10 Model for One-Way Repeated Measures Design
6.10.1 Expected Mean Squares for Repeated Measures Models
6.11 Analysis Using R: One-Way Repeated Measures: Learning as a Function of Trial
6.12 Analysis Using SPSS: One-Way Repeated Measures: Learning as a Function of Trial
6.12.1 Which Results Should Be Interpreted?
6.13 SPSS: Two-Way Repeated Measures Analysis of Variance: Mixed Design: One Between Factor, One Within Factor
6.13.1 Another Look at the Between-Subjects Factor
6.14 Chapter Summary and Highlights
Review Exercises
7 Linear Regression
7.1 Brief History of Regression
7.2 Regression Analysis and Science: Experimental versus Correlational Distinctions
7.3 A Motivating Example: Can Offspring Height Be Predicted?
7.4 Theory of Regression Analysis: A Deeper Look
7.5 Multilevel Yearnings
7.6 The Least-Squares Line
7.7 Making Predictions Without Regression
7.8 More About ε
7.9 Model Assumptions for Linear Regression
7.9.1 Model Specification
7.9.2 Measurement Error
7.10 Estimation of Model Parameters in Regression
7.10.1 Ordinary Least-Squares
7.11 Null Hypotheses for Regression
7.12 Significance Tests and Confidence Intervals for Model Parameters
7.13 Other Formulations of the Regression Model
7.14 The Regression Model in Matrices: Allowing for More Complex Multivariable Models
7.15 Ordinary Least-Squares in Matrices
7.16 Analysis of Variance for Regression
7.17 Measures of Model Fit for Regression: How Well Does the Linear Equation Fit?
7.18 Adjusted R²
7.19 What “Explained Variance” Means: And More Importantly, What It Does Not Mean
7.20 Values Fit by Regression
7.21 Least-Squares Regression in R: Using Matrix Operations
7.22 Linear Regression Using R
7.23 Regression Diagnostics: A Check on Model Assumptions
7.23.1 Understanding How Outliers Influence a Regression Model
7.23.2 Examining Outliers and Residuals
7.24 Regression in SPSS: Predicting Quantitative from Verbal
7.25 Power Analysis for Linear Regression in R
7.26 Chapter Summary and Highlights
Review Exercises
8 Multiple Linear Regression
8.1 Theory of Partial Correlation
8.2 Semipartial Correlations
8.3 Multiple Regression
8.4 Some Perspective on Regression Coefficients: “Experimental Coefficients”?
8.5 Multiple Regression Model in Matrices
8.6 Estimation of Parameters
8.7 Conceptualizing Multiple R
8.8 Interpreting Regression Coefficients: Correlated Versus Uncorrelated Predictors
8.9 Anderson’s Iris Data: Predicting Sepal Length from Petal Length and Petal Width
8.10 Fitting Other Functional Forms: A Brief Look at Polynomial Regression
8.11 Measures of Collinearity in Regression: Variance Inflation Factor and Tolerance
8.12 R-Squared as a Function of Partial and Semipartial Correlations: The Stepping Stones to Forward and Stepwise Regression
8.13 Model-Building Strategies: Simultaneous, Hierarchical, Forward, and Stepwise
8.13.1 Simultaneous, Hierarchical, and Forward
8.13.2 Stepwise Regression
8.13.3 Selection Procedures in R
8.13.4 Which Regression Procedure Should Be Used? Concluding Comments and Recommendations Regarding Model-Building
8.14 Power Analysis for Multiple Regression
8.15 Introduction to Statistical Mediation: Concepts and Controversy
8.15.1 Statistical versus True Mediation: Some Philosophical Pitfalls in the Interpretation of Mediation Analysis
8.16 Brief Survey of Ridge and Lasso Regression: Penalized Regression Models and the Concept of Shrinkage
8.17 Chapter Summary and Highlights
Review Exercises
9 Interactions in Multiple Linear Regression: Dichotomous, Polytomous, and Continuous Moderators
9.1 The Additive Regression Model with Two Predictors
9.2 Why the Interaction Is the Product Term: Drawing an Analogy to Factorial ANOVA
9.3 A Motivating Example of Interaction in Regression: Crossing a Continuous Predictor with a Dichotomous Predictor
9.4 Analysis of Covariance
9.5 Continuous Moderators
9.6 Summing Up the Idea of Interactions in Regression
9.7 Do Moderators Really “Moderate” Anything? Some Philosophical Considerations
9.8 Interpreting Model Coefficients in the Context of Moderators
9.9 Mean-Centering Predictors: Improving the Interpretability of Simple Slopes
9.10 Multilevel Regression: Another Special Case of the Mixed Model
9.11 Chapter Summary and Highlights
Review Exercises
10 Logistic Regression and the Generalized Linear Model
10.1 Nonlinear Models
10.2 Generalized Linear Models
10.2.1 The Logic of the Generalized Linear Model: How the Link Function Transforms Nonlinear Response Variables
10.3 Canonical Links
10.3.1 Canonical Link for Gaussian Variable
10.4 Distributions and Generalized Linear Models
10.4.1 Logistic Models
10.4.2 Poisson Models
10.5 Dispersion Parameters and Deviance
10.6 Logistic Regression: A Generalized Linear Model for Binary Responses
10.6.1 Model for Single Predictor
10.7 Exponential and Logarithmic Functions
10.7.1 Logarithms
10.7.2 The Natural Logarithm
10.8 Odds and the Logit
10.9 Putting It All Together: The Logistic Regression Model
10.9.1 Interpreting the Logit: A Survey of Logistic Regression Output
10.10 Logistic Regression in R: Challenger O-ring Data
10.11 Challenger Analysis in SPSS
10.11.1 Predictions of New Cases
10.12 Sample Size, Effect Size, and Power
10.13 Further Directions
10.14 Chapter Summary and Highlights
Review Exercises
11 Multivariate Analysis of Variance
11.1 A Motivating Example: Quantitative and Verbal Ability as a Variate
11.2 Constructing the Composite
11.3 Theory of MANOVA
11.4 Is the Linear Combination Meaningful?
11.5 Multivariate Hypotheses
11.6 Assumptions of MANOVA
11.7 Hotelling’s T²: The Case of Generalizing from Univariate to Multivariate
11.8 The Covariance Matrix
11.9 From Sums of Squares and Cross-Products to Variances and Covariances
11.10 Hypothesis and Error Matrices of MANOVA
11.11 Multivariate Test Statistics
11.11.1 Pillai’s Trace
11.11.2 Lawley–Hotelling’s Trace
11.12 Equality of Variance–Covariance Matrices
11.13 Multivariate Contrasts
11.14 MANOVA in R and SPSS
11.14.1 Univariate Analyses
11.15 MANOVA of Fisher’s Iris Data
11.16 Power Analysis and Sample Size for MANOVA
11.17 Multivariate Analysis of Covariance and Multivariate Models: A Bird’s Eye View of Linear Models
11.18 Chapter Summary and Highlights
Review Exercises
12 Discriminant Analysis
12.1 What Is Discriminant Analysis? The Big Picture on the Iris Data
12.2 Theory of Discriminant Analysis
12.2.1 Discriminant Analysis for Two Populations
12.3 LDA in R and SPSS
12.4 Discriminant Analysis for Several Populations
12.4.1 Theory for Several Populations
12.5 Discriminating Species of Iris: Discriminant Analyses for Three Populations
12.6 A Note on Classification and Error Rates
12.7 Discriminant Analysis and Beyond
12.8 Canonical Correlation
12.9 Motivating Example for Canonical Correlation: Hotelling’s 1936 Data
12.10 Canonical Correlation as a General Linear Model
12.11 Theory of Canonical Correlation
12.12 Canonical Correlation of Hotelling’s Data
12.13 Canonical Correlation on the Iris Data: Extracting Canonical Correlation from Regression, MANOVA, LDA
12.14 Chapter Summary and Highlights
Review Exercises
13 Principal Components Analysis
13.1 History of Principal Components Analysis
13.2 Hotelling 1933
13.3 Theory of Principal Components Analysis
13.3.1 The Theorem of Principal Components Analysis
13.4 Eigenvalues as Variance
13.5 Principal Components as Linear Combinations
13.6 Extracting the First Component
13.6.1 Sample Variance of a Linear Combination
13.7 Extracting the Second Component
13.8 Extracting Third and Remaining Components
13.9 The Eigenvalue as the Variance of a Linear Combination Relative to Its Length
13.10 Demonstrating Principal Components Analysis: Pearson’s 1901 Illustration
13.11 Scree Plots
13.12 Principal Components versus Least-Squares Regression Lines
13.13 Covariance versus Correlation Matrices: Principal Components and Scaling
13.14 Principal Components Analysis Using SPSS
13.15 Chapter Summary and Highlights
Review Exercises
14 Factor Analysis
14.1 History of Factor Analysis
14.2 Factor Analysis: At a Glance
14.3 Exploratory vs. Confirmatory Factor Analysis
14.4 Theory of Factor Analysis: The Exploratory Factor-Analytic Model
14.5 The Common Factor-Analytic Model
14.6 Assumptions of the Factor-Analytic Model
14.7 Why Model Assumptions Are Important
14.8 The Factor Model as an Implication for the Covariance Matrix
14.9 Again, Why Is Σ = ΛΛ′ + ψ So Important a Result?
14.10 The Major Critique Against Factor Analysis: Indeterminacy and the Nonuniqueness of Solutions
14.11 Has Your Factor Analysis Been Successful?
14.12 Estimation of Parameters in Exploratory Factor Analysis
14.13 Principal Factor
14.14 Maximum Likelihood
14.15 The Concepts (and Criticisms) of Factor Rotation
14.16 Varimax and Quartimax Rotation
14.17 Should Factors Be Rotated? Is That Not “Cheating”?
14.18 Sample Size for Factor Analysis
14.19 Principal Components Analysis versus Factor Analysis: Two Key Differences
14.19.1 Hypothesized Model and Underlying Theoretical Assumptions
14.19.2 Solutions Are Not Invariant in Factor Analysis
14.20 Principal Factor in SPSS: Principal Axis Factoring
14.21 Bartlett Test of Sphericity and Kaiser–Meyer–Olkin Measure of Sampling Adequacy (MSA)
14.22 Factor Analysis in R: Holzinger and Swineford (1939)
14.23 Cluster Analysis
14.24 What Is Cluster Analysis? The Big Picture
14.25 Measuring Proximity
14.26 Hierarchical Clustering Approaches
14.27 Nonhierarchical Clustering Approaches
14.28 K-Means Cluster Analysis in R
14.29 Guidelines and Warnings About Cluster Analysis
14.30 A Brief Look at Multidimensional Scaling
14.31 Chapter Summary and Highlights
Review Exercises
15 Path Analysis and Structural Equation Modeling
15.1 Path Analysis: A Motivating Example—Predicting IQ Across Generations
15.2 Path Analysis and “Causal Modeling”
15.3 Early Post-Wright Path Analysis: Predicting Child’s IQ (Burks, 1928)
15.4 Decomposing Path Coefficients
15.5 Path Coefficients and Wright’s Contribution
15.6 Path Analysis in R: A Quick Overview—Modeling Galton’s Data
15.7 Confirmatory Factor Analysis: The Measurement Model
15.7.1 Confirmatory Factor Analysis as a Means of Evaluating Construct Validity and Assessing Psychometric Qualities
15.8 Structural Equation Models
15.9 Direct, Indirect, and Total Effects
15.10 Theory of Statistical Modeling: A Deeper Look into Covariance Structures and General Modeling
15.11 The Discrepancy Function and Chi-Square
15.13 Identification
15.14 Disturbance Variables
15.15 Measures and Indicators of Model Fit
15.16 Overall Measures of Model Fit
15.16.1 Root Mean Square Residual and Standardized Root Mean Square Residual
15.16.2 Root Mean Square Error of Approximation
15.17 Model Comparison Measures: Incremental Fit Indices
15.18 Which Indicator of Model Fit Is Best?
15.19 Structural Equation Model in R
15.20 How All Variables Are Latent: A Suggestion for Resolving the Manifest–Latent Distinction
15.21 The Structural Equation Model as a General Model: Some Concluding Thoughts on Statistics and Science
15.22 Chapter Summary and Highlights
Review Exercises
References
Index