BOOKPRICE.co.kr
Book price comparison site
Machine Learning for Business Analytics: Concepts, Techniques, and Applications in R (Hardcover, 2)

Galit Shmueli, Nitin R. Patel, Peter C. Bruce, Peter Gedeck, Inbal Yahav Shenberger (authors) | Wiley | 2023-03-08 | 89,000 KRW

New Books

Bookstore | Sale Price | Discount | Shipping | Points | Effective Lowest Price
Aladin | 89,000 KRW | 0% | 0 KRW | 2,670 KRW | 86,330 KRW

Note: search results may include other books.

Book Information

· Title: Machine Learning for Business Analytics: Concepts, Techniques, and Applications in R (Hardcover, 2)
· Category: Foreign Books > Computers > Database Management > Data Mining
· ISBN: 9781119835172
· Pages: 688

Table of Contents

Foreword by Ravi Bapna xxi

Foreword by Gareth James xxiii

Preface to the Second R Edition xxv

Acknowledgments xxix

PART I PRELIMINARIES

CHAPTER 1 Introduction 3

1.1 What Is Business Analytics? 3

1.2 What Is Machine Learning? 5

1.3 Machine Learning, AI, and Related Terms 5

1.4 Big Data 7

1.5 Data Science 8

1.6 Why Are There So Many Different Methods? 8

1.7 Terminology and Notation 9

1.8 Road Maps to This Book 11

Order of Topics 13

CHAPTER 2 Overview of the Machine Learning Process 17

2.1 Introduction 17

2.2 Core Ideas in Machine Learning 18

Classification 18

Prediction 18

Association Rules and Recommendation Systems 18

Predictive Analytics 19

Data Reduction and Dimension Reduction 19

Data Exploration and Visualization 19

Supervised and Unsupervised Learning 20

2.3 The Steps in a Machine Learning Project 21

2.4 Preliminary Steps 23

Organization of Data 23

Predicting Home Values in the West Roxbury Neighborhood 23

Loading and Looking at the Data in R 24

Sampling from a Database 26

Oversampling Rare Events in Classification Tasks 27

Preprocessing and Cleaning the Data 28

2.5 Predictive Power and Overfitting 35

Overfitting 36

Creating and Using Data Partitions 38

2.6 Building a Predictive Model 41

Modeling Process 41

2.7 Using R for Machine Learning on a Local Machine 46

2.8 Automating Machine Learning Solutions 47

Predicting Power Generator Failure 48

Uber’s Michelangelo 50

2.9 Ethical Practice in Machine Learning 52

Machine Learning Software: The State of the Market (by Herb Edelstein) 53

Problems 57

PART II DATA EXPLORATION AND DIMENSION REDUCTION

CHAPTER 3 Data Visualization 63

3.1 Uses of Data Visualization 63

Base R or ggplot? 65

3.2 Data Examples 65

Example 1: Boston Housing Data 65

Example 2: Ridership on Amtrak Trains 67

3.3 Basic Charts: Bar Charts, Line Charts, and Scatter Plots 67

Distribution Plots: Boxplots and Histograms 70

Heatmaps: Visualizing Correlations and Missing Values 73

3.4 Multidimensional Visualization 75

Adding Variables: Color, Size, Shape, Multiple Panels, and Animation 76

Manipulations: Rescaling, Aggregation and Hierarchies, Zooming, Filtering 79

Reference: Trend Lines and Labels 83

Scaling Up to Large Datasets 85

Multivariate Plot: Parallel Coordinates Plot 85

Interactive Visualization 88

3.5 Specialized Visualizations 91

Visualizing Networked Data 91

Visualizing Hierarchical Data: Treemaps 93

Visualizing Geographical Data: Map Charts 95

3.6 Major Visualizations and Operations, by Machine Learning Goal 97

Prediction 97

Classification 97

Time Series Forecasting 97

Unsupervised Learning 98

Problems 99

CHAPTER 4 Dimension Reduction 101

4.1 Introduction 101

4.2 Curse of Dimensionality 102

4.3 Practical Considerations 102

Example 1: House Prices in Boston 103

4.4 Data Summaries 103

Summary Statistics 104

Aggregation and Pivot Tables 104

4.5 Correlation Analysis 107

4.6 Reducing the Number of Categories in Categorical Variables 109

4.7 Converting a Categorical Variable to a Numerical Variable 111

4.8 Principal Component Analysis 111

Example 2: Breakfast Cereals 111

Principal Components 116

Normalizing the Data 117

Using Principal Components for Classification and Prediction 120

4.9 Dimension Reduction Using Regression Models 121

4.10 Dimension Reduction Using Classification and Regression Trees 121

Problems 123

PART III PERFORMANCE EVALUATION

CHAPTER 5 Evaluating Predictive Performance 129

5.1 Introduction 130

5.2 Evaluating Predictive Performance 130

Naive Benchmark: The Average 131

Prediction Accuracy Measures 131

Comparing Training and Holdout Performance 133

Cumulative Gains and Lift Charts 133

5.3 Judging Classifier Performance 136

Benchmark: The Naive Rule 136

Class Separation 136

The Confusion (Classification) Matrix 137

Using the Holdout Data 138

Accuracy Measures 139

Propensities and Threshold for Classification 139

Performance in Case of Unequal Importance of Classes 143

Asymmetric Misclassification Costs 146

Generalization to More Than Two Classes 149

5.4 Judging Ranking Performance 150

Cumulative Gains and Lift Charts for Binary Data 150

Decile-wise Lift Charts 153

Beyond Two Classes 154

Gains and Lift Charts Incorporating Costs and Benefits 154

Cumulative Gains as a Function of Threshold 155

5.5 Oversampling 156

Creating an Over-sampled Training Set 158

Evaluating Model Performance Using a Non-oversampled Holdout Set 159

Evaluating Model Performance If Only Oversampled Holdout Set Exists 159

Problems 162

PART IV PREDICTION AND CLASSIFICATION METHODS

CHAPTER 6 Multiple Linear Regression 167

6.1 Introduction 167

6.2 Explanatory vs Predictive Modeling 168

6.3 Estimating the Regression Equation and Prediction 170

Example: Predicting the Price of Used Toyota Corolla Cars 171

Cross-validation and caret 175

6.4 Variable Selection in Linear Regression 176

Reducing the Number of Predictors 176

How to Reduce the Number of Predictors 178

Regularization (Shrinkage Models) 183

Problems 188

CHAPTER 7 k-Nearest Neighbors (kNN) 193

7.1 The k-NN Classifier (Categorical Outcome) 193

Determining Neighbors 194

Classification Rule 194

Example: Riding Mowers 195

Choosing k 196

Weighted k-NN 199

Setting the Cutoff Value 200

k-NN with More Than Two Classes 201

Converting Categorical Variables to Binary Dummies 201

7.2 k-NN for a Numerical Outcome 201

7.3 Advantages and Shortcomings of k-NN Algorithms 204

Problems 205

CHAPTER 8 The Naive Bayes Classifier 207

8.1 Introduction 207

Threshold Probability Method 208

Conditional Probability 208

Example 1: Predicting Fraudulent Financial Reporting 208

8.2 Applying the Full (Exact) Bayesian Classifier 209

Using the “Assign to the Most Probable Class” Method 210

Using the Threshold Probability Method 210

Practical Difficulty with the Complete (Exact) Bayes Procedure 210

8.3 Solution: Naive Bayes 211

The Naive Bayes Assumption of Conditional Independence 212

Using the Threshold Probability Method 212

Example 2: Predicting Fraudulent Financial Reports, Two Predictors 213

Example 3: Predicting Delayed Flights 214

Working with Continuous Predictors 218

8.4 Advantages and Shortcomings of the Naive Bayes Classifier 220

Problems 223

CHAPTER 9 Classification and Regression Trees 225

9.1 Introduction 226

Tree Structure 227

Decision Rules 227

Classifying a New Record 227

9.2 Classification Trees 228

Recursive Partitioning 228

Example 1: Riding Mowers 228

Measures of Impurity 231

9.3 Evaluating the Performance of a Classification Tree 235

Example 2: Acceptance of Personal Loan 236

9.4 Avoiding Overfitting 239

Stopping Tree Growth 242

Pruning the Tree 243

Best-Pruned Tree 245

9.5 Classification Rules from Trees 247

9.6 Classification Trees for More Than Two Classes 248

9.7 Regression Trees 249

Prediction 250

Measuring Impurity 250

Evaluating Performance 250

9.8 Advantages and Weaknesses of a Tree 250

9.9 Improving Prediction: Random Forests and Boosted Trees 252

Random Forests 252

Boosted Trees 254

Problems 257

CHAPTER 10 Logistic Regression 261

10.1 Introduction 261

10.2 The Logistic Regression Model 263

10.3 Example: Acceptance of Personal Loan 264

Model with a Single Predictor 265

Estimating the Logistic Model from Data: Computing Parameter Estimates 267

Interpreting Results in Terms of Odds (for a Profiling Goal) 270

10.4 Evaluating Classification Performance 271

10.5 Variable Selection 273

10.6 Logistic Regression for Multi-Class Classification 274

Ordinal Classes 275

Nominal Classes 276

10.7 Example of Complete Analysis: Predicting Delayed Flights 277

Data Preprocessing 282

Model-Fitting and Estimation 282

Model Interpretation 282

Model Performance 284

Variable Selection 285

Problems 289

CHAPTER 11 Neural Nets 293

11.1 Introduction 293

11.2 Concept and Structure of a Neural Network 294

11.3 Fitting a Network to Data 295

Example 1: Tiny Dataset 295

Computing Output of Nodes 296

Preprocessing the Data 299

Training the Model 300

Example 2: Classifying Accident Severity 304

Avoiding Overfitting 305

Using the Output for Prediction and Classification 305

11.4 Required User Input 307

11.5 Exploring the Relationship Between Predictors and Outcome 308

11.6 Deep Learning 309

Convolutional Neural Networks (CNNs) 310

Local Feature Map 311

A Hierarchy of Features 311

The Learning Process 312

Unsupervised Learning 312

Example: Classification of Fashion Images 313

Conclusion 320

11.7 Advantages and Weaknesses of Neural Networks 320

Problems 322

CHAPTER 12 Discriminant Analysis 325

12.1 Introduction 325

Example 1: Riding Mowers 326

Example 2: Personal Loan Acceptance 327

12.2 Distance of a Record from a Class 327

12.3 Fisher’s Linear Classification Functions 329

12.4 Classification Performance of Discriminant Analysis 333

12.5 Prior Probabilities 334

12.6 Unequal Misclassification Costs 334

12.7 Classifying More Than Two Classes 336

Example 3: Medical Dispatch to Accident Scenes 336

12.8 Advantages and Weaknesses 339

Problems 341

CHAPTER 13 Generating, Comparing, and Combining Multiple Models 345

13.1 Ensembles 346

Why Ensembles Can Improve Predictive Power 346

Simple Averaging or Voting 348

Bagging 349

Boosting 349

Bagging and Boosting in R 349

Stacking 350

Advantages and Weaknesses of Ensembles 351

13.2 Automated Machine Learning (AutoML) 352

AutoML: Explore and Clean Data 352

AutoML: Determine Machine Learning Task 353

AutoML: Choose Features and Machine Learning Methods 354

AutoML: Evaluate Model Performance 354

AutoML: Model Deployment 356

Advantages and Weaknesses of Automated Machine Learning 357

13.3 Explaining Model Predictions 358

13.4 Summary 360

Problems 362

PART V INTERVENTION AND USER FEEDBACK

CHAPTER 14 Interventions: Experiments, Uplift Models, and Reinforcement Learning 367

14.1 A/B Testing 368

Example: Testing a New Feature in a Photo Sharing App 369

The Statistical Test for Comparing Two Groups (T-Test) 370

Multiple Treatment Groups: A/B/n Tests 372

Multiple A/B Tests and the Danger of Multiple Testing 372

14.2 Uplift (Persuasion) Modeling 373

Gathering the Data 374

A Simple Model 376

Modeling Individual Uplift 376

Computing Uplift with R 378

Using the Results of an Uplift Model 378

14.3 Reinforcement Learning 380

Explore-Exploit: Multi-armed Bandits 380

Example of Using a Contextual Multi-Arm Bandit for Movie Recommendations 382

Markov Decision Process (MDP) 383

14.4 Summary 388

Problems 390

PART VI MINING RELATIONSHIPS AMONG RECORDS

CHAPTER 15 Association Rules and Collaborative Filtering 393

15.1 Association Rules 394

Discovering Association Rules in Transaction Databases 394

Example 1: Synthetic Data on Purchases of Phone Faceplates 394

Generating Candidate Rules 395

The Apriori Algorithm 397

Selecting Strong Rules 397

Data Format 399

The Process of Rule Selection 400

Interpreting the Results 401

Rules and Chance 403

Example 2: Rules for Similar Book Purchases 405

15.2 Collaborative Filtering 407

Data Type and Format 407

Example 3: Netflix Prize Contest 408

User-Based Collaborative Filtering: “People Like You” 409

Item-Based Collaborative Filtering 411

Evaluating Performance 412

Example 4: Predicting Movie Ratings with MovieLens Data 413

Advantages and Weaknesses of Collaborative Filtering 416

Collaborative Filtering vs Association Rules 417

15.3 Summary 419

Problems 421

CHAPTER 16 Cluster Analysis 425

16.1 Introduction 426

Example: Public Utilities 427

16.2 Measuring Distance Between Two Records 429

Euclidean Distance 429

Normalizing Numerical Variables 430

Other Distance Measures for Numerical Data 432

Distance Measures for Categorical Data 433

Distance Measures for Mixed Data 434

16.3 Measuring Distance Between Two Clusters 434

Minimum Distance 434

Maximum Distance 435

Average Distance 435

Centroid Distance 435

16.4 Hierarchical (Agglomerative) Clustering 437

Single Linkage 437

Complete Linkage 438

Average Linkage 438

Centroid Linkage 438

Ward’s Method 438

Dendrograms: Displaying Clustering Process and Results 439

Validating Clusters 441

Limitations of Hierarchical Clustering 443

16.5 Non-Hierarchical Clustering: The k-Means Algorithm 444

Choosing the Number of Clusters (k) 445

Problems 450

PART VII FORECASTING TIME SERIES

CHAPTER 17 Handling Time Series 455

17.1 Introduction 455

17.2 Descriptive vs Predictive Modeling 457

17.3 Popular Forecasting Methods in Business 457

Combining Methods 457

17.4 Time Series Components 458

Example: Ridership on Amtrak Trains 458

17.5 Data Partitioning and Performance Evaluation 463

Benchmark Performance: Naive Forecasts 463

Generating Future Forecasts 465

Problems 466

CHAPTER 18 Regression-Based Forecasting 469

18.1 A Model with Trend 469

Linear Trend 469

Exponential Trend 473

Polynomial Trend 474

18.2 A Model with Seasonality 476

18.3 A Model with Trend and Seasonality 478

18.4 Autocorrelation and ARIMA Models 479

Computing Autocorrelation 480

Improving Forecasts by Integrating Autocorrelation Information 483

Evaluating Predictability 486

Problems 489

CHAPTER 19 Smoothing and Deep Learning Methods for Forecasting 499

19.1 Smoothing Methods: Introduction 500

19.2 Moving Average 500

Centered Moving Average for Visualization 500

Trailing Moving Average for Forecasting 501

Choosing Window Width (w) 504

19.3 Simple Exponential Smoothing 505

Choosing Smoothing Parameter α 506

Relation Between Moving Average and Simple Exponential Smoothing 506

19.4 Advanced Exponential Smoothing 507

Series with a Trend 508

Series with a Trend and Seasonality 508

Series with Seasonality (No Trend) 509

19.5 Deep Learning for Forecasting 511

Problems 516

PART VIII DATA ANALYTICS

CHAPTER 20 Social Network Analytics 527

20.1 Introduction 527

20.2 Directed vs Undirected Networks 529

20.3 Visualizing and Analyzing Networks 530

Plot Layout 530

Edge List 533

Adjacency Matrix 533

Using Network Data in Classification and Prediction 534

20.4 Social Data Metrics and Taxonomy 534

Node-Level Centrality Metrics 535

Egocentric Network 536

Network Metrics 536

20.5 Using Network Metrics in Prediction and Classification 538

Link Prediction 538

Entity Resolution 540

Collaborative Filtering 542

20.6 Collecting Social Network Data with R 545

20.7 Advantages and Disadvantages 545

Problems 548

CHAPTER 21 Text Mining 549

21.1 Introduction 549

21.2 The Tabular Representation of Text 550

21.3 Bag-of-Words vs Meaning Extraction at Document Level 551

21.4 Preprocessing the Text 552

Tokenization 553

Text Reduction 555

Presence/Absence vs Frequency 556

Term Frequency–Inverse Document Frequency (TF-IDF) 557

From Terms to Concepts: Latent Semantic Indexing 558

Extracting Meaning 559

From Terms to High-Dimensional Word Vectors: Word2Vec or GloVe 559

21.5 Implementing Machine Learning Methods 560

21.6 Example: Online Discussions on Autos and Electronics 560

Importing and Labeling the Records 561

Text Preprocessing in R 561

Producing a Concept Matrix 561

Fitting a Predictive Model 562

Prediction 564

21.7 Example: Sentiment Analysis of Movie Reviews 564

Data Loading, Preparation, and Partitioning 565

Generating and Applying the GloVe Model 565

Fitting a Predictive Model 566

21.8 Summary 568

Problems 570

CHAPTER 22 Responsible Data Science 573

22.1 Introduction 573

22.2 Unintentional Harm 574

22.3 Legal Considerations 576

22.4 Principles of Responsible Data Science 577

Non-maleficence 578

Fairness 578

Transparency 579

Accountability 580

Data Privacy and Security 580

22.5 A Responsible Data Science Framework 580

Justification 581

Assembly 581

Data Preparation 582

Modeling 583

Auditing 583

22.6 Documentation Tools 584

Impact Statements 584

Model Cards 585

Datasheets 586

Audit Reports 586

22.7 Example: Applying the RDS Framework to the COMPAS Example 588

Unanticipated Uses 588

Ethical Concerns 588

Protected Groups 588

Data Issues 589

Fitting the Model 589

Auditing the Model 591

Bias Mitigation 596

22.8 Summary 598

Problems 599

PART IX CASES

CHAPTER 23 Cases 603

23.1 Charles Book Club 603

The Book Industry 603

Database Marketing at Charles 604

Machine Learning Techniques 606

Assignment 608

23.2 German Credit 610

Background 610

Data 610

Assignment 614

23.3 Tayko Software Cataloger 615

Background 615

The Mailing Experiment 615

Data 615

Assignment 617

23.4 Political Persuasion 619

Background 619

Predictive Analytics Arrives in US Politics 619

Political Targeting 619

Uplift 620

Data 621

Assignment 621

23.5 Taxi Cancellations 623

Business Situation 623

Assignment 623

23.6 Segmenting Consumers of Bath Soap 625

Business Situation 625

Key Problems 625

Data 626

Measuring Brand Loyalty 626

Assignment 626

23.7 Direct-Mail Fundraising 629

Background 629

Data 629

Assignment 629

23.8 Catalog Cross-Selling 632

Background 632

Assignment 632

23.9 Time Series Case: Forecasting Public Transportation Demand 634

Background 634

Problem Description 634

Available Data 634

Assignment Goal 634

Assignment 635

Tips and Suggested Steps 635

23.10 Loan Approval 636

Background 636

Regulatory Requirements 636

Getting Started 636

Assignment 637

References 639

R Packages Used in the Book 643

Data Files Used in the Book 647

Index 649

About the Authors

Galit Shmueli (author)
She is currently Tsing Hua Distinguished Professor at the Institute of Service Science, National Tsing Hua University in Taiwan, and co-author of the best-selling book Data Mining for Business Analytics. She has published numerous professional books in related fields and many papers in top academic journals. Professor Shmueli has also designed and taught courses on forecasting, data mining, statistics, and other data analytics topics at the Indian School of Business, the Smith School of Business at the University of Maryland, National Tsing Hua University, Statistics.com, and elsewhere.
Peter Gedeck (author)
A data scientist with more than 30 years of experience in scientific computing and data science. He worked for 20 years as a computational chemist at Novartis and is now a senior data scientist at Collaborative Drug Discovery, where he specializes in developing machine learning algorithms that predict the biological and physicochemical properties of drug candidates. He is a co-author of Data Mining for Business Analytics (Wiley, 2019). He received a PhD in chemistry from Friedrich Alexander University and studied mathematics at the Fernuniversität in Hagen, Germany.
Inbal Yahav Shenberger (author)
Book DB provided by Aladin (www.aladin.co.kr)