BOOKPRICE.co.kr
A book price-comparison site


Supervised Machine Learning for Text Analysis in R

Supervised Machine Learning for Text Analysis in R (Hardcover, 1)

Julia Silge, Emil Hvitfeldt (authors)
Chapman and Hall/CRC
346,180 KRW

New books

Bookstore        Discounted price   Discount   Shipping   Benefits/points   Effective lowest price
(not captured)   283,860 KRW        -18%       0 KRW      14,200 KRW        269,660 KRW

Note: the search results may include other books.

Used books

Bookstore   Type   Listings   Lowest price

eBook

Bookstore   List price   Discounted price   Points   Effective lowest price

[Book cover image: Supervised Machine Learning for Text Analysis in R]

Book information

· Title: Supervised Machine Learning for Text Analysis in R (Hardcover, 1)
· Category: Foreign books > Science/Math/Ecology > Mathematics > Probability and Statistics > General
· ISBN: 9780367554187
· Pages: 402
· Publication date: 2021-11-04

Table of contents

I. Natural Language Features

1. Language and modeling
   Linguistics for text analysis · A glimpse into one area: morphology · Different languages · Other ways text can vary · Summary
2. Tokenization
   What is a token? · Types of tokens · Character tokens · Word tokens · Tokenizing by n-grams · Lines, sentence, and paragraph tokens · Where does tokenization break down? · Building your own tokenizer · Tokenize to characters, only keeping letters · Allow for hyphenated words · Wrapping it in a function · Tokenization for non-Latin alphabets · Tokenization benchmark · Summary
3. Stop words
   Using premade stop word lists · Stop word removal in R · Creating your own stop words list · All stop word lists are context-specific · What happens when you remove stop words · Stop words in languages other than English · Summary
4. Stemming
   How to stem text in R · Should you use stemming at all? · Understand a stemming algorithm · Handling punctuation when stemming · Compare some stemming options · Lemmatization and stemming · Stemming and stop words · Summary
5. Word Embeddings
   Motivating embeddings for sparse, high-dimensional data · Understand word embeddings by finding them yourself · Exploring CFPB word embeddings · Use pre-trained word embeddings · Fairness and word embeddings · Using word embeddings in the real world · Summary

II. Machine Learning Methods

Regression
   A first regression model · Building our first regression model · Evaluation · Compare to the null model · Compare to a random forest model · Case study: removing stop words · Case study: varying n-grams · Case study: lemmatization · Case study: feature hashing · Text normalization · What evaluation metrics are appropriate? · The full game: regression · Preprocess the data · Specify the model · Tune the model · Evaluate the modeling · Summary
Classification
   A first classification model · Building our first classification model · Evaluation · Compare to the null model · Compare to a lasso classification model · Tuning lasso hyperparameters · Case study: sparse encoding · Two class or multiclass? · Case study: including non-text data · Case study: data censoring · Case study: custom features · Detect credit cards · Calculate percentage censoring · Detect monetary amounts · What evaluation metrics are appropriate? · The full game: classification · Feature selection · Specify the model · Evaluate the modeling · Summary

III. Deep Learning Methods

Dense neural networks
   Kickstarter data · A first deep learning model · Preprocessing for deep learning · One-hot sequence embedding of text · Simple flattened dense network · Evaluation · Using bag-of-words features · Using pre-trained word embeddings · Cross-validation for deep learning models · Compare and evaluate DNN models · Limitations of deep learning · Summary
Long short-term memory (LSTM) networks
   A first LSTM model · Building an LSTM · Evaluation · Compare to a recurrent neural network · Case study: bidirectional LSTM · Case study: stacking LSTM layers · Case study: padding · Case study: training a regression model · Case study: vocabulary size · The full game: LSTM · Preprocess the data · Specify the model · Summary
Convolutional neural networks
   What are CNNs? · Kernel · Kernel size · A first CNN model · Case study: adding more layers · Case study: byte pair encoding · Case study: explainability with LIME · Case study: hyperparameter search · The full game: CNN · Preprocess the data · Specify the model · Summary

IV. Conclusion

Text models in the real world

Appendix A: Regular expressions
   Literal characters · Meta characters · Full stop, the wildcard · Character classes · Shorthand character classes · Quantifiers · Anchors · Additional resources
Appendix B: Data
   Hans Christian Andersen fairy tales · Opinions of the Supreme Court of the United States · Consumer Financial Protection Bureau (CFPB) complaints · Kickstarter campaign blurbs
Appendix C: Baseline linear classifier
   Read in the data · Split into test/train and create resampling folds · Recipe for data preprocessing · Lasso regularized classification model · A model workflow · Tune the workflow
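As a taste of the book's opening chapters, the "Tokenizing by n-grams" topic from Chapter 2 can be sketched in a few lines. This is a minimal illustration in Python (the book itself works in R with the tidymodels ecosystem); the function name and the simple letters-and-apostrophes word pattern are this sketch's own choices, not the book's code.

```python
import re

def tokenize_ngrams(text, n=2):
    """Split text into lowercase word tokens, then slide a window of n words."""
    words = re.findall(r"[a-z']+", text.lower())  # crude word tokenizer
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

print(tokenize_ngrams("Tokenization is the first step in text analysis"))
# → ['tokenization is', 'is the', 'the first', 'first step',
#    'step in', 'in text', 'text analysis']
```

With n=1 this reduces to plain word tokenization; larger n captures more local word order at the cost of a sparser feature space, a trade-off the book's "varying n-grams" case study explores.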

About the authors

Julia Silge (author)
Julia is a data scientist at Stack Overflow, where she analyzes complex datasets and communicates about technical topics with a wide range of audiences. She holds a PhD in astrophysics, loves Jane Austen, and enjoys making beautiful charts.
Book database provided by Aladin (www.aladin.co.kr)