BOOKPRICE.co.kr
Book price comparison site

Machine Learning for Factor Investing: R Version (Hardcover)

Tony Guida, Guillaume Coqueret (Authors)
Taylor & Francis Ltd
List price: 425,250 KRW

New Books

Bookstore     Sale Price    Discount  Shipping  Benefits/Extras  Effective Lowest Price
(not shown)   348,700 KRW   -18%      0 KRW     17,440 KRW       331,260 KRW

Note: Search results may include other books.



Book Information

· Title: Machine Learning for Factor Investing: R Version (Hardcover)
· Category: Foreign Books > Business & Economics > Statistics
· ISBN: 9780367473228
· Pages: 342
· Publication date: 2020-09-01

Table of Contents

I Introduction
1. Preface: What this book is not about; The targeted audience; How this book is structured; Companion website; Why R?; Coding instructions; Acknowledgements; Future developments
2. Notations and data: Notations; Dataset
3. Introduction: Context; Portfolio construction: the workflow; Machine Learning is no Magic Wand
4. Factor investing and asset pricing anomalies: Introduction; Detecting anomalies; Simple portfolio sorts; Factors; Predictive regressions, sorts, and p-value issues; Fama-MacBeth regressions; Factor competition; Advanced techniques; Factors or characteristics?; Hot topics: momentum, timing and ESG; Factor momentum; Factor timing; The green factors; The link with machine learning; A short list of recent references; Explicit connections with asset pricing models; Coding exercises
5. Data preprocessing: Know your data; Missing data; Outlier detection; Feature engineering; Feature selection; Scaling the predictors; Labelling; Simple labels; Categorical labels; The triple barrier method; Filtering the sample; Return horizons; Handling persistence; Extensions; Transforming features; Macro-economic variables; Active learning; Additional code and results; Impact of rescaling: graphical representation; Impact of rescaling: toy example; Coding exercises
II Common supervised algorithms
6. Penalized regressions and sparse hedging for minimum variance portfolios: Penalised regressions; Simple regressions; Forms of penalizations; Illustrations; Sparse hedging for minimum variance portfolios; Presentation and derivations; Example; Predictive regressions; Literature review and principle; Code and results; Coding exercise
7. Tree-based methods: Simple trees; Principle; Further details on classification; Pruning criteria; Code and interpretation; Random forests; Principle; Code and results; Boosted trees: Adaboost; Methodology; Illustration; Boosted trees: extreme gradient boosting; Managing Loss; Penalisation; Aggregation; Tree structure; Extensions; Code and results; Instance weighting; Discussion; Coding exercises
8. Neural networks: The original perceptron; Multilayer perceptron (MLP); Introduction and notations; Universal approximation; Learning via back-propagation; Further details on classification; How deep should we go? And other practical issues; Architectural choices; Frequency of weight updates and learning duration; Penalizations and dropout; Code samples and comments for vanilla MLP; Regression example; Classification example; Custom losses; Recurrent networks; Presentation; Code and results; Other common architectures; Generative adversarial networks; Auto-encoders; A word on convolutional networks; Advanced architectures; Coding exercise
9. Support vector machines: SVM for classification; SVM for regression; Practice; Coding exercises
10. Bayesian methods: The Bayesian framework; Bayesian sampling; Gibbs sampling; Metropolis-Hastings sampling; Bayesian linear regression; Naive Bayes classifier; Bayesian additive trees; General formulation; Priors; Sampling and predictions; Code
III From predictions to portfolios
11. Validating and tuning: Learning metrics; Regression analysis; Classification analysis; Validation; The variance-bias tradeoff: theory; The variance-bias tradeoff: illustration; The risk of overfitting: principle; The risk of overfitting: some solutions; The search for good hyperparameters; Methods; Example: grid search; Example: Bayesian optimization; Short discussion on validation in backtests
12. Ensemble models: Linear ensembles; Principles; Example; Stacked ensembles; Two stage training; Code and results; Extensions; Exogenous variables; Shrinking inter-model correlations; Exercise
13. Portfolio backtesting: Setting the protocol; Turning signals into portfolio weights; Performance metrics; Discussion; Pure performance and risk indicators; Factor-based evaluation; Risk-adjusted measures; Transaction costs and turnover; Common errors and issues; Forward looking data; Backtest overfitting; Simple safeguards; Implication of non-stationarity: forecasting is hard; General comments; The no free lunch theorem; Example; Coding exercises
IV Further important topics
14. Interpretability: Global interpretations; Simple models as surrogates; Variable importance (tree-based); Variable importance (agnostic); Partial dependence plot; Local interpretations; LIME; Shapley values; Breakdown
15. Two key concepts: causality and non-stationarity: Causality; Granger causality; Causal additive models; Structural time-series models; Dealing with changing environments; Non-stationarity: yet another illustration; Online learning; Homogeneous transfer learning
16. Unsupervised learning: The problem with correlated predictors; Principal component analysis and autoencoders; A bit of algebra; PCA; Autoencoders; Application; Clustering via k-means; Nearest neighbors; Coding exercise
17. Reinforcement learning: Theoretical layout; General framework; Q-learning; SARSA; The curse of dimensionality; Policy gradient; Principle; Extensions; Simple examples; Q-learning with simulations; Q-learning with market data; Concluding remarks; Exercises
V Appendix: Data Description; Solution to exercises

About the Authors

Tony Guida (Author)
Co-head of quantitative macro at RAM Active Investments. He is also the editor and a co-author of "Big Data and Machine Learning in Quantitative Investment" (Wiley, 2018).

Guillaume Coqueret (Author)
Associate professor of finance and data science at EMLyon Business School. His recent research focuses on applications of machine learning in financial economics.
Book database provided by Aladin (www.aladin.co.kr)