BOOKPRICE.co.kr
A book price comparison site
Agile Machine Learning: Effective Machine Learning Inspired by the Agile Manifesto (Paperback)

Eric Carter, Matthew Hurst (Authors)
Apress
List price: 124,470 KRW

New books

Store | Discounted price | Discount rate | Shipping | Points | Effective lowest price
(not captured) | 102,060 KRW | -18% | 0 KRW | 5,110 KRW | 96,950 KRW

Note: search results may include other books.



Book information

· Title: Agile Machine Learning: Effective Machine Learning Inspired by the Agile Manifesto (Paperback)
· Category: Foreign books > Computers > Programming > Microsoft programming
· ISBN: 9781484251065
· Pages: 248
· Publication date: 2019-08-22

Table of contents

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
    1. How do you get started on a data engineering problem?
    2. There are so many directions and approaches to go in--how do you pick the right one?
    3. The importance of establishing measurements and metrics early.
      1. Also - how to state problems in the traditional ML frame (test sets, train sets, dev sets, etc.) without the necessity of an "ML" solution - you can test a heuristic against your test set!
    4. Daily measurements and metrics
    5. The power of metric-driven teams and data-driven decision making
    6. Who is the customer? Do they need to be represented? Is the customer the data goal?
    7. How to establish / calibrate what is possible given the data available and the inferences being asked of the team.
    8. Is this a new problem, or a known problem with a known recipe?
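The point above about testing a heuristic against your test set deserves a concrete sketch: score a hand-written rule with the same labeled data and metric you would use for an ML model. The spam rule and the tiny test set below are hypothetical illustrations, not from the book.

```python
# Sketch: evaluate a simple heuristic on the same labeled test set an ML
# model would be judged against. The rule and the data are made up.

def heuristic_is_spam(text):
    # A hand-written rule standing in for a trained classifier.
    return "free money" in text.lower()

test_set = [
    ("Claim your FREE MONEY now", True),
    ("Meeting moved to 3pm", False),
    ("free money inside!!!", True),
    ("Quarterly report attached", False),
    ("You won a prize", True),  # the heuristic misses this one
]

predictions = [heuristic_is_spam(text) for text, _ in test_set]
labels = [label for _, label in test_set]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"heuristic accuracy: {accuracy:.2f}")  # -> heuristic accuracy: 0.80
```

If the heuristic already clears the bar on your metric, you have a baseline any model must beat, and sometimes you can ship the heuristic itself.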
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
    1. Building models that are changeable, configurable, extensible, pluggable--how to not redo everything when requirements change
    2. Adjusting metrics when requirements change
    3. When to throw things away
    4. Anticipating change (but not over-anticipating change)
    5. Ensuring that model-features are testable and agile
    6. Balancing work priorities. When to break a sprint
    7. Balancing bug fixes versus new work
    8. When to make long term, long payoff investments
    9. Balancing metrics that naturally work against each other (duplicate rate versus overmatch, conversion rate versus time on site)
    10. The role of test, train and regression data sets in the process; how these can be adapted to new or modified requirements; the importance of keeping track of them and how they were generated
    11. How to build, manage, and develop a data labeling / curation / judgment team; how to set expectations for quality of labels; how to know when your devs should be labeling data (to learn the problem space) and if and when to engage with a labeling team.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
    1. Deciding on a product cadence and project rhythm
    2. How to avoid ML death-marches
    3. How long do we go down a particular path before we explore another direction
    4. When to invest in better models/hyperparameters versus when to invest in better features
    5. How big of an improvement is good enough to ship?
    6. Using flighting to evade analysis paralysis
    7. Leveraging your users to deliver working software more quickly (fix the world, observing user behavior and clicks)!
    8. How to fail fast
    9. How to optimize for DEVELOPER PRODUCTIVITY (the most important thing in Matt's opinion) and how not to get fooled by processes that result in shoddy engineering that will come back to haunt you.
    10. How to deliver DATA in a predictable manner; when and how to migrate schema; the hierarchy of changing things, their associated costs, and implied degree of design required (i.e., if you can change it easily you can worry less about the correctness of the data model - but can you really change it easily?)
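The flighting idea above is usually implemented as deterministic bucketing: hash the user id with the experiment name so assignment is stable across sessions and independent across experiments. The experiment name and 50/50 split below are hypothetical.

```python
import hashlib

def flight_bucket(user_id, experiment, treatment_pct=50):
    # Deterministically assign a user to treatment or control by hashing
    # the user id together with the experiment name. Stable per user,
    # independent across differently named experiments.
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 100 < treatment_pct else "control"

# Assignment is repeatable for a given user and roughly 50/50 across users.
print(flight_bucket("user_42", "new_ranker_v2"))
```

Because the split is deterministic, you can ship a candidate model to a flight, compare live metrics against control, and decide instead of debating.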
  4. Business people and developers must work together daily throughout the project.
    1. Designing business-focused metrics--ensuring the right problem is solved
      1. How to recognize metrics for metrics' sake that don't actually relate to the business, the user, or the engineering progress in solving a problem
    2. How to work with business people to understand a domain
    3. Helping business people understand what to expect from an ML project
    4. Exposing early work/models to business people before they go into production
    5. Building a vision
    6. Using metrics to align cross-team/cross-division work... the importance of shared metrics
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
    1. Hiring and interviewing practices for data engineers
    2. Tooling for data engineers--what is important
    3. The developer "inner loop" and "outer loop"
    4. Why you shouldn't skimp on diagnostic tools and monitoring
    5. Shortening the time it takes to build, test, and deploy a model
    6. How to ensure that you move beyond just the urgent work items (urgent vs. important quadrant)
    7. How to balance the passion an individual has for an approach (e.g., an ML paradigm) with the applicability of that approach to your problem space and business need
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
    1. Important face-to-face meetings
    2. The importance of data wallows
    3. How to do daily standups in a data engineering team
      1. How and when to communicate more complex discoveries and punt to a data wallow
    4. Encouraging cross-team problem solving and 'happy accidents' with all-hands demos
      1. And how to avoid these becoming an n-1 reporting process for a boss / alternatively, why this might be a good thing
      2. Presenting data, data-viz skills, and PowerPoint skills are not optional nice-to-haves but a key skill area for any journeyman developer - this is not just for managers and designers.
    5. Face to face with your customers
    6. Face to face with your judges and vendors
  7. Working software is the primary measure of progress.
    1. How to ensure the software is working every day
    2. Testing in production
    3. The role of DevOps
    4. How to do "model spikes" to figure out how much of an impact a good model could have (e.g., Wizard of Oz)
    5. How to do early abandonment of something that won't work
    6. How to know if / how / when to let ICs explore vs. exploit
    7. Models in production with no pressure to get better versus academic models
    8. What if your delivery is really data, not working software?
    9. The role of documentation for processes (e.g., deployment) and algorithmic decisions (we did it this way because this data and this experiment showed this result)
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
    1. How to have the right amount of urgency
    2. Setting audacious goals while maintaining realism
    3. Communicating to sponsors that ML isn't a magic bullet for everything and has limitations
      1. How to negotiate / avoid contractual obligations around specific quality measures
    4. The importance of slack weeks
    5. Making sure people understand why their work matters and how it fits
    6. How to make live site sustainable
    7. Cross-geo collaboration
    8. The hierarchy of goals (big multi-year, annual, quarterly ... daily)
  9. Continuous attention to technical excellence and good design enhances agility.
    1. Best practices for multistage classification systems
    2. Best practices for testing
    3. Best practices for deploying
    4. Best practices for feature engineering
    5. Best practices around big data systems and pipelines
    6. Best practices around 'I need to fix something fast'
    7. Best practices around performance-sensitive ML
    8. The power of great labeling: the importance of paying attention to judges and how to make them part of the team
    9. Pros and cons of 'defense-in-depth ML' (clean catalog versus relevance filters over the catalog)
    10. Data workflow design
      1. E.g., don't drop any data vs. each stage 'improves' the data and drops earlier representations
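The multistage classification practice listed above is commonly a cheap, high-recall first stage followed by a more expensive, high-precision second stage that only sees what the filter lets through. Both stages below are hypothetical stand-ins, not the book's implementation.

```python
# Sketch of a two-stage classifier: a fast filter rejects most traffic
# cheaply; a slower, more precise model handles the remainder.

def cheap_filter(doc):
    # Stage 1: fast keyword check, tuned for recall.
    return "review" in doc.lower()

def expensive_model(doc):
    # Stage 2: stand-in for a slower, more accurate model.
    return doc.lower().startswith("review:")

def classify(doc):
    # Stage 2 runs only on documents that survive stage 1.
    return cheap_filter(doc) and expensive_model(doc)

print(classify("weather today"))       # -> False (rejected cheaply by stage 1)
print(classify("review: great book"))  # -> True
```

The design choice is cost-shaped: stage 1 must be cheap and rarely miss true positives, since anything it drops never reaches stage 2.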
  10. Simplicity--the art of maximizing the amount of work not done--is essential.
    1. How to analyze the gaps in your model
    2. How to prioritize what model improvements you go after
    3. What ML is bad at
    4. When to just brute-force it
    5. When and how to put humans in the loop
      1. General architectures that allow any component in your system to 'phone a human'
    6. Limits to precision and recall--managing the existence of failure cases
      1. Always design for mitigation - any feedback can be addressed by (a) taking the feedback as an example of a type of error that can be explored, quantified, etc., and (b) immediately fixing it so that that instance is never a problem again
      2. How to manage data patches (as from the above mitigations) so they don't become stale and bite you back
    7. Reuse versus build from scratch
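The 'design for mitigation' and 'phone a human' items above combine naturally in one routing sketch: consult a table of manually patched answers first, then escalate low-confidence predictions to a human review queue. The patch table, model, and threshold below are all hypothetical stand-ins.

```python
# Sketch: patched answers win over the model; low-confidence predictions
# are queued for a human. All names and values are made up.

patches = {"item_123": "books"}  # known-bad cases, fixed immediately
human_queue = []

def model_predict(item_id):
    # Stand-in for a real classifier returning (label, confidence).
    return ("electronics", 0.42)

def classify(item_id, threshold=0.7):
    if item_id in patches:           # mitigation: the data patch wins
        return patches[item_id]
    label, confidence = model_predict(item_id)
    if confidence < threshold:       # low confidence: phone a human
        human_queue.append(item_id)
        return None
    return label

print(classify("item_123"))  # -> books (served from the patch table)
print(classify("item_999"))  # -> None  (queued for human review)
```

The patch table is the part that goes stale: each entry should carry enough provenance that it can be retired once the underlying model error is fixed.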
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
    1. A great feature engineering idea could come from anyone
    2. How to make sure you don't have a "hub and spoke" model where everyone is dependent on one expert (also, I think, known as the surgeon model)
    3. Asking the right questions and setting goals in a way that encourages everyone on the team to engage their brain
    4. Setting vision for the team
    5. Connecting the team to the organizational needs and strategy
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
    1. How to do early abandonment of something that won't work
    2. Understanding your DSATs
    3. Revisiting and retuning your metrics
    4. The benefits of having a competitor (competitive metrics)
      1. The pros and cons of using competitors' tail lights as your ground truth - you will never be better
    5. Early warning signs that something may be going wrong
    6. Common problem patterns and the ML techniques used to solve them

     

     

Book DB provided by: Aladin bookstore (www.aladin.co.kr)