BOOKPRICE.co.kr
Book price comparison site
Why Programs Fail: A Guide to Systematic Debugging (Paperback, 2)

Andreas Zeller (Author)
Morgan Kaufmann Pub
39,000 KRW

New Books

Sale Price    Discount    Shipping    Benefits/Extra    Effective Lowest Price
39,000 KRW    0%          0 KRW       1,170 KRW         37,830 KRW



Book Information

· Title: Why Programs Fail: A Guide to Systematic Debugging (Paperback, 2)
· Category: Foreign Books > Computers > Software Development/Engineering > General
· ISBN: 9780123745156
· Pages: 544
· Publication date: 2009-06-01

Table of Contents

Amended Table of Contents
The * denotes additions/changes for the proposed second edition. For brevity, second-level sections are omitted from the list. Please note that there are also recurring end-of-chapter sections: Concepts, Tools, Further Reading, and Exercises.

Table of Contents
* Include a list of "How To's" as indicated in appropriate chapters
About the Author
Preface
* What's new in the second edition

1 How Failures Come to Be
1.1 My Program Does Not Work!
* New section "Facts on Bugs" - highlighting recent empirical findings
1.2 From Defects to Failures
1.3 Lost in Time and Space
1.4 From Failures to Fixes
1.5 Automated Debugging Techniques
1.6 Bugs, Faults, or Defects?
* New section "Learning From Mistakes" - pointing to the later chapter

2 Tracking Problems
2.1 Oh! All These Problems
2.2 Reporting Problems
2.3 Managing Problems
2.4 Classifying Problems
2.5 Processing Problems
2.6 Managing Problem Tracking
2.7 Requirements as Problems
2.8 Managing Duplicates
* New section "Collecting Problem Data" - laying the foundation for later investigation
2.9 Relating Problems and Fixes
2.10 Relating Problems and Tests
* 2.9 and 2.10 will be merged into a new section "A Concert of Activities", focusing on integrated environments like Jazz.net

3 Making Programs Fail
3.1 Testing for Debugging
3.2 Controlling the Program
3.3 Testing at the Presentation Layer
3.4 Testing at the Functionality Layer
3.5 Testing at the Unit Layer
3.6 Isolating Units
3.7 Designing for Debugging
* Expand on "design for diagnosability", esp. for embedded systems
3.8 Preventing Unknown Problems
* This section will be deleted and replaced with a whole new chapter 18

4 Reproducing Problems
4.1 The First Task in Debugging
4.2 Reproducing the Problem Environment
4.3 Reproducing Program Execution
4.4 Reproducing System Interaction
4.5 Focusing on Units
* Expand reflecting latest research results

5 Simplifying Problems
5.1 Simplifying the Problem
5.2 The Gecko BugAThon
5.3 Manual Simplification
5.4 Automatic Simplification
5.5 A Simplification Algorithm
5.6 Simplifying User Interaction
5.7 Random Input Simplified
5.8 Simplifying Faster
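
The entries for Sections 5.4 and 5.5 name automatic simplification of a failure-inducing input. Below is a minimal, illustrative Python sketch in the spirit of that idea (a greedy, chunk-removing reduction loosely modeled on delta debugging), not the book's exact algorithm; the test callback and the sample failure condition are assumptions made up for this example.

def simplify(inp, test):
    """Greedily shrink a failure-inducing input (a list) by removing chunks.

    test(candidate) must return "FAIL" while the candidate still triggers
    the failure, and anything else once it no longer does."""
    assert test(inp) == "FAIL", "the initial input must fail"
    n = 2  # number of chunks the input is split into
    while len(inp) >= 2:
        chunk = len(inp) // n
        reduced = False
        for start in range(0, len(inp), chunk):
            candidate = inp[:start] + inp[start + chunk:]  # drop one chunk
            if test(candidate) == "FAIL":
                inp = candidate        # keep the smaller failing input
                n = max(n - 1, 2)      # retry with coarser chunks
                reduced = True
                break
        if not reduced:
            if n >= len(inp):          # already removing single elements
                break
            n = min(n * 2, len(inp))   # refine: use smaller chunks
    return inp


if __name__ == "__main__":
    # Hypothetical failure: the program fails whenever 1 and 7 are both present.
    def test(values):
        return "FAIL" if {1, 7} <= set(values) else "PASS"

    print(simplify(list(range(10)), test))  # a small failing input, e.g. [1, 7]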

6 Scientific Debugging
6.1 How to Become a Debugging Guru
6.2 The Scientific Method
6.3 Applying the Scientific Method
6.4 Explicit Debugging
6.5 Keeping a Logbook
6.6 Debugging Quick-and-Dirty
6.7 Algorithmic Debugging
6.8 Deriving a Hypothesis
6.9 Reasoning About Programs

7 Deducing Errors
* This chapter will be renamed to "Tracking Dependences"
7.1 Isolating Value Origins
7.2 Understanding Control Flow
7.3 Tracking Dependences
7.4 Slicing Programs
7.5 Deducing Code Smells
* Move to new chapter 11 "Verifying Code"
7.6 Limits of Static Analysis
* Move to new chapter 11 "Verifying Code"

8 Observing Facts
8.1 Observing State
8.2 Logging Execution
8.3 Using Debuggers
8.4 Querying Events
8.5 Visualizing State

9 Tracking Origins
9.1 Reasoning Backwards
* Update with recent commercial tools
9.2 Exploring Execution History
9.3 Dynamic Slicing
9.4 Leveraging Origins
* Expand to use latest tools by Ko et al. as well as Gupta et al.
9.5 Tracking Down Infections

10 Asserting Expectations
10.1 Automating Observation
10.2 Basic Assertions
* Explain "design by contract" and its principles
10.3 Asserting Invariants
* Expand on integrating contracts with inheritance
10.4 Asserting Correctness
10.5 Assertions as Specifications
10.6 From Assertions to Verification
* Move to its own chapter "Verifying Code"
10.7 Reference Runs
* Move to "Verifying Code"
10.8 System Assertions
10.9 Checking Production Code
* Expand discussion; consider checking preconditions only
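
To make the "design by contract" note under Section 10.2 concrete, here is a minimal Python sketch of preconditions, a postcondition, and an invariant expressed with plain assert statements; the withdraw function and its conditions are invented for illustration and are not taken from the book.

def withdraw(balance, amount):
    """Withdraw amount from balance and return the new balance."""
    # Preconditions: what the caller must guarantee.
    assert amount > 0, "precondition violated: amount must be positive"
    assert amount <= balance, "precondition violated: insufficient balance"

    new_balance = balance - amount

    # Postcondition: what this function guarantees in return.
    assert new_balance == balance - amount, "postcondition violated"
    # Invariant: an account balance never becomes negative.
    assert new_balance >= 0, "invariant violated: negative balance"
    return new_balance


print(withdraw(100, 30))    # 70
# withdraw(100, 200)        # would raise AssertionError: insufficient balance

Checking only the cheap preconditions in production builds is the trade-off that the expanded Section 10.9 proposes to discuss.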

* New chapter 11 Verifying Code: Why does my code smell?
* Highlight tools like FindBugs
* Defects as Abnormal Behavior
* Discuss work by Engler et al.
Assertions as Specifications
From Assertions to Verification - moved from 10.6
* Show the integration of ESC/Java and Spec# (with demos)
Reference Runs - moved from 10.7
* Limits of Static Analysis

12 Detecting Anomalies
12.1 Capturing Normal Behavior
12.2 Comparing Coverage
12.3 Statistical Debugging
* Include and reflect recent work
* Integrate machine learning approaches
* Refer to the iBugs library
12.4 Collecting Data in the Field
12.5 Dynamic Invariants
* Discuss the AGITAR tool
12.6 Invariants on the Fly
12.7 From Anomalies to Defects

13 Causes and Effects
13.1 Causes and Alternate Worlds
13.2 Verifying Causes
13.3 Causality in Practice
13.4 Finding Actual Causes
13.5 Narrowing Down Causes
13.6 A Narrowing Example
13.7 The Common Context
13.8 Causes in Debugging

14 Isolating Failure Causes
14.1 Isolating Causes Automatically
14.2 Isolating versus Simplifying
14.3 An Isolation Algorithm
14.4 Implementing Isolation
14.5 Isolating Failure-inducing Input
14.6 Isolating Failure-inducing Schedules
14.7 Isolating Failure-inducing Changes
* Update to recent tools and screenshots
14.8 Problems and Limitations

15 Isolating Cause-Effect Chains
15.1 Useless Causes
15.2 Capturing Program States
15.3 Comparing Program States
15.4 Isolating Relevant Program States
15.5 Isolating Cause-Effect Chains
15.6 Isolating Failure-inducing Code
15.7 Issues and Risks
* Discuss how to recreate state via method calls
* New project in Python

16 Fixing the Defect
16.1 Locating the Defect
16.2 Focusing on the Most Likely Errors
16.3 Validating the Defect
16.4 Correcting the Defect
16.5 Workarounds
16.6 Learning from Mistakes
* This becomes its own chapter 17

* New chapter 17 Learning from Mistakes
* 17.1 Measuring effort and damage - We want to know how much effort and cost went into each problem
* 17.2 Leveraging software archives - Collect data from problem and change databases; access more of them
* 17.3 Mapping errors - Which components have had the most errors in the past? Demonstrate using Eclipse and Mozilla data
* 17.4 Predicting errors - Which components will have the most errors in the future?
* 17.5 What is it that makes software complex? - Complexity of code; lack of quality assurance; changing requirements... and how to measure this
* 17.6 Digging for more data - Goal-Question-Metric approach; experience factory
* 17.7 Continuous Improvement - Space Shuttle Software

* New chapter 18 Preventing Errors
18.1 Keep Things Simple - General principles of good design and coding
18.2 Know what to do - Pragmatic specification (design by contract, assertions)
18.3 Know how to check - General principles of quality assurance
18.4 Learn from mistakes - As laid out in the new Chapter 17; integrated with earlier principles
18.5 Improve process and product - Keep on challenging yourself

Appendix: Formal Definitions
A.1 Delta Debugging
A.2 Memory Graphs
A.3 Cause-Effect Chains

Glossary
Bibliography
Index

Book database provided by Aladin (www.aladin.co.kr)