BOOKPRICE.co.kr: a book price comparison site
Searchable bookstores (book listings provided by): Aladin, Youngpoong Books, Kyobo Book Centre
"markov decision"
(으)로 15개의 도서가 검색 되었습니다.
Handbook of Markov Decision Processes (Methods and Applications)
Feinberg, Eugene A. / Shwartz, Adam | Springer Nature B.V.
73,480 KRW | 2017-12-15 | ISBN 9781461508069
This volume deals with the theory of Markov Decision Processes (MDPs) and their applications. Each chapter was written by a leading expert in the respective area. The papers cover major research areas and methodologies, and discuss open questions and future research directions. The papers can be read independently, with the basic notation and concepts of Section 1.2.
Compare prices
Markov Decision Processes in Practice
Boucherie, Richard (EDT) | Springer
551,230 KRW | 2017-03-16 | ISBN 9783319477640
This book presents classical Markov Decision Processes (MDP) for real-life applications and optimization. MDP allows users to develop and formally support approximate and simple decision rules, and this book showcases state-of-the-art applications in which MDP was key to the solution approach.
Compare prices
Markov Decision Processes in Practice
Springer Nature B.V.
73,480 KRW | 2017-03-15 | ISBN 9783319477657
This book presents classical Markov Decision Processes (MDP) for real-life applications and optimization. MDP allows users to develop and formally support approximate and simple decision rules, and this book showcases state-of-the-art applications in which MDP was key to the solution approach. The book is divided into six parts.
Compare prices
Constrained Markov Decision Processes
Routledge
423,220 KRW | 1999-03-30 | ISBN 9780849303821
This book covers Markov Decision Processes with constraints, analyzing the many developments in the area of MDPs within the last decade.
Compare prices
Markov Decision Processes (Hardcover)
Puterman, Martin L. | Wiley
0 KRW | 1994-01-01 | ISBN 9780471619772
Compare prices
Markov Decision Processes and Stochastic Positional Games: Optimal Control on Complex Networks
Springer
264,580 KRW | 2025-01-14 | ISBN 9783031401824
This book presents recent findings and results on the solution of finite state-space Markov decision problems in particular, and on determining Nash equilibria for related stochastic games with average and total expected discounted reward payoffs.
Compare prices
Continuous-Time Markov Decision Processes: Theory and Applications (Theory and Applications #62)
Guo, Xianping / Hernandez-Lerma, Onesimo | Springer
202,100 KRW | 2021-01-01 | ISBN 9783642025464
Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields.
Compare prices
Planning with Markov Decision Processes: An AI Perspective
Springer
69,800 KRW | 2012-07-03 | ISBN 9783031004315
Markov Decision Processes (MDPs) are widely popular in Artificial Intelligence for modeling sequential decision-making scenarios with probabilistic dynamics.
Compare prices
Planning with Markov Decision Processes: An AI Perspective
Morgan & Claypool
73,500 KRW | 2012-06-30 | ISBN 9781608458868
Markov Decision Processes (MDPs) are widely popular in Artificial Intelligence for modeling sequential decision-making scenarios with probabilistic dynamics. They are the framework of choice when designing an intelligent agent that needs to act for long periods of time in an environment where its actions could have uncertain outcomes. MDPs are actively researched in two related subareas of AI, probabilistic planning and reinforcement learning.
Compare prices
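Several of the listings above describe MDPs as a framework for sequential decision-making under probabilistic dynamics. As a rough reference point only, in standard textbook notation rather than anything quoted from the listed books, a finite discounted MDP and its optimal value function can be written as:

    % A finite discounted MDP and the Bellman optimality equation (standard notation).
    \[
      \mathcal{M} = (S, A, P, r, \gamma), \qquad
      V^{*}(s) = \max_{a \in A} \Big[\, r(s,a) + \gamma \sum_{s' \in S} P(s' \mid s, a)\, V^{*}(s') \,\Big],
    \]
    % S: states, A: actions, P(s' | s, a): transition probabilities,
    % r(s, a): expected one-step reward, gamma in [0, 1): discount factor.

Any policy that is greedy with respect to V* is then optimal, which is the property the planning and dynamic-programming books in this list build on.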
Handbook of Markov Decision Processes: Methods and Applications
Feinberg, Eugene A. (EDT) / Shwartz, Adam (EDT) | Kluwer
698,230 KRW | 2002-01-01 | ISBN 9780792374596
1.1 AN OVERVIEW OF MARKOV DECISION PROCESSES. The theory of Markov Decision Processes, also known under several other names including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming, studies sequential optimization of discrete-time stochastic systems.
Compare prices
Markov Decision Processes: Discrete Stochastic Dynamic Programming
Puterman, Martin L. | Wiley-Interscience
157,240 KRW | 2005-03-01 | ISBN 9780471727828
The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.
"This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential." (Zentralblatt für Mathematik)
". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes." (Journal of the American Statistical Association)
Compare prices
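To make the phrase "discrete stochastic dynamic programming" in the entry above concrete, here is a minimal value-iteration sketch on a small, made-up finite MDP. The transition array P, reward array R, and the state/action sizes are hypothetical placeholders chosen only for illustration; value iteration itself is a standard algorithm, not a method taken verbatim from any listed book.

    import numpy as np

    # Hypothetical 3-state, 2-action MDP (numbers are illustrative only).
    P = np.array([                                  # P[a, s, s'] = transition probability
        [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.3, 0.7]],   # action 0
        [[0.5, 0.5, 0.0], [0.0, 0.6, 0.4], [0.2, 0.0, 0.8]],   # action 1
    ])
    R = np.array([[1.0, 0.0, 0.5],                  # R[a, s] = expected one-step reward
                  [0.5, 1.0, 0.0]])
    gamma = 0.95

    V = np.zeros(3)
    for _ in range(10_000):
        Q = R + gamma * (P @ V)                     # Bellman optimality backup
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < 1e-10:       # stop at (numerical) fixed point
            V = V_new
            break
        V = V_new

    policy = Q.argmax(axis=0)                       # greedy policy w.r.t. converged values
    print("V* ~", np.round(V, 3), "| greedy policy:", policy)

The fixed-point view behind this loop, along with policy iteration and its convergence analysis, is the kind of material a text on discrete-time MDPs treats rigorously.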
Simulation-Based Algorithms for Markov Decision Processes (Paperback)
Chang, Hyeong Soo / Fu, Michael C. / Hu, Jiaqiao / M | Springer
0 KRW | 2007-03-01 | ISBN 9781846286896
Markov decision process (MDP) models are widely used for modeling sequential decision-making problems that arise in engineering, economics, computer science, and the social sciences. It is well-known that many real-world problems modeled by MDPs have huge state and/or action spaces, leading to the notorious curse of dimensionality that makes practical solution of the resulting models intractable. In other cases, the system of interest is complex enough that it is not feasible to specify some of the MDP model parameters explicitly, but simulation samples are readily available (e.g., for random transitions and costs). For these settings, various sampling and population-based numerical algorithms have been developed recently to overcome the difficulties of computing an optimal solution in terms of a policy and/or value function. Specific approaches include: multi-stage adaptive sampling; evolutionary policy iteration; evolutionary random policy search; and model reference adaptive search.
Simulation-based Algorithms for Markov Decision Processes brings this state-of-the-art research together for the first time and presents it in a manner that makes it accessible to researchers with varying interests and backgrounds. In addition to providing numerous specific algorithms, the exposition includes both illustrative numerical examples and rigorous theoretical convergence results. The algorithms developed and analyzed differ from the successful computational methods for solving MDPs based on neuro-dynamic programming or reinforcement learning and will complement work in those areas.
Furthermore, the authors show how to combine the various algorithms introduced with approximate dynamic programming methods that reduce the size of the state space and ameliorate the effects of dimensionality. The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation but will be a valuable source of instruction and reference for students of control and operations research.
Compare prices
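The description above concerns settings where the MDP model is not specified explicitly but a simulator can generate transitions and rewards. As a point of comparison only, the sketch below shows one generic simulation-based method (tabular Q-learning); it is not one of the specific algorithms named in that book (multi-stage adaptive sampling, evolutionary policy iteration, and so on), and the sample_step simulator and q_learning helper are hypothetical names introduced here.

    import random

    def q_learning(sample_step, states, actions,
                   episodes=500, horizon=50,
                   alpha=0.1, gamma=0.95, epsilon=0.1):
        """Tabular Q-learning from simulated transitions.

        sample_step(s, a) -> (next_state, reward) is a user-supplied simulator;
        no explicit transition matrix is ever required.
        """
        Q = {(s, a): 0.0 for s in states for a in actions}
        for _ in range(episodes):
            s = random.choice(states)
            for _ in range(horizon):
                # Epsilon-greedy action selection.
                if random.random() < epsilon:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a_: Q[(s, a_)])
                s_next, r = sample_step(s, a)
                # One-step temporal-difference update toward the sampled target.
                target = r + gamma * max(Q[(s_next, a_)] for a_ in actions)
                Q[(s, a)] += alpha * (target - Q[(s, a)])
                s = s_next
        return Q

Calling q_learning with a simulator for a concrete problem returns a table of state-action value estimates from which a greedy policy can be read off, which is the basic "learn from samples instead of the model" idea the blurb refers to.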
Partially Observed Markov Decision Processes: From Filtering to Controlled Sensing
Vikram Krishnamurthy | Cambridge University Press
209,470 KRW | 2016-03-21 | ISBN 9781107134607
This book covers formulation, algorithms, and structural results of partially observed Markov decision processes, whilst linking theory to real-world applications in controlled sensing. Computations are kept to a minimum, enabling students and researchers in engineering, operations research, and economics to understand the methods and determine the structure of their optimal solution.
Compare prices
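For context on the "partially observed" setting in the entry above (standard textbook material, not a quotation from the book): when the state is not observed directly, control is based on a belief over states, updated by Bayes' rule after each action a and observation o. Here P is the transition model and O the observation model, both assumed notation rather than the book's.

    \[
      b'(s') \;=\;
      \frac{O(o \mid s', a) \sum_{s \in S} P(s' \mid s, a)\, b(s)}
           {\sum_{\tilde{s} \in S} O(o \mid \tilde{s}, a) \sum_{s \in S} P(\tilde{s} \mid s, a)\, b(s)}.
    \]

This belief recursion is a filtering step, and treating the POMDP as an MDP over belief states is the usual way the filtering and control viewpoints are connected.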
Dynamic Probabilistic Systems, Volume 2: Semi-Markov and Decision Processes
Howard, Ronald A. | Dover
45,120 KRW | 2007-06-11 | ISBN 9780486458724
This book is an integrated work published in two volumes. The first volume treats the basic Markov process and its variants; the second, semi-Markov and decision processes. It equips readers to formulate, analyze, and evaluate simple and advanced Markov models of systems, ranging from genetics to space engineering to marketing. 1971 edition.
Compare prices
Markov Chains and Decision Processes for Engineers and Managers
Theodore J. Sheskin | CRC Press
356,400 KRW | 2021-01-01 | ISBN 9781420051117
Presents an introduction to finite Markov chains and Markov decision processes, with applications in engineering and management. This book introduces discrete-time, finite-state Markov chains, and Markov decision processes. It describes both algorithms and applications, enabling students to understand the logical basis for the algorithms.
Compare prices