BOOKPRICE.co.kr
Book price comparison site


An Information-Theoretic Approach to Neural Computing (Hardcover)

Dragan Obradovic, Gustavo Deco (Authors)
Springer-Verlag New York Inc
230,020 KRW (list price)

New Books

Store | Sale Price | Discount | Shipping | Benefits/Extra | Effective Lowest Price
— | 188,610 KRW | -18% | 0 KRW | 9,440 KRW | 179,170 KRW
yes24 | (results still loading)
교보문고 | (results still loading)
Note: the search results may include other books.

Used Books

Store | Type | Listings | Lowest Price
(results still loading)

eBook

Store | List Price | Sale Price | Points | Effective Lowest Price
(results still loading)


Book Information

· Title: An Information-Theoretic Approach to Neural Computing (Hardcover)
· Category: Foreign Books > Computers > Neural Networks
· ISBN: 9780387946665
· Pages: 262
· Publication date: 1996-02-08

Table of Contents

1 Introduction
2 Preliminaries of Information Theory and Neural Networks
  2.1 Elements of Information Theory
    2.1.1 Entropy and Information
    2.1.2 Joint Entropy and Conditional Entropy
    2.1.3 Kullback-Leibler Entropy
    2.1.4 Mutual Information
    2.1.5 Differential Entropy, Relative Entropy and Mutual Information
    2.1.6 Chain Rules
    2.1.7 Fundamental Information Theory Inequalities
    2.1.8 Coding Theory
  2.2 Elements of the Theory of Neural Networks
    2.2.1 Neural Network Modeling
    2.2.2 Neural Architectures
    2.2.3 Learning Paradigms
    2.2.4 Feedforward Networks: Backpropagation
    2.2.5 Stochastic Recurrent Networks: Boltzmann Machine
    2.2.6 Unsupervised Competitive Learning
    2.2.7 Biological Learning Rules
I: Unsupervised Learning
3 Linear Feature Extraction: Infomax Principle
  3.1 Principal Component Analysis: Statistical Approach
    3.1.1 PCA and Diagonalization of the Covariance Matrix
    3.1.2 PCA and Optimal Reconstruction
    3.1.3 Neural Network Algorithms and PCA
  3.2 Information Theoretic Approach: Infomax
    3.2.1 Minimization of Information Loss Principle and Infomax Principle
    3.2.2 Upper Bound of Information Loss
    3.2.3 Information Capacity as a Lyapunov Function of the General Stochastic Approximation
4 Independent Component Analysis: General Formulation and Linear Case
  4.1 ICA-Definition
  4.2 General Criteria for ICA
    4.2.1 Cumulant Expansion Based Criterion for ICA
    4.2.2 Mutual Information as Criterion for ICA
  4.3 Linear ICA
  4.4 Gaussian Input Distribution and Linear ICA
    4.4.1 Networks With Anti-Symmetric Lateral Connections
    4.4.2 Networks With Symmetric Lateral Connections
    4.4.3 Examples of Learning with Symmetric and Anti-Symmetric Networks
  4.5 Learning in Gaussian ICA with Rotation Matrices: PCA
    4.5.1 Relationship Between PCA and ICA in Gaussian Input Case
    4.5.2 Linear Gaussian ICA and the Output Dimension Reduction
  4.6 Linear ICA in Arbitrary Input Distribution
    4.6.1 Some Properties of Cumulants at the Output of a Linear Transformation
    4.6.2 The Edgeworth Expansion Criteria and Theorem 4.6.2
    4.6.3 Algorithms for Output Factorization in the Non-Gaussian Case
    4.6.4 Experimental Results of Linear ICA Algorithms in the Non-Gaussian Case
5 Nonlinear Feature Extraction: Boolean Stochastic Networks
  5.1 Infomax Principle for Boltzmann Machines
    5.1.1 Learning Model
    5.1.2 Examples of Infomax Principle in Boltzmann Machine
  5.2 Redundancy Minimization and Infomax for the Boltzmann Machine
    5.2.1 Learning Model
    5.2.2 Numerical Complexity of the Learning Rule
    5.2.3 Factorial Learning Experiments
    5.2.4 Receptive Fields Formation from a Retina
  5.3 Appendix
6 Nonlinear Feature Extraction: Deterministic Neural Networks
  6.1 Redundancy Reduction by Triangular Volume Conserving Architectures
    6.1.1 Networks with Linear, Sigmoidal and Higher Order Activation Functions
    6.1.2 Simulations and Results
  6.2 Unsupervised Modeling of Chaotic Time Series
    6.2.1 Dynamical System Modeling
  6.3 Redundancy Reduction by General Symplectic Architectures
    6.3.1 General Entropy Preserving Nonlinear Maps
    6.3.2 Optimizing a Parameterized Symplectic Map
    6.3.3 Density Estimation and Novelty Detection
  6.4 Example: Theory of Early Vision
    6.4.1 Theoretical Background
    6.4.2 Retina Model
II: Supervised Learning
7 Supervised Learning and Statistical Estimation
  7.1 Statistical Parameter Estimation - Basic Definitions
    7.1.1 Cramer-Rao Inequality for Unbiased Estimators
  7.2 Maximum Likelihood Estimators
    7.2.1 Maximum Likelihood and the Information Measure
  7.3 Maximum A Posteriori Estimation
  7.4 Extensions of MLE to Include Model Selection
    7.4.1 Akaike's Information Theoretic Criterion (AIC)
    7.4.2 Minimal Description Length and Stochastic Complexity
  7.5 Generalization and Learning on the Same Data Set
8 Statistical Physics Theory of Supervised Learning and Generalization
  8.1 Statistical Mechanics Theory of Supervised Learning
    8.1.1 Maximu…

Book DB provided by: Aladin (www.aladin.co.kr)