Simulation-based optimization : parametric optimization techniques and reinforcement learning
Record Type:
Electronic resources : Monograph/item
Title/Author:
Simulation-based optimization / by Abhijit Gosavi.
Remainder of title:
parametric optimization techniques and reinforcement learning /
Author:
Gosavi, Abhijit.
Published:
Boston, MA : Springer US, 2015.
Description:
xxvi, 508 p. : ill., digital ; 24 cm.
Contained By:
Springer eBooks
Subject:
Probabilities.
Online resource:
http://dx.doi.org/10.1007/978-1-4899-7491-4
ISBN:
9781489974914 (electronic bk.)
Gosavi, Abhijit.
Simulation-based optimization [electronic resource] : parametric optimization techniques and reinforcement learning / by Abhijit Gosavi. - 2nd ed. - Boston, MA : Springer US, 2015. - xxvi, 508 p. : ill., digital ; 24 cm. - (Operations research/computer science interfaces series, 1387-666X ; v.55).
Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques especially designed for discrete-event, stochastic systems that can be simulated but whose analytical models are difficult to express in closed mathematical form. Key features of this revised and improved Second Edition include:
- Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms)
- Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics
- An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search via API, Q-P-Learning, actor-critics, and learning automata
- A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs via Banach fixed point theory and Ordinary Differential Equations
Themed around three areas in separate sets of chapters (Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis), this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
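To give a concrete flavor of the tabular Q-Learning technique the description mentions, here is a minimal sketch on a toy five-state corridor MDP. The environment, constants, and helper names below are illustrative inventions for this note, not code from the book:

```python
import random

# Minimal tabular Q-Learning on a toy corridor MDP (illustrative only).
# States 0..4; action 0 = left, 1 = right; entering state 4 pays reward 1.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(s, a):
    """Deterministic transition: move one cell, clipped to the corridor."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def greedy(q):
    """Greedy action; ties broken randomly so the untrained agent wanders."""
    return random.randrange(2) if q[0] == q[1] else (1 if q[1] > q[0] else 0)

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.randrange(2) if random.random() < EPS else greedy(Q[s])
        s2, r, done = step(s, a)
        # Q-Learning update: nudge Q(s,a) toward the Bellman target.
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [1 if Q[s][1] > Q[s][0] else 0 for s in range(N_STATES - 1)]
print(policy)  # the learned greedy policy moves right everywhere: [1, 1, 1, 1]
```

The update rule is the standard model-free one: the agent never sees the transition function, only sampled transitions, which is exactly the simulation-based setting the book addresses.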
ISBN: 9781489974914 (electronic bk.)
Standard No.: 10.1007/978-1-4899-7491-4 (doi)
Subjects--Topical Terms:
Probabilities.
LC Class. No.: TA340
Dewey Class. No.: 519.2
LDR
:03401nmm a2200349 a 4500
001
459982
003
DE-He213
005
20150610110138.0
006
m d
007
cr nn 008maaau
008
151110s2015 mau s 0 eng d
020
$a
9781489974914 (electronic bk.)
020
$a
9781489974907 (paper)
024
7
$a
10.1007/978-1-4899-7491-4
$2
doi
035
$a
978-1-4899-7491-4
040
$a
GP
$c
GP
041
0
$a
eng
050
4
$a
TA340
072
7
$a
KJT
$2
bicssc
072
7
$a
KJMD
$2
bicssc
072
7
$a
BUS049000
$2
bisacsh
082
0 4
$a
519.2
$2
22
090
$a
TA340
$b
.G676 2015
100
1
$a
Gosavi, Abhijit.
$3
711080
245
1 0
$a
Simulation-based optimization
$h
[electronic resource] :
$b
parametric optimization techniques and reinforcement learning /
$c
by Abhijit Gosavi.
250
$a
2nd ed.
260
$a
Boston, MA :
$b
Springer US :
$b
Imprint: Springer,
$c
2015.
300
$a
xxvi, 508 p. :
$b
ill., digital ;
$c
24 cm.
490
1
$a
Operations research/computer science interfaces series,
$x
1387-666X ;
$v
v.55
505
0
$a
Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
520
$a
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include: Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods, such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms) Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations Themed around three areas in separate sets of chapters Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
650
0
$a
Probabilities.
$3
182046
650
0
$a
Mathematical optimization.
$3
183292
650
1 4
$a
Economics/Management Science.
$3
273684
650
2 4
$a
Operation Research/Decision Theory.
$3
585050
650
2 4
$a
Operations Research, Management Science.
$3
511451
650
2 4
$a
Simulation and Modeling.
$3
273719
710
2
$a
SpringerLink (Online service)
$3
273601
773
0
$t
Springer eBooks
830
0
$a
Operations research/computer science interfaces series ;
$v
v.55.
$3
711081
856
4 0
$u
http://dx.doi.org/10.1007/978-1-4899-7491-4
950
$a
Business and Economics (Springer-11643)
Items
Inventory Number: 000000109489
Location Name: Electronic Collection (電子館藏)
Item Class: Book (圖書)
Material type: e-Book (電子書)
Call number: EB TA340 G676 2015
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Multimedia file: http://dx.doi.org/10.1007/978-1-4899-7491-4