Multi-armed bandits : theory and applications to online learning in networks /
Zhao, Qing (Ph.D. in electrical engineering)
Record Type:
Electronic resources : Monograph/item
Title/Author:
Multi-armed bandits / Qing Zhao.
Remainder of title:
theory and applications to online learning in networks /
Varying form of title:
Theory and applications to online learning in networks
Author:
Zhao, Qing
Description:
1 online resource (167 p.)
Subject:
Machine learning.
Online resource:
https://portal.igpublish.com/iglibrary/search/MCPB0006505.html
ISBN:
9781627056380
Multi-armed bandits [electronic resource] : theory and applications to online learning in networks / Qing Zhao. - 1 online resource (167 p.) - (Synthesis lectures on communication networks ; 22)
Includes bibliographical references (pages 127-145).
Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem, posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments in both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approach, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and socio-economic systems, aiming to illuminate the connections between the Bayesian and frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
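As a rough illustration of the two schools contrasted in the abstract (a sketch of this editor's own, not taken from the book), the Python snippet below runs a Bayesian policy (Thompson sampling with Beta-Bernoulli posteriors) against a frequentist policy (UCB1) on a hypothetical two-armed Bernoulli bandit; the arm means, horizon, and function names are illustrative assumptions.

import math
import random

# Illustrative sketch only (not from the book): a two-armed Bernoulli bandit
# played by a Bayesian policy (Thompson sampling) and a frequentist policy (UCB1).
# TRUE_MEANS and HORIZON are made-up example values.
TRUE_MEANS = [0.4, 0.6]
HORIZON = 10_000

def pull(arm):
    # Bernoulli reward from the chosen arm.
    return 1 if random.random() < TRUE_MEANS[arm] else 0

def thompson_sampling():
    # Beta(1, 1) prior on each arm's mean; sample a mean per arm, play the best sample.
    alpha, beta = [1, 1], [1, 1]
    total = 0
    for _ in range(HORIZON):
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(2)]
        arm = samples.index(max(samples))
        reward = pull(arm)
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total += reward
    return total

def ucb1():
    # Play each arm once, then choose the arm maximizing sample mean plus a confidence bonus.
    counts, sums = [0, 0], [0.0, 0.0]
    total = 0
    for t in range(1, HORIZON + 1):
        if t <= 2:
            arm = t - 1
        else:
            scores = [sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
                      for a in range(2)]
            arm = scores.index(max(scores))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total

if __name__ == "__main__":
    print("Thompson sampling cumulative reward:", thompson_sampling())
    print("UCB1 cumulative reward:             ", ucb1())

Both policies shift play toward the better arm as evidence accumulates; the canonical Bayesian and frequentist models treated in Chapters 2 and 4 formalize guarantees of this kind.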
Mode of access: World Wide Web.
ISBN: 9781627056380
Subjects--Topical Terms:
Machine learning.
Index Terms--Genre/Form:
Electronic books.
LC Class. No.: Q325.5
Dewey Class. No.: 006.3/1
LDR 02372nmm a2200289 i 4500
001 603306
006 m eo d
007 cr cn |||m|||a
008 211117s2019 cau ob 000 0 eng d
020 $a 9781627056380
020 $a 9781627058711
020 $a 9781681736372
035 $a MCPB0006505
040 $a iG Publishing $b eng $c iG Publishing $e rda
050 0 0 $a Q325.5
082 0 0 $a 006.3/1
100 1 $a Zhao, Qing $c (Ph.D. in electrical engineering), $e author. $3 899579
245 1 0 $a Multi-armed bandits $h [electronic resource] : $b theory and applications to online learning in networks / $c Qing Zhao.
246 3 0 $a Theory and applications to online learning in networks
264 1 $a [San Rafael, California] : $b Morgan & Claypool Publishers, $c 2019.
300 $a 1 online resource (167 p.)
336 $a text $b txt $2 rdacontent
337 $a computer $b c $2 rdamedia
338 $a online resource $b cr $2 rdacarrier
490 1 $a Synthesis lectures on communication networks ; $v 22
504 $a Includes bibliographical references (pages 127-145).
520 3 $a Multi-armed bandit problems pertain to optimal sequential decision making and learning in unknown environments. Since the first bandit problem, posed by Thompson in 1933 in the context of clinical trials, bandit problems have enjoyed lasting attention from multiple research communities and have found a wide range of applications across diverse domains. This book covers classic results and recent developments in both Bayesian and frequentist bandit problems. We start in Chapter 1 with a brief overview of the history of bandit problems, contrasting the two schools of approach, Bayesian and frequentist, and highlighting foundational results and key applications. Chapters 2 and 4 cover, respectively, the canonical Bayesian and frequentist bandit models. In Chapters 3 and 5, we discuss major variants of the canonical bandit models that lead to new directions, bring in new techniques, and broaden the applications of this classical problem. In Chapter 6, we present several representative application examples in communication networks and socio-economic systems, aiming to illuminate the connections between the Bayesian and frequentist formulations of bandit problems and how structural results pertaining to one may be leveraged to obtain solutions under the other.
538 $a Mode of access: World Wide Web.
650 0 $a Machine learning. $3 188639
650 0 $a Reinforcement learning. $3 349131
655 4 $a Electronic books. $2 local. $3 214472
830 0 $a Synthesis lectures on communication networks ; $v 23. $3 899522
856 4 0 $u https://portal.igpublish.com/iglibrary/search/MCPB0006505.html
Items
1 record
Inventory Number: 000000202694
Location Name: Electronic collection (電子館藏)
Item Class: Book (圖書)
Material type: E-book (電子書)
Call number: EB Q325.5
Usage Class: Normal (一般使用)
Loan Status: On shelf
No. of reservations: 0
Opac note:
Attachments:
Multimedia
Multimedia file: https://portal.igpublish.com/iglibrary/search/MCPB0006505.html