Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
Record type:
Bibliographic - Electronic resource : Monograph/item
Title/Author:
Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
Author:
Zhu, Jingyi.
Publisher:
Ann Arbor : ProQuest Dissertations & Theses, 2020
Extent:
308 p.
Note:
Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
Note:
Advisor: Hobbs, Benjamin F.
Contained By:
Dissertations Abstracts International, 82-03B.
Subject:
Statistics.
Electronic Resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28068816
ISBN:
9798662430402
Thesis (Ph.D.)--The Johns Hopkins University, 2020.
This item must not be sold to any third party vendors.
This work analyzes stochastic approximation algorithms with non-decaying gains as applied to time-varying problems. The setting is to minimize a sequence of scalar-valued loss functions f_k(·) at sampling times τ_k, or to locate the root of a sequence of vector-valued functions g_k(·) at τ_k, with respect to a parameter θ ∈ R^p. The only available information is noise-corrupted observations of f_k(·) or g_k(·) evaluated at one or two design points. In this time-varying setup, the gain has to be bounded away from zero so that the recursive estimate, denoted θ̂_k, can maintain its momentum in tracking the time-varying optimum, denoted θ*_k. Given that {θ*_k} is perpetually varying, the best property θ̂_k can have is to remain near the solution θ*_k (concentration behavior) in place of convergence, which is unattainable here. Chapter 3 provides a bound for the root-mean-squared error and a bound for the mean absolute deviation. The only assumption imposed on {θ*_k} is that the average distance between two consecutive optimal parameter vectors is bounded from above; overall, the bounds hold under a mild assumption on the time-varying drift and a modest restriction on the observation noise and the bias term. After establishing the tracking capability in Chapter 3, Chapter 4 discusses the concentration behavior of θ̂_k. The weak-convergence limit of the continuous interpolation of θ̂_k is shown to follow the trajectory of a non-autonomous ordinary differential equation. The variation-of-parameters formula is then applied to derive a computable upper bound for the probability that θ̂_k deviates from θ*_k beyond a given threshold. Chapters 3 and 4 are probabilistic arguments and may not provide much guidance on gain-tuning strategies useful for a single experiment run.
Therefore, Chapter 5 discusses a data-dependent gain-tuning strategy based on estimating the Hessian information and the noise level. Overall, this work answers the questions "what is the estimate for the dynamical system θ*_k" and "how much can we trust θ̂_k as an estimate for θ*_k".
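The tracking behavior the abstract describes can be illustrated with a minimal simulation. This is a hedged sketch, not code from the dissertation: it assumes a quadratic loss f_k(θ) = ||θ − θ*_k||²/2, a random-walk drift for θ*_k, and noisy gradient observations, then runs the constant-gain recursion θ̂_{k+1} = θ̂_k − a·g_k(θ̂_k). All parameter values (gain, drift scale, noise scale) are illustrative choices.

```python
import numpy as np

# Illustrative sketch (assumptions stated above, not the author's code):
# constant-gain stochastic approximation tracking a time-varying optimum
# theta*_k that follows a random walk. The loss is observed only through
# the noisy gradient g_k(theta) = (theta - theta*_k) + noise.
rng = np.random.default_rng(0)
p = 3        # parameter dimension (hypothetical)
a = 0.2      # constant (non-decaying) gain
drift = 0.05 # scale of the optimum's per-step drift
noise = 0.1  # observation-noise scale

theta_star = np.zeros(p)  # time-varying optimum theta*_k
theta_hat = np.ones(p)    # recursive estimate theta_hat_k
errors = []
for k in range(2000):
    theta_star = theta_star + drift * rng.standard_normal(p)       # optimum drifts
    g = (theta_hat - theta_star) + noise * rng.standard_normal(p)  # noisy gradient
    theta_hat = theta_hat - a * g                                  # SA recursion
    errors.append(np.linalg.norm(theta_hat - theta_star))

# Because the gain is bounded away from zero, the error does not converge
# to zero; instead the estimate stays concentrated near the moving optimum.
print(float(np.mean(errors[1000:])))
```

A decaying gain (e.g. a_k = a/k) would eventually freeze the estimate and lose the drifting target, which is why the non-decaying gain is essential in this setting.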
ISBN: 9798662430402
Subjects--Topical Terms:
Statistics.
Subjects--Index Terms:
Stochastic approximation
LDR   03548nmm a2200409 4500
001   594600
005   20210521101704.5
008   210917s2020 ||||||||||||||||| ||eng d
020      $a 9798662430402
035      $a (MiAaPQ)AAI28068816
035      $a (MiAaPQ)0098vireo5334Zhu
035      $a AAI28068816
040      $a MiAaPQ $c MiAaPQ
100 1    $a Zhu, Jingyi. $3 886636
245 10   $a Error Bounds and Applications for Stochastic Approximation with Non-decaying Gain.
260 1    $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300      $a 308 p.
500      $a Source: Dissertations Abstracts International, Volume: 82-03, Section: B.
500      $a Advisor: Hobbs, Benjamin F.
502      $a Thesis (Ph.D.)--The Johns Hopkins University, 2020.
506      $a This item must not be sold to any third party vendors.
520      $a
This work analyzes stochastic approximation algorithms with non-decaying gains as applied to time-varying problems. The setting is to minimize a sequence of scalar-valued loss functions f_k(·) at sampling times τ_k, or to locate the root of a sequence of vector-valued functions g_k(·) at τ_k, with respect to a parameter θ ∈ R^p. The only available information is noise-corrupted observations of f_k(·) or g_k(·) evaluated at one or two design points. In this time-varying setup, the gain has to be bounded away from zero so that the recursive estimate, denoted θ̂_k, can maintain its momentum in tracking the time-varying optimum, denoted θ*_k. Given that {θ*_k} is perpetually varying, the best property θ̂_k can have is to remain near the solution θ*_k (concentration behavior) in place of convergence, which is unattainable here. Chapter 3 provides a bound for the root-mean-squared error and a bound for the mean absolute deviation. The only assumption imposed on {θ*_k} is that the average distance between two consecutive optimal parameter vectors is bounded from above; overall, the bounds hold under a mild assumption on the time-varying drift and a modest restriction on the observation noise and the bias term. After establishing the tracking capability in Chapter 3, Chapter 4 discusses the concentration behavior of θ̂_k. The weak-convergence limit of the continuous interpolation of θ̂_k is shown to follow the trajectory of a non-autonomous ordinary differential equation. The variation-of-parameters formula is then applied to derive a computable upper bound for the probability that θ̂_k deviates from θ*_k beyond a given threshold. Chapters 3 and 4 are probabilistic arguments and may not provide much guidance on gain-tuning strategies useful for a single experiment run.
Therefore, Chapter 5 discusses a data-dependent gain-tuning strategy based on estimating the Hessian information and the noise level. Overall, this work answers the questions "what is the estimate for the dynamical system θ*_k" and "how much can we trust θ̂_k as an estimate for θ*_k".
590      $a School code: 0098.
650  4   $a Statistics. $3 182057
650  4   $a Artificial intelligence. $3 194058
650  4   $a Systems science. $3 730372
653      $a Stochastic approximation
653      $a Non-decaying gain
653      $a Constant gain
653      $a Error bound
653      $a Time-varying systems
653      $a ODE limit
653      $a Second-order algorithms
690      $a 0463
690      $a 0790
690      $a 0800
710 2    $a The Johns Hopkins University. $b Applied Mathematics and Statistics. $3 886637
773 0    $t Dissertations Abstracts International $g 82-03B.
790      $a 0098
791      $a Ph.D.
792      $a 2020
793      $a English
856 40   $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=28068816
Holdings (1 item):
Barcode: 000000193560
Location: Electronic collection
Circulation category: Book
Material type: E-book
Call number: EB 2020
Use type: Normal
Status: In cataloging
Holds: 0