Self-organization of Multi-agent Systems Using Markov Chain Models. / Arizona State University.
Record type: Bibliographic - electronic resource : Monograph/item
Title / Author: Self-organization of Multi-agent Systems Using Markov Chain Models.
Author: Biswal, Shiba.
Publisher: Ann Arbor : ProQuest Dissertations & Theses, 2020
Physical description: 160 p.
Note: Source: Dissertations Abstracts International, Volume: 81-11, Section: B.
Note: Advisor: Berman, Spring.
Contained By: Dissertations Abstracts International, 81-11B.
Subject: Engineering.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27955761
ISBN: 9798645458423
LDR      03645nmm a2200373 4500
001      594554
005      20210521101652.5
008      210917s2020 ||||||||||||||||| ||eng d
020      $a 9798645458423
035      $a (MiAaPQ)AAI27955761
035      $a AAI27955761
040      $a MiAaPQ $c MiAaPQ
100 1    $a Biswal, Shiba. $3 886561
245 1 0  $a Self-organization of Multi-agent Systems Using Markov Chain Models.
260 1    $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300      $a 160 p.
500      $a Source: Dissertations Abstracts International, Volume: 81-11, Section: B.
500      $a Advisor: Berman, Spring.
502      $a Thesis (Ph.D.)--Arizona State University, 2020.
506      $a This item must not be sold to any third party vendors.
520      $a The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal. This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
590      $a School code: 0010.
650   4  $a Engineering. $3 210888
650   4  $a Robotics. $3 181952
650   4  $a Applied mathematics. $3 377601
653      $a Control theory
653      $a Markov processes
653      $a Multi-agent systems
653      $a Optimization
653      $a Swarm robotics
690      $a 0537
690      $a 0771
690      $a 0364
710 2    $a Arizona State University. $b Mechanical Engineering. $3 766105
773 0    $t Dissertations Abstracts International $g 81-11B.
790      $a 0010
791      $a Ph.D.
792      $a 2020
793      $a English
856 4 0  $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27955761
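The abstract (field 520 above) describes the stabilization problem: choosing the transition probabilities of the agents' Markov chain so that a chosen target distribution becomes invariant, with the population density evolving according to a mean-field model. The following Python snippet is a minimal illustrative sketch only, not code or a method from the dissertation (which treats arbitrary and continuous state spaces); it assumes a finite state space, a hypothetical target distribution pi, and a simple Metropolis-Hastings-style construction of a column-stochastic matrix P whose stationary distribution is pi, then iterates the discrete-time mean-field update x_{k+1} = P x_k.

# Minimal sketch (assumption: not from the dissertation): make a target
# distribution pi invariant for a finite-state, discrete-time Markov chain,
# then iterate the mean-field model x_{k+1} = P x_k.
import numpy as np

def metropolis_transition(pi, proposal):
    """Given a symmetric, column-stochastic proposal matrix, return a
    column-stochastic P whose stationary distribution is pi (hypothetical helper)."""
    n = len(pi)
    P = np.zeros((n, n))
    for j in range(n):                      # j = current state
        for i in range(n):                  # i = proposed next state
            if i != j:
                P[i, j] = proposal[i, j] * min(1.0, pi[i] / pi[j])
        P[j, j] = 1.0 - P[:, j].sum()       # leftover probability: stay at j
    return P

pi = np.array([0.4, 0.3, 0.2, 0.1])         # target agent distribution (example values)
proposal = np.full((4, 4), 0.25)            # symmetric uniform proposal
P = metropolis_transition(pi, proposal)

x = np.array([1.0, 0.0, 0.0, 0.0])          # all agents start in state 0
for _ in range(50):
    x = P @ x                               # mean-field update x_{k+1} = P x_k
print(np.round(x, 3))                       # approaches [0.4 0.3 0.2 0.1]

Detailed balance makes pi invariant under P, mirroring the stabilization objective stated in the abstract; the dissertation's further problems (stopping agent transitions once equilibrium is reached, and optimizing convergence speed) are not illustrated by this sketch.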
Holdings (1 item)
Barcode: 000000193514
Location: Electronic collection
Circulation category: Book (1圖書)
Material type: E-book
Call number: EB 2020
Use type: Normal
Loan status: In cataloging process
Hold status: (none)
Notes: (none)
Attachments: 0