Self-organization of Multi-agent Systems Using Markov Chain Models.
Record Type: Electronic resources : Monograph/item
Title/Author: Self-organization of Multi-agent Systems Using Markov Chain Models.
Author: Biswal, Shiba.
Published: Ann Arbor : ProQuest Dissertations & Theses, 2020
Description: 160 p.
Notes: Source: Dissertations Abstracts International, Volume: 81-11, Section: B.
Notes: Advisor: Berman, Spring.
Contained By: Dissertations Abstracts International, 81-11B.
Subject: Engineering.
Online resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27955761
ISBN: 9798645458423
Thesis (Ph.D.)--Arizona State University, 2020.
This item must not be sold to any third party vendors.
The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal. This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed: the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states.
Since agents that continue to transition between states after the target distribution is reached potentially waste energy, the second problem addressed in this dissertation is the construction of feedback control laws that stop these transitions once the equilibrium distribution is attained. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
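The stabilization problem described above, designing transition probabilities that make a target distribution invariant, can be illustrated with the standard Metropolis-Hastings construction (a generic sketch, not the dissertation's own design; the ring graph, target distribution, and agent counts below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 4
pi = np.array([0.1, 0.2, 0.3, 0.4])  # hypothetical target distribution

# Symmetric proposal: each agent proposes a move to either ring neighbor.
Q = np.zeros((n_states, n_states))
for i in range(n_states):
    Q[i, (i - 1) % n_states] = 0.5
    Q[i, (i + 1) % n_states] = 0.5

# Metropolis acceptance makes pi stationary: P[i,j] = Q[i,j] * min(1, pi[j]/pi[i]).
P = np.zeros_like(Q)
for i in range(n_states):
    for j in range(n_states):
        if i != j and Q[i, j] > 0:
            P[i, j] = Q[i, j] * min(1.0, pi[j] / pi[i])
    P[i, i] = 1.0 - P[i].sum()  # remaining probability: stay in place

assert np.allclose(pi @ P, pi)  # pi is invariant under P

# Simulate a swarm of independent agents, each following P.
n_agents, n_steps = 5000, 200
states = rng.integers(0, n_states, size=n_agents)
for _ in range(n_steps):
    u = rng.random(n_agents)
    cum = P[states].cumsum(axis=1)          # per-agent transition CDF
    states = (u[:, None] < cum).argmax(axis=1)

empirical = np.bincount(states, minlength=n_states) / n_agents
print(empirical)  # close to pi
```

Note also the abstract's caveat: the swarm-level distribution converges to pi, but each individual agent keeps transitioning forever, which is exactly what motivates the feedback control laws of the second problem.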
ISBN: 9798645458423
Subjects--Topical Terms: Engineering.
Subjects--Index Terms: Control theory
LDR    03645nmm a2200373 4500
001    594554
005    20210521101652.5
008    210917s2020 ||||||||||||||||| ||eng d
020    $a 9798645458423
035    $a (MiAaPQ)AAI27955761
035    $a AAI27955761
040    $a MiAaPQ $c MiAaPQ
100 1  $a Biswal, Shiba. $3 886561
245 10 $a Self-organization of Multi-agent Systems Using Markov Chain Models.
260 1  $a Ann Arbor : $b ProQuest Dissertations & Theses, $c 2020
300    $a 160 p.
500    $a Source: Dissertations Abstracts International, Volume: 81-11, Section: B.
500    $a Advisor: Berman, Spring.
502    $a Thesis (Ph.D.)--Arizona State University, 2020.
506    $a This item must not be sold to any third party vendors.
520    $a The problem of modeling and controlling the distribution of a multi-agent system has recently evolved into an interdisciplinary effort. When the agent population is very large, i.e., at least on the order of hundreds of agents, it is important that techniques for analyzing and controlling the system scale well with the number of agents. One scalable approach to characterizing the behavior of a multi-agent system is possible when the agents' states evolve over time according to a Markov process. In this case, the density of agents over space and time is governed by a set of difference or differential equations known as a mean-field model, whose parameters determine the stochastic control policies of the individual agents. These models often have the advantage of being easier to analyze than the individual agent dynamics. Mean-field models have been used to describe the behavior of chemical reaction networks, biological collectives such as social insect colonies, and more recently, swarms of robots that, like natural swarms, consist of hundreds or thousands of agents that are individually limited in capability but can coordinate to achieve a particular collective goal. This dissertation presents a control-theoretic analysis of mean-field models for which the agent dynamics are governed by either a continuous-time Markov chain on an arbitrary state space, or a discrete-time Markov chain on a continuous state space. Three main problems are investigated. First, the problem of stabilization is addressed, that is, the design of transition probabilities/rates of the Markov process (the agent control parameters) that make a target distribution, satisfying certain conditions, invariant. Such a control approach could be used to achieve desired multi-agent distributions for spatial coverage and task allocation. However, the convergence of the multi-agent distribution to the designed equilibrium does not imply the convergence of the individual agents to fixed states. To prevent the agents from continuing to transition between states once the target distribution is reached, and thus potentially waste energy, the second problem addressed within this dissertation is the construction of feedback control laws that prevent agents from transitioning once the equilibrium distribution is reached. The third problem addressed is the computation of optimized transition probabilities/rates that maximize the speed at which the system converges to the target distribution.
590    $a School code: 0010.
650  4 $a Engineering. $3 210888
650  4 $a Robotics. $3 181952
650  4 $a Applied mathematics. $3 377601
653    $a Control theory
653    $a Markov processes
653    $a Multi-agent systems
653    $a Optimization
653    $a Swarm robotics
690    $a 0537
690    $a 0771
690    $a 0364
710 2  $a Arizona State University. $b Mechanical Engineering. $3 766105
773 0  $t Dissertations Abstracts International $g 81-11B.
790    $a 0010
791    $a Ph.D.
792    $a 2020
793    $a English
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27955761
Items (1 record)
Inventory Number:    000000193514
Location Name:       電子館藏 (Electronic Collection)
Item Class:          圖書 (Book)
Material type:       電子書 (E-book)
Call number:         EB 2020
Usage Class:         一般使用 (Normal)
Loan Status:         in cat dept.
No. of reservations: 0
Multimedia file: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=27955761