Carnegie Mellon University.
New Markov Decision Process Formulations and Optimal Policy Structure for Assemble-to-Order and New Product Development Problems.
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
New Markov Decision Process Formulations and Optimal Policy Structure for Assemble-to-Order and New Product Development Problems.
Author:
Nadar, Emre.
Pagination:
166 p.
Note:
Source: Dissertation Abstracts International, Volume: 73-12(E), Section: A.
Note:
Adviser: Alan Scheller-Wolf.
Contained By:
Dissertation Abstracts International, 73-12A(E).
Subject:
Business Administration, Management.
Electronic resource:
http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3524656
ISBN:
9781267574084
LDR    05920nmm 2200337 4500
001    380669
005    20130530092723.5
008    130708s2012 ||||||||||||||||| ||eng d
020    $a 9781267574084
035    $a (UMI)AAI3524656
035    $a AAI3524656
040    $a UMI $c UMI
100 1  $a Nadar, Emre. $3 603271
245 10 $a New Markov Decision Process Formulations and Optimal Policy Structure for Assemble-to-Order and New Product Development Problems.
300    $a 166 p.
500    $a Source: Dissertation Abstracts International, Volume: 73-12(E), Section: A.
500    $a Adviser: Alan Scheller-Wolf.
502    $a Thesis (Ph.D.)--Carnegie Mellon University, 2012.
520    $a This thesis examines two complex, dynamic problems by employing the theory of Markov Decision Processes (MDPs).
520    $a In Chapter 2, I consider generalized assemble-to-order (ATO) "M-systems" with multiple components and multiple products. These systems involve a single "master" product which uses multiple units from each component, and multiple individual products each of which consumes multiple units from a different component. Such systems are common for manufacturers selling an assembled product as well as individual spare parts.
520    $a I model these systems as infinite-horizon MDPs under the discounted cost criterion. Each component is produced in batches of fixed size in a make-to-stock fashion; batch sizes are determined by individual product sizes. Production times are independent and exponentially distributed. Demand for each product arrives as an independent Poisson process. If not satisfied immediately upon arrival, these demands are lost. Therefore the state of the system can be described by component inventory levels.
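The continuous-time dynamics in this abstract (exponential production, Poisson demand, lost sales, discounted cost) can be illustrated with a toy single-component version of the model, solved by uniformization and value iteration. All parameters and the single-component simplification below are illustrative assumptions, not values or structure taken from the thesis:

```python
# Toy single-component make-to-stock model with lost sales, solved by
# value iteration after uniformizing the continuous-time MDP.
mu, lam = 2.0, 1.0     # production rate, demand rate (illustrative)
beta = 0.1             # continuous-time discount rate
h, p = 1.0, 10.0       # holding cost rate, lost-sale penalty
S = 10                 # inventory cap; states are 0..S

Lam = mu + lam         # uniformization constant
v = [0.0] * (S + 1)
for _ in range(2000):  # iterate the uniformized Bellman operator
    nv = []
    for x in range(S + 1):
        # production decision: make one unit (if room) or idle
        prod = mu * min(v[min(x + 1, S)], v[x])
        # demand event: fulfil from stock, or pay the lost-sale penalty
        dem = lam * (min(v[x - 1], p + v[x]) if x > 0 else p + v[x])
        nv.append((h * x + prod + dem) / (beta + Lam))
    v = nv

# states where producing is strictly better than idling
produce_at = [x for x in range(S) if v[x + 1] < v[x]]
```

In the multi-component ATO setting of the thesis, the scalar inventory level `x` becomes a vector of component inventories and the two decisions become batch replenishment and per-product allocation.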
520    $a A control policy specifies when a batch of components should be produced (i.e., inventory replenishment), and whether an arriving demand for each product should be satisfied (i.e., inventory allocation). The convexity property that has been largely used to characterize optimal policies in the MDP literature may fail to hold in our case. Therefore I introduce new functional characterizations for submodularity and supermodularity restricted to certain lattices of the state space. The optimal cost function satisfies these new characterizations: The state space of the problem can be partitioned into disjoint lattices such that, on each lattice, (a) it is optimal to produce a batch of a particular component if and only if the state vector is less than a certain threshold associated with that component, and (b) it is optimal to fulfill a demand of a particular product if and only if the state vector is greater than or equal to a certain threshold associated with that product. I refer to this policy as a lattice-dependent base-stock and lattice-dependent rationing (LBLR) policy. I also show that if the optimization criterion is modified to the average cost rate, LBLR remains optimal.
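The LBLR structure described above can be sketched in code: the state space is partitioned into lattices, and each lattice carries its own base-stock vector (governing production) and rationing thresholds (governing demand fulfilment). The function names, the two-lattice partition, and all threshold values below are invented for illustration only:

```python
# Hypothetical sketch of evaluating an LBLR policy at a given state.
def lblr_actions(state, lattice_of, base_stock, rationing):
    """Return (components to produce, products to fulfil) at `state`.

    lattice_of(state)  -> lattice id for this state
    base_stock[lat][k] -> produce component k iff state[k] < threshold
    rationing[lat][j]  -> fulfil product j iff state >= threshold vector
    """
    lat = lattice_of(state)
    produce = [k for k, thr in enumerate(base_stock[lat]) if state[k] < thr]
    fulfil = [j for j, thr in enumerate(rationing[lat])
              if all(s >= t for s, t in zip(state, thr))]
    return produce, fulfil

# Illustrative instance: two components, lattices split by inventory parity.
lattice_of = lambda s: sum(s) % 2
base_stock = {0: [3, 2], 1: [2, 2]}
rationing = {0: [(1, 0), (0, 1)], 1: [(2, 0), (0, 2)]}
print(lblr_actions((1, 2), lattice_of, base_stock, rationing))
```

A state-dependent policy (SBSR, below) is the special case where every state is its own lattice; a fixed policy (FBFR) is the case of a single lattice.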
520    $a In Chapter 3, I evaluate the use of an LBLR policy for general ATO systems as a heuristic. I numerically compare the globally optimal policy to LBLR and two other heuristics from the literature: a state-dependent base-stock and state-dependent rationing (SBSR) policy, and a fixed base-stock and fixed rationing (FBFR) policy. Taking the average cost rate as the performance criterion, I develop a linear program to find the globally optimal cost, and Mixed Integer Programming formulations to find the optimal cost within each heuristic class. I generate more than 1800 instances for the general ATO problem, not restricted to the assumptions of Chapter 2, such as the M-system product structure. Interestingly, LBLR yields the globally optimal cost in all instances, while SBSR and FBFR provide solutions within 2.7% and 4.8% of the globally optimal cost, respectively. These numerical results also provide several insights into the performance of LBLR relative to other heuristics: LBLR and SBSR perform significantly better than FBFR when replenishment batch sizes imperfectly match the component requirements of the most valuable or most highly demanded product. In addition, LBLR substantially outperforms SBSR if it is crucial to hold a significant amount of inventory that must be rationed.
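The abstract mentions a linear program for the globally optimal average cost. One standard formulation (not necessarily the one used in the thesis) is the occupation-measure LP over stationary state-action frequencies, sketched here on a toy two-state MDP with illustrative transition data:

```python
# Occupation-measure LP for the optimal average cost of a finite MDP.
import numpy as np
from scipy.optimize import linprog

n_s, n_a = 2, 2
# P[s, a, s'] transition probabilities, c[s, a] one-step costs (illustrative)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
c = np.array([[1.0, 3.0],
              [2.0, 0.5]])

# Variables x[s, a] >= 0: long-run state-action frequencies.
#   min  sum_{s,a} c[s,a] x[s,a]
#   s.t. sum_a x[s',a] = sum_{s,a} P[s,a,s'] x[s,a]   (flow balance)
#        sum_{s,a} x[s,a] = 1                         (normalization)
A_eq = np.zeros((n_s + 1, n_s * n_a))
for sp in range(n_s):
    for s in range(n_s):
        for a in range(n_a):
            A_eq[sp, s * n_a + a] = (s == sp) - P[s, a, sp]
A_eq[n_s, :] = 1.0
b_eq = np.append(np.zeros(n_s), 1.0)

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
optimal_average_cost = res.fun
```

Restricting a policy to a heuristic class (LBLR, SBSR, FBFR) adds integrality and threshold-structure constraints, which is why the within-class problems become Mixed Integer Programs.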
520    $a In Chapter 4, I study the problem of project selection and resource allocation in a multistage new product development (NPD) process with stage-dependent resource constraints. As in Chapters 2 and 3, I model the problem as an infinite-horizon MDP, specifically under the discounted cost criterion. Each NPD project undergoes a different experiment at each stage of the NPD process; these experiments generate signals about the true nature of the project. Experimentation times are independent and exponentially distributed. Beliefs about the ultimate outcome of each project are updated after each experiment according to a Bayesian rule. Projects thus become differentiated through their signals, and all available signals for a project determine its category. The state of the system is described by the numbers of projects in each category. A control policy specifies, given the system state, how to utilize the resources at each stage, i.e., the projects (i) to experiment at each stage, and (ii) to terminate.
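The Bayesian belief update described here can be sketched for a binary project outcome ("good"/"bad") with a positive/negative experiment signal. The signal accuracies below are illustrative assumptions, not parameters from the thesis:

```python
# Bayes' rule update of the belief that a project is "good" after one signal.
def update_belief(prior_good, signal, p_pos_given_good=0.8, p_pos_given_bad=0.3):
    """Posterior P(good | signal); signal is 'pos' or 'neg'."""
    if signal == 'pos':
        like_good, like_bad = p_pos_given_good, p_pos_given_bad
    else:
        like_good, like_bad = 1 - p_pos_given_good, 1 - p_pos_given_bad
    num = like_good * prior_good
    return num / (num + like_bad * (1 - prior_good))

# A project's signal history defines its category; the belief evolves with it.
b = 0.5
for s in ['pos', 'pos', 'neg']:
    b = update_belief(b, s)
```

Because the posterior depends only on the signal history, the system state can be compressed to counts of projects per signal category, as the abstract notes.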
520    $a I characterize the optimal control policy as following a new type of strategy, state-dependent non-congestive promotion (SDNCP), for two different special cases of the general problem: (a) when there is a single informative experiment and projects are not terminated, or (b) when there are multiple uninformative experiments. (Abstract shortened by UMI.)
590    $a School code: 0041.
650  4 $a Business Administration, Management. $3 212493
690    $a 0454
710 2  $a Carnegie Mellon University. $3 212563
773 0  $t Dissertation Abstracts International $g 73-12A(E).
790 10 $a Scheller-Wolf, Alan, $e advisor
790    $a 0041
791    $a Ph.D.
792    $a 2012
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3524656
Holdings:
Barcode: 000000079275
Location: Electronic collection
Circulation category: Book
Material type: Thesis
Call number: TH 2012
Use type: Normal
Loan status: On shelf
Reserve status: 0