Date of Award
3-23-2017
Document Type
Thesis
Degree Name
Master of Science in Operations Research
Department
Department of Operational Sciences
First Advisor
Matthew J. Robbins, PhD
Abstract
A major focus of the Military Health System is to provide efficient and timely medical evacuation (MEDEVAC) to battlefield casualties. Medical planners are responsible for developing dispatching policies that dictate how aerial military MEDEVAC units are utilized during major combat operations. The objective of this research is to determine how to optimally dispatch MEDEVAC units in response to 9-line MEDEVAC requests to maximize MEDEVAC system performance. A discounted, infinite-horizon Markov decision process (MDP) model is developed to examine the MEDEVAC dispatching problem. The MDP model allows the dispatching authority to accept, reject, or queue incoming requests based on the request's classification (i.e., zone and precedence level) and the state of the MEDEVAC system. Rejected requests are rerouted to be serviced by other, non-medical military organizations in theater. Performance is measured in terms of casualty survivability rather than a response time threshold since survival probability more accurately represents casualty outcomes. A representative planning scenario based on contingency operations in southern Afghanistan is utilized to investigate the differences between the optimal dispatching policy and three practitioner-friendly myopic baseline policies. Two computational experiments, a two-level, five-factor screening design and a subsequent three-level, three-factor full factorial design, are conducted to examine the impact of selected MEDEVAC problem features on the optimal policy and the system-level performance measure. Results indicate that dispatching the closest available MEDEVAC unit is not always optimal and that dispatching MEDEVAC units considering the precedence level of requests and the locations of busy MEDEVAC units increases the performance of the MEDEVAC system. These results inform the development and implementation of MEDEVAC tactics, techniques, and procedures by military medical planners.
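The accept/reject/queue decision structure described above can be sketched in a few lines. This is an illustrative fragment only, not the thesis model: the `Request` fields, unit identifiers, and the three precedence labels are assumptions for the sketch, and the actual MDP conditions feasible actions on the full system state.

```python
from dataclasses import dataclass

# Illustrative precedence levels for a 9-line MEDEVAC request (assumed labels).
PRECEDENCE = ("urgent", "priority", "routine")

@dataclass(frozen=True)
class Request:
    zone: int          # casualty event zone (hypothetical field)
    precedence: str    # one of PRECEDENCE

def feasible_actions(request, unit_busy):
    """Enumerate the dispatching authority's options for an incoming request.

    unit_busy: dict mapping MEDEVAC unit id -> True if that unit is busy.
    Returns ('dispatch', unit_id) for each idle unit, plus the options to
    queue the request or reject it (rerouting it to a non-medical
    organization in theater).
    """
    assert request.precedence in PRECEDENCE
    actions = [("dispatch", m) for m, busy in unit_busy.items() if not busy]
    actions.append(("queue", None))
    actions.append(("reject", None))
    return actions
```

For example, with one idle and one busy unit, the authority may dispatch the idle unit, queue, or reject, but may not dispatch the busy unit.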
Moreover, an open question exists concerning the best exact solution approach for solving Markov decision problems due to recent advances in the performance of commercial linear programming (LP) solvers. An analysis of solution approaches for the MEDEVAC dispatching problem reveals that the policy iteration algorithm substantially outperforms the LP algorithms executed by CPLEX 12.6 with respect to computational effort. This result supports the claim that policy iteration remains the best-performing algorithm for exactly solving computationally tractable Markov decision problems.
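Policy iteration, the exact algorithm the analysis above favors, alternates exact policy evaluation (a linear solve) with greedy policy improvement until the policy stabilizes. Below is a minimal sketch on a toy discounted MDP; the transition tensor `P`, rewards `R`, and discount factor are invented for illustration and have no connection to the thesis's MEDEVAC model.

```python
import numpy as np

# Toy discounted MDP (hypothetical data): 3 states, 2 actions.
# P[a, s, s'] is the probability of moving s -> s' under action a;
# R[s, a] is the immediate reward.
P = np.array([
    [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.2, 0.3, 0.5]],
    [[0.0, 0.9, 0.1],
     [0.5, 0.4, 0.1],
     [0.1, 0.1, 0.8]],
])
R = np.array([[5.0, 10.0],
              [1.0, 2.0],
              [-1.0, 0.0]])
gamma = 0.9
n_states, n_actions = R.shape

def policy_iteration(P, R, gamma):
    """Return an optimal deterministic policy and its value function."""
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]      # rows: P[policy[s], s, :]
        r_pi = R[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead on Q(s, a).
        q = R + gamma * np.einsum('ast,t->sa', P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, v                       # policy is stable: optimal
        policy = new_policy
```

Because each evaluation step is a direct linear solve rather than an LP, every iteration strictly improves the policy, and finite MDPs admit finitely many deterministic policies, the loop terminates at an optimal policy.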
AFIT Designator
AFIT-ENS-MS-17-M-137
DTIC Accession Number
AD1051615
Recommended Citation
Jenkins, Phillip R., "Using Markov Decision Processes with Heterogeneous Queueing Systems to Examine Military MEDEVAC Dispatching Policies" (2017). Theses and Dissertations. 797.
https://scholar.afit.edu/etd/797