Degree Name

Master of Science


Department of Operational Sciences

First Advisor

Phillip R. Jenkins, PhD


Abstract

One of the primary duties of the Military Health System is to provide effective and efficient medical evacuation (MEDEVAC) to injured battlefield personnel. To accomplish this, military medical planners seek to develop high-quality dispatching policies that dictate how deployed MEDEVAC assets are utilized throughout combat operations. This thesis seeks to determine dispatching policies that improve the performance of the MEDEVAC system. A discounted, infinite-horizon, continuous-time Markov decision process (MDP) model is developed to examine the MEDEVAC dispatching problem. The model incorporates problem features that are not considered under the current dispatching policy (i.e., the myopic policy), which tasks the closest available MEDEVAC unit to service an incoming request. More specifically, the MDP model explicitly accounts for admission control, the precedence level of calls, different asset types (e.g., Army versus Air Force helicopters), and the threat level at casualty collection points. An approximate dynamic programming (ADP) algorithm is developed within an approximate policy iteration algorithmic framework that leverages kernel regression to approximate the state value function. The ADP algorithm is used to develop high-quality solutions for large-scale problems that cannot be solved to optimality due to the curse of dimensionality. We develop a notional scenario based on combat operations in southern Afghanistan to investigate model performance, which is measured in terms of casualty survivability. The results indicate that significant improvement in MEDEVAC system performance can be obtained by utilizing either the MDP- or ADP-generated policies. These results inform the development and implementation of tactics, techniques, and procedures for the military medical planning community.
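The abstract's value-function approximation step — kernel regression over sampled states — can be sketched as follows. This is a minimal illustration of Nadaraya-Watson kernel regression, not the thesis's implementation: the 2-D state features, bandwidth, and sample values below are hypothetical.

```python
import numpy as np

def kernel_value_estimate(state, sampled_states, sampled_values, bandwidth=1.0):
    """Nadaraya-Watson kernel regression estimate of the state value function.

    Approximates V(state) as a Gaussian-kernel-weighted average of value
    observations collected at previously sampled states.
    """
    # Euclidean distance from the query state to each sampled state
    dists = np.linalg.norm(sampled_states - state, axis=1)
    # Gaussian kernel weights; nearer samples contribute more
    weights = np.exp(-0.5 * (dists / bandwidth) ** 2)
    return float(np.dot(weights, sampled_values) / weights.sum())

# Hypothetical 2-D state features (e.g., normalized request precedence and
# distance to the casualty collection point) with simulated value estimates.
sampled_states = np.array([[0.0, 0.0],
                           [1.0, 1.0]])
sampled_values = np.array([0.0, 10.0])

# A query state equidistant from both samples receives the average of
# their values under equal kernel weights.
v_mid = kernel_value_estimate(np.array([0.5, 0.5]), sampled_states, sampled_values)
```

In an approximate policy iteration loop, estimates of this form would drive the policy-improvement step: for each incoming request, the dispatch action is chosen to maximize the immediate reward plus the discounted kernel-regression value of the resulting post-decision state.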
