Degree Name

Master of Science


Department of Operational Sciences

First Advisor

Matthew J. Robbins, PhD


Abstract

We develop a Markov decision process (MDP) model to examine military medical evacuation (MEDEVAC) dispatch policies and apply an approximate dynamic programming (ADP) technique to solve it. The problem of deciding which aeromedical asset to dispatch to which service request is complicated by the service locations and the priority class of each casualty event. We assume MEDEVAC requests arrive sequentially, with the location and priority of each casualty known upon initiation of the request. The proposed model produces a high-quality dispatching policy that outperforms the traditional myopic policy of sending the nearest available unit. Utility is gained by servicing casualties according to both their priority and the actual time until the casualty arrives at a medical treatment facility (MTF). The MDP is solved using approximate policy iteration (API) with least squares temporal differences (LSTD) learning. Computational examples investigate dispatch policies for a scenario set in northern Syria. Results indicate that the myopic policy is not always the best policy for quickly dispatching MEDEVAC units, and the analysis yields insight into the value of specific MEDEVAC locations.
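To make the named solution approach concrete, the sketch below runs least squares temporal differences (LSTD) learning inside an approximate policy iteration loop on a deliberately simplified dispatch problem. It is a minimal illustration, not the thesis's model: the unit count, response-time matrix, priority weights, basis functions, and all parameter values are invented assumptions, and the toy state omits unit availability and queueing, which are exactly what make the real MEDEVAC dispatch problem non-myopic.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy dispatch problem (illustrative stand-in, not the thesis's MDP) ---
N_UNITS = 3        # MEDEVAC units at fixed staging sites (assumed)
N_ZONES = 4        # casualty demand zones (assumed)
N_PRIORITY = 3     # priority classes, 1 = most urgent (assumed)
GAMMA = 0.95       # discount factor (assumed)

# Assumed response times (minutes) from each unit's site to each zone.
RESPONSE_TIME = rng.uniform(10.0, 60.0, size=(N_UNITS, N_ZONES))

def sample_request():
    """Next casualty event: (zone, priority), drawn uniformly at random."""
    return int(rng.integers(N_ZONES)), int(rng.integers(1, N_PRIORITY + 1))

def reward(unit, zone, priority):
    """Utility decays with response time and is weighted by priority."""
    weight = {1: 1.0, 2: 0.5, 3: 0.25}[priority]      # assumed weights
    return weight * np.exp(-RESPONSE_TIME[unit, zone] / 30.0)

def features(zone, priority, unit):
    """Basis functions phi(s, a) for the LSTD value approximation."""
    t = RESPONSE_TIME[unit, zone] / 60.0
    p = priority / N_PRIORITY
    return np.array([1.0, t, p, t * p])

N_FEATURES = 4

def greedy_unit(w, zone, priority):
    """Current policy: dispatch the unit with the highest approximate value."""
    q = [features(zone, priority, u) @ w for u in range(N_UNITS)]
    return int(np.argmax(q))

def api_with_lstd(n_iterations=10, n_samples=5000, epsilon=0.1):
    """Approximate policy iteration: simulate, fit LSTD weights, improve."""
    w = np.zeros(N_FEATURES)
    for _ in range(n_iterations):
        A = 1e-3 * np.eye(N_FEATURES)   # small ridge term for invertibility
        b = np.zeros(N_FEATURES)
        zone, priority = sample_request()
        for _ in range(n_samples):
            # Epsilon-greedy exploration around the current greedy policy.
            if rng.random() < epsilon:
                unit = int(rng.integers(N_UNITS))
            else:
                unit = greedy_unit(w, zone, priority)
            r = reward(unit, zone, priority)
            next_zone, next_priority = sample_request()
            next_unit = greedy_unit(w, next_zone, next_priority)
            phi = features(zone, priority, unit)
            phi_next = features(next_zone, next_priority, next_unit)
            # LSTD(0) accumulation: A += phi (phi - gamma*phi')^T, b += r*phi
            A += np.outer(phi, phi - GAMMA * phi_next)
            b += r * phi
            zone, priority = next_zone, next_priority
        w = np.linalg.solve(A, b)       # policy evaluation via LSTD
    return w

if __name__ == "__main__":
    w = api_with_lstd()
    zone, priority = 2, 1
    print("ADP dispatch:    unit", greedy_unit(w, zone, priority))
    print("Myopic dispatch: unit", int(np.argmin(RESPONSE_TIME[:, zone])))
```

Because this toy's reward depends only on response time and priority, the learned policy will tend to agree with the myopic rule; in the full model it is the tracking of which units are busy that allows the ADP policy to hold back or redirect assets and thereby outperform nearest-available dispatch.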
