DOI

10.1080/01605682.2025.2528915

Quantifying capability gaps via information relaxation and deep reinforcement learning in infinite-horizon Markov decision processes: A military air battle management application

Document Type

Article

Publication Date

7-23-2025

Abstract

Excerpt: This paper presents a novel application of information relaxation techniques to quantify upper bounds on solution quality in a complex, stochastic, and dynamic assignment problem in military air battle management. Information relaxation refers to relaxing the non-anticipativity constraints in a sequential decision-making problem that require a decision-maker to act only on currently available information. We introduce a temporal event horizon, an adjustable window into future stochastic outcomes, to explore the marginal value of information in shaping decision policies.
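The core idea in the excerpt can be illustrated on a toy problem. The sketch below is not the paper's model; it is a minimal, hypothetical one-step assignment in which a non-anticipative policy must commit to an asset before the threat type is revealed, while the information-relaxed (clairvoyant) bound lets the decision-maker observe the outcome first. The gap between the two values is the marginal value of information. All names (`effectiveness`, `p_high`) are illustrative assumptions.

```python
# Toy stochastic assignment: pick one of two assets against a threat whose
# type ("low" or "high") is random. Payoffs are hypothetical.
effectiveness = {"A": {"low": 3.0, "high": 1.0},
                 "B": {"low": 1.0, "high": 4.0}}
p_high = 0.5  # probability the threat is "high"


def expected_value(asset):
    # Non-anticipative decision: commit before the threat type is known,
    # so we can only optimize the expectation over outcomes.
    e = effectiveness[asset]
    return (1 - p_high) * e["low"] + p_high * e["high"]


# Primal (non-anticipative) value: best fixed choice under uncertainty.
primal = max(expected_value(a) for a in effectiveness)

# Perfect-information relaxation (dual upper bound): the clairvoyant sees
# the threat type first, so we average the best response to each outcome.
relaxed = ((1 - p_high) * max(e["low"] for e in effectiveness.values())
           + p_high * max(e["high"] for e in effectiveness.values()))

# Duality gap = marginal value of information about the threat type.
gap = relaxed - primal
print(primal, relaxed, gap)  # → 2.5 3.5 1.0
```

In the paper's setting the same comparison is made over an infinite-horizon Markov decision process, with the adjustable event horizon controlling how much of the future is revealed; this one-step version only shows why the relaxed value is always an upper bound on any feasible policy's value.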

Comments

This is a subscription-access article published by Taylor & Francis. It is accessible to subscribers through the DOI link.

Source Publication

Journal of the Operational Research Society (ISSN 0160-5682, 1476-9360)

This document is currently not available here.
