"A Random Forest-Based Q-Learning Algorithm: Toward Interpretable Artif" by Victor R. Rae

Author

Victor R. Rae

Date of Award

3-2024

Document Type

Thesis

Degree Name

Master of Science

Department

Department of Operational Sciences

First Advisor

Matthew Robbins, PhD

Abstract

A growing demand exists for interpretable artificial intelligence models, leading to extensive research efforts to enhance the explainability and transparency of policies generated by reinforcement learning (RL) methods. This research develops random forest-based RL algorithms as a logical progression in this academic pursuit. The algorithms are evaluated using three standard benchmark environments from OpenAI Gym — CartPole, MountainCar, and LunarLander — and compared to implementations of the Deep Q-learning Network (DQN) and Double DQN (DDQN) algorithms on several metrics, including performance, robustness, efficiency, and interpretability. The random forest-based algorithms outperform both neural network-based algorithms in two of the three environments while additionally providing easily interpretable decision tree policies. However, the proposed approach fails to solve the LunarLander environment, indicating limitations in its current ability to scale to larger environments.
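The thesis's exact algorithm is not specified in this abstract, but the general idea of random forest-based Q-learning can be illustrated with a fitted Q-iteration sketch: a random forest regresses Q(s, a) onto bootstrapped targets, and the greedy policy is read off the fitted forest. Everything below (the toy chain MDP, the `q_values` helper, and all hyperparameters) is hypothetical and for illustration only, not the author's implementation.

```python
# Illustrative sketch of random forest-based fitted Q-iteration.
# All names and the toy MDP are assumptions, not the thesis's actual code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic transitions (s, a, r, s') for a toy 1-D chain with 2 actions:
# action 1 moves right by 0.1, action 0 moves left; reward for reaching s' > 0.9.
n, gamma = 500, 0.95
states = rng.uniform(-1.0, 1.0, size=(n, 1))
actions = rng.integers(0, 2, size=n)
next_states = np.clip(states + np.where(actions[:, None] == 1, 0.1, -0.1), -1.0, 1.0)
rewards = (next_states[:, 0] > 0.9).astype(float)

def q_values(model, s):
    # Evaluate Q(s, a) for both actions by appending the action as an input feature.
    return np.column_stack([
        model.predict(np.column_stack([s, np.full(len(s), a)])) for a in (0, 1)
    ])

X = np.column_stack([states, actions])
model = RandomForestRegressor(n_estimators=25, random_state=0).fit(X, rewards)

# Fitted Q-iteration: refit the forest on targets r + gamma * max_a' Q(s', a').
for _ in range(20):
    targets = rewards + gamma * q_values(model, next_states).max(axis=1)
    model = RandomForestRegressor(n_estimators=25, random_state=0).fit(X, targets)

print(q_values(model, np.array([[0.85]])))  # Q-values for both actions near the goal
```

Because the learned policy is a forest of decision trees, individual trees can be printed or plotted directly (e.g. with `sklearn.tree.plot_tree`), which is the interpretability advantage the abstract highlights over DQN's opaque network weights.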

AFIT Designator

AFIT-ENS-MS-24-M-096

Comments

A 12-month embargo was observed for posting this work on AFIT Scholar.

Distribution Statement A, Approved for Public Release. PA case number on file.
