Date of Award


Document Type


Degree Name

Doctor of Philosophy (PhD)


Department

Department of Engineering Physics

First Advisor

Scott R. Graham, PhD


Abstract

This dissertation addresses several problems surrounding the detection of malware using deep learning models trained on assembly language examples. First, it examines the feasibility of detecting malicious code using deep learning models trained on RISC-V instruction traces. Next, it examines whether models for detecting trace features and code features in RISC-V assembly can be made explainable (providing rationale for a model's decision based upon the model's internal workings) or interpretable (providing additional rationale as model output to support a human's agreement with the model output). Third, it examines ways to provide additional contextual information to a reverse engineer in a human-machine team. Finally, it presents preliminary studies of how a malware author aware of a model explanation strategy might attempt to defeat it using adversarial machine learning attacks. Results indicate that deep learning algorithms are useful tools for detecting certain types of malware, that explainability methods help in understanding those models' decisions, and that future deep learning algorithms for malware detection should take obfuscation into account.

AFIT Designator