"Artificial Intelligence and Perception: An Empirical Study" by Anthony J. Neil

Date of Award

3-2024

Document Type

Thesis

Degree Name

Master of Science in Systems Engineering

Department

Department of Systems Engineering and Management

First Advisor

John J. Elshaw, PhD

Abstract

This thesis investigates how adjusting the explainability level of artificial intelligence outputs affects user perception. The study falls within the Explainable Artificial Intelligence (XAI) domain. It examines a spectrum of variables, including performance, cognizance, familiarity, transparency, system bias, and the overall impact of AI, to understand their individual and collective effects on effective professional use within an organization. The study aims to illuminate the relationship between the degree of explainability provided by large language models such as ChatGPT, Bard, and Bing AI and the performance of those models when tasked with XAI adjustments.

AFIT Designator

AFIT-ENV-MS-24-M-148

Comments

A 12-month embargo was observed for posting this work on AFIT Scholar.

Distribution Statement A, Approved for Public Release. PA case number on file.
