Date of Award

3-2023

Document Type

Thesis

Degree Name

Master of Science

Department

Department of Operational Sciences

First Advisor

Mark Gallagher, PhD

Abstract

Generative Adversarial Networks (GANs) have received increasing attention in recent years due to their ability to capture complex, high-dimensional data distributions without the need for extensive labeling. Since their introduction in 2014, a wide array of GAN variants have been proposed featuring alternative architectures, optimizers, and loss functions aimed at improving performance and training stability. While these efforts have yielded GAN variants robust to training set shrinkage and corruption, this research focuses on quantifying the resilience of a GAN architecture to specific modes of image degradation. We conduct systematic experimentation to determine empirically how 10 fundamental image degradation modes, applied to the training image dataset, affect the Fréchet inception distance (FID) of images generated by a conditional DCGAN. We find that at the α = 0.05 level, brightening, darkening, and blurring are statistically significantly more detrimental to GAN image quality than removing the affected data entirely, while other degradations are typically safe to keep in datasets. Additionally, we find that in the case of randomized partial occlusion, the FID of resulting GAN images approaches that of the degraded training set as the level of corruption increases, with GAN FID performance surpassing that of the training set beyond 75% degradation.
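
The following is a minimal, illustrative sketch (not taken from the thesis) of how a few of the degradation modes named above, brightening, darkening, blurring, and randomized partial occlusion, might be applied to a training image before GAN training. The function names, parameter values, and file paths are assumptions for demonstration only.

# Illustrative sketch: applying example image degradation modes to a training image.
# All helper names and parameter values below are hypothetical, not the thesis's code.
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter

def brighten(img: Image.Image, factor: float = 1.5) -> Image.Image:
    """Scale pixel intensities up (factor > 1 brightens)."""
    return ImageEnhance.Brightness(img).enhance(factor)

def darken(img: Image.Image, factor: float = 0.5) -> Image.Image:
    """Scale pixel intensities down (factor < 1 darkens)."""
    return ImageEnhance.Brightness(img).enhance(factor)

def blur(img: Image.Image, radius: float = 2.0) -> Image.Image:
    """Apply a Gaussian blur with the given radius."""
    return img.filter(ImageFilter.GaussianBlur(radius))

def occlude(img: Image.Image, fraction: float = 0.25, seed: int = 0) -> Image.Image:
    """Zero out a randomly placed square patch covering roughly `fraction` of the image area."""
    rng = np.random.default_rng(seed)
    arr = np.array(img)
    h, w = arr.shape[:2]
    side = int(np.sqrt(fraction * h * w))
    top = int(rng.integers(0, max(1, h - side)))
    left = int(rng.integers(0, max(1, w - side)))
    arr[top:top + side, left:left + side] = 0
    return Image.fromarray(arr)

if __name__ == "__main__":
    # "example_training_image.png" is a placeholder path.
    img = Image.open("example_training_image.png").convert("RGB")
    for name, fn in [("bright", brighten), ("dark", darken),
                     ("blur", blur), ("occluded", occlude)]:
        fn(img).save(f"degraded_{name}.png")

In a study like the one described, each degraded copy of the training set would then be used to train the conditional DCGAN, and the FID of the generated images would be compared against the clean-data baseline.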

AFIT Designator

AFIT-ENS-MS-23-M-116

Comments

A 12-month embargo was observed.

Approved for public release. Case number on file.

Related work: 2024 article in Expert Systems with Applications, https://scholar.afit.edu/facpub/1476/.
