DOI

10.1016/j.eswa.2024.123902

Document Type

Article

Publication Date

2024

Abstract

Generative Adversarial Networks (GANs) have received immense attention in recent years due to their ability to capture complex, high-dimensional data distributions without the need for extensive labeling. Since their conception in 2014, a wide array of GAN variants have been proposed featuring alternative architectures, optimizers, and loss functions with the goal of improving performance and training stability. This manuscript focuses on quantifying the resilience of a GAN architecture to specific modes of image degradation. We conduct systematic experimentation to empirically determine the effects of 10 fundamental image degradation modes, applied to the training image dataset, on the Fréchet inception distance (FID) of images generated by a conditional deep convolutional GAN (cDCGAN). We find that at the α=0.05 level, brightening, darkening, and blurring are statistically significantly more detrimental to the resulting GAN image quality than removing the degraded data completely, while other degradations are typically safe to keep in training datasets. Additionally, we find that in the case of randomized partial occlusion, the FID of the resulting GAN images approaches that of the degraded training set for increasing levels of occlusion, with the surprising result that GAN FID performance is equal to that of the training set at 75% degradation.
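
The abstract does not specify how the degradations were implemented; as an illustration only, the sketch below shows one plausible reading of the randomized partial occlusion mode, assuming occluded pixels are simply zeroed and the occlusion fraction is applied uniformly at random per image (both assumptions, not details from the paper).

```python
import numpy as np

def occlude(img, frac, rng):
    """Return a copy of `img` (an H x W array) with a random `frac`
    of its pixels set to zero -- a sketch of randomized partial
    occlusion; the paper's exact degradation procedure may differ."""
    out = img.copy()
    h, w = img.shape
    n = int(round(frac * h * w))              # number of pixels to occlude
    idx = rng.choice(h * w, size=n, replace=False)
    out.flat[idx] = 0
    return out

rng = np.random.default_rng(0)
img = np.ones((28, 28))                       # toy all-ones "image"
deg = occlude(img, 0.75, rng)                 # 75% occlusion, as in the abstract
print(deg.mean())                             # mean drops from 1.0 to 0.25
```

At 75% occlusion, the level at which the abstract reports GAN FID matching that of the degraded training set, three quarters of the pixel values are destroyed per image.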

Comments

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. CC BY-NC 4.0.

Source Publication

Expert Systems with Applications
