Document Type

Article

Publication Date

8-2019

Abstract

In recent years, convolutional neural networks have achieved state-of-the-art performance in a number of computer vision problems such as image classification. Prior research has shown that a transfer learning technique known as parameter fine-tuning, wherein a network is first pre-trained on a different dataset and then fine-tuned on the target dataset, can boost the performance of these networks. However, the topic of identifying the best source dataset and learning strategy for a given target domain is largely unexplored. Thus, this research presents and evaluates various transfer learning methods for fine-grained image classification, as well as their effect on ensemble networks. The results clearly demonstrate the effectiveness of parameter fine-tuning over random initialization. We find that training should not be reduced after transferring weights, that larger and more similar source datasets tend to provide the best source tasks, and that a single fine-tuned network can often outperform randomly initialized ensembles. The experimental framework and findings will help practitioners train models with improved accuracy.
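To illustrate the parameter fine-tuning technique the abstract describes, below is a minimal sketch in PyTorch/torchvision: a network is initialized from weights pre-trained on a source dataset (ImageNet here), its classifier head is replaced for the target task, and all layers continue training. The architecture, class count, and optimizer settings are illustrative assumptions, not the paper's experimental configuration.

```python
# Sketch of parameter fine-tuning (illustrative; not the authors' exact pipeline).
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 120  # hypothetical fine-grained target task, e.g. bird or dog species

# Transfer: initialize from source-task (ImageNet) weights instead of random initialization.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the source classifier head with one sized for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Fine-tune ALL parameters rather than freezing the backbone; the abstract notes
# that training should not be reduced after transferring weights.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch from the target dataset."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the only difference from training from scratch is the weight initialization; all layers remain trainable so the full network can adapt to the target domain.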

Comments

Sourced from the publisher version at Springer:
Becherer, N., Pecarina, J. M., Nykl, S. L., & Hopkinson, K. M. (2019). Improving Optimization of Convolutional Neural Networks through Parameter Fine-tuning. Neural Computing and Applications, 31(8), 3469–3479. https://doi.org/10.1007/s00521-017-3285-0

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

DOI

10.1007/s00521-017-3285-0

Source Publication

Neural Computing and Applications
