Domain randomization for neural network classification

Authors
  • Valtchev, Svetozar Zarko1
  • Wu, Jianhong1
  • 1 Laboratory of Industrial and Applied Mathematics, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
Type
Published Article
Journal
Journal of Big Data
Publisher
Springer Nature
Publication Date
Jul 02, 2021
Volume
8
Issue
1
Identifiers
DOI: 10.1186/s40537-021-00455-5
Source
Springer Nature
Disciplines
  • Research
License
Green

Abstract

Large data requirements are often the main hurdle in training neural networks. Convolutional neural network (CNN) classifiers in particular require tens of thousands of pre-labeled images per category to approach human-level accuracy, while often failing to generalize to out-of-domain test sets. The acquisition and labeling of such datasets is often an expensive, time-consuming and tedious task in practice. Synthetic data provides a cheap and efficient solution to assemble such large datasets. Using domain randomization (DR), we show that a sufficiently well generated synthetic image dataset can be used to train a neural network classifier that rivals state-of-the-art models trained on real datasets, achieving accuracy levels as high as 88% on a baseline cats vs dogs classification task. We show that the most important domain randomization parameter is a large variety of subjects, while secondary parameters such as lighting and textures are found to be less significant to the model accuracy. Our results also provide evidence to suggest that models trained on domain randomized images transfer to new domains better than those trained on real photos. Model performance appears to remain stable as the number of categories increases.
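The abstract describes domain randomization as sampling random scene parameters (subject, lighting, texture) for each rendered training image. A minimal sketch of such a sampling loop is shown below; the `render_subject` callable, parameter names, and ranges are all hypothetical illustrations, not the authors' actual pipeline:

```python
import random

def domain_randomize(render_subject, num_images, seed=0):
    """Generate synthetic training images by sampling random scene
    parameters for each render. `render_subject` is a hypothetical
    callable that renders one image from a parameter dict; the ranges
    below are illustrative only."""
    rng = random.Random(seed)
    images = []
    for _ in range(num_images):
        params = {
            # Per the abstract, subject variety matters most:
            "subject_id": rng.randrange(50_000),
            # Lighting and texture are secondary randomization factors:
            "lighting_intensity": rng.uniform(0.2, 1.0),
            "texture_id": rng.randrange(1_000),
            "camera_angle_deg": rng.uniform(-30.0, 30.0),
        }
        images.append(render_subject(params))
    return images

# Usage with a stub renderer that simply echoes the sampled parameters:
dataset = domain_randomize(lambda p: p, num_images=3)
```

A classifier trained on such a dataset never sees two renders with identical scene conditions, which is the property DR relies on to close the synthetic-to-real gap.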
