Abstract:
In this study, we compare deep learning methods for generating images of handwritten characters. The problem can be framed as a restricted Turing test: a human draws a character from any alphabet, and the system synthesizes visually similar images. The goal is not merely to duplicate the input image but to add random perturbations so that the output appears human-produced. To this end, the images produced by two generative models (a Generative Adversarial Network and a Variational Autoencoder), trained with the Reptile meta-learning method, are assessed subjectively for visual quality. The models' ability to transfer learned knowledge is also challenged by using different datasets for training and testing. With the proposed model and meta-learning method, it is possible to produce not only images similar to those in the training set but also novel images belonging to a class seen for the first time.
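The Reptile method mentioned above adapts model parameters to a new task with a few gradient steps, then moves the meta-parameters a fraction of the way toward the adapted weights. A minimal sketch on a toy quadratic objective (function names, tasks, and hyperparameters are illustrative, not taken from the paper):

```python
def inner_sgd(theta, grad_fn, lr=0.02, steps=5):
    """Adapt to one task: a few plain SGD steps starting from theta."""
    phi = list(theta)
    for _ in range(steps):
        g = grad_fn(phi)
        phi = [p - lr * gi for p, gi in zip(phi, g)]
    return phi

def reptile_step(theta, task_grad_fns, meta_lr=0.1):
    """One Reptile meta-iteration: interpolate theta toward each
    task-adapted parameter vector phi."""
    for grad_fn in task_grad_fns:
        phi = inner_sgd(theta, grad_fn)
        theta = [t + meta_lr * (p - t) for t, p in zip(theta, phi)]
    return theta

# Toy tasks: each wants the parameters near a different target vector;
# the gradient of 0.5 * ||p - t||^2 is simply (p - t).
targets = [[1.0, 0.0], [0.0, 1.0]]
task_grads = [lambda p, t=t: [pi - ti for pi, ti in zip(p, t)]
              for t in targets]

theta = [0.0, 0.0]
for _ in range(300):
    theta = reptile_step(theta, task_grads)
# theta ends near the compromise solution between the tasks, [0.5, 0.5]
```

The meta-parameters converge toward an initialization from which each task is reachable in a few inner steps, which is what lets the trained generators handle character classes seen for the first time.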
Date of Conference: 24-26 April 2019
Date Added to IEEE Xplore: 22 August 2019
Print on Demand (PoD) ISSN: 2165-0608