Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection

Convolutional Neural Networks (CNNs) can achieve excellent computer-assisted diagnosis performance, relying on sufficient annotated training data. Unfortunately, most medical imaging datasets, often collected from various scanners, are small and fragmented. In this context, as a Data Augmentation (DA) technique, Generative Adversarial Networks (GANs) can synthesize realistic/diverse additional training images to fill gaps in the real image distribution; researchers have improved classification by augmenting images with noise-to-image GANs (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one). Yet, no research has reported results combining (i) noise-to-image GANs and image-to-image GANs or (ii) GANs and other deep generative models, for a further performance boost. Therefore, to maximize the DA effect with the GAN combinations, we propose a two-step GAN-based DA that generates and refines brain MR images with/without tumors separately: (i) Progressive Growing of GANs (PGGANs), a multi-stage noise-to-image GAN for high-resolution image generation, first generates realistic/diverse 256 × 256 images such that even a physician cannot accurately distinguish them from real ones via Visual Turing Test; (ii) UNsupervised Image-to-image Translation (UNIT) or SimGAN, image-to-image GANs combining GANs/Variational AutoEncoders or using a GAN loss for DA, further refines the texture/shape of the PGGAN-generated images to resemble the real ones. We thoroughly investigate CNN-based tumor classification results, also considering the influence of pre-training on ImageNet and of discarding weird-looking GAN-generated images. The results show that, when combined with classic DA, our two-step GAN-based DA can significantly outperform classic DA alone in tumor detection (i.e., boosting sensitivity from 93.63% to 97.53%) and also in other tasks.


I. INTRODUCTION
Convolutional Neural Networks (CNNs) are playing a key role in medical image analysis, updating the state-of-the-art in many tasks [1]–[3], when large-scale annotated training data are available. However, preparing such massive medical data is demanding; thus, for better diagnosis, researchers generally adopt classic Data Augmentation (DA) techniques, such as geometric/intensity transformations of original images [4], [5]. Those augmented images, however, intrinsically have a similar distribution to the original ones, resulting in limited performance improvement. In this sense, Generative Adversarial Network (GAN)-based DA can considerably increase the performance [6]; since the generated images are realistic but completely new samples, they can fill the real image distribution uncovered by the original dataset.
The main problem in computer-assisted diagnosis lies in small and fragmented medical imaging datasets from various scanners; thus, researchers have improved classification by augmenting images with noise-to-image GANs (e.g., random noise samples to diverse pathological images [7]) or image-to-image GANs (e.g., a benign image to a malignant one [8]). However, no research has reported results achieved by combining (i) noise-to-image GANs and image-to-image GANs or (ii) GANs and other common deep generative models, such as Variational AutoEncoders (VAEs) using a single objective [9], for a further performance boost. So, how can we maximize the DA effect under limited training images using the GAN combinations? Aiming to generate and refine brain MR images with/without tumors separately, we propose a two-step GAN-based DA approach: (i) Progressive Growing of GANs (PGGANs) [10], a low-to-high resolution noise-to-image GAN, first generates realistic and diverse 256 × 256 images; PGGANs is beneficial for DA since most CNN architectures adopt around 256 × 256 input sizes (e.g., InceptionResNetV2 [11]: 299 × 299, ResNet-50 [12]: 224 × 224); (ii) UNsupervised Image-to-image Translation (UNIT) [13] or SimGAN [14], image-to-image GANs combining GANs/VAEs or using a GAN loss for DA, further refines the texture/shape of the PGGAN-generated images to fit them into the real image distribution. We thoroughly investigate CNN-based tumor classification results, also considering the influence of pre-training on ImageNet [15] and of discarding weird-looking GAN-generated images. Moreover, we evaluate the synthetic images' realism via Visual Turing Test [16] by an expert physician, and visualize the data distribution of real/synthetic images via the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm [17]. When combined with classic DA, our two-step GAN-based DA approach remarkably outperforms classic DA alone, boosting sensitivity from 93.63% to 97.53%.
Research Questions. We mainly address two questions:
• GAN Selection: Which GAN architectures are well-suited for realistic/diverse medical image generation?
• Medical DA: How to use GAN-generated images as additional training data for better CNN-based diagnosis?
Contributions. Our main contributions are as follows:
• Whole Image Generation: This research shows that PGGANs can generate realistic/diverse 256 × 256 whole medical images, not only small pathological areas.
• Two-step GAN-based DA: This novel two-step approach, combining noise-to-image and image-to-image GANs for the first time, remarkably boosts tumor detection performance.
• Misdiagnosis Prevention: This study is the first to analyze how medical GAN-based DA interacts with pre-training on ImageNet and with discarding weird-looking synthetic images to achieve high sensitivity with small/fragmented datasets from various scanners.

II. GENERATIVE ADVERSARIAL NETWORKS
VAEs often suffer from blurred samples despite easier training, due to the injected noise and imperfect reconstruction using a single objective function; meanwhile, GANs [6] have revolutionized image generation in terms of realism and diversity [18] based on a two-player objective function: a generator G tries to generate realistic images to fool a discriminator D while maintaining diversity; D attempts to distinguish between the real and the generator's synthetic images. However, the difficult GAN training arising from this two-player objective function is accompanied by artifacts and mode collapse [19] when generating high-resolution images (e.g., 256 × 256 pixels) [20]; to tackle this, multi-stage noise-to-image GANs have been proposed: AttnGAN [21] generates images from text using attention-based multi-stage refinement; PGGANs [10] generates realistic images using incremental multi-stage training from low resolution to high. Contrarily, to obtain images with desired texture and shape, researchers have proposed image-to-image GANs: UNIT [13] translates images using both GANs and VAEs; SimGAN [14] translates images for DA using a self-regularization term and a local adversarial loss. Especially in medical imaging, to handle small and fragmented datasets from multiple scanners, researchers have exploited both noise-to-image and image-to-image GANs as DA techniques to improve classification: researchers used noise-to-image GANs to augment liver lesion Computed Tomography (CT) [22] and chest cardiovascular abnormality X-ray images [23]; others used image-to-image GANs to augment breast cancer mammography images [8] and bone lesion X-ray images [24], translating benign images to malignant ones and vice versa.
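For reference, the standard two-player GAN objective [6] is the minimax game

$$\min_G \max_D \; \mathbb{E}_{y \sim p_{\mathrm{data}}}[\log D(y)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],$$

where $z$ is a noise sample and $p_{\mathrm{data}}$ is the real image distribution; the Wasserstein and translation-specific losses adopted below are variants of this objective.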
However, to the best of our knowledge, we are the first to combine noise-to-image and image-to-image GANs to maximize the DA performance. Moreover, this is the first medical GAN work generating whole 256 × 256 images, instead of regions of interest (i.e., small pathological areas) alone, for robust classification. Along with classic image transformations, this novel approach, augmenting realistic and diverse whole medical images with the two-step GAN, may become a clinical breakthrough.

III. MATERIALS AND METHODS

A. BRATS 2016 TRAINING DATASET
We use a dataset of 240 × 240 contrast-enhanced T1-weighted (T1c) brain axial MR images of 220 High-Grade Glioma cases from the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) 2016 [25]. T1c is the most common sequence in tumor detection thanks to its high contrast [26].

B. PGGAN-BASED IMAGE GENERATION
Pre-processing For better GAN/ResNet-50 training, we select the slices from #30 to #130 among the whole 155 slices to omit initial/final slices, which convey negligible useful information; also, since tumor/non-tumor annotation in the BRATS 2016 dataset, based on 3D volumes, is highly incorrect/ambiguous on 2D slices, we exclude (i) tumor images tagged as non-tumor, (ii) non-tumor images tagged as tumor, (iii) borderline images with unclear tumor/non-tumor appearance, and (iv) images with missing brain parts due to the skull-stripping procedure (although this discarding procedure could be automated, we conducted it manually for reliability; this does not affect our conclusions). For tumor detection, we divide the whole dataset (220 patients) into:
• Training set (154 patients/4,679 tumor/3,750 non-tumor images);
• Validation set (44 patients/750 tumor/608 non-tumor images);
• Test set (22 patients/1,232 tumor/1,013 non-tumor images).
During the GAN training, we only use the training set to be fair; for better GAN training, the training set images are zero-padded to reach a power of 2, 256 × 256 pixels from 240 × 240. Fig. 2 shows example real MR images.
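A minimal sketch of this zero-padding step, assuming single-channel NumPy slices (the helper name is ours):

```python
import numpy as np

def pad_to_256(img: np.ndarray) -> np.ndarray:
    """Zero-pad a 240x240 T1c slice to 256x256 (a power of 2) for PGGAN training."""
    pad = (256 - img.shape[0]) // 2  # 8 pixels on each side for 240 -> 256
    return np.pad(img, pad, mode="constant", constant_values=0)

slice_240 = np.zeros((240, 240), dtype=np.float32)  # placeholder T1c slice
assert pad_to_256(slice_240).shape == (256, 256)
```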
PGGANs [10] is a GAN training method that progressively grows a generator and a discriminator: starting from low resolution, newly added layers model increasingly fine details as training progresses. This study adopts PGGANs to synthesize realistic and diverse 256 × 256 brain MR images (Fig. 3); we train and generate tumor/non-tumor images separately.
PGGAN Implementation Details The PGGAN architecture adopts the Wasserstein loss with gradient penalty [19]:
$$\min_G \max_{D \in \mathcal{D}} \; \mathbb{E}_{y \sim P_r}[D(y)] - \mathbb{E}_{\tilde{y} \sim P_g}[D(\tilde{y})],$$
where $\mathcal{D}$ is the set of 1-Lipschitz functions, $P_r$ is the data distribution defined by the true data sample $y$, and $P_g$ is the model distribution defined by the generated sample $\tilde{y}$. A gradient penalty is added for the random sample $\hat{y} \sim P_{\hat{y}}$, sampled uniformly along straight lines between pairs of points from $P_r$ and $P_g$. We train for 100 epochs with a batch size of 16 and a 1.0 × 10⁻³ learning rate for the Adam optimizer [27]. During training, we apply random cropping of 0-15 pixels as DA.
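A compact PyTorch sketch of this critic loss, assuming the default penalty weight λ = 10 from [19] (the function names are ours, not the paper's):

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """Penalty on random interpolates y_hat ~ P_y_hat between real and generated batches."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    y_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads, = torch.autograd.grad(D(y_hat).sum(), y_hat, create_graph=True)
    return lambda_gp * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def critic_loss(D, real, fake):
    # Wasserstein critic loss: E[D(y_tilde)] - E[D(y)] + gradient penalty
    return D(fake).mean() - D(real).mean() + gradient_penalty(D, real, fake)
```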

C. UNIT/SIMGAN-BASED IMAGE REFINEMENT
Refinement We further refine the texture and shape of the PGGAN-generated tumor/non-tumor images separately to fit them into the real image distribution using UNIT [13] or SimGAN [14]. SimGAN remarkably improved eye gaze estimation results after refining non-GAN-based synthetic images from the UnityEyes simulator via image-to-image translation [14]; thus, we also expect such a performance improvement after refining synthetic images from a noise-to-image GAN (i.e., PGGANs) via an image-to-image GAN (i.e., UNIT/SimGAN) with considerably different GAN-based algorithms.
We randomly select 3,000 real/3,000 PGGAN-generated tumor images for tumor image training, and we perform the same for non-tumor image training. To find suitable refining steps for each architecture, we pick the UNIT/SimGAN models with the highest accuracy on the tumor detection validation set, when pre-trained and combined with classic DA, among 20,000/50,000/100,000 steps, respectively.
UNIT [13] is an image-to-image translation method based on both GANs and VAEs; it jointly learns image distributions in different domains using images from the marginal distributions in each domain with a shared-latent space.
UNIT Implementation Details The UNIT architecture adopts the following loss:
$$\min_{E_1, E_2, G_1, G_2} \max_{D_1, D_2} \; \mathcal{L}_{\mathrm{VAE}_1} + \mathcal{L}_{\mathrm{GAN}_1} + \mathcal{L}_{\mathrm{CC}_1} + \mathcal{L}_{\mathrm{VAE}_2} + \mathcal{L}_{\mathrm{GAN}_2} + \mathcal{L}_{\mathrm{CC}_2}.$$
Using the multiple encoders jointly solves the learning problems of VAE$_1$/VAE$_2$ and GAN$_1$/GAN$_2$ for the image reconstruction streams, the image translation streams, and the cycle-reconstruction (CC) streams. We train for 100,000 steps with a batch size of 1 and a 1.0 × 10⁻⁴ learning rate for the Adam optimizer [27]. The learning rate is halved every 20,000 steps. During training, we apply horizontal flipping as DA.
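A schematic PyTorch sketch of these three streams under the shared-latent space (a toy sketch, not the full UNIT networks of [13]; the module sizes and names are our assumptions):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy encoder; returns the latent mean and a reparameterized latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        mu = self.net(x)
        return mu, mu + torch.randn_like(mu)  # unit-variance posterior, as in UNIT

class Decoder(nn.Module):
    """Toy decoder from the shared-latent code back to image space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, z):
        return self.net(z)

E1, E2, G1, G2 = Encoder(), Encoder(), Decoder(), Decoder()

def unit_streams(x1, x2):
    """x1: PGGAN-generated image batch, x2: real image batch, both (B, 1, H, W)."""
    _, z1 = E1(x1); _, z2 = E2(x2)
    recon1, recon2 = G1(z1), G2(z2)  # image reconstruction streams (VAE_1 / VAE_2)
    fake2, fake1 = G2(z1), G1(z2)    # image translation streams (GAN_1 / GAN_2)
    cyc1 = G1(E2(fake2)[1])          # cycle-reconstruction stream CC_1
    cyc2 = G2(E1(fake1)[1])          # cycle-reconstruction stream CC_2
    return recon1, recon2, fake1, fake2, cyc1, cyc2

x1, x2 = torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256)
outs = unit_streams(x1, x2)  # six image-sized tensors feeding the VAE/GAN/CC losses
```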
SimGAN [14] is an image-to-image GAN designed for DA that adopts a self-regularization term/local adversarial loss; it updates a discriminator with a history of refined images.
SimGAN Implementation Details The SimGAN architecture adopts the following loss:
$$\mathcal{L}(\theta) = \sum_i \ell_{\mathrm{real}}(\theta; \mathbf{x}_i, \mathcal{Y}) + \lambda\, \ell_{\mathrm{reg}}(\theta; \mathbf{x}_i),$$
where $\theta$ denotes the function parameters, $\mathbf{x}_i$ is the $i$-th PGGAN-generated training image, and $\mathcal{Y}$ is the set of real images. The first part $\ell_{\mathrm{real}}$ adds realism to the synthetic images, while the second part $\ell_{\mathrm{reg}}$ preserves the tumor/non-tumor features. We train for 20,000 steps with a batch size of 10 and a 1.0 × 10⁻⁴ learning rate for the Stochastic Gradient Descent (SGD) optimizer [28]. The learning rate is halved at 15,000 steps. During training, we apply horizontal flipping as DA. We use batch normalization [29] layers.
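A hedged PyTorch sketch of this refiner loss, assuming λ = 1 as a placeholder weight, a patch-wise discriminator outputting per-patch probabilities, and an L1 self-regularization term (all our assumptions; the paper does not spell these out here):

```python
import torch
import torch.nn.functional as F

def refiner_loss(R, D, x, lambda_reg=1.0):
    """SimGAN-style refiner loss: l_real (fool the local discriminator on every patch)
    plus lambda * l_reg (keep the refined image close to the PGGAN-generated input x)."""
    refined = R(x)
    d_map = D(refined)  # (B, 1, h, w) per-patch real/fake probabilities
    l_real = F.binary_cross_entropy(d_map, torch.ones_like(d_map))  # local adversarial loss
    l_reg = F.l1_loss(refined, x)  # self-regularization preserving tumor/non-tumor features
    return l_real + lambda_reg * l_reg
```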

D. TUMOR DETECTION USING RESNET-50
Pre-processing. As ResNet-50's input size is 224 × 224 pixels, we resize the whole real images (240 × 240) and the whole synthetic images (256 × 256) to this size.
ResNet-50 [12] is a 50-layer residual learning-based CNN; we adopt it to detect brain tumors in MR images (i.e., the binary classification of images with/without tumors). We choose ResNet-50 for comparing DA setups due to its outstanding performance in image classification tasks [30].
To confirm the effect of PGGAN-based DA and of its refinement using UNIT/SimGAN, we compare 10 DA setups under sufficient images, both with and without ImageNet [15] pre-training (i.e., 20 setups in total). Whereas medical imaging researchers widely use the ImageNet initialization despite the different textures of natural/medical images, a recent study found that such ImageNet-trained CNNs are biased towards recognizing textures rather than shapes [31]; thus, we aim to investigate how the medical GAN-based DA affects classification performance with/without the pre-training. As the classic DA, we adopt a random combination of horizontal/vertical flipping, rotation up to 10 degrees, width/height shift up to 8%, shearing up to 8%, zooming up to 8%, and constant filling of points outside the input boundaries (Fig. 4). For the PGGAN-based DA and its refinement, we only use success cases after discarding weird-looking synthetic images (Fig. 5); DenseNet-169 [32] extracts image features and k-means++ [33] clusters the features into 200 groups, and then we manually discard each cluster containing similar weird-looking images, as sketched below. To verify its effect, we also conduct the PGGAN-based DA experiment without the discarding step.
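A minimal sketch of this feature-clustering step, assuming torchvision's ImageNet-pre-trained DenseNet-169 with global average pooling as the feature extractor and random tensors standing in for the synthetic images:

```python
import torch
import numpy as np
from torchvision import models
from sklearn.cluster import KMeans

# ImageNet-pre-trained DenseNet-169 as a fixed feature extractor (1664-dim after pooling)
densenet = models.densenet169(weights="IMAGENET1K_V1").eval()
extractor = torch.nn.Sequential(densenet.features, torch.nn.AdaptiveAvgPool2d(1))

@torch.no_grad()
def extract(batch):  # batch: (B, 3, 224, 224) synthetic images replicated to 3 channels
    return extractor(batch).flatten(1).numpy()

synthetic = torch.randn(400, 3, 224, 224)  # placeholder for the PGGAN-generated images
features = np.concatenate([extract(b) for b in synthetic.split(32)])

# k-means++ clustering into 200 groups; clusters of similar weird-looking images are then discarded manually
labels = KMeans(n_clusters=200, init="k-means++", n_init=10, random_state=0).fit_predict(features)
```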
ResNet-50 Implementation Details The ResNet-50 architecture adopts the binary cross-entropy loss for binary classification, both with/without ImageNet pre-training. For robust training, before the final sigmoid layer, we use 0.5 dropout [34], linear dense, and batch normalization [29] layers; training with GAN-based DA tends to be unstable, especially without the batch normalization layer. We use a batch size of 96, a 1.0 × 10⁻² learning rate for the SGD optimizer [28] with 0.9 momentum, and early stopping with a patience of 20 epochs. The learning rate is multiplied by 0.1 every 20 epochs for the training from scratch and by 0.5 every 5 epochs for the ImageNet pre-training.
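A minimal PyTorch sketch of this classifier head and optimizer schedule (a sketch only; the paper's exact framework is not stated, and everything beyond ResNet-50's 2048-dim output is our assumption):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")  # or weights=None for training from scratch
model.fc = nn.Sequential(                         # head before the final sigmoid output
    nn.Dropout(0.5),
    nn.Linear(2048, 1),                           # linear dense layer
    nn.BatchNorm1d(1),                            # stabilizes training with GAN-based DA
    nn.Sigmoid())

criterion = nn.BCELoss()                          # binary cross-entropy on sigmoid outputs
optimizer = optim.SGD(model.parameters(), lr=1.0e-2, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)  # pre-trained schedule
# For training from scratch, StepLR(optimizer, step_size=20, gamma=0.1) matches the text instead.
```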

E. CLINICAL VALIDATION USING VISUAL TURING TEST
To quantitatively evaluate (i) the realism of the PGGAN-based synthetic images and (ii) the clearness of their tumor/non-tumor features, we supply to an expert physician, in random order, a random selection of:
• 50 real tumor images;
• 50 real non-tumor images;
• 50 synthetic tumor images;
• 50 synthetic non-tumor images.
Then, the physician has to classify them as both (i) real/synthetic and (ii) tumor/non-tumor, without knowing in advance which are real/synthetic and tumor/non-tumor. The so-called Visual Turing Test [16] can probe the human ability to identify attributes and relationships in images, and is also used for visually evaluating GAN-generated images [14]; this also applies to medical images for clinical decision-making tasks [35], [36], wherein physicians' expertise is critical.

F. VISUALIZATION USING T-SNE
To visually analyze the distributions of the geometrically-transformed and each type of GAN-based images by PGGANs/UNIT/SimGAN against the real images (i.e., 4 setups), we adopt t-SNE [17] on a random selection of:
• 300 real tumor images;
• 300 real non-tumor images;
• 300 geometrically-transformed or GAN-based tumor images;
• 300 geometrically-transformed or GAN-based non-tumor images.
We select only 300 images per category for better visualization. The t-SNE method reduces the dimensionality of high-dimensional data for representation in a lower-dimensional (2D/3D) space; it non-linearly balances the input data's local and global aspects using its perplexity parameter.

t-SNE Implementation Details
We run t-SNE with a perplexity of 100 for 1,000 iterations to obtain a 2D representation.
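A minimal scikit-learn sketch of this step, assuming the images are flattened into a NumPy feature matrix (random data as a stand-in; recent scikit-learn versions rename `n_iter` to `max_iter`):

```python
import numpy as np
from sklearn.manifold import TSNE

# 1,200 images per setup (4 x 300), flattened to vectors; random data as a placeholder
images = np.random.rand(1200, 224 * 224).astype(np.float32)
emb = TSNE(n_components=2, perplexity=100, n_iter=1000, init="random",
           random_state=0).fit_transform(images)  # (1200, 2) points for the scatter plot
```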

IV. RESULTS
This section shows how PGGANs generates synthetic brain MR images and how UNIT and SimGAN refine them. The results include instances of synthetic images, their quantitative evaluation by an expert physician, their t-SNE visualization, and their influence on tumor detection.

A. MR IMAGES GENERATED BY PGGANS
Fig. 5 illustrates examples of synthetic MR images by PGGANs. We visually confirm that, for about 75% of cases, it successfully captures the T1c-specific texture and tumor appearance while maintaining the realism of the original brain MR images; for the remaining 25%, the generated images lack clear tumor/non-tumor features or contain unrealistic features (i.e., hyper-intensity, gray contours, and odd artifacts).

B. MR IMAGES REFINED BY UNIT/SIMGAN
UNIT and SimGAN refine the PGGAN-generated images differently: they render the texture/contours while maintaining the overall shape (Fig. 6). Non-tumor images change more remarkably than tumor images for both UNIT/SimGAN; this probably derives from the unsupervised image translation's consistency loss for avoiding image collapse, resulting in conservative changes for more complicated images.

C. TUMOR DETECTION RESULTS
Table 1 shows the brain tumor classification results with/without DA. ImageNet pre-training generally outperforms training from scratch despite the different image domains (i.e., natural images vs. medical images). As expected, classic DA remarkably improves classification, while no clear difference exists between the 200,000/400,000 classic DA setups under sufficient geometrically-transformed training images. When pre-trained, each GAN-based DA (i.e., PGGANs/UNIT/SimGAN) alone helps classification due to the robustness gained from GAN-generated images; but, without pre-training, it harms classification due to the biased initialization from the GAN-overwhelming data distribution. Similarly, without pre-training, PGGAN-based DA without clustering/discarding causes poor classification due to the synthetic images with severe artifacts, unlike the comparable results of PGGAN-based DA with/without the discarding step when pre-trained.

When combined with the classic DA, each GAN-based DA significantly outperforms the GAN-based DA or classic DA alone: the former fills the real image distribution uncovered by the original dataset, while the latter provides robustness during training for most cases; here, both image-to-image GAN-based DA setups, especially UNIT, produce remarkably higher sensitivity than the PGGAN-based DA without refinement. Specificity is higher than sensitivity for every DA setup with pre-training, probably due to the training data imbalance; but interestingly, without pre-training, sensitivity is higher than specificity for both image-to-image GAN-based DA setups. Thus, when combined with the classic DA, the UNIT-based DA achieves the highest sensitivity (97.53%), significantly alleviating the risk of overlooking tumors in diagnosis.

D. VISUAL TURING TEST RESULTS
Table 2 indicates the confusion matrix for the Visual Turing Test. The expert physician classifies a few PGGAN-generated images as real despite their high resolution (i.e., 256 × 256 pixels). The synthetic images successfully capture tumor/non-tumor features; unlike for the non-tumor images, the expert recognizes a considerable number of the mild/modest tumor images as non-tumor for both the real and synthetic cases. This derives from clinical tumor diagnosis relying on a full 3D volume instead of a single 2D slice.

E. T-SNE RESULTS
As Fig. 7 shows, the real tumor/non-tumor image distributions largely overlap, while the non-tumor images distribute more widely. The geometrically-transformed tumor/non-tumor image distributions also often overlap, and both distribute more widely than the real ones. All GAN-based synthetic images by PGGANs/UNIT/SimGAN distribute widely, while their tumor/non-tumor images overlap much less than the geometrically-transformed ones; the UNIT-refined images show a distribution more similar to the real ones than the PGGAN/SimGAN-based images, probably because the UNIT loss function adopts both GANs/VAEs. Overall, the GAN-based images, especially the UNIT-refined ones, fill the distribution uncovered by the real or geometrically-transformed images with less tumor/non-tumor overlap.

V. CONCLUSION
Visual Turing Test and t-SNE results show that PGGANs, a multi-stage noise-to-image GAN, can generate realistic and diverse 256 × 256 brain MR images with/without tumors separately. The generated images can improve tumor classification when combined with classic DA, especially after refining them with UNIT or SimGAN, image-to-image GANs; thanks to an ensemble effect from those GANs' different algorithms, the refined images can replace missing data points of the training dataset with less tumor/non-tumor overlap and regularize the model, and thus handle the data imbalance with improved generalization. In particular, UNIT outperforms SimGAN, probably due to the effect of combining both GANs and VAEs.
Regarding better medical GAN-based DA, ImageNet pre-training generally improves classification despite the different textures of natural/medical images; but, without pre-training, the GAN-refined images may help achieve better sensitivity, alleviating the risk of overlooking tumors in diagnosis. GAN-generated images typically include odd artifacts; however, discarding them boosts DA performance only without pre-training.
Overall, by minimizing the number of annotated images required for medical imaging tasks, the two-step GAN-based DA can shed light not only on classification, but also on object detection [37] and segmentation [38]. Moreover, other potential medical applications exist: (i) a data anonymization tool to share patients' data outside their institution for training without losing detection performance, a GAN-based application reported in [38]; (ii) a physician training tool to show random pathological images to medical students/radiology trainees despite infrastructural/legal constraints [39]. As future work, we plan to define a new GAN loss function that explicitly aims at optimizing the classification results, instead of the visual realism, similarly to the three-player GAN proposed in [40].