Progressive with Purpose: Guiding Progressive Inpainting DNNs through Context and Structure

The advent of deep learning in the past decade has significantly helped advance image inpainting. Although achieving promising performance, deep learning-based inpainting algorithms still struggle with the distortion caused by the fusion of structural and contextual features, which are commonly obtained from, respectively, deep and shallow layers of a convolutional encoder. Motivated by this observation, we propose a novel progressive inpainting network that maintains the structural and contextual integrity of a processed image. More specifically, inspired by the Gaussian and Laplacian pyramids, the core of the proposed network is a feature extraction module named GLE. Stacking GLE modules enables the network to extract image features from different image frequency components. This ability is important for maintaining structural and contextual integrity, for high-frequency components correspond to structural information while low-frequency components correspond to contextual information. The proposed network utilizes the GLE features to progressively fill in missing regions in a corrupted image in an iterative manner. Our benchmarking experiments demonstrate that the proposed method achieves clear improvement in performance over many state-of-the-art inpainting algorithms.


I. INTRODUCTION
Image inpainting is the task of restoring missing patches of pixels in an image [1], [2]. As the name suggests, inpainting targets filling in missing parts of an image (i.e., image holes) with contextually meaningful information so that the image can be restored to its original form. This can be quite a difficult task for machines, for it is an ill-posed inverse problem [3]. It requires not only the ability to predict what is missing, but also whether it fits within the context of the image. Thus, a key to attaining satisfying inpainting results is to ensure that the reconstructed pixels are consistent with the uncorrupted region and exhibit coherence in both structure and texture.
As a main remedy for restoring image quality, inpainting is of great importance nowadays, for modern societies are increasingly reliant on visual content with images as its building block, from surveillance systems to autonomous vehicles, media streaming, and conference calls. Storing, displaying, and exchanging huge amounts of images makes them prone to damage, one form of which is missing pixels (image holes). It is thus unsurprising to see increasing research interest in inpainting within the computer vision community.

A. Motivation
Many image inpainting techniques have been proposed over the past two decades. They can loosely be grouped into two major categories: traditional and modern. The main defining difference between the two categories is the use of deep learning. The traditional category of techniques can be divided into two sub-categories [3]: exemplar-based and diffusion-based. The former approach [4] searches for the best-matching patches from known regions and pastes them into missing regions. Such techniques incur high computational costs for patch searching and generate unrealistic results due to the lack of perspective transformation. The diffusion-based techniques, on the other hand, recreate a missing region with features from its surrounding known region. Although diffusion-based techniques are more efficient than their exemplar-based counterparts, they produce over-smoothed inpainting results because of regularization based on partial differential equations.
The advent of deep learning in computer vision has created a surge of inpainting techniques that utilize Deep Neural Networks (DNNs). Although those techniques exhibit some overlap with the traditional diffusion-based techniques, they define the state of the art in inpainting and, therefore, merit a category of their own. Most early works on inpainting with deep learning, like [5], [6], follow a two-stage approach, which first learns the image structure from a given edge/structure map of a corrupted image, then refines the missing region with a texture generator. However, two-stage image inpainting methods usually cause artifacts due to their limited ability to recover both structure and texture.
To deal with those artifacts, progressive inpainting techniques have been explored. They rely on the idea that not all predicted pixels in a region plagued with artifacts are defective; some are good predictions that can be utilized to improve the re-generated region and weed out the artifacts. Hence, those techniques fill in the missing holes by iterating over the image and learning from previously predicted pixels. Examples of such techniques are the full-resolution residual network proposed by Guo et al. [7] and the iterative confidence feedback network proposed by Zheng et al. [8]. Despite the improvement they provide over two-stage techniques, progressive techniques are still prone to artifacts. This can be traced back to the inexplicit modelling of structure and texture in those techniques.
Fusing structure and texture awareness with progressive inpainting is arguably the most promising approach to overcoming visual artifacts, and it is the approach followed in this paper. Developing structure- and texture-aware algorithms has been explored recently by Guo et al. [9]. Two different but coupled autoencoders are trained with structure and texture constraints to fill in holes in corrupted images. The results are encouraging, but the algorithm can cause distortion in deep parts of the hole due to one-stage feature fusion. This could be overcome by incorporating progressive inpainting into the learning process.

B. Contribution
In an attempt to bring together progressive learning and the fusion of texture and structure, this paper presents a Gaussian-Laplacian feature Extraction (GLE) module. The main contributions of the proposed architecture are summarized below:
• GLE Module: Inspired by image pyramids, we propose a GLE module to obtain features from high and low image frequency components. Those components provide texture information (low-frequency components) and structure information (high-frequency components). The GLE module leverages those multi-frequency components to learn textural and structural features.
• Iterative Reinpainting Component: A progressive reinpainting component is developed such that it gradually fills in the corrupted regions of an image. It utilizes features learned by the GLE modules from different frequency components to fill in the outer edge of the corrupted regions iteratively until the region is restored.
• Benchmarking and Evaluation Experiments: Various experiments are designed to evaluate the performance of the proposed architecture and show the benefits of the GLE module and the reinpainting component. The experiments also compare the proposed inpainting algorithm to some state-of-the-art algorithms to situate its contribution to the inpainting problem.

C. Paper organization
The organization of this paper is as follows. Section II reviews works related to our method. Section III details the architecture of the proposed progressive image inpainting network. Sections IV and V describe the experimental setup and the experimental results. Section VI concludes the paper.

II. RELATED WORK
The proposed solution is developed on top of a rich literature of image inpainting with deep learning. To facilitate the discussion, the following three subsections review some concepts related to the proposed solution and some relevant inpainting solutions. They lay the necessary groundwork for the detailed description in Section III.

A. Variants of Texture and Structure Inpainting
Inpainting based on texture and structure has been attempted in various forms in the literature. The concepts of style and content, introduced in [10], can be viewed as derivatives of texture and structure, respectively. They are used to build a two-stage inpainting DNN: in the first stage, two encoders extract style and content latent information separately, and in the second stage a full image is synthesized from that information. Semantic segmentation masks are another alternative that helps capture structure information. They have been utilized in [11]-[13] as a way to guide texture generation. In all those papers, an encoder network learns to generate a latent representation of the corrupted image that captures the structure. It does so by pushing the decoder network to recover not only the inpainted image but also its segmentation mask. The three papers differ in the details of how to encode a corrupted image and generate a structurally consistent image, but they share the same objective: capturing the structure to produce a meaningfully inpainted image.

B. Progressive Image Inpainting
Progressive image inpainting, as the name suggests, aims to recover images gradually by utilizing features from undamaged and recently recovered regions. Overall, algorithms following this approach can be grouped into two broad categories: (i) contextual information-based algorithms, and (ii) structural constraints-based algorithms. Both are briefly reviewed below.
Contextual information-based algorithms rely mainly on CNN features extracted from input images to restore damaged regions. As pioneers of the contextual features-based algorithms, Hsu et al. [14] propose using several deep convolutional networks to learn progressive inpainting over multiple image scales, from low- to high-resolution images. Zhang et al. [15] recognize how inpainting lends itself to recurrent modelling; they propose to use several generative networks inter-connected with LSTM modules, which progressively fill in the missing region of an image. More recently, Li et al. [16] extend that analogy further. They propose a recurrent feature reasoning module with knowledge-consistent attention, which can progressively enhance the details in masked regions.
Compared with their contextual information-based counterparts, the structural constraints-based algorithms take advantage of additional external structural constraints provided by edge detection algorithms. The algorithms in [17], [6] utilize contour or edge maps as a guide for image completion. To progressively complete the image, Li et al. [18] propose a U-net that recovers the edge maps while inpainting images progressively. These approaches, collectively, seek to tackle image inpainting by introducing structural constraints, yet their performance remains limited by a lack of information for recovering deeper pixels in the missing regions.

C. Gaussian and Laplacian Pyramid
A classical approach to image inpainting is centered around the idea of building multi-scale image pyramids, in which inpainting is done progressively from one scale to another, commonly from the smallest to the largest scale. Those pyramids are usually called Gaussian or Laplacian pyramids based on the type of filters used to generate them. Specifically, let G denote the Gaussian smoothing operator, I_τ the input image to the τ-th level of the pyramid, Q the upsampling operation, and D the downsampling operation. The output images G_τ and F_τ of the τ-th level of the Gaussian pyramid and the Laplacian pyramid, respectively, are

G_τ = D(G(I_τ)),    F_τ = I_τ − Q(G_τ),

where the next level's input is I_{τ+1} = G_τ. Inpainting algorithms using Gaussian and Laplacian pyramids can roughly be clustered into three groups: inpainting on multiple Gaussian pyramids [19], inpainting on multiple Laplacian pyramids [20], [21], and inpainting on multiple Gaussian and Laplacian pyramids [21].
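The two recurrences above can be illustrated with a minimal 1-D sketch. The 3-tap kernel in `gaussian_smooth`, the factor-2 decimation, and the nearest-neighbour upsampling are illustrative choices, not the exact operators used by any cited algorithm:

```python
import numpy as np

def gaussian_smooth(x, kernel=(0.25, 0.5, 0.25)):
    # G: smooth a 1-D signal with a small Gaussian-like kernel (same length out).
    return np.convolve(x, kernel, mode="same")

def downsample(x):
    # D: keep every second sample.
    return x[::2]

def upsample(x, n):
    # Q: nearest-neighbour upsampling back to length n.
    return np.repeat(x, 2)[:n]

def build_pyramids(image, levels):
    gaussians, laplacians = [], []
    x = image
    for _ in range(levels):
        g = downsample(gaussian_smooth(x))   # G_tau = D(G(I_tau))
        f = x - upsample(g, len(x))          # F_tau = I_tau - Q(G_tau)
        gaussians.append(g)
        laplacians.append(f)
        x = g                                # next level's input: I_{tau+1} = G_tau
    return gaussians, laplacians

signal = np.arange(16, dtype=float)
G, F = build_pyramids(signal, levels=3)
print([len(g) for g in G])  # [8, 4, 2]
print([len(f) for f in F])  # [16, 8, 4]
```

Note how each Gaussian level halves the resolution while each Laplacian level keeps the resolution of its input, retaining the high-frequency detail the smoothing removed.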
The difference between the first and second groups lies in how the inpainting algorithm is applied on the different image pyramids. For instance, Farid et al. [19] first generate multiple Gaussian pyramids until most missing pixels are eliminated by the smoothing operation. Then, their algorithm copies and pastes the missing pixels from the small-scale image (top of the pyramid) to the large-scale images (bottom of the pyramid). In contrast, [20] utilizes a Laplacian pyramid with patch search to recover missing pixels from small- to large-scale images in the pyramid. Because of the limitations of exemplar-based methods, both kinds of methods suffer from unrealistic inpainting results.
Benefiting from the combination of structure and texture, the third inpainting group (i.e., algorithms relying on multiple Gaussian and Laplacian pyramids) usually achieves better performance than its counterparts relying on only one of the two pyramids. However, the additional cost of inpainting both pyramids is relatively high compared to that of the former two groups.

III. PROPOSED INPAINTING ALGORITHM
Like a person solving a jigsaw puzzle, an inpainting algorithm should fill in the missing regions by gradually piecing pixels together while keeping an eye on context and structure. Progressive algorithms, as mentioned earlier, restore missing pixels gradually using undamaged and recently recovered pixels, yet they do not jointly maintain contextual and structural information. This observation fuels the work in this paper; a Deep Neural Network (DNN) is designed such that it progressively inpaints with the purpose of maintaining structure and context information. Hence, it is described as being progressive with purpose.
The idea behind the proposed algorithm is to break down the inpainting task into three main stages, namely feature extraction (first stage), iterative inpainting (second stage), and enhancement and reconstruction (third stage). The first stage aims to extract multi-level features from the corrupted image, which mimics, to some extent, feature extraction from the image pyramids used in classical inpainting algorithms such as [19], [20]. The multi-level features capture contextual and structural information. They are fed to the iterative inpainting stage, which attempts to recover some of the missing information gradually over several iterations. Each iteration generates a pair of feature volumes. The pairs are passed to the enhancement and reconstruction stage to enhance the recovered information, fuse it into one feature volume, and reconstruct the complete image. The architecture of the proposed algorithm is depicted in Figure 1.
The architecture is detailed in the following subsections. The first presents a formal description of how progressive inpainting restores missing pixels. The next three take a deep dive into the three stages of the proposed architecture, describing the inner workings of each stage. Finally, the last subsection presents the loss function used to train the architecture.

A. Rationale Behind the Proposed Algorithm
Let a be the original image, and b be the corrupted image. We denote the conditional probability distribution of the original image given the corrupted image by p_{A|B}. Image inpainting can be formulated as a maximum a posteriori (MAP) estimation problem:

m_MAP = arg max_a p_{A|B}(a | b).

Our algorithm aims to produce an approximate version of m_MAP.
We divide the corrupted region into T concentric regions and progressively recover the corrupted region in an inward manner, from the 1st to the T-th concentric region.
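The division of a hole into T concentric regions can be sketched by repeatedly dilating the valid region and peeling off one ring at a time. This is a hypothetical illustration; the paper does not specify how the regions are computed:

```python
import numpy as np

def dilate(valid):
    # 8-neighbour binary dilation of the valid-pixel mask.
    p = np.pad(valid, 1)
    out = np.zeros_like(valid)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di:1 + di + valid.shape[0],
                     1 + dj:1 + dj + valid.shape[1]]
    return out

def concentric_regions(valid):
    # Peel the hole into rings, from the outermost region
    # (recovered first) to the innermost (recovered last).
    rings = []
    while not valid.all():
        grown = dilate(valid)
        rings.append(grown & ~valid)   # the next ring to inpaint
        valid = grown
    return rings

valid = np.ones((7, 7), dtype=bool)
valid[2:5, 2:5] = False               # a 3x3 hole
rings = concentric_regions(valid)
print(len(rings))                      # T = 2 concentric regions
```

Each ring borders already-valid pixels when its turn comes, which is what allows the network to condition each step on previously recovered content.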
Let m(τ)_r denote the inpainted r-th concentric region at the τ-th step. The process proceeds as follows. At the τ-th step (with τ from 1 to T), we generate m(τ)_τ based on the valid region n and the τ − 1 previously inpainted concentric regions m(τ−1)_1, ..., m(τ−1)_{τ−1}. At the end of the T-th step, we collect the regions m(τ)_r (1 ≤ τ ≤ T, 1 ≤ r ≤ τ) generated throughout the process and perform an enhancement. Specifically, for τ from 1 to T − 1, we leverage the updated masks H(τ − 1) and H(τ) together with neighboring feature volumes such as F_high(τ + 1) to refine each region, as detailed in the enhancement and reconstruction stage below.

B. Feature Extraction Stage
The feature extraction stage is inspired by Gaussian and Laplacian pyramids. The main component of this stage is a sequence of convolution, Gaussian smoothing, upsampling, another convolution, and subtraction, henceforth referred to as the GLE module. As shown in the first column of Figure 1, the input is first passed through a convolutional layer with 64 kernels and a ReLU activation function. Next, the generated features pass through a convolutional layer whose number of kernels is double the number of input channels. Each kernel has a 7 × 7 height and width, a 2 × 2 stride, and 3 × 3 padding. The result is a reduced-size feature volume, which is then blurred using a 3 × 3 Gaussian kernel with a stride of 1 and a padding of 1 to keep the spatial dimensions fixed. The smoothed feature is passed both to the next GLE module and to the upsampling layer. It is upsampled using nearest-neighbor interpolation to recover the original input size before it is passed through the second convolutional layer. This convolution has the same hyper-parameters as the first one, but with half the number of kernels, recovering the same number of channels as the input tensor. The output feature map is produced by subtracting the feature map coming from the second convolutional layer from the original input. Let I_{τ−1} denote the input feature maps of the τ-th GLE module, G denote the Gaussian smoothing operation, Up denote the upsampling operation, Λ_Gs denote the weights of the convolutional layer before the Gaussian smoothing, and Λ_Up denote the weights of the convolutional layer after the upsampling. The GLE module can then be expressed as

I_τ = G(Λ_Gs ∗ I_{τ−1}),    F_τ = I_{τ−1} − Λ_Up ∗ Up(I_τ),

where τ ∈ {1, 2, ..., 5} and F_τ denotes the output feature maps of the τ-th GLE module. The feature volume F_6 is taken right after the Gaussian smoothing layer of the 5th GLE module.
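As a sanity check on the stacking described above, the following sketch tracks the (channels, height, width) of I_τ and F_τ through the five GLE modules, assuming a 256 × 256 input and the 64 channels produced by the first convolutional layer; the exact channel counts are our assumption, not stated dimensions from the paper:

```python
def gle_shapes(c_in, h, w, modules=5):
    # Track (channels, height, width) through stacked GLE modules:
    # the strided 7x7 conv halves H and W and doubles C; the smoothed
    # feature I_tau feeds the next module, while the upsampled-and-
    # subtracted feature F_tau keeps the shape of the module's input.
    inputs, outputs = [], []
    c, ch, cw = c_in, h, w
    for _ in range(modules):
        outputs.append((c, ch, cw))          # F_tau: same shape as input
        c, ch, cw = 2 * c, ch // 2, cw // 2  # I_tau: passed to next module
        inputs.append((c, ch, cw))
    outputs.append((c, ch, cw))              # F_6: taken after the 5th smoothing
    return inputs, outputs

I, F = gle_shapes(c_in=64, h=256, w=256)
print(F[0], F[5])  # (64, 256, 256) (2048, 8, 8)
```

The six output volumes F_1, ..., F_6 thus form a pyramid spanning five spatial resolutions, matching the multi-frequency decomposition the GLE module is meant to provide.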
To produce various levels of features, this stage is designed with 5 GLE modules stacked consecutively, each one feeding into the next. A corrupted image (one with missing pixels, denoted by I_in in Figure 1), a corrupted structural image I_struc as used in [5], and a binary mask M_in are the inputs to the first module, and the outputs are the blurred feature volume I_1 as well as the difference feature volume F_1. I_1 has half the height and width of the input image and double the number of channels, and it is passed to the next GLE module. F_1, on the other hand, is buffered to construct the feature pyramid that represents the output of the feature extraction stage. The pyramid is formed by stacking the difference-feature volumes generated by each GLE module, namely F_1, ..., F_6.

C. Iterative Inpainting Stage
This is the second stage of the proposed solution, and it is based on the concept of progressive inpainting. The main elements of this stage are partial convolution, regular convolution, and feature attention. These elements make up two parallel branches, in which features are processed iteratively. The following three subsections detail the inner workings of this stage.
1) Partial Convolution: Partial convolution is a fundamental tool for filling irregular holes in deep learning-based image inpainting and for keeping track of the unfilled regions of the image. To see how a partial convolution layer accomplishes this, let W_k ∈ R^{C×H′×W′} denote the weight tensor of the k-th kernel in a partial convolution layer, and X_{i,j} ∈ R^{C×H′×W′} denote the input feature patch extracted from the input tensor X_in ∈ R^{C×H×W} centered around the (i, j)-th pixel, where C, H′, and W′ are, respectively, the number of channels, height, and width of the patch, and C, H, and W are, respectively, the number of channels, height, and width of the input tensor. Also, let H_{i,j} denote an H′ × W′ binary patch centered around the (i, j)-th pixel, and, with a slight abuse of notation, let the C × H′ × W′ binary tensor formed by stacking C copies of the matrix H_{i,j} be denoted by H_{i,j} as well. Then, the (i, j)-th value of the k-th output feature map, i.e., y_{i,j,k}, produced by a partial convolution layer (before activation) is given by

y_{i,j,k} = sum( W_k ⊙ X_{i,j} ⊙ H_{i,j} ) · sum(1) / sum(H_{i,j}) + b   if sum(H_{i,j}) > 0,
y_{i,j,k} = 0   otherwise,

where ⊙ denotes element-wise multiplication, 1 is a C × H′ × W′ tensor of all ones, and b ∈ R is the bias associated with the k-th kernel. Following a partial convolution is a mask update to make sure that the mask keeps up with the updated feature map coming out of the partial convolution.
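The partial convolution rule above can be sketched for a single output location as follows. This is a hedged numpy illustration of the rule, not the paper's implementation:

```python
import numpy as np

def partial_conv_at(X, M, W, b):
    # One output value of a partial convolution: convolve only over
    # valid (mask == 1) inputs, rescaled by the ratio of total to
    # valid entries under the kernel; zero if nothing valid.
    # X, M, W all share the patch shape (C, H', W'); M is binary.
    valid = M.sum()
    if valid == 0:
        return 0.0
    scale = M.size / valid              # sum(1) / sum(H_{i,j})
    return float((W * (X * M)).sum() * scale + b)

X = np.ones((1, 3, 3))
M = np.zeros((1, 3, 3)); M[0, :, 0] = 1   # only the first column is valid
W = np.ones((1, 3, 3))
print(partial_conv_at(X, M, W, b=0.0))    # 9.0 (3 valid ones, scaled by 9/3)
```

The rescaling keeps the response magnitude comparable regardless of how many valid pixels the kernel happens to cover.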
Let the full mask be H ∈ {0, 1}^{H×W}, where H and W are, respectively, the height and width of the mask such that H′ ≪ H and W′ ≪ W, and H_{i,j} is a submatrix forming a block in H centered around the (i, j)-th pixel. This mask is updated by convolving an all-ones kernel with the mask. Let U be an H′ × W′ kernel of all ones. Then, the updated mask is given by

H ← min(1, U ∗ H),

where ∗ is the convolution operation with a stride equal to that of the partial convolution kernel; that is, a mask entry becomes 1 whenever at least one valid pixel falls under the kernel. More about partial convolution can be found in [22].
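The mask update can likewise be sketched: slide an all-ones kernel over the binary mask with the partial convolution's stride and mark a location valid whenever any valid pixel falls under the window. The kernel size and stride below are illustrative values, not the paper's:

```python
import numpy as np

def update_mask(H, k=3, stride=2):
    # Convolve an all-ones k x k kernel over the binary mask; any
    # window that touches a valid pixel becomes valid in the update.
    h, w = H.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    out = np.zeros((out_h, out_w), dtype=int)
    for i in range(out_h):
        for j in range(out_w):
            window = H[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = 1 if window.sum() > 0 else 0
    return out

H = np.zeros((7, 7), dtype=int)
H[0, 0] = 1                     # a single valid pixel in the corner
print(update_mask(H))           # only the window covering (0, 0) turns valid
```

Repeated applications of this update are what let the valid region grow inward over the iterations.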
2) Feature Attention: For any feature volume F ∈ R^{C×H×W}, an attention tensor can be generated using cosine similarity and softmax. Let f_{i,j} and f_{i′,j′} denote the pair of C-dimensional feature vectors at locations (i, j) and (i′, j′). Then, their cosine similarity is computed as follows:

z_{i,j,i′,j′} = ⟨f_{i,j}, f_{i′,j′}⟩ / ( ∥f_{i,j}∥ ∥f_{i′,j′}∥ ),

where z_{i,j,i′,j′} denotes the cosine similarity score between f_{i,j} and f_{i′,j′}. Let Z_{i,j} ∈ R^{H×W} denote the score matrix between the feature vector at location (i, j) and all C-dimensional feature vectors in F. The softmax function is applied across the height and width to generate the attention score of location (i, j) in F. Formally, this is expressed as follows:

Ẑ_{i,j}(i′, j′) = exp(z_{i,j,i′,j′}) / Σ_{i″,j″} exp(z_{i,j,i″,j″}),

where Ẑ_{i,j} ∈ R^{H×W}. The final feature volume F has an attention tensor Ẑ ∈ R^{HW×H×W} formed by stacking the HW score maps Ẑ_{i,j}. Based on the calculated score maps, we reuse the feature patches from the input of the feature attention module as de-convolutional filters to reconstruct the new feature map.
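The cosine-similarity and softmax steps can be sketched in numpy as follows; the deconvolution-based reconstruction that follows the score maps is omitted:

```python
import numpy as np

def attention_scores(F):
    # Cosine similarity between every pair of C-dimensional feature
    # vectors, followed by a softmax over all spatial locations.
    C, H, W = F.shape
    vecs = F.reshape(C, H * W).T                  # (HW, C) feature vectors
    norms = np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8
    z = (vecs / norms) @ (vecs / norms).T         # (HW, HW) cosine scores
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax
    scores = e / e.sum(axis=1, keepdims=True)
    return scores.reshape(H * W, H, W)            # Z_hat: (HW, H, W)

F = np.random.default_rng(0).normal(size=(8, 4, 4))
Z = attention_scores(F)
print(Z.shape)            # (16, 4, 4)
print(float(Z[0].sum()))  # ~1.0: each score map sums to one
```

Subtracting the row maximum before exponentiation is a standard numerical-stability trick and does not change the softmax output.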
3) Putting It All Together: Iterative inpainting is built on top of the feature extraction stage, with the feature pyramid as its input. This is illustrated in the middle column of Figure 1. The pyramid is first split into two halves; feature maps coming from the first three feature modules (i.e., the first three from the input side) are concatenated to form the feature volume F_low ∈ R^{C_in×H_in×W_in} with C_in channels, H_in height, and W_in width. Feature maps coming from the last three feature modules form another feature volume denoted F_high ∈ R^{C_in×H_in×W_in}. Those two volumes are sent down two different but parallel iterative branches that have the same composition of layers. Both start with two partial convolutions with leaky ReLU activations, followed by a feature attention module. The specifications of each layer are detailed at the bottom of the middle column of Figure 1.
Each branch processes its input volume iteratively, as follows. Let τ represent a time index for the iterative process. Both F_low(0) and F_high(0) go through the partial convolutions and the attention module, making up the first iteration (τ = 1). The outputs, denoted F_low(τ + 1) and F_high(τ + 1), are used to initialize the next iteration as well as to construct a new feature volume. A copy of F_low(τ + 1) and F_high(τ + 1) is sent back to the input to undergo the next iteration. Another copy is sent forward to a concatenation operation to form part of a new feature volume denoted F_cat. This keeps going for T iterations (τ ∈ {1, 2, ..., T}) until F_cat is complete, i.e., a tensor of dimensions C_cat × H_cat × W_cat, where C_cat = 2T C_in, is formed. This tensor is, finally, passed to a convolution layer with leaky ReLU activation, which generates the intermediate feature volume F_int.
Remark: Note that the F_int feature volume comprises C_int = C_cat = 2T C_in feature maps, which can be split into T sub-volumes. This is important for the third and final stage of the proposed architecture.

D. Enhancement and Reconstruction Stage
1) Reinpainting Component: The main idea behind the reinpainting component is to re-enhance the fused feature sub-volumes in F_int. This is done along two branches that process two different concatenations of feature sub-volumes from F_int; see Figure 1. Let F_int(τ) represent the τ-th sub-volume in F_int, where τ ∈ {1, ..., T − 1}. The first branch concatenates F_int(τ − 1), F_int(τ), and F_int(τ + 1) and passes them into three convolutional layers with ReLU activations. The result is multiplied with the updated mask of iteration τ − 1 from the second stage, i.e., H(τ − 1), to eliminate the negative effect of the unfilled region in each iteration.
The second branch is symmetric with the first, but focuses on different sub-volumes. It concatenates F_int(τ) and F_int(τ + 1) and passes them through three convolutional layers with ReLU activations. The result here is multiplied with the difference of the two updated masks from iterations τ and τ − 1, i.e., H(τ) − H(τ − 1), which only contains information from the intersection region between F_int(τ) and F_int(τ − 1). The results of the two branches are combined with the sub-volume F_int(τ) to produce a new sub-volume F_reinp(τ).
2) Reconstruction Component: The reinpainting component outputs a feature volume F_reinp that is fed to the reconstruction component. This is the final component of the proposed architecture, responsible for producing the complete image. A visualization of the reconstruction component is given in the left panel of Fig. 1. This component adopts the feature merging module from [16], which fuses the feature group based on the filled locations in each iteration. The merge module feeds into three upsampling layers followed by a partial convolution layer, three residual blocks, and a sequence of three convolutional layers. The complete architecture is summarized in Algorithm 1.

E. Loss Functions
This section describes the loss function used for training the proposed inpainting network. It is a composite loss with multiple terms accounting for the different aspects that the proposed algorithm needs to maintain. Perceptual loss and style loss are two of those terms that are popular in image generation problems. They are calculated using groundtruth and output feature maps obtained from a pretrained VGG model [23]. Groundtruth features are those produced by the max-pooling layers of the VGG network when the input is the complete groundtruth image, whereas output features are those obtained from the same pooling layers with the restored image as input. Formally, the perceptual loss is given by

L_perc = Σ_θ ∥φ_θ^out − φ_θ^gt∥_1 / (C_θ H_θ W_θ),

and the style loss is given by

L_style = Σ_θ ∥ ( φ_θ^out (φ_θ^out)^T − φ_θ^gt (φ_θ^gt)^T ) / (C_θ H_θ W_θ) ∥_1,

where φ_θ^gt denotes the groundtruth feature map from the θ-th pooling layer of VGG-16 (reshaped to C_θ × H_θ W_θ), φ_θ^out denotes the corresponding output feature map from the same pooling layer, and C_θ, H_θ, and W_θ are, respectively, the number of channels, height, and width of the θ-th feature map.
The third term of the composite loss is the total variation loss, which enforces smoothness in the region of the predicted pixels (i.e., the holes) [22], [24]. Formally, this term is formulated as follows. Let I_out^{i,j} denote the pixel value of the output image at location (i, j), N denote the total number of elements in the output image, and R denote the set of pixels surrounding a corrupted pixel. The total variation loss is given by

L_tv = (1/N) Σ_{(i,j)∈R} ( |I_out^{i,j+1} − I_out^{i,j}| + |I_out^{i+1,j} − I_out^{i,j}| ).

The last two terms in the composite loss are first norms of the difference between the output and groundtruth images. Let I_out denote the output image from the proposed algorithm, I_gt denote the ground truth image, and H_gt denote the groundtruth mask of the image. The two terms are, then, given by

L_valid = (1/N) ∥ H_gt ⊙ (I_out − I_gt) ∥_1,
L_hole = (1/N) ∥ (1 − H_gt) ⊙ (I_out − I_gt) ∥_1,

where L_valid expresses the first norm loss between the undamaged region of the output image and the ground truth image, and L_hole expresses the first norm loss between the filled region of the output image and the ground truth image. The composite loss, as the name suggests, is a weighted sum of all the above terms:

L = λ_valid L_valid + λ_hole L_hole + λ_perc L_perc + λ_style L_style + λ_tv L_tv,

where λ_valid, λ_hole, λ_perc, λ_style, and λ_tv are all hyper-parameters scaling the contribution of each of their respective terms to the composite loss.
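To make the composite loss concrete, here is a minimal numpy sketch of the L_valid, L_hole, and total variation terms with the weights used later in Section IV; the VGG-based perceptual and style terms are omitted since they require a pretrained network:

```python
import numpy as np

def composite_loss(I_out, I_gt, H_gt,
                   lam_valid=1.0, lam_hole=6.0, lam_tv=0.1):
    # L1 terms over the valid and hole regions plus total-variation
    # smoothness (TV is applied over the whole image here for brevity,
    # rather than only around corrupted pixels).
    N = I_out.size
    l_valid = np.abs(H_gt * (I_out - I_gt)).sum() / N
    l_hole = np.abs((1 - H_gt) * (I_out - I_gt)).sum() / N
    l_tv = (np.abs(np.diff(I_out, axis=0)).sum()
            + np.abs(np.diff(I_out, axis=1)).sum()) / N
    return lam_valid * l_valid + lam_hole * l_hole + lam_tv * l_tv

I_gt = np.zeros((4, 4))
I_out = np.zeros((4, 4)); I_out[1, 1] = 1.0   # one wrong pixel in the hole
H_gt = np.ones((4, 4)); H_gt[1, 1] = 0        # mask: 0 marks the hole
print(round(composite_loss(I_out, I_gt, H_gt), 4))  # 0.4
```

Note how the large λ_hole weight makes the single wrong hole pixel dominate the loss, reflecting the emphasis on filled regions.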

IV. EXPERIMENTAL SETUP
The proposed algorithm needs to be put to the test in order to demonstrate its performance. This section presents the experimental setup adopted to evaluate it, describing the development datasets, the implementation details, and the benchmark algorithms.

A. Datasets
The development experiments adopt the following image datasets, together with the irregular hole masks used as inputs (Section IV-B):
• Paris StreetView dataset [27]
• CelebA dataset
• Places2 dataset

B. Implementation Details
The proposed algorithm is trained with a batch size of 4 on two NVIDIA 1080 TITANs. We use corrupted images, structural maps, and irregular holes as inputs, all resized to 256 × 256. The Adam optimizer [30] is used to train the network. The training is conducted with a learning rate of 10^-4, and the network is fine-tuned with a learning rate of 10^-5. The network is trained on the Paris StreetView and CelebA datasets for 40 epochs and fine-tuned for 20 epochs. On the Places2 dataset, the network is trained for 200 epochs and fine-tuned for 100 epochs. During fine-tuning, only the weights of the batch normalization layers are frozen while the rest are adjusted. The hyper-parameters of the loss function are set to λ_valid = 1, λ_hole = 6, λ_perc = 0.05, λ_style = 120, and λ_tv = 0.1. History graphs of the smoothed training loss and testing performance vs. number of steps on the CelebA dataset are shown in Fig. 2. The total number of parameters of the proposed network is 82 million. This means its memory footprint, assuming float-32 representation, is roughly 312 MB, and its inference time averages 0.147 ms per image.
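The quoted memory footprint follows directly from the parameter count; as a quick arithmetic check (taking 1 MB as 2^20 bytes):

```python
def memory_footprint_mb(n_params, bytes_per_param=4):
    # float-32 weights occupy 4 bytes each; 1 MB = 2**20 bytes here.
    return n_params * bytes_per_param / 2**20

# 82 million parameters at 4 bytes each come to about 313 MiB,
# which the text rounds to roughly 312 MB.
print(round(memory_footprint_mb(82_000_000)))
```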

C. Benchmark Algorithms and Evaluation Metrics
The proposed algorithm is compared to six state-of-the-art methods, namely PIC [25], PC [22], PRVS [18], EC [6], RFR [16], and MDEFE [26]. We use PIC, a probabilistically principled framework for image inpainting, as a baseline. PC is a fundamental technique that can be considered another baseline in image inpainting. EC is a two-stage image inpainting method based on edge recovery. PRVS and RFR belong to the family of progressive image inpainting: PRVS progressively recovers the image structural information, while RFR recovers the image contextual information. These six algorithms are henceforth referred to as the benchmark algorithms. The proposed algorithm is compared to all of them using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and mean first norm loss (L_1).

V. EVALUATION RESULTS
The performance of the proposed algorithm is evaluated in this section using the setup described in Section IV. The evaluation starts with a quantitative analysis, in which the proposed algorithm is benchmarked against the others. Then, a qualitative analysis follows, presenting a comparison of the quality of inpainted images between the proposed algorithm and the benchmark algorithms. Finally, the section concludes with an ablation analysis illustrating the value of each novel component in the proposed network.

A. Quantitative Analysis
The proposed algorithm is compared to the benchmark algorithms on the basis of PSNR, SSIM, and L_1 loss. Table I presents the comparison results on the Paris StreetView, CelebA, and Places2 datasets for different choices of masking percentage (i.e., mask ratio). The performance of the proposed algorithm stands out throughout the table; despite the slim margin in some cases, it could be argued to best all other competing algorithms on all three datasets.

B. Qualitative Analysis
The above quantitative results are complemented by a visual analysis demonstrating the inpainting quality of the proposed algorithm. This is done through a few examples from the Paris StreetView, CelebA, and Place2 datasets. Figs. 3, 4, and 5 show corrupted images, their ground truth, and the images inpainted by the proposed and benchmark algorithms. The proposed algorithm can generate realistic details and structures. Specifically, in the top row of Fig. 3, the window produced by the proposed algorithm is clearer than those produced by the other methods. Further evidence can be seen in the top row of Fig. 4 and both rows of Fig. 5. In the former, the hair strands atop the man's forehead are better defined and clearer in the image produced by the proposed algorithm than in those produced by the benchmark algorithms; they resemble the strands depicted in the ground-truth image. Both rows of Fig. 5 show artifacts in the regions inpainted by the benchmark algorithms, whereas the proposed algorithm does not suffer from such artifacts, producing a more pleasing image to the eye.

C. Ablation Analysis on Proposed Architecture
The proposed architecture is closely examined to better understand the role of its novel components. More to the point, the GLE module and the reinpainting component are the novel parts that set the proposed architecture apart from other inpainting algorithms. Therefore, this section focuses on shedding light on their roles in the inpainting process. The objective is to address the question: how much of an impact do the GLE module and the reinpainting component have on the performance of the algorithm? This is done in three experiments. The first has both parts removed, and the performance of the remaining architecture is evaluated; this establishes the baseline results. The other two experiments examine the impact of adding each of the two parts, i.e., GLE and reinpainting, separately on the inpainting performance. The results of these experiments are shown below.
1) Removing the GLE Module and Reinpainting Component: The GLE module and the reinpainting component are both removed from the proposed architecture. To avoid jeopardizing the capacity of the proposed model, the GLE module is removed by stripping away the Gaussian smoothing and upsampling layers, creating a direct path from the first to the second convolution layer of the module.
Removing both parts chips away at the inpainting performance of the architecture. This is evident in Table II; across all three metrics, the table shows a clear degradation in performance on the Paris StreetView dataset when the architecture is trained and tested without the GLE module and the reinpainting component.
2) Removing the GLE Module: Using the same removal strategy as in Section V-C1, the GLE module is removed in this experiment while the reinpainting component is kept. Doing so yields a slight improvement in performance compared to the baseline case, i.e., no GLE and no reinpainting, as Table II shows. However, the performance is still worse than with both parts plugged in. The results of this experiment argue for the value of the GLE module; it helps the proposed architecture extract expressive features from different frequency components of the image.
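The frequency-splitting idea behind the GLE module can be illustrated with a single-level Gaussian-Laplacian decomposition. The toy sketch below (NumPy, a fixed 1-2-1 smoothing kernel; not the authors' implementation) separates an image into a low-frequency (contextual) band and a high-frequency (structural) band that sum back to the original:

```python
import numpy as np

def gaussian_smooth(img):
    """Separable 1-2-1 Gaussian smoothing with edge replication."""
    p = np.pad(img, 1, mode="edge")
    h = 0.25 * p[1:-1, :-2] + 0.5 * p[1:-1, 1:-1] + 0.25 * p[1:-1, 2:]  # horizontal pass
    p2 = np.pad(h, ((1, 1), (0, 0)), mode="edge")
    return 0.25 * p2[:-2, :] + 0.5 * p2[1:-1, :] + 0.25 * p2[2:, :]     # vertical pass

def gle_split(img):
    """Split into a low-frequency (Gaussian) and a high-frequency (Laplacian) band."""
    low = gaussian_smooth(img)
    high = img - low
    return low, high

rng = np.random.default_rng(0)
img = rng.random((16, 16))
low, high = gle_split(img)
assert np.allclose(low + high, img)  # the decomposition is lossless by construction
print(low.var() < img.var())         # smoothing suppresses high frequencies -> True
```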
3) Removing the Reinpainting Component: Using the same removal strategy once again, the reinpainting component is removed while the GLE module is kept. This setting is labelled Reinpainting-1 in Table II. It is hypothesized that reinpainting has the ability to fill large holes by accessing features from neighboring iterations. The results in Table II verify that hypothesis to some extent; removing the reinpainting component degrades the performance of the architecture despite the presence of the GLE module.
4) Value of Progression for Reinpainting: The hypothesis that the reinpainting component is able to fill large holes is further examined here. More to the point, it is argued that accessing sub-volumes from different iterations (i.e., F_int(τ − 1) and F_int(τ + 1)) adds value to the inpainting process. This is first done by restricting the input to the reinpainting component to only the τ-th feature sub-volume, i.e., F_int(τ). Then, two experiments are conducted, with and without the GLE components.
• The GLE modules are removed as described in Section V-C1. The performance in this case is very close to that of removing the whole reinpainting component, as indicated in Table II under Reinpainting-2. This verifies that F_int(τ) and H(τ) alone cannot provide additional useful information for the reinpainting process; the redundant information even slightly degrades performance.
• The GLE modules are put back and the experiment is repeated. Again, the results, shown in Table II, further verify that input features from neighbouring iterations are useful for enhancing the reinpainting results.
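The role of neighboring iterations can be made concrete: the reinpainting input augments the τ-th feature volume with those of adjacent iterations. A minimal sketch follows, where the (C, H, W) layout and the clamping of indices at the ends of the sequence are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def reinpaint_input(features, tau):
    """Concatenate F_int(tau-1), F_int(tau), F_int(tau+1) along the channel axis.

    `features` is a list of (C, H, W) feature volumes, one per inpainting
    iteration; indices are clamped at the sequence boundaries (an assumption).
    """
    prev_f = features[max(tau - 1, 0)]
    next_f = features[min(tau + 1, len(features) - 1)]
    return np.concatenate([prev_f, features[tau], next_f], axis=0)

# Toy feature volumes: iteration t is filled with the constant t.
feats = [np.full((4, 8, 8), float(t)) for t in range(6)]
stacked = reinpaint_input(feats, 2)
print(stacked.shape)  # (12, 8, 8): three 4-channel volumes stacked
```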

D. Ablation Analysis on Loss Function
The performance of the proposed method is further investigated based on each component of the loss function. The ablation analysis is done in three steps, each illustrating the incremental value of certain terms in the loss function. The three steps are discussed below:
• Using L_valid and L_hole: With L_valid and L_hole as the only terms of the loss function, we observe that the mean l1 is slightly better than in the final results; however, the PSNR and SSIM are not satisfactory, especially for corrupted images with large holes. The undesirable performance indicates that the loss has to account for the semantic information in the inpainted image.
• Using L_perc and L_style: Constructing the loss function from L_perc and L_style alone, we observe an improvement in the PSNR and SSIM for large holes. This shows the role and effectiveness of L_perc and L_style in discovering semantic information in the valid region; however, the mean l1 loss increases significantly due to the lack of L_valid and L_hole.
• Using L_valid, L_hole, L_perc, and L_style: With all four terms in the loss function, the PSNR, SSIM, and mean l1 all improve, except in the low mask ratio case. Moreover, for high mask ratios, the PSNR and SSIM resulting from using L_perc and L_style alone are slightly better than when combining all four terms. It is hypothesized that noise may be generated at the pixel and feature levels. To eliminate such noise, L_tv is added to the loss function, which improves the overall performance.
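The weighted combination studied above can be sketched numerically. The snippet below uses the hyper-parameter values reported in Section IV and simple stand-ins for the terms (masked L1 for the valid/hole losses and total variation); the perceptual and style terms are omitted since they require a pretrained feature extractor:

```python
import numpy as np

# Weights from the training setup; the perc/style terms are omitted in this sketch.
LAMBDA_VALID, LAMBDA_HOLE, LAMBDA_TV = 1.0, 6.0, 0.1

def masked_l1(pred, gt, mask):
    """Mean absolute error restricted to pixels where mask == 1."""
    return float(np.abs(mask * (pred - gt)).sum() / max(mask.sum(), 1.0))

def tv_loss(img):
    """Total variation: mean absolute difference between neighboring pixels."""
    return float(np.abs(np.diff(img, axis=0)).mean()
                 + np.abs(np.diff(img, axis=1)).mean())

rng = np.random.default_rng(0)
gt, pred = rng.random((32, 32)), rng.random((32, 32))
mask = (rng.random((32, 32)) > 0.3).astype(float)   # 1 = valid region, 0 = hole

total = (LAMBDA_VALID * masked_l1(pred, gt, mask)
         + LAMBDA_HOLE * masked_l1(pred, gt, 1.0 - mask)
         + LAMBDA_TV * tv_loss(pred))
print(f"total loss: {total:.3f}")
```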

VI. CONCLUSION
This paper introduces a three-stage neural network architecture that progressively inpaints corrupted images while maintaining their structural and contextual integrity. At its core is a novel Gaussian-Laplacian feature Extraction (GLE) module. Stacking GLE modules constructs the first stage of the architecture and enables the network to build a feature pyramid of different frequency components, separating structural (high-frequency) from contextual (low-frequency) information. The feature pyramid is the key to structurally- and contextually-aware progressive inpainting; low- and high-frequency components are iteratively but separately inpainted and fused in the second stage. The third, and final, stage enhances the fused features before reconstructing the inpainted image. Experimental results and benchmarking show that the three-stage architecture is able to restore fine details in the corrupted region, outperforming state-of-the-art algorithms. Ablation experiments reveal that the GLE module and the reinpainting component are responsible for the superior performance of the proposed architecture.
Let m be the ground truth of the corrupted region, and let n and c be the valid region and the corrupted region of the corrupted image, respectively; then a = m ∪ n and b = c ∪ n. Note that the conditional distribution of m given n, denoted by p_M|N, is a projected version of p_A|B and can be learned from the training dataset. The MAP estimation of a based on b can thus be reduced to the MAP estimation of m based on n: m̂_MAP = arg max_m p_M|N(m | n).
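Written out, the reduction relies on n being shared between a and b; assuming the projection of p_A|B onto the corrupted region noted above, the estimate decomposes as:

```latex
\hat{a}_{\mathrm{MAP}}
  = \arg\max_{a} \, p_{A|B}(a \mid b)
  = n \cup \arg\max_{m} \, p_{M|N}(m \mid n)
  = n \cup \hat{m}_{\mathrm{MAP}},
\qquad \text{where } a = m \cup n .
```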

Fig. 1. A graphical description of the proposed solution. It shows all three stages and details their main components and elements. In the feature extraction stage, all input images are converted into six feature volumes, F_1, F_2, F_3, F_4, F_5, F_6, which are classified as low- and high-frequency feature volumes, F_low(0) and F_high(0). In the iterative inpainting stage, these feature volumes (F_low(τ) and F_high(τ), 0 ≤ τ ≤ 6) are utilized to recover the missing region progressively and generate inpainted features at each iteration, namely F_int(τ). Furthermore, the re-inpainting component enhances each inpainted feature by leveraging features from neighboring iterations and stores the enhanced features as F_reinp(τ). In the end, the reconstruction component uses all enhanced features to produce the fully recovered image.

Fig. 2. History graphs of the smoothed training loss and testing performance vs. number of steps on the CelebA dataset.
F_merged ← FeatureMerge(FeaturePool)
19: I_out ← Reconstruction(F_merged)
20: return I_out
• Place2 Dataset [28]: This dataset contains 8 million images collected from 365 scene categories, such as streets and indoor rooms.
• NVIDIA Irregular Mask Dataset [29]: A popular irregular mask dataset containing 12000 irregular masks randomly drawn by individuals.

TABLE II
ABLATION STUDY RESULTS ON THE PARIS STREETVIEW DATASET BASED ON NETWORK STRUCTURE