1 Introduction

Image inpainting, the task of filling in holes in an image, has many applications. For example, it can be used in image editing to remove unwanted content while filling the resulting space with plausible imagery. Previous deep learning approaches have focused on rectangular regions located around the center of the image and often rely on expensive post-processing. The goal of this work is to propose a model for image inpainting that operates robustly on irregular hole patterns (see Fig. 1) and produces semantically meaningful predictions that blend smoothly with the rest of the image, without the need for any additional post-processing or blending operation.

Fig. 1. Masked images and corresponding inpainted results using our partial-convolution based network.

Fig. 2. From left to right, top to bottom: (a) image with hole. (b) inpainted result of PatchMatch [2]. (c) inpainted result of Iizuka et al. [10]. (d) inpainted result of Yu et al. [36]. (e) and (f) use the same network architecture as Sect. 3.2 but with typical convolutional layers: (e) initializes the holes with the constant pixel value 127.5, while (f) uses the mean ImageNet pixel value. (g) our partial convolution based result, which is agnostic to the hole values.

Recent image inpainting approaches that do not use deep learning rely on the statistics of the remaining image to fill in the hole. PatchMatch [2], one of the state-of-the-art methods, iteratively searches for the best fitting patches to fill in the holes. While this approach generally produces smooth results, it is limited by the available image statistics and has no concept of visual semantics. For example, in Fig. 2(b), PatchMatch was able to smoothly fill in the missing components of the painting using image patches from the surrounding shadow and wall, but a semantically-aware approach would make use of patches from the painting instead.

Deep neural networks learn semantic priors and meaningful hidden representations in an end-to-end fashion, and have been used in recent image inpainting efforts. These networks apply convolutional filters to images in which the removed content has been replaced with a fixed value. As a result, these approaches depend on the initial hole values, which often manifests as a lack of texture in the hole regions, obvious color contrasts, or artificial edge responses surrounding the hole. Examples using a U-Net architecture with typical convolutional layers and various hole-value initializations can be seen in Fig. 2(e) and (f) (for both, training and testing use the same initialization scheme).

Conditioning the output on the hole values ultimately results in various types of visual artifacts that necessitate expensive post-processing. For example, Iizuka et al. [10] use fast marching [30] and Poisson image blending [21], while Yu et al. [36] employ a follow-up refinement network to refine their raw network predictions. However, such refinements cannot resolve all of the artifacts, as shown in Fig. 2(c) and (d). Our work aims to achieve well-incorporated hole predictions that are independent of the hole initialization values and require no additional post-processing.

Another limitation of many recent approaches is their focus on rectangular holes, often assumed to be centered in the image. We find that these limitations may lead to overfitting to rectangular holes and ultimately limit the utility of these models in applications. Pathak et al. [20] and Yang et al. [34] assume \(64\times 64\) square holes at the center of a 128 \(\times \) 128 image. Iizuka et al. [10] and Yu et al. [36] remove the centered-hole assumption and can handle irregularly shaped holes, but do not perform an extensive quantitative analysis on a large number of images with irregular masks (51 test images in [8]). In order to focus on the more practical irregular-hole use case, we collect a large benchmark of images with irregular masks of varying sizes. In our analysis, we look at the effects of not just the size of the hole, but also whether the holes are in contact with the image border.

To properly handle irregular masks, we propose the use of a Partial Convolutional Layer, comprising a masked and re-normalized convolution operation followed by a mask-update step. The concept of a masked and re-normalized convolution is also referred to as segmentation-aware convolution in [7] for the image segmentation task; however, they did not modify the input mask. Our use of partial convolutions is such that, given a binary mask, our convolutional results depend only on the non-hole regions at every layer. Our main extension is the automatic mask update step, which removes the masking wherever the partial convolution was able to operate on at least one unmasked value. Given sufficient layers of successive updates, even the largest masked holes will eventually shrink away, leaving only valid responses in the feature map. The partial convolutional layer ultimately makes our model agnostic to placeholder hole values.

In summary, we make the following contributions:

  • we propose the use of partial convolutions with an automatic mask update step for achieving state-of-the-art results on image inpainting.

  • while previous works fail to achieve good inpainting results when using typical convolutions with skip links in a U-Net [32], we demonstrate that substituting convolutional layers with partial convolutions and mask updates can achieve state-of-the-art inpainting results.

  • to the best of our knowledge, we are the first to demonstrate the efficacy of training image-inpainting models on irregularly shaped holes.

  • we propose a large irregular mask dataset, which will be released to the public to facilitate future efforts in training and evaluating inpainting models.

2 Related Work

Non-learning approaches to image inpainting rely on propagating appearance information from neighboring pixels to the target region using mechanisms such as a distance field [1, 3, 30]. However, these methods can only handle narrow holes, where the color and texture variance is small. Large holes may result in over-smoothing or artifacts resembling Voronoi regions, as in [30]. Patch-based methods such as [5, 15] operate by iteratively searching for relevant patches from the image's non-hole regions or from other source images. However, these searches come at a large computational cost, as in [26]. PatchMatch [2] speeds this up with a faster algorithm for finding similar patches. However, these approaches are still not fast enough for real-time applications and cannot make semantically aware patch selections.

Deep learning based methods typically initialize the holes with a constant placeholder value, e.g. the mean ImageNet pixel value [24], and then pass the image through a convolutional network. Due to the resulting artifacts, post-processing is often used to ameliorate the effects of conditioning on the placeholder value. Context Encoders [20] first embed the 128 \(\times \) 128 image with a 64 \(\times \) 64 center hole into a low-dimensional feature space and then decode the features to a 64 \(\times \) 64 image. Yang et al. [34] take the result of Context Encoders as input and then propagate texture information from non-hole regions to fill the hole regions as post-processing. Song et al. [28] use a refinement network in which a blurry initial hole-filling result serves as the input and is then iteratively replaced with patches from the closest non-hole regions in feature space. Li et al. [16] and Iizuka et al. [10] extend Context Encoders by defining both global and local discriminators; Iizuka et al. [10] additionally apply Poisson blending as a post-process. Following [10], Yu et al. [36] replace the post-processing with a refinement network powered by contextual attention layers.

Amongst the deep learning approaches, several other efforts also ignore the mask placeholder values. Yeh et al. [35] search for the closest encoding to the corrupted image in a latent space, which is then used to condition the output of a hole-filling generator. Ulyanov et al. [32] further found that the network requires no training on an external dataset and can rely on the structure of the generative network itself to complete the corrupted image. However, this approach can require a different set of hyperparameters for every image and needs several optimization iterations to achieve good results. Moreover, their design [32] is not able to use skip links, which are known to produce detailed output. With standard convolutional layers, the raw features of noise or incorrect hole initialization values in the encoder stage would propagate to the decoder stage. Our work also does not depend on placeholder values in the hole regions, but we additionally aim to achieve good results in a single feedforward pass and enable the use of skip links to create detailed predictions.

Our work makes extensive use of a masked or reweighted convolution operation, which allows us to condition the output only on valid inputs. Harley et al. [7] recently made use of this approach with a soft attention mask for semantic segmentation. It has also been used for full-image generation in PixelCNN [18], to condition the next pixel only on previously synthesized pixels. Uhrig et al. [31] proposed sparsity invariant CNNs with a reweighted convolution and a max-pooling based mask updating mechanism for depth completion. For image inpainting, Ren et al. [22] proposed the Shepard convolution layer, in which the same kernel is applied for both the feature and mask convolutions. The mask convolution result acts as both the reweighting denominator and the updated mask, which does not guarantee that holes shrink during updating, since the kernel may contain negative entries. It also cannot handle large holes properly. Discussions of other CNN variants such as [4] are beyond the scope of this work.

3 Approach

Our proposed model uses stacked partial convolution operations and mask updating steps to perform image inpainting. We first define our convolution and mask update mechanism, then discuss model architecture and loss functions.

3.1 Partial Convolutional Layer

We refer to our partial convolution operation and mask update function jointly as the Partial Convolutional Layer. Let \(\mathbf {W}\) be the convolution filter weights and b the corresponding bias. \(\mathbf {X}\) are the feature values (pixel values) for the current convolution (sliding) window and \(\mathbf {M}\) is the corresponding binary mask. The partial convolution at every location, similarly defined as in [7], is expressed as:

$$\begin{aligned} x' = {\left\{ \begin{array}{ll} \mathbf {W}^T(\mathbf {X}\odot \mathbf {M})\frac{\text {sum}(\mathbf {1})}{\text {sum}(\mathbf {M})} + b, &{} \text {if } \text {sum}(\mathbf {M})>0 \\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(1)

where \(\odot \) denotes element-wise multiplication, and \(\mathbf {1}\) has the same shape as \(\mathbf {M}\) but with all elements equal to 1. As can be seen, output values depend only on the unmasked inputs. The scaling factor \(\text {sum}(\mathbf {1})/\text {sum}(\mathbf {M})\) adjusts for the varying amount of valid (unmasked) inputs.

After each partial convolution operation, we update our mask as follows: if the convolution was able to condition its output on at least one valid input value, we mark that location as valid. This is expressed as:

$$\begin{aligned} m'={\left\{ \begin{array}{ll} 1, &{} \text {if } \text {sum}(\mathbf {M})>0 \\ 0, &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(2)

and can easily be implemented in any deep learning framework as part of the forward pass. With sufficient successive applications of the partial convolution layer, any mask will eventually become all ones, provided the input contains any valid pixels.

3.2 Network Architecture and Implementation

Implementation. The partial convolution layer is implemented by extending standard PyTorch [19], although it could be made more efficient in both time and memory with custom layers. The straightforward implementation defines binary masks of size \(\hbox {C} \times \hbox {H} \times \hbox {W}\), the same size as their associated images/features, and implements mask updating with a fixed convolution layer that has the same kernel size as the partial convolution operation, but with weights identically set to 1 and no bias. The entire network inference on a 512 \(\times \) 512 image takes 0.029 s on a single NVIDIA V100 GPU, regardless of the hole size.
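To make this concrete, below is a minimal PyTorch sketch of a partial convolutional layer implementing Eqs. (1) and (2) with the fixed all-ones mask convolution just described. It is a simplified illustration rather than our released implementation; the class name PartialConv2d, the per-channel \(\hbox {C} \times \hbox {H} \times \hbox {W}\) mask convention, and the small epsilon used to avoid division by zero are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Sketch of a partial convolution layer (Eqs. (1) and (2))."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed convolution with all-ones weights and no bias; it only counts
        # how many valid (unmasked) inputs fall inside each sliding window.
        self.register_buffer(
            "mask_kernel", torch.ones(out_ch, in_ch, kernel_size, kernel_size))
        self.window_size = in_ch * kernel_size * kernel_size   # sum(1)
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: 1 for valid pixels, 0 for holes; same C x H x W shape as x.
        with torch.no_grad():
            valid = F.conv2d(mask, self.mask_kernel,
                             stride=self.stride, padding=self.padding)   # sum(M)
        out = self.conv(x * mask)                         # W^T (X (.) M) + b
        bias = self.conv.bias.view(1, -1, 1, 1)
        ratio = self.window_size / valid.clamp(min=1e-8)  # sum(1) / sum(M)
        out = (out - bias) * ratio + bias                 # re-normalization, Eq. (1)
        new_mask = (valid > 0).float()                    # mask update, Eq. (2)
        return out * new_mask, new_mask                   # zero out where sum(M) = 0
```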

Network Design. We design a UNet-like architecture [23] similar to the one used in [11], replacing all convolutional layers with partial convolutional layers and using nearest neighbor up-sampling in the decoding stage. The skip links concatenate two feature maps and two masks, respectively, which serve as the feature and mask inputs for the next partial convolution layer. The input to the last partial convolution layer contains the concatenation of the original input image with holes and the original mask, making it possible for the model to copy non-hole pixels. Network details can be found in the supplementary file.
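As an illustration of the skip links, the following hedged sketch shows one decoding step: nearest-neighbor up-sampling of both the decoder features and their mask, followed by concatenation with the corresponding encoder features and mask before the next partial convolution. It reuses the PartialConv2d sketch above; the function name and the layer sizes implied by it are illustrative, not the exact architecture from the supplementary file.

```python
import torch
import torch.nn.functional as F

def decode_step(feat_dec, mask_dec, feat_enc, mask_enc, pconv):
    # Nearest-neighbor up-sampling of both the decoder features and their mask.
    feat_up = F.interpolate(feat_dec, scale_factor=2, mode="nearest")
    mask_up = F.interpolate(mask_dec, scale_factor=2, mode="nearest")
    # Skip link: concatenate decoder/encoder features and their masks.
    feat_cat = torch.cat([feat_up, feat_enc], dim=1)
    mask_cat = torch.cat([mask_up, mask_enc], dim=1)
    # The next partial convolution layer consumes both the features and the mask.
    return pconv(feat_cat, mask_cat)
```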

Partial Convolution as Padding. We use the partial convolution with appropriate masking at image boundaries in lieu of typical padding. This ensures that the inpainted content at the image border will not be affected by invalid values outside of the image – which can be interpreted as another hole.

3.3 Loss Functions

Our loss functions target per-pixel reconstruction accuracy as well as composition, i.e. how smoothly the predicted hole values transition into their surrounding context.

Given the input image with holes \(\mathbf {I}_{in}\), the initial binary mask \(\mathbf {M}\) (0 for holes), the network prediction \(\mathbf {I}_{out}\), and the ground truth image \(\mathbf {I}_{gt}\), we first define our per-pixel losses \(\mathcal {L}_{hole} = \Vert (1-\mathbf {M})\odot (\mathbf {I}_{out} - \mathbf {I}_{gt})\Vert _1\) and \(\mathcal {L}_{valid} = \Vert \mathbf {M} \odot (\mathbf {I}_{out} -\mathbf {I}_{gt})\Vert _1\). These are the \(L^1\) losses on the network output for the hole and the non-hole pixels respectively.
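A minimal sketch of these two terms is shown below, assuming image tensors of shape (N, 3, H, W), a broadcastable single-channel mask with 1 on valid pixels, and the mean absolute error as the \(L^1\) norm; the exact normalization is an assumption.

```python
import torch

def pixel_losses(I_out, I_gt, M):
    # L_hole: L1 error on hole pixels (M = 0); L_valid: L1 error on valid pixels.
    l_hole = torch.mean(torch.abs((1 - M) * (I_out - I_gt)))
    l_valid = torch.mean(torch.abs(M * (I_out - I_gt)))
    return l_hole, l_valid
```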

Next, we define the perceptual loss, introduced by Gatys et al. [6]:

$$\begin{aligned} \mathcal {L}_{perceptual} = \sum _{n=0}^{N-1} {\Vert {\varPsi }_{n}(\mathbf {I}_{out})-{\varPsi }_{n}(\mathbf {I}_{gt})\Vert }_{1} + \sum _{n=0}^{N-1} {\Vert {\varPsi }_{n}(\mathbf {I}_{comp})-{\varPsi }_{n}(\mathbf {I}_{gt})\Vert }_{1} \end{aligned}$$
(3)

Here, \(\mathbf {I}_{comp}\) is the raw output image \(\mathbf {I}_{out}\), but with the non-hole pixels directly set to ground truth. The perceptual loss computes the \(L^1\) distances between both \(\mathbf {I}_{out}\) and \(\mathbf {I}_{comp}\) and the ground truth, but after projecting these images into higher level feature spaces using an ImageNet-pretrained VGG-16 [27]. \(\varPsi _{n}\) is the activation map of the nth selected layer. We use layers pool1, pool2 and pool3 for our loss.
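A hedged sketch of Eq. (3) is given below. It assumes torchvision (version 0.13 or later) provides the pretrained VGG-16, that pool1/pool2/pool3 correspond to indices 4, 9 and 16 of vgg16().features, and that ImageNet normalization of the inputs is handled upstream; these details are assumptions, not a specification of our exact implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGG16Pools(nn.Module):
    """Returns the activations after pool1, pool2 and pool3 of VGG-16."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        # Slices ending at the pool1, pool2 and pool3 layers respectively.
        self.slices = nn.ModuleList([vgg[:5], vgg[5:10], vgg[10:17]])

    def forward(self, x):
        feats = []
        for s in self.slices:
            x = s(x)
            feats.append(x)
        return feats

def perceptual_loss(psi, I_out, I_comp, I_gt):
    # L1 distance to the ground-truth features for both I_out and I_comp.
    loss = 0.0
    for a, b, g in zip(psi(I_out), psi(I_comp), psi(I_gt)):
        loss = loss + torch.mean(torch.abs(a - g)) + torch.mean(torch.abs(b - g))
    return loss
```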

We further include a style-loss term, which is similar to the perceptual loss [6], but in which we first perform an autocorrelation (Gram matrix) on each feature map before applying the \(L^1\) distance:

$$\begin{aligned} \mathcal {L}_{style_{out}} = \sum _{n=0}^{N-1} \Big \Vert K_n \Big ( \big (\varPsi _{n}(\mathbf {I}_{out})\big )^{\intercal } \big (\varPsi _{n}(\mathbf {I}_{out})\big ) - \big (\varPsi _{n}(\mathbf {I}_{gt})\big )^{\intercal } \big (\varPsi _{n}(\mathbf {I}_{gt})\big ) \Big ) \Big \Vert _{1} \end{aligned}$$
(4)
$$\begin{aligned} \mathcal {L}_{style_{comp}} = \sum _{n=0}^{N-1} \Big \Vert K_n \Big ( \big (\varPsi _{n}(\mathbf {I}_{comp})\big )^{\intercal } \big (\varPsi _{n}(\mathbf {I}_{comp})\big ) - \big (\varPsi _{n}(\mathbf {I}_{gt})\big )^{\intercal } \big (\varPsi _{n}(\mathbf {I}_{gt})\big ) \Big ) \Big \Vert _{1} \end{aligned}$$
(5)

Here, we note that the matrix operations assume that the high-level features \(\varPsi _{n}(x)\) are of shape \((H_nW_n)\times C_n\), resulting in a \(C_n\times C_n\) Gram matrix, and \(K_n\) is the normalization factor \(1/(C_nH_nW_n)\) for the nth selected layer. Again, we include loss terms for both the raw output and the composited output.
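The corresponding sketch for Eqs. (4) and (5) is shown below, reusing the VGG16Pools extractor from the previous sketch; the batched Gram computation and the reduction of the \(L^1\) norm by a mean are illustrative assumptions.

```python
import torch

def gram(feat):
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)              # each sample: C_n x (H_n W_n)
    g = torch.bmm(f, f.transpose(1, 2))     # C_n x C_n Gram matrix (autocorrelation)
    return g / (c * h * w)                  # K_n = 1 / (C_n H_n W_n)

def style_loss(psi, I_pred, I_gt):
    # Called twice: once with I_pred = I_out and once with I_pred = I_comp.
    loss = 0.0
    for fp, fg in zip(psi(I_pred), psi(I_gt)):
        loss = loss + torch.mean(torch.abs(gram(fp) - gram(fg)))
    return loss
```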

Our final loss term is the total variation (TV) loss \(\mathcal {L}_{tv}\), the smoothing penalty [12] on P, where P is the 1-pixel dilation of the hole region.

$$\begin{aligned} \mathcal {L}_{tv} = \sum _{(i,j) \in P, (i,j+1) \in P} {\Vert \mathbf {I}_{comp}^{i,j+1} - \mathbf {I}_{comp}^{i,j}\Vert _{1}} + \sum _{(i,j) \in P, (i+1,j) \in P}{\Vert \mathbf {I}_{comp}^{i+1,j} - \mathbf {I}_{comp}^{i,j}\Vert _{1}} \end{aligned}$$
(6)
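A hedged sketch of Eq. (6) follows; here the 1-pixel dilated region P is obtained with a 3 \(\times \) 3 max-pool over the hole indicator, and the mask is assumed single-channel and broadcastable against the image — both are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def tv_loss(I_comp, M):
    hole = 1.0 - M                                               # 1 inside holes
    P = F.max_pool2d(hole, kernel_size=3, stride=1, padding=1)   # 1-pixel dilation
    # Penalize horizontal / vertical differences only where both pixels lie in P.
    in_p_col = P[:, :, :, :-1] * P[:, :, :, 1:]
    in_p_row = P[:, :, :-1, :] * P[:, :, 1:, :]
    loss = torch.mean(torch.abs(in_p_col * (I_comp[:, :, :, 1:] - I_comp[:, :, :, :-1])))
    loss = loss + torch.mean(torch.abs(in_p_row * (I_comp[:, :, 1:, :] - I_comp[:, :, :-1, :])))
    return loss
```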

The total loss \(\mathcal {L}_{total}\) is the combination of all the above loss functions.

$$\begin{aligned} \mathcal {L}_{total} = \mathcal {L}_{valid} \,+\, 6\mathcal {L}_{hole} \,+\, 0.05\mathcal {L}_{perceptual} \,+\, 120(\mathcal {L}_{style_{out}} \,+\, \mathcal {L}_{style_{comp}}) \,+\, 0.1\mathcal {L}_{tv} \end{aligned}$$
(7)

The loss term weights were determined by performing a hyperparameter search on 100 validation images.
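For completeness, the weighted combination of Eq. (7) expressed as code, using the weights found by the hyperparameter search above.

```python
def total_loss(l_valid, l_hole, l_perc, l_style_out, l_style_comp, l_tv):
    # Weighted sum of Eq. (7).
    return (l_valid + 6.0 * l_hole + 0.05 * l_perc
            + 120.0 * (l_style_out + l_style_comp) + 0.1 * l_tv)
```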

Ablation Study of Different Loss Terms. Perceptual loss [12] is known to generate checkerboard artifacts. Johnson et al. [12] suggest ameliorating the problem with the total variation (TV) loss. We found this insufficient for our model. Figure 3(b) shows the result of a model trained by removing \(\mathcal {L}_{style_{out}}\) and \(\mathcal {L}_{style_{comp}}\) from \(\mathcal {L}_{total}\); for our model, the additional style loss term is necessary. However, not every weighting scheme for the style loss generates plausible results. Figure 3(f) shows the result of a model trained with a small style loss weight; compared to the result of the model trained with the full \(\mathcal {L}_{total}\) in Fig. 3(g), it has many fish-scale artifacts. The perceptual loss is also important: grid-shaped artifacts are less prominent in the results with the full \(\mathcal {L}_{total}\) (Fig. 3(k)) than in the results without the perceptual loss (Fig. 3(j)). We hope this discussion will be useful to readers interested in employing VGG-based high-level losses.

Fig. 3. Top row, from left to right: (a) input image with hole, (b) result without \(\mathcal {L}_{style}\), (c) result using full \(\mathcal {L}_{total}\), (d) ground truth (GT). Middle row: (e) input, (f) result using a small style loss weight, (g) full \(\mathcal {L}_{total}\), (h) GT. Bottom row: (i) input, (j) result without \(\mathcal {L}_{perceptual}\), (k) full \(\mathcal {L}_{total}\), (l) GT.

4 Experiments

4.1 Irregular Mask Dataset

Previous works generate holes in their datasets by randomly removing rectangular regions within their images. We consider this insufficient for creating the diverse hole shapes and sizes that we need. As such, we begin by collecting masks of random streaks and holes of arbitrary shapes. We found the results of the occlusion/dis-occlusion mask estimation method between two consecutive frames of video, as described in [29], to be a good source of such patterns. We generate 55,116 masks for training and 24,866 masks for testing. During training, we augment the mask dataset by randomly sampling a mask from the 55,116 masks and then performing random dilation, rotation and cropping. All masks and images for training and testing are of size 512 \(\times \) 512.
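A hedged sketch of this training-time mask augmentation (random dilation, rotation and cropping) using OpenCV is shown below; the specific parameter ranges, the uint8 hole-equals-255 convention, and the assumption that raw masks are at least 512 \(\times \) 512 are illustrative choices, not the exact settings we used.

```python
import cv2
import numpy as np

def augment_mask(raw_mask, out_size=512):
    # raw_mask: uint8 array, 255 where the hole is, 0 elsewhere.
    k = np.random.randint(1, 10)
    mask = cv2.dilate(raw_mask, np.ones((k, k), np.uint8), iterations=1)
    h, w = mask.shape
    angle = np.random.uniform(-180.0, 180.0)
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    mask = cv2.warpAffine(mask, rot, (w, h))         # rotated hole pattern
    # Random crop back to the training resolution.
    y = np.random.randint(0, max(1, h - out_size + 1))
    x = np.random.randint(0, max(1, w - out_size + 1))
    return mask[y:y + out_size, x:x + out_size]
```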

We create a test set by starting with the 24,866 raw masks and adding random dilation, rotation and cropping. Many previous methods, such as [10], have degraded performance for holes near the image borders. We therefore divide the test set into two: masks with and without holes close to the border. The split whose holes are distant from the border ensures a distance of at least 50 pixels from the border.

We also further categorize our masks by hole size. Specifically, we generate 6 categories of masks with different hole-to-image area ratios: (0.01, 0.1], (0.1, 0.2], (0.2, 0.3], (0.3, 0.4], (0.4, 0.5], (0.5, 0.6]. Each category contains 1000 masks with and 1000 masks without the border constraint. In total, we have created \(6\times 2 \times 1000=12{,}000\) masks. Some examples of each category's masks can be found in Fig. 4.
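The categorization itself can be expressed as a short sketch; the binary-mask convention (1 = valid, 0 = hole) and the border test below are assumptions about one reasonable way to compute the split described above.

```python
import numpy as np

def categorize_mask(mask, border=50):
    # mask: 2-D array, 1 for valid pixels, 0 for holes.
    hole = (mask == 0)
    ratio = hole.mean()                      # hole-to-image area ratio
    edges = [0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
    category = None                          # categories 1..6; None if outside range
    for i in range(len(edges) - 1):
        if edges[i] < ratio <= edges[i + 1]:
            category = i + 1
    # The mask "touches the border" if any hole pixel lies within 50 px of it.
    interior = hole[border:-border, border:-border]
    touches_border = hole.sum() != interior.sum()
    return category, touches_border
```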

Fig. 4. Some test masks for each hole-to-image area ratio category. Categories 1, 3 and 5 are shown using examples with the border constraint; categories 2, 4 and 6 using examples without the border constraint.

4.2 Training Process

Training Data. We use 3 separate image datasets for training and testing: the ImageNet dataset [24], the Places2 dataset [37] and CelebA-HQ [13, 17]. We use the original train, test, and validation splits for ImageNet and Places2. For CelebA-HQ, we randomly partition the images into 27K for training and 3K for testing.

Training Procedure. We initialize the weights using the initialization method described in [9] and use Adam [14] for optimization. We train on a single NVIDIA V100 GPU (16GB) with a batch size of 6.

Initial Training and Fine-Tuning. Holes present a problem for Batch Normalization because the mean and variance are computed over hole pixels as well, so it would make sense to disregard masked locations. However, the holes are gradually filled by each partial convolution layer and are usually gone entirely by the decoder stage.

In order to use Batch Normalization in the presence of holes, we first turn on Batch Normalization for the initial training using a learning rate of 0.0002. Then, we fine-tune using a learning rate of 0.00005 and freeze the Batch Normalization parameters in the encoder part of the network. We keep Batch Normalization enabled in the decoder. This not only avoids the incorrect mean and variance issues, but also helps us to achieve faster convergence. ImageNet and Places2 models train for 10 days, whereas CelebA-HQ trains in 3 days. All fine-tuning is performed in one day.
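A hedged sketch of this fine-tuning recipe is shown below; the attribute name model.encoder and the decision to freeze both the running statistics and the affine parameters are assumptions made for illustration.

```python
import torch.nn as nn

def freeze_encoder_bn(model):
    # Put encoder BatchNorm layers in eval mode so their running mean/variance
    # are no longer updated from (partially hole-derived) batch statistics,
    # and stop gradient updates to their affine parameters.
    for m in model.encoder.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.eval()
            for p in m.parameters():
                p.requires_grad = False

# During fine-tuning (learning rate 0.00005), call freeze_encoder_bn(model)
# after model.train() so that only the encoder BatchNorm layers stay frozen
# while the decoder BatchNorm layers continue to train normally.
```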

4.3 Comparisons

We compare with 4 methods:

  • PM: PatchMatch [2], the state-of-the-art non-learning based approach

  • GL: Method proposed by Iizuka et al. [10]

  • GntIpt: Method proposed by Yu et al. [36]

  • Conv: Same network structure as our method but using typical convolutional layers. Loss weights were re-determined via hyperparameter search.

Fig. 5. Comparisons of test results on ImageNet. (a) Input. (b) PM. (c) GL. (d) GntIpt. (e) PConv. (f) GT.

Our method is denoted as PConv. A fair comparison with GL and GntIpt would require retraining their models on our data. However, both approaches train local discriminators that assume the availability of local bounding boxes for the holes, which would not make sense for the shapes of our masks. As such, we directly use their released pre-trained models. For PatchMatch, we used a third-party implementation. As we do not know their train-test splits, our own splits will likely differ from theirs. We evaluate on 12,000 images, randomly assigning our masks to images without replacement.

Qualitative Comparisons. Figures 5 and 6 show the comparisons on ImageNet and Places2 respectively. GT represents the ground truth. We compare with GntIpt [36] on CelebA-HQ in Fig. 9. GntIpt tested on CelebA-HQ at 256 \(\times \) 256, so we downsample the images to 256 \(\times \) 256 before feeding them into their model. It can be seen that PM may copy semantically incorrect patches to fill holes, while GL and GntIpt sometimes fail to achieve plausible results through post-processing or the refinement network. Figure 7 shows the results of Conv, which exhibit the distinct artifacts caused by conditioning on the hole placeholder values.

Fig. 6. Comparison of test results on Places2 images. (a) Input. (b) PM. (c) GL. (d) GntIpt. (e) PConv. (f) GT.

Fig. 7. Comparison between typical convolution layer based results (Conv) and partial convolution layer based results (PConv).

Quantitative Comparisons. As mentioned in [36], there is no good numerical metric to evaluate image inpainting results due to the existence of many possible solutions. Nevertheless we follow the previous image inpainting works [34, 36] by reporting \(\ell _1\) error, PSNR, SSIM [33], and the inception score [25]. \(\ell _1\) error, PSNR and SSIM are reported on Places2, whereas the Inception score (IScore) is reported on ImageNet. Note that the released model for [10] was trained only on Places2, which we use for all evaluations. Table 1 shows the comparison results. It can be seen that our method outperforms all the other methods on these measurements on irregular masks.
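For reference, the \(\ell _1\) error and PSNR reported in Table 1 can be computed as in the sketch below, assuming images scaled to [0, 1]; SSIM [33] and the inception score [25] follow their standard third-party implementations. This is a sketch of the metric definitions, not our exact evaluation script.

```python
import numpy as np

def l1_error(pred, gt):
    # Mean absolute error over all pixels and channels.
    return np.mean(np.abs(pred - gt))

def psnr(pred, gt, max_val=1.0):
    # Peak signal-to-noise ratio in dB.
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```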

Table 1. Comparisons with various methods. Columns represent different hole-to-image area ratios. N = no border, B = border

User Study. In addition to the quantitative comparisons, we also evaluate our algorithm via a human subjective study. We perform pairwise A/B tests, deployed on the Amazon Mechanical Turk (MTurk) platform, without showing the hole positions or the original input image with holes. We perform two kinds of experiments: unlimited time and limited time. We also report the cases with and without holes close to the image boundaries separately. For each setting, we randomly select 300 images per method, and each image is compared 10 times.

For the unlimited time setting, the workers are given two images at once: each generated by a different method. The workers are then given unlimited time to select which image looks more realistic. We also shuffle the image order to ensure unbiased comparisons. The results across all different hole-to-image area ratios are summarized in Fig. 8(a). The first row shows the results where the holes are at least 50 pixels away from the image border, while the second row shows the case where the holes may be close to or touch image border. As can be seen, our method performs significantly better than all the other methods (50% means two methods perform equally well) in both cases.

For the limited time setting, we compare all methods (including ours) to the ground truth. In each comparison, the result of one method is chosen and shown to the workers along with the ground truth for a limited amount of time. The workers are then asked to select which image looks more natural. This evaluates how quickly the difference between the images can be perceived. The comparison results for different time intervals are shown in Fig. 8(b). Again, the first row shows the case where the holes do not touch the image boundary while the second row allows that. Our method outperforms the other methods in most cases across different time periods and hole-to-image area ratios.

Fig. 8. User study results. We perform two kinds of experiments: unlimited time and limited time. (a) In the unlimited time setting, we compare our result with the result generated by another method and graph the rate at which our result is preferred; 50% means the two methods are equal. In the first row, the holes are not allowed to touch the image boundary, while in the second row they are. (b) In the limited time setting, we compare all methods to the ground truth. The subject is given a limited amount of time (250 ms, 1000 ms or 4000 ms) to select which image is more realistic, and we report the rate at which the ground truth is preferred over the other method. The lower the curve, the better.

Fig. 9. Test results on CelebA-HQ. (a) Input. (b) GntIpt. (c) PConv (Ours). (d) Ground Truth.

5 Discussion and Extension

5.1 Discussion

We propose the use of a partial convolution layer with an automatic mask updating mechanism and achieve state-of-the-art image inpainting results. Our model can robustly handle holes of any shape, size, location, or distance from the image borders. Further, our performance does not deteriorate catastrophically as holes increase in size, as seen in Fig. 10. However, one limitation of our method is that it fails for some sparsely structured images, such as the bars on the door in Fig. 11, and, like most methods, it struggles on the largest of holes.

Fig. 10. Inpainting results with increasing dilation of the hole region; from left to right: 0, 5, 15, 35, 55, and 95 pixels of dilation. Top row: input; bottom row: corresponding inpainted results.

Fig. 11. Failure cases. Each group is ordered as input, our result, and ground truth.