1 Introduction

Generative image modeling is of fundamental interest in computer vision and machine learning. Early works [20, 21, 26, 30, 32, 36] studied statistical and physical principles of building generative models, but due to the lack of effective feature representations, their results are limited to textures or particular patterns such as well-aligned faces. Recent advances in representation learning with deep neural networks [16, 29] have nourished a series of deep generative models that combine generative modeling and representation learning through Bayesian inference [1, 9, 14, 15, 28, 34] or adversarial training [3, 8]. These works show promising results in generating natural images, but the generated samples are still low-resolution and far from perfect because of the fundamental challenges of learning unconditioned generative models of images.

Fig. 1. An example that demonstrates the problem of conditioned image generation from visual attributes. We assume a vector of visual attributes is extracted from a natural language description, and then this attribute vector is combined with learned latent factors to generate diverse image samples. (Color figure online)

In this paper, we are interested in generating object images from a high-level description. For example, we would like to generate portrait images that all match the description “a young girl with brown hair is smiling” (Fig. 1). This conditioned treatment reduces sampling uncertainty and helps generate more realistic images, and thus has potential real-world applications such as forensic art and semantic photo editing [12, 19, 40]. High-level descriptions are usually given in natural language, but what underlies the corresponding images is essentially a set of facts or visual attributes extracted from the sentence. In the example above, the attributes are (hair color: brown), (gender: female), (age: young) and (expression: smile). Based on this assumption, we propose to learn an attribute-conditioned generative model.

Indeed, image generation is a complex process that involves many factors. Beyond the listed attributes, there are many unknown or latent factors. It has been shown that such latent factors can be made interpretable according to their semantic or physical meanings [4, 17, 27]. Inspired by layered image models [23, 38], we disentangle the latent factors into two groups: one related to uncertain properties of the foreground object and the other related to the background, and we model the generation process as a layered composition. In particular, the foreground is overlaid on the background, so the background visibility depends on the foreground shape and position. We therefore propose a novel layered image generative model with disentangled foreground and background latent variables. The background is first generated from the background variables; the foreground variables are then combined with the given attributes to generate the object layer and a shape map that determines the visibility of the background; finally, the image is composed as the sum of the object layer and the background layer gated by the visibility map. We learn this layered generative model end-to-end as a deep neural network using a variational auto-encoder [15] (Sect. 3). Our variational auto-encoder includes two encoders, or recognition models, for approximating the posterior distributions of the foreground and background latent variables, and two decoders for generating a foreground image and the full composed image. Assuming the latent variables are Gaussian, the whole network can be trained end-to-end by back-propagation using the reparameterization trick.

Generating realistic samples is certainly an important goal of deep generative models. Moreover, generative models can also be used to perform Bayesian inference on novel images. Since the true posterior distribution of the latent variables is unknown, we propose a general optimization-based approach for posterior inference using the image generation models and latent priors (Sect. 4).

We evaluate the proposed model on two datasets, the Labeled Faces in the Wild (LFW) dataset [10] and the Caltech-UCSD Birds-200-2011 (CUB) dataset [37]. In the LFW dataset, the attributes are 73-dimensional vectors describing age, gender, expression, hair, and many other aspects [18]. In the CUB dataset, the 312-dimensional binary attribute vectors are converted from descriptions of bird parts and colors. We organize our experiments into the following two tasks. First, we demonstrate the quality of attribute-conditioned image generation with comparisons to nearest-neighbor search, and analyze the disentangling performance of the latent space and the corresponding foreground-background layers. Second, we perform image reconstruction and completion on a set of novel test images by posterior inference, with quantitative evaluation. Results from these experiments show the superior performance of the proposed model over previous art. The contributions of this paper are summarized as follows:

  • We propose a novel problem of conditioned image generation from visual attributes.

  • We tackle this problem by learning conditional variational auto-encoders and propose a novel layered foreground-background generative model that significantly improves the generation quality of complex images.

  • We propose a general optimization-based method for posterior inference on novel images and use it to evaluate generative models in the context of image reconstruction and completion.

2 Related Work

Image Generation. In terms of generating realistic and novel images, several recent works [3, 4, 8, 9, 17, 25] are relevant to ours. Dosovitskiy et al. [4] proposed to generate 3D chairs from a graphics code using deep convolutional neural networks, and Kulkarni et al. [17] used variational auto-encoders [15] to model the rendering process of 3D objects. Both of these models [4, 17] assume the existence of a graphics engine during training, from which they obtain (1) a virtually infinite amount of training data and/or (2) pairs of rendered images that differ in only one factor of variation. Therefore, they are not directly applicable to natural image generation. While both works [4, 17] studied generation of rendered images from a complete description (e.g., object identity, viewpoint, color) trained on synthetic images (via a graphics engine), generation of images from an incomplete description (e.g., class labels, visual attributes) remains under-explored. In fact, image generation from an incomplete description is a more challenging task, and the one-to-one mapping formulation of [4] is inherently limited. Gregor et al. [9] developed recurrent variational auto-encoders with a spatial attention mechanism that allows iterative image generation by patches. This elegant algorithm mimics the process of human drawing, but faces challenges when scaling up to large, complex images. Recently, generative adversarial networks (GANs) [3, 7, 8, 25] have been developed for image generation. In a GAN, two models are trained against each other: a generative model aims to capture the data distribution, while a discriminative model attempts to distinguish generated samples from training data. GAN training is based on a min-max objective, which is known to be challenging to optimize.

Layered Modeling of Images. Layered models or 2.1D representations of images have been studied in the context of moving or still object segmentation [11, 23, 38, 39, 41]. The layered structure has also been introduced into generative image modeling [20, 35]. Tang et al. [35] modeled occluded images with gated restricted Boltzmann machines and achieved good inpainting and denoising results on well-cropped face images. Le Roux et al. [20] explicitly modeled the occlusion layer in a masked restricted Boltzmann machine for separating foreground and background and demonstrated promising results on small patches. Though similar in form to our proposed gating, these models face challenges when applied to large natural images because of the difficulty of learning hierarchical representations with restricted Boltzmann machines.

Multimodal Learning. Generative models of images and text have been studied in multimodal learning to model the joint distribution of multiple data modalities [22, 31, 33]. For example, Srivastava and Salakhutdinov [33] developed a multimodal deep Boltzmann machine that models the joint distribution of images and text (e.g., image tags). Sohn et al. [31] proposed improved shared representation learning of multimodal data through bi-directional conditional prediction, deriving a conditional prediction model of one data modality given the other and vice versa. Both of these works focused on shared representation learning using hand-crafted low-level image features and are therefore limited to applications such as conditional image or text retrieval rather than actual image generation.

3 Attribute-Conditioned Generative Modeling of Images

In this section, we describe our proposed method for attribute-conditioned generative modeling of images. We first describe the conditional variational auto-encoder, followed by the formulation of the layered generative model and its variational learning.

3.1 Base Model: Conditional Variational Auto-Encoder (CVAE)

Given the attribute \(y\in \mathbb {R}^{N_y}\) and latent variable \(z\in \mathbb {R}^{N_z}\), our goal is to build a model \(p_\theta (x | y,z)\) that generates a realistic image \(x\in \mathbb {R}^{N_x}\) conditioned on y and z. Here, we refer to \(p_\theta \) as a generator (or generation model), parametrized by \(\theta \). Conditioned image generation is then a simple two-step process (a minimal sampling sketch follows the list):

  1. Randomly sample the latent variable z from the prior distribution p(z);

  2. Given y and z as conditioning variables, generate the image x from \(p_\theta (x|y, z)\).
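
The snippet below is a minimal sketch of this two-step sampling, assuming a trained decoder network `decoder(z, y)` that computes \(\mu_\theta(z,y)\); the decoder name and latent dimension are illustrative placeholders, not the authors' code.

```python
import torch

def sample_images(decoder, y, n_samples=5, n_z=256):
    """Sample n_samples images conditioned on one attribute vector y of shape (N_y,)."""
    y = y.unsqueeze(0).expand(n_samples, -1)   # repeat the attribute vector
    z = torch.randn(n_samples, n_z)            # step 1: z ~ p(z) = N(0, I)
    with torch.no_grad():
        x = decoder(z, y)                      # step 2: x from p_theta(x|y, z), taken as the mean
    return x
```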

Here, the purpose of learning is to find the best parameter \(\theta \) that maximizes the log-likelihood \(\log p_\theta (x|y)\). As proposed in [15, 28], variational auto-encoders instead maximize a variational lower bound on the log-likelihood \(\log p_{\theta }(x|y)\). Specifically, an auxiliary distribution \(q_{\phi }(z|x,y)\) is introduced to approximate the true posterior \(p_{\theta }(z|x,y)\). We refer to this base model as a conditional variational auto-encoder (CVAE), whose conditional log-likelihood decomposes as

$$\begin{aligned} \log p_\theta (x|y) = KL ( q_\phi (z|x,y) || p_\theta (z|x,y) ) + \mathcal {L}_{\text {CVAE}} (x,y;\theta , \phi ), \end{aligned}$$

where the variational lower bound

$$\begin{aligned} \mathcal {L}_{\text {CVAE}} (x,y;\theta , \phi ) = -KL (q_\phi (z|x,y) || p_\theta (z)) + \mathbb {E}_{q_\phi (z|x,y)} \big [ \log p_\theta (x|y,z) \big ] \end{aligned}$$
(1)

is maximized for learning the model parameters.

Here, the prior \(p_{\theta } (z) \) is assumed to be an isotropic multivariate Gaussian distribution, while the two conditional distributions \(p_\theta (x|y,z)\) and \(q_\phi (z|x,y)\) are the multivariate Gaussians \(\mathcal {N}\left( \mu _{\theta }(z,y), diag(\sigma ^2_{\theta }(z,y))\right) \) and \(\mathcal {N}\left( \mu _\phi (x, y), diag(\sigma ^2_\phi (x, y))\right) \), respectively. We refer to the auxiliary proposal distribution \(q_\phi (z|x,y)\) as the recognition model and to the conditional data distribution \(p_{\theta } (x|y,z)\) as the generation model.

The first term \(\textit{KL}(q_\phi (z|x,y)||p_\theta (z))\) is a regularization term that reduces the gap between the prior p(z) and the proposal distribution \(q_\phi (z|x,y)\), while the second term \(\log p_\theta (x|y,z)\) is the log-likelihood of the samples. In practice, we usually take the mean \(x=\mu _{\theta }(z,y)\) of the conditional distribution \(p_{\theta }(x|z,y)\) as a deterministic generation function of z and y, so it is convenient to assume the standard deviation \(\sigma _{\theta }(z,y)\) is a constant shared by all pixels, since the latent factors capture all the data variations. We keep this assumption for the rest of the paper unless otherwise mentioned. Thus, we can rewrite the second term in the variational lower bound as a reconstruction loss \(L(\cdot , \cdot )\) (e.g., \(\ell _2\) loss):

$$\begin{aligned} \mathcal {L}_{\text {CVAE}} =&-\textit{KL} ( q_\phi (z|x,y) || p_\theta (z) ) - \mathbb {E}_{q_\phi (z|x,y)} L(\mu _\theta (y,z), x) \end{aligned}$$
(2)

Note that the discriminator of a GAN [8] could also be used as the loss function \(L(\cdot , \cdot )\), especially when the \(\ell _2\) (or \(\ell _1\)) reconstruction loss does not capture true image similarity. We leave this for future study.
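
The following is a minimal sketch of the CVAE objective in Eq. (2) (negated, so it is minimized), assuming the recognition network outputs \((\mu_\phi, \log\sigma^2_\phi)\) of \(q_\phi(z|x,y)\) and that `decoder(z, y)` returns \(\mu_\theta(y,z)\). All module names are illustrative.

```python
import torch

def cvae_loss(decoder, mu_phi, logvar_phi, x, y):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)
    eps = torch.randn_like(mu_phi)
    z = mu_phi + torch.exp(0.5 * logvar_phi) * eps

    # Closed-form KL( q_phi(z|x,y) || N(0, I) ) for diagonal Gaussians
    kl = -0.5 * torch.sum(1 + logvar_phi - mu_phi.pow(2) - logvar_phi.exp(), dim=1)

    # Reconstruction term: l2 loss between mu_theta(y, z) and the image x
    x_rec = decoder(z, y)
    rec = ((x_rec - x) ** 2).flatten(1).sum(dim=1)

    return (kl + rec).mean()     # negative variational lower bound
```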

Fig. 2. Graphical model representations of attribute-conditioned image generation models (a) without (CVAE) and (b) with (disCVAE) disentangled latent space.

3.2 Disentangling CVAE with a Layered Representation

An image x can be interpreted as a composite of a foreground layer (or a foreground image \(x_F\)) and a background layer (or a background image \(x_B\)) via a matting equation [24]:

$$\begin{aligned} x = x_F \odot (1 - g) + x_B \odot g, \end{aligned}$$
(3)

where \(\odot \) denotes the element-wise product. \(g \in [0,1]^{N_x}\) is an occlusion layer, or gating function, that determines the visibility of background pixels, while \(1-g\) defines the visibility of foreground pixels. However, a model based on Eq. (3) may suffer from an incorrectly estimated mask, as it gates the foreground region with the imperfect mask estimate. Instead, we adopt the following approximation, which is more robust to mask estimation errors:

$$\begin{aligned} x = x_F + x_B \odot g. \end{aligned}$$
(4)
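
The two composition rules are summarized side by side in the sketch below: the matting form of Eq. (3) and the approximation of Eq. (4), which gates only the background and is therefore more tolerant of an imperfect mask g. The inputs are assumed to be tensors of identical shape.

```python
def compose_matting(x_f, x_b, g):
    # Eq. (3): foreground visible where g ~ 0, background visible where g ~ 1
    return x_f * (1.0 - g) + x_b * g

def compose_robust(x_f, x_b, g):
    # Eq. (4): foreground added directly, only the background is gated by g
    return x_f + x_b * g
```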

When the lighting condition is stable and the background is at a distance, we can safely assume foreground and background pixels are generated from independent latent factors. To this end, we propose a disentangled representation \(z = [z_F, z_B]\) in the latent space, where \(z_F\) together with the attribute y captures the foreground factors while \(z_B\) captures the background factors. As a result, the foreground layer \(x_F\) is generated from \(\mu _{\theta _F} (y,z_F)\) and the background layer \(x_B\) from \(\mu _{\theta _B} (z_B)\). The foreground shape and position determine the background occlusion, so the gating layer g is generated from \(s_{\theta _g} (y,z_F)\), where the last layer of \(s(\cdot )\) is a sigmoid function. In summary, we approximate the layered generation process as follows (a code sketch follows the list):

  1. Sample the foreground and background latent variables \(z_F\sim p (z_F)\) and \(z_B\sim p (z_B)\);

  2. Given y and \(z_{F}\), generate the foreground layer \(x_{F}\sim \mathcal {N}\left( \mu _{\theta _{F}}(y,z_{F}),\sigma _{0}^{2} I_{N_{x}} \right) \) and the gating layer \(g\sim Bernoulli\left( s_{\theta _{g}}(y,z_{F})\right) \), where \(\sigma _{0}\) is a constant; the background layer (which corresponds to \(x_{B}\)) is implicitly computed as \(\mu _{\theta _{B}}(z_{B})\);

  3. Synthesize the image \(x\sim \mathcal {N}\left( \mu _{\theta }(y,z_{F},z_{B}),\sigma _{0}^{2} I_{N_{x}} \right) \), where \(\mu _{\theta }(y,z_{F},z_{B}) = \mu _{\theta _{F}}(y,z_{F})+s_{\theta _{g}}(y,z_{F})\odot \mu _{\theta _{B}}(z_{B})\).
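
Below is an illustrative sketch of this layered generation process. Here `fg_dec`, `gate_dec` and `bg_dec` stand in for \(\mu_{\theta_F}\), \(s_{\theta_g}\) (with sigmoid output) and \(\mu_{\theta_B}\); the module names and latent sizes are assumptions, not the authors' exact implementation.

```python
import torch

def generate_layered(fg_dec, gate_dec, bg_dec, y, n_zf=192, n_zb=64):
    z_f = torch.randn(y.size(0), n_zf)      # step 1: foreground latent z_F ~ p(z_F)
    z_b = torch.randn(y.size(0), n_zb)      #         background latent z_B ~ p(z_B)
    with torch.no_grad():
        x_f = fg_dec(z_f, y)                # step 2: foreground layer mu_{theta_F}(y, z_F)
        g = gate_dec(z_f, y)                #         gating s_{theta_g}(y, z_F), values in [0, 1]
        x_b = bg_dec(z_b)                   #         background layer mu_{theta_B}(z_B)
        x = x_f + g * x_b                   # step 3: composition following Eq. (4)
    return x, x_f, x_b, g
```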

Learning. It is very challenging to learn this layered generative model in a fully unsupervised manner, since \(x_F\), \(x_B\), and g would have to be inferred from the image x alone. In this paper, we therefore assume the foreground layer \(x_F\) (as well as the gating variable g) is observable during training, and we train the model to maximize the joint log-likelihood \(\log p_{\theta } (x,x_F,g|y)\) instead of \(\log p_\theta (x|y)\). With the disentangled latent variables \(z_F\) and \(z_B\), we refer to our layered model as a disentangling conditional variational auto-encoder (disCVAE). We compare the graphical models of disCVAE and the vanilla CVAE in Fig. 2. Based on the layered generation process, we write the generation model as

$$\begin{aligned} p_\theta (x_F, g, x, z_F, z_B|y) =&\; p_\theta (x|z_F, z_B, y) p_\theta (x_F,g|z_F, y) p_\theta (z_F) p_\theta (z_B), \end{aligned}$$
(5)

the recognition model as

$$\begin{aligned} q_\phi (z_F, z_B|x_F,g,x,y) =&\; q_\phi (z_B|z_F, x_F, g, x, y) q_\phi (z_F|x_F, g, y) \end{aligned}$$
(6)

and the variational lower bound \(\mathcal {L}_{\text {disCVAE}} (x_F,g,x,y;\theta , \phi )\) is given by

$$\begin{aligned}&\; \mathcal {L}_{\text {disCVAE}} (x_F,g,x,y;\theta , \phi )=\nonumber \\&\; -\textit{KL} ( q_\phi (z_F|x_F,g,y) || p_\theta (z_F) ) - \mathbb {E}_{q_\phi (z_F|x_F,g, y)} \big [ \textit{KL} ( q_\phi (z_B|z_F, x_F, g, x, y) || p_\theta (z_B) ) \big ]\nonumber \\&\ - \mathbb {E}_{q_\phi (z_F|x_F, g, y)} \big [L(\mu _{\theta _F}(y, z_F), x_F) + \lambda _g L(s_{\theta _g}(y, z_F), g) \big ]\nonumber \\&\ - \mathbb {E}_{q_\phi (z_F, z_B|x_F, g, x, y)} L(\mu _{\theta } (y, z_F, z_B), x) \end{aligned}$$
(7)

where \(\mu _{\theta } (y, z_F, z_B)=\mu _{\theta _F}(y, z_F) + s_{\theta _g}(y, z_F)\odot \mu _{\theta _B}(z_B)\) as in Eq. (4). We further assume that \(\log p_\theta (x_F, g|z_F, y) = \log p_\theta (x_F|z_F, y) + \lambda _g \log p_\theta (g|z_F, y)\), where \(\lambda _g\) is an additional hyperparameter introduced when decomposing the probability \(p_\theta (x_F,g|z_F, y)\). For the loss function \(L(\cdot , \cdot )\), we use the reconstruction error for predicting x or \(x_F\) and the cross entropy for predicting the binary mask g. See the supplementary material for details of the derivation. All the generation and recognition models are parameterized by convolutional neural networks and trained end-to-end in a single architecture with back-propagation. We introduce the exact network architecture in the experiment section.
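
The following is a schematic version of the disCVAE lower bound in Eq. (7), written as a loss to minimize. It assumes `q_f` and `q_b` return the (mean, log-variance) of the two recognition models, and the decoders are as in the generation sketch above; all names are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu, logvar):
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)

def reparameterize(mu, logvar):
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def discvae_loss(q_f, q_b, fg_dec, gate_dec, bg_dec, x, x_f, g, y, lambda_g=1.0):
    mu_f, logvar_f = q_f(x_f, g, y)                  # q_phi(z_F | x_F, g, y)
    z_f = reparameterize(mu_f, logvar_f)

    mu_b, logvar_b = q_b(z_f, x_f, g, x, y)          # q_phi(z_B | z_F, x_F, g, x, y)
    z_b = reparameterize(mu_b, logvar_b)

    kl = gaussian_kl(mu_f, logvar_f) + gaussian_kl(mu_b, logvar_b)

    x_f_hat = fg_dec(z_f, y)                         # foreground reconstruction
    g_hat = gate_dec(z_f, y)                         # gating prediction in (0, 1)
    x_hat = x_f_hat + g_hat * bg_dec(z_b)            # full image via Eq. (4)

    rec_fg = ((x_f_hat - x_f) ** 2).flatten(1).sum(1)
    rec_gate = F.binary_cross_entropy(g_hat, g, reduction='none').flatten(1).sum(1)
    rec_full = ((x_hat - x) ** 2).flatten(1).sum(1)

    return (kl + rec_fg + lambda_g * rec_gate + rec_full).mean()
```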

4 Posterior Inference via Optimization

Once the attribute-conditioned generative model is trained, the inference, or generation, of an image x given attribute y and latent variable z is straightforward. However, inferring the latent variable z given an image x and its corresponding attribute y is not. In fact, latent variable inference is quite useful as it enables model evaluation on novel images. For simplicity, we introduce our inference algorithm based on the vanilla CVAE; the same algorithm can be directly applied to the proposed disCVAE and to other generative models such as GANs [3, 7]. First, we note that the recognition model \(q_\phi (z|y,x)\) may not be directly usable for inferring z. On one hand, since it is only an approximation, we do not know how far it is from the true posterior \(p_\theta (z|x,y)\), because the KL divergence between them is dropped in the variational learning objective; on the other hand, such an approximation does not even exist in models such as GANs. We therefore propose a general approach for posterior inference via optimization in the latent space. Using Bayes' rule, we can formulate the posterior inference as

$$\begin{aligned} \max _z \log p_\theta (z|x,y) =&\; \max _z \big [ \log p_\theta (x|z,y) + \log p_\theta (z|y) \big ] \nonumber \\ =&\; \max _z \big [ \log p_\theta (x|z,y) + \log p_\theta (z) \big ] \end{aligned}$$
(8)

Note that the generation model or likelihood term \(p_\theta (x|z,y)\) could be non-Gaussian or even a deterministic function (e.g., in GANs) with no proper probabilistic definition. Thus, to make our algorithm sufficiently general, we reformulate the inference in (8) as an energy minimization problem,

$$\begin{aligned} \min _{z} E(z,x,y) =&\; \min _{z} \big [L(\mu (z,y), x) + \lambda R(z)\big ] \end{aligned}$$
(9)

where \(L(\cdot ,\cdot )\) is the image reconstruction loss and \(R(\cdot )\) is a prior regularization term. Taking the simple Gaussian model as an example, the posterior inference can be re-written as,

$$\begin{aligned} \min _{z} E(z,x,y) =&\; \min _{z} \big [\Vert \mu (z,y)-x\Vert ^2 + \lambda \Vert z\Vert ^2\big ] \end{aligned}$$
(10)

Note that we slightly abuse notation and use the mean function \(\mu (z,y)\) as a general image generation function. Since \(\mu (z,y)\) is a complex neural network, optimizing (9) is essentially error back-propagation from the energy function to the variable z, which we solve with the ADAM method [13]. Our algorithm shares a similar spirit with recently proposed neural network visualization [42] and texture synthesis algorithms [6]. The difference is that we use generation models for recognition, while those algorithms use recognition models for generation. Compared to the conventional way of inferring z from the recognition model \(q_\phi (z|x,y)\), the proposed optimization yields an empirically more accurate latent variable z and hence is useful for reconstruction, completion, and editing.
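
A minimal sketch of this optimization-based inference, following Eq. (10), is shown below: ADAM updates on z with the generator held fixed. Here `decoder(z, y)` plays the role of \(\mu(z,y)\); the default value of the regularization weight is an illustrative assumption, while the learning rate and iteration count follow the implementation details in Sect. 5.

```python
import torch

def infer_latent(decoder, x, y, n_z=256, lam=1e-2, lr=0.3, n_iters=1000):
    z = torch.zeros(x.size(0), n_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = decoder(z, y)
        # energy = reconstruction loss + prior regularization, as in Eq. (10)
        energy = ((x_hat - x) ** 2).flatten(1).sum(1) + lam * (z ** 2).sum(1)
        energy.sum().backward()
        opt.step()
    return z.detach()
```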

5 Experiments

Datasets. We evaluated our model on two datasets: Labeled Faces in the Wild (LFW) [10] and Caltech-UCSD Birds-200-2011 (CUB) [37]. For experiments on LFW, we aligned the face images using five landmarks [43] and rescaled the center region to \(64\times 64\). We used the 73-dimensional attribute score vectors provided by [18], which describe different aspects of facial appearance such as age, gender, or facial expression. We trained our model on 70% of the data (9,000 out of 13,000 face images) following the training-testing split (View 1) [10], where the face identities are disjoint between the train and test sets. For experiments on CUB, we cropped the bird region using the tight bounding box computed from the foreground mask and rescaled it to \(64\times 64\). We used the 312-dimensional binary attribute vectors that describe bird parts and colors. We trained our model on 50% of the data (6,000 out of 12,000 bird images) following the training-testing split [37]. For model training, we held out 10% of the training data for validation.

Data Preprocessing and Augmentation. To make learning easier, we preprocessed the data by normalizing the pixel values to the range \({\left[ -1, 1\right] }\). We augmented the training data with the following image transformations [5, 16]: (1) flipping images horizontally with probability 0.5, (2) multiplying the pixel values of each color channel by a random value \(c \in {\left[ 0.97, 1.03\right] }\), and (3) augmenting the image with its residual using a random tradeoff parameter \(s \in {\left[ 0, 1.5\right] }\). For the CUB experiments, we performed two extra transformations: (4) rotating images around the center point by a random angle \(\theta _r \in {\left[ -0.08, 0.08\right] }\) and (5) rescaling images to \(72\times 72\) and randomly cropping \(64\times 64\) regions. Note that these transformations are designed to be invariant to the attribute description.
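
A rough sketch of augmentations (1)-(3) is given below, assuming images are (C, H, W) tensors in [-1, 1]. The text does not specify how the residual in step (3) is computed; a 3x3 box blur is assumed here purely for illustration.

```python
import torch
import torch.nn.functional as F

def augment(x):
    # (1) horizontal flip with probability 0.5
    if torch.rand(1).item() < 0.5:
        x = torch.flip(x, dims=[2])
    # (2) multiply each color channel by a random value c in [0.97, 1.03]
    c = 0.97 + 0.06 * torch.rand(x.size(0), 1, 1)
    x = x * c
    # (3) add the residual x - blur(x) with a random tradeoff s in [0, 1.5]
    blur = F.avg_pool2d(x.unsqueeze(0), kernel_size=3, stride=1, padding=1).squeeze(0)
    s = 1.5 * torch.rand(1).item()
    x = x + s * (x - blur)
    return x.clamp(-1.0, 1.0)   # keep values in the normalized range (illustrative safeguard)
```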

Architecture Design. For disCVAE, we build four convolutional neural networks (a foreground and a background network for both the recognition and generation models) for auto-encoder style training. The foreground encoder network consists of 5 convolution layers followed by 2 fully-connected layers (the convolution layers have 64, 128, 256, 256 and 1024 channels with filter sizes of \(5 \times 5\), \(5 \times 5\), \(3 \times 3\), \(3 \times 3\) and \(4 \times 4\), respectively; the two fully-connected layers have 1024 and 192 neurons). The attribute stream is merged with the image stream at the end of the recognition network. The foreground decoder network consists of 2 fully-connected layers followed by 5 convolution layers with 2-by-2 upsampling (the fully-connected layers have 256 and \(8 \times 8 \times 256\) neurons; the convolution layers have 256, 256, 128, 64 and 3 channels with filter sizes of \(3 \times 3\), \(5 \times 5\), \(5 \times 5\), \(5 \times 5\) and \(5 \times 5\)). The foreground prediction stream and the gating prediction stream are separated at the last convolution layer. We adopt the same encoder/decoder architecture for the background networks but with fewer channels. See the supplementary material for more details.
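
For concreteness, the sketch below follows the foreground encoder sizes listed above (channels 64-128-256-256-1024, filters 5-5-3-3-4, then FC layers of 1024 and 192). Strides, paddings, activation functions and the exact point where the attribute stream is merged are not fully specified in the text, so the choices below are assumptions.

```python
import torch
import torch.nn as nn

class ForegroundEncoder(nn.Module):
    def __init__(self, n_attr=73, out_dim=192):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(), # 8x8 -> 4x4
            nn.Conv2d(256, 1024, 4), nn.ReLU(),                     # 4x4 -> 1x1
        )
        self.fc1 = nn.Linear(1024, 1024)
        # attribute stream merged near the end of the recognition network
        self.fc2 = nn.Linear(1024 + n_attr, out_dim)

    def forward(self, x, y):
        h = self.conv(x).flatten(1)
        h = torch.relu(self.fc1(h))
        return self.fc2(torch.cat([h, y], dim=1))
```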

For all the models, we fixed the latent dimension to 256 and found this configuration sufficient to generate \(64 \times 64\) images in our setting. We adopt slightly different splits for the two datasets: on the LFW dataset we use 192 dimensions for the foreground latent space and 64 for the background latent space, whereas on the CUB dataset we use 128 dimensions for each. Compared to the vanilla CVAE, the proposed disCVAE has more parameters because of the additional convolutions introduced by the two-stream architecture. However, we found that adding more parameters to the vanilla CVAE does not lead to much improvement in image quality. Although both [4] and the proposed method use segmentation masks as supervision, naive mask prediction was not comparable to the proposed model in our preliminary experiments. In fact, the proposed disCVAE architecture assigns foreground and background generation to separate networks and composites them through a gated interaction, which we found very effective in practice.

Implementation Details. We used ADAM [13] for stochastic optimization in all experiments. For training, we used mini-batches of size 32 and a learning rate of 0.0003. We also added a dropout layer with ratio 0.5 to the image stream of the encoder network before merging it with the attribute stream. For posterior inference, we used a learning rate of 0.3 with 1000 iterations. The models are implemented using the deep learning toolbox Torch7 [2].

Baselines. For the vanilla CVAE model, we used the same convolutional architecture as the foreground encoder and decoder networks. To demonstrate the significance of attribute-conditioned modeling, we also trained an unconditional variational auto-encoder (VAE) with almost the same convolutional architecture as our CVAE.

5.1 Attribute-Conditioned Image Generation

To examine whether the model has the capacity to generate diverse and realistic images from a given attribute description, we performed the task of attribute-conditioned image generation. For each attribute description from the testing set, we generated 5 samples by the proposed generation process: \(x \sim p_\theta (x|y, z)\), where z is sampled from an isotropic Gaussian distribution. For the vanilla CVAE, x is the only output of the generation. In comparison, for disCVAE, the foreground image \(x_F\) can be considered a by-product of the layered generation process. For evaluation, we visualize the samples generated from the model in Fig. 3 and compare them with the corresponding image in the testing set, which we refer to as the “reference” image. To demonstrate that the model did not exploit the trivial solution of attribute-conditioned generation by memorizing the training data, we added a simple baseline for comparison: for each attribute description in the testing set, we conducted a nearest-neighbor search in the training set, using the mean squared error in the attribute space as the distance metric. For more visual results and code, please see the supplementary material and the project website: https://sites.google.com/site/attribute2image/.

Fig. 3. Attribute-conditioned image generation.

Attribute-conditioned Face Image Generation. As shown in Fig. 3, face images generated by the proposed models look realistic and differ non-trivially from each other, especially in viewpoint and background color. Moreover, images generated by disCVAE have clear boundaries against the background, whereas for samples generated by the vanilla CVAE the boundary regions between the hair and the background are quite blurry. This observation suggests a limitation of the vanilla CVAE in modeling hair patterns in face images, and it justifies the significance of layered modeling and latent space disentangling in our attribute-conditioned generation process. Compared to the nearest neighbors in the training set, the generated samples better reflect the input attribute description.

Attribute-conditioned Bird Image Generation. Compared to the experiments on the LFW database, bird image modeling is more challenging because the bird images have more diverse shapes and color patterns and the binary-valued attributes are sparser and higher-dimensional. As shown in Fig. 3, there is a large difference between the two versions of the proposed CVAE model: samples generated by the vanilla CVAE are blurry and sometimes blended with the background, whereas samples generated by disCVAE have clear bird shapes and reflect the input attribute description well. This confirms the strength of the proposed layered modeling of images.

Attribute-conditioned Image Progression. To better analyze the proposed model, we generate images with interpolated attributes by gradually increasing or decreasing the value along a single attribute dimension. We refer to this process as attribute-conditioned image progression. Specifically, for each attribute vector, we modify the value of one attribute dimension by interpolating between the minimum and maximum attribute value. We then generate images by interpolating the value of y between the two attribute vectors while keeping the latent variable z fixed. For visualization, we use attribute vectors from the testing set.
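
The sketch below illustrates this progression: a single attribute dimension is interpolated between its dataset minimum and maximum while z stays fixed, following the interpolation in the Fig. 4 caption. `decoder`, `y_min` and `y_max` are placeholders for the trained generator and the per-dimension attribute extremes.

```python
import torch

def attribute_progression(decoder, y, dim, y_min, y_max, n_steps=6, n_z=256):
    z = torch.randn(1, n_z)                              # fixed latent code
    frames = []
    for alpha in torch.linspace(0.0, 1.0, n_steps):
        y_alpha = y.clone()
        y_alpha[dim] = (1 - alpha) * y_min + alpha * y_max
        with torch.no_grad():
            frames.append(decoder(z, y_alpha.unsqueeze(0)))
    return torch.cat(frames, dim=0)
```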

Fig. 4. Attribute-conditioned image progression. The visualization is organized into six attribute groups (e.g., “gender”, “age”, “facial expression”, “eyewear”, “hair color” and “primary color (blue vs. yellow)”). Within each group, the images are generated from \(p_\theta (x|y,z)\) with \(z \sim \mathcal {N}(0,I)\) and \(y = [y_\alpha ,y_{rest}]\), where \(y_\alpha = (1-\alpha ) \cdot y_{min} + \alpha \cdot y_{max}\). Here, \(y_{min}\) and \(y_{max}\) stand for the minimum and maximum attribute value in the dataset along the corresponding dimension. (Color figure online)

As shown in Fig. 4, samples generated by progression are visually consistent with the attribute description. For face images, changing attributes like “gender” and “age” changes the identity-related appearance accordingly while the viewpoint, background color, and facial expression are well preserved; on the other hand, changing attributes like “facial expression”, “eyewear” and “hair color” preserves the global appearance, with the difference appearing only in the local region. For bird images, changing the primary color from one value to another preserves the global shape and background color. These observations demonstrate that the generation process of our model is well controlled by the input attributes.

Fig. 5. Analysis: latent space disentangling.

Analysis: Latent Space Disentangling. To better analyze disCVAE, we performed the following experiments on the latent space. In this model, the image generation process is driven by three factors: the attribute y, the foreground latent variable \(z_F\) and the background latent variable \(z_B\). By changing one variable while fixing the other two, we can analyze how each variable contributes to the final generation results. We visualize the samples x, the generated backgrounds \(x_B\) and the gating variables g in Fig. 5. We summarize the observations as follows: (1) when we change only the background latent variable \(z_B\), the backgrounds of the generated samples look different while the foreground region stays identical; (2) when we change only the foreground latent variable \(z_F\), the foreground regions of the generated samples are diverse in viewpoint but similar in appearance, and the samples have a uniform background pattern. Interestingly, for face images, one can identify a “hole” in the generated background. This can be considered a location prior of the face images, since the images are relatively aligned. Meanwhile, the generated background for birds is relatively uniform, which demonstrates that our model learned to recover the missing background in the training set and also suggests that foreground and background have been disentangled in the latent space.

5.2 Attribute-Conditioned Image Reconstruction and Completion

Image Reconstruction. Given a test image x and its attribute vector y, we find z that maximizes the posterior \(p_\theta (z|x, y)\) following Eq. (9).

Image Completion. Given a test image with a synthetic occlusion, we evaluate whether the model has the capacity to fill in the occluded region from the observed region. We denote the occluded (unobserved) region and the observed region as \(x_u\) and \(x_o\), respectively. For completion, we first find z that maximizes the posterior \(p_\theta (z|x_o, y)\) via optimization (9). We then fill in the unobserved region \(x_u\) by generation using \(p_\theta (x_u|z,y)\). For each face image, we consider four types of occlusions: occlusion of the eye region, occlusion of the mouth region, occlusion of the face region, and occlusion of the right half of the image. For occluded regions, we set the pixel values to 0. For each bird image, we consider occlusion blocks of size \(8\times 8\) and \(16 \times 16\) at random locations.
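
A sketch of this completion procedure is shown below: z is optimized against the observed pixels only (the binary mask m is 1 on observed regions, 0 on occluded ones), and the occluded region is then filled from the generated image. Function and parameter names are illustrative.

```python
import torch

def complete_image(decoder, x, m, y, n_z=256, lam=1e-2, lr=0.3, n_iters=1000):
    z = torch.zeros(x.size(0), n_z, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = decoder(z, y)
        # reconstruction on observed pixels only, plus prior regularization
        loss = (m * (x_hat - x) ** 2).flatten(1).sum(1) + lam * (z ** 2).sum(1)
        loss.sum().backward()
        opt.step()
    with torch.no_grad():
        x_hat = decoder(z, y)
    return m * x + (1 - m) * x_hat       # keep observed pixels, fill in the rest
```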

Fig. 6. Attribute-conditioned image reconstruction and completion.

In Fig. 6, we visualize the results of image reconstruction (a, b) and image completion (c–h). For face images, our proposed CVAE models are in general good at reconstructing and predicting the occluded regions of unseen images (from the testing set). For bird images, however, the vanilla CVAE model failed in most cases. This agrees with the previous results on attribute-conditioned image generation.

In addition, to demonstrate the significance of attribute-conditioned modeling, we compared our vanilla CVAE and disCVAE with the unconditional VAE (where the attribute is not given) for image reconstruction and completion. As can be seen in Fig. 6(c) and (d), the images generated with attributes better capture expression and eyewear (“smiling” and “sunglasses”).

For quantitative comparison, we measured the pixel-level mean squared error over the entire image for reconstruction and over the occluded region for completion. We summarize the results in Table 1 (mean squared error and standard error). The quantitative analysis highlights the benefits of attribute-conditioned modeling and the importance of layered modeling.

Table 1. Quantitative comparisons on face reconstruction and completion tasks.

6 Conclusion

In conclusion, this paper studied a novel problem of attribute-conditioned image generation and proposed a solution based on CVAEs. Considering the compositional structure of images, we proposed a novel disentangling CVAE (disCVAE) with a layered representation. Results on faces and birds demonstrate that our models can generate realistic samples with diverse appearance; in particular, disCVAE significantly improved the generation quality on bird images. To evaluate the learned generative models on novel images, we also developed an optimization-based approach to posterior inference and applied it to the tasks of image reconstruction and completion with quantitative evaluation.