Probabilistic moving least squares with spatial constraints for nonlinear color transfer between images

https://doi.org/10.1016/j.cviu.2018.11.001

Abstract

The color of a scene may vary from image to image because the photographs are taken at different times, with different cameras, and under different camera settings. To align the color of a scene between images, we introduce a novel color transfer framework based on a scattered point interpolation scheme. Compared to conventional color transformation methods that use a parametric mapping or color distribution matching, we solve for a full nonlinear and nonparametric color mapping in the 3D RGB color space by employing the moving least squares framework. We further strengthen the transfer with a probabilistic model of the color transfer in the 3D color space and with spatial constraints that handle misalignments, noise, and spatially varying illumination. Experiments show the effectiveness of our method over previous color transfer methods both quantitatively and qualitatively. In addition, our framework can be applied to various instances of color transfer, such as transferring color between different camera models, camera settings, and illumination conditions, as well as to video color transfer.

Introduction

The color of a scene may vary from image to image because the photographs are taken at different times (illumination change), with different cameras (camera spectral sensitivity change), and under different camera settings (in-camera imaging parameter change (Kim et al., 2012)) (Fig. 1). Photographs of a scene may also vary due to the different photographic adjustment styles of the users (Bychkovsky et al., 2011).

In general, color transfer refers to the process of transforming the color of an image so that it becomes consistent with the color of another image. Color transfer is applied to many computer vision and graphics problems. One main application is computational color constancy, in which color is transferred to remove the color cast of the illumination (Brainard and Freeman, 1997, Gijsenij et al., 2011). It is also used to generate color-consistent image panoramas and 3D texture maps (Kim and Pollefeys, 2008, Xiong and Pulli, 2010a, Xu and Mulligan, 2010), as well as to enhance and manipulate images by emulating the tone and color style of other images (Huang and Chen, 2009, HaCohen et al., 2013, Tsai et al., 2016).

The goal of this paper is to introduce a new mechanism for transferring color between images. We are particularly interested in employing a full nonlinear and nonparametric color mapping in the 3D RGB color space instead of using a parametric color transformation, modeling color channels separately, or matching statistical color measures (mean and variance) between images in an uncorrelated color space. Utilizing a full 3D color transformation is especially useful for explaining the in-camera imaging pipeline recently introduced in Kim et al. (2012). To solve the nonparametric 3D color transfer problem, we employ a scattered point interpolation scheme based on moving least squares and make it more robust by combining it with a probabilistic modeling of the color transfer. We further add spatial constraints to the probabilistic moving least squares framework to deal with local color variations due to local illumination changes, viewpoint changes, etc. Our framework can be applied to various instances of color transfer, such as transferring color between different camera models (e.g., an iPhone and a Canon DSLR), camera settings (e.g., white balance and picture styles), illumination conditions, and photographic retouch styles, as shown in Fig. 1. Note that this work focuses on transferring color between images of the same scene, unlike works that transfer color between different scenes (Reinhard et al., 2001).
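To make the moving-least-squares idea concrete, the sketch below fits, for each query color, an affine transform to the color correspondences (control points), weighted by their distance to the query in RGB space, and applies that transform to the query. This is a minimal illustration of scattered point interpolation only, not the paper's full probabilistic formulation with spatial constraints; the function name, the Gaussian kernel, and the bandwidth `sigma` are our own illustrative choices.

```python
import numpy as np

def mls_color_map(x, src, dst, sigma=25.0):
    """Map a single RGB value x using moving least squares.

    src, dst : (N, 3) arrays of corresponding source/target colors
    (the scattered control points).  A locally weighted affine
    transform is fit around x and then applied to x.
    """
    x = np.asarray(x, dtype=np.float64)
    d2 = np.sum((src - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2)) + 1e-12        # Gaussian weights in RGB space
    sw = np.sqrt(w)[:, None]

    # Weighted least squares for the affine map [A | t]:
    # minimize sum_i w_i * || [p_i, 1] M - q_i ||^2, with M of shape (4, 3)
    P = np.hstack([src, np.ones((src.shape[0], 1))])    # homogeneous control colors
    M, _, _, _ = np.linalg.lstsq(sw * P, sw * dst, rcond=None)

    return np.append(x, 1.0) @ M                        # transformed color, shape (3,)
```

For a whole image this would be evaluated per pixel (or accelerated with a lookup table), with `src` and `dst` sampled from the correspondences between the two registered images.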

A preliminary version of this work was presented in Hwang et al. (2014). On top of adding significantly more experiments, we improve the previous algorithm of Hwang et al. (2014) by introducing a new weight that accounts for spatially varying color transfer (Section 3.3). By considering the distance from the locations of the control points, the color transfer for a target pixel is dominated by closer and more similar control points. While the previous method could only handle global color transfers (one-to-one color mappings), the new framework can be applied to local color transfers (one-to-many color mappings) caused by spatially varying illumination, non-Lambertian scenes, etc. We have also added more implementation details and included results using a different registration algorithm, whereas the previous work only showed results with registration by homographies.
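This excerpt does not give the exact form of the new spatially varying weight, so the snippet below only illustrates the general idea under our own assumptions: the color-space kernel of the previous sketch is multiplied by an image-space kernel around the control points' pixel locations, so that control points that are both similar in color and nearby in the image dominate the local fit.

```python
import numpy as np

def spatial_color_weights(x, px, src, src_px, sigma_c=25.0, sigma_s=60.0):
    """Weights for a query pixel with color x at image location px.

    src    : (N, 3) control-point colors
    src_px : (N, 2) their pixel locations in the source image
    Points that are similar in color AND close in the image get the
    largest weight, which is what allows one-to-many (spatially
    varying) color mappings.
    """
    d_color = np.sum((src - np.asarray(x, dtype=np.float64)) ** 2, axis=1)
    d_space = np.sum((src_px - np.asarray(px, dtype=np.float64)) ** 2, axis=1)
    return (np.exp(-d_color / (2.0 * sigma_c ** 2)) *
            np.exp(-d_space / (2.0 * sigma_s ** 2)))
```

In the earlier `mls_color_map` sketch, these weights would simply replace the purely color-based `w`.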

Section snippets

Color transfer

Given an RGB value x = [r, g, b], the most commonly used method for transferring the color is to apply a linear transformation, x′ = Mx, where M is a 3 × 3 matrix describing the mapping of the three color channel values. Although the matrix M can be of any arbitrary form, a simple diagonal model is used more often than not, especially in computational color constancy work (Brainard and Freeman, 1997). While the linear transformation model provides a simple yet effective way to transform colors, …
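For reference, the sketch below fits the two parametric baselines mentioned here from a set of color correspondences (src, dst): a full 3 × 3 linear matrix and a diagonal per-channel gain model. It is a generic least-squares illustration, not code from the paper.

```python
import numpy as np

def fit_linear_3x3(src, dst):
    """Least-squares fit of a full 3x3 matrix M such that dst ≈ (M @ src.T).T.

    src, dst : (N, 3) arrays of corresponding RGB values.
    """
    X, _, _, _ = np.linalg.lstsq(src, dst, rcond=None)   # solves src @ X ≈ dst
    return X.T                                            # so that q ≈ M p per correspondence

def fit_diagonal(src, dst):
    """Diagonal model: one independent gain per color channel."""
    return np.sum(src * dst, axis=0) / np.sum(src * src, axis=0)   # (3,) gains
```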

Color transfer algorithm using probabilistic moving least squares

We introduce a mechanism for transforming color given a set of correspondences between a pair of images I and J. By employing a nonlinear and nonparametric method, we can model various sources of color changes between images without targeting a specific form of color change (e.g., exposure change, illumination change, etc.), while also modeling the color change more accurately than parametric methods such as the linear 3 × 3 mapping and distribution matching (Reinhard et al., …
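Evaluating a nonparametric per-color mapping independently at every pixel can be slow. One common acceleration, sketched below under our own assumptions (not necessarily the authors' implementation), is to evaluate the mapping on a regular grid in RGB space and then trilinearly interpolate the resulting 3D lookup table at each pixel; `map_fn` stands for any per-color mapping, such as the `mls_color_map` sketch above.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def build_lut(map_fn, bins=33):
    """Evaluate a (possibly slow) per-color mapping on a regular RGB grid."""
    axis = np.linspace(0.0, 255.0, bins)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    lut = np.array([map_fn(c) for c in grid]).reshape(bins, bins, bins, 3)
    return axis, lut

def apply_lut(image, axis, lut):
    """Trilinearly interpolate the LUT at every pixel (one output channel at a time)."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    channels = [RegularGridInterpolator((axis, axis, axis), lut[..., c])(pixels)
                for c in range(3)]
    return np.stack(channels, axis=-1).reshape(image.shape)
```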

Experiments

In this section, we provide a variety of experiments to validate our B-PMLS algorithm for color transfer. We first provide quantitative evaluations of different color transfer algorithms (Reinhard (Reinhard et al., 2001), 3 × 3, 2nd Poly (Ilie and Welch, 2005), IDT (Pitie et al., 2005), BTF (Kim and Pollefeys, 2008), Tai (Tai et al., 2005), CIM (Oliveira et al., 2011), PMLS (Hwang et al., 2014)) in Table 1. Note that Reinhard, IDT, and Tai do not require explicit color matches between two …
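This excerpt does not show which error measure Table 1 reports. A common choice when comparing color transfer methods on registered images of the same scene is PSNR between the color-transferred source and the target over the overlapping pixels; the helper below is a hedged sketch of that measurement, not the paper's evaluation code.

```python
import numpy as np

def psnr(transferred, target, mask=None):
    """PSNR between a color-transferred image and the target, both in [0, 255].

    mask : optional boolean array selecting the registered/overlapping pixels.
    """
    diff = transferred.astype(np.float64) - target.astype(np.float64)
    if mask is not None:
        diff = diff[mask]
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```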

Discussion

We have presented a new mechanism for transferring color between images using a probabilistic moving least squares framework. Our color transfer framework can be applied to many instances of color variation, such as different cameras, different camera settings, and global/local tonal retouching, as seen throughout the paper. Through numerous experiments, we have shown that our method can transfer color between images more accurately than previous color transfer methods and be used for …

Acknowledgments

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2016R1A2B4014610), by the Ministry of Culture, Sports and Tourism (MCST) and the Korea Content Agency (KOCCA) in the Culture Technology (CT) Research & Development Program 2015, and by the Cross-Ministry Giga KOREA Project (MSIT) No. GK18P0200.

References (45)

  • Agarwal, S., et al., 2011. Building Rome in a day. Commun. ACM.
  • Bonneel, N., et al., 2016. Wasserstein barycentric coordinates: histogram regression using optimal transport. ACM Trans. Graph.
  • Bonneel, N., et al., 2013. Example-based video color grading. ACM Trans. Graph.
  • Bose, N.K., et al., 2006. Superresolution and noise filtering using moving least squares. IEEE Trans. Image Process.
  • Brainard, D.H., et al., 1997. Bayesian color constancy. J. Opt. Soc. Amer. A.
  • Brown, M., et al., 2007. Automatic panoramic image stitching using invariant features. Int. J. Comput. Vis.
  • Bychkovsky, V., Paris, S., Chan, E., Durand, F., 2011. Learning photographic global tonal adjustment with a database of...
  • Faridul, H.S., et al. A survey of color mapping and its applications.
  • Faridul, H., Stauder, J., Kervec, J., Trémeau, A., 2013. Approximate cross channel color mapping from sparse color...
  • Fleishman, S., et al. Robust moving least-squares fitting with sharp features.
  • Freedman, D., Kisilev, P., 2010. Object-to-object color transfer: Optimal flows and smsp transformations. In: 2010 IEEE...
  • Frigo, O., Sabater, N., Demoulin, V., Hellier, P., 2014. Optimal transportation for example-guided color transfer. In:...
  • Gijsenij, A., et al., 2011. Computational color constancy: survey and experiments. IEEE Trans. Image Process.
  • Gong, H., Finlayson, G.D., Fisher, R.B., 2016. Recoding color transfer as a color homography. In: Proceedings of...
  • HaCohen, Y., et al., 2011. Non-rigid dense correspondence with applications for image enhancement. ACM Trans. Graph. (Proc. ACM SIGGRAPH 2011).
  • HaCohen, Y., et al., 2013. Optimizing color consistency in photo collections. ACM Trans. Graph.
  • Hristova, H., et al. Style-aware robust color transfer.
  • Huang, T.-W., Chen, H.-T., 2009. Landmark-based sparse color representations for color transfer. In: ICCV. pp....
  • Hwang, Y., Lee, J.-Y., Kweon, I.S., Kim, S.J., 2014. Color transfer using probabilistic moving least squares. In:...
  • Ilie, A., Welch, G., 2005. Ensuring color consistency across multiple cameras. In: ICCV. pp....
  • Jia, J., et al., 2005. Tensor voting for image correction by global and local intensity alignment. IEEE Trans. Pattern Anal. Mach. Intell.
  • Kim, S.J., et al., 2012. A new in-camera imaging model for color computer vision and its application. IEEE Trans. Pattern Anal. Mach. Intell.