
Image colorization by using graph bi-Laplacian

Published in: Advances in Computational Mathematics

Abstract

Image colorization aims to recover a whole color image from a known grayscale image (luminance or brightness) and some known color pixel values. In this paper, we generalize the graph Laplacian to its second-order variant, called the graph bi-Laplacian, and then propose an image colorization method based on it. An eigenvalue analysis of the graph bi-Laplacian matrix and its normalized counterpart is given to show their properties. We apply the graph bi-Laplacian approach to image colorization by formulating it as an optimization problem and solving the resulting linear system efficiently. Numerical results show that the proposed method performs well on the image colorization problem and that, once the ratio of randomly given color pixels reaches a certain level, it can outperform state-of-the-art colorization methods on the test images in both efficiency and colorization quality.
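As a rough illustration of the pipeline described in the abstract, the following is a minimal dense sketch in NumPy. It assumes a 4-neighbour pixel graph with Gaussian weights on luminance differences and a quadratic penalty tying known pixels to their given chrominance values; the function name and the parameters `sigma` and `lam` are hypothetical, and the paper's actual weight function, formulation, and solver may differ.

```python
import numpy as np

def colorize_channel(gray, known_mask, known_vals, sigma=0.1, lam=1e3):
    # Build a 4-neighbour graph on the pixel grid with intensity-based weights,
    # form the bi-Laplacian L @ L, and solve a penalized linear system that
    # keeps the given chrominance values at the known pixels.
    h, w = gray.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:          # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        wt = np.exp(-(gray.ravel()[a] - gray.ravel()[b]) ** 2 / sigma ** 2)
        rows += [a, b]; cols += [b, a]; vals += [wt, wt]
    W = np.zeros((n, n))
    W[np.concatenate(rows), np.concatenate(cols)] = np.concatenate(vals)
    L = np.diag(W.sum(axis=1)) - W           # weighted graph Laplacian
    A = L @ L                                # graph bi-Laplacian
    # min_u  u^T A u + lam * ||u - known_vals||^2 over the known pixels
    P = np.diag(lam * known_mask.ravel().astype(float))
    u = np.linalg.solve(A + P, P @ known_vals.ravel())
    return u.reshape(h, w)
```

For each chrominance channel, the known values are propagated to the remaining pixels by solving one symmetric positive definite linear system.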




Funding

This work is supported in part by the National Natural Science Foundation of China (NSFC) (No. 61731009, No. 11671002), Science and Technology Commission of Shanghai Municipality (No. 18dz2271000), HKRGC GRF 1202715, 12306616, 12200317, 12300218, and HKBU RC-ICRS/16-17/03.

Author information


Corresponding author

Correspondence to Fang Li.

Additional information

Communicated by: Raymond Chan


Appendices

Appendix 1. Proof of Proposition 1

Part (1):

Denote the right-hand side term of (7) as

$$ E(u)=\sum\limits_{i}\left( \sum\limits_{j\in \mathcal{N}_{i}}w_{ij}(u_{j}-u_{i})\right)^{2}. $$

It is obvious that \(E(u)\) is a quadratic form. In \(E(u)\), the terms involving \(u_{i}\) are

$$ E_{1}=\left( \sum\limits_{j\in \mathcal{N}_{i}}w_{ij}(u_{i}-u_{j})\right)^{2} +\sum\limits_{j\in \mathcal{N}_{i}}\left( \sum\limits_{k\in \mathcal{N}_{j}}w_{jk}(u_{j}-u_{k})\right)^{2}. $$

Note that in the second term \(k\) can be equal to \(i\) since \(i\in \mathcal{N}_{j}\). Expanding the first term of \(E_{1}\), we get

$$ \begin{array}{@{}rcl@{}} E_{11}&=&\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il}(u_{i}-u_{j})(u_{i}-u_{l})\\ &=&\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il}{u_{i}^{2}} -{\sum}_{j\in \mathcal{N}_{i}}{\sum}_{l\in \mathcal{N}_{i}}w_{ij}w_{il}(u_{i}u_{l}+u_{i}u_{j})+\cdots\\ &=&\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il}{u_{i}^{2}}- 2\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il}u_{i}u_{j}+\cdots \end{array} $$

where the terms not involving \(u_{i}\) are omitted. In the second term of \(E_{1}\), \(u_{i}\) appears only when \(k = i\); hence, we have

$$ \begin{array}{@{}rcl@{}} E_{12}&=&\sum\limits_{j\in \mathcal{N}_{i}}\left( \sum\limits_{k\in \mathcal{N}_{j}, k\neq i}w_{jk}(u_{j}-u_{k})+w_{ij}u_{j}-w_{ij}u_{i}\right)^{2}\\ &=&\sum\limits_{j\in \mathcal{N}_{i}}w_{ij}^{2}{u_{i}^{2}}-2w_{ij}^{2}u_{i}u_{j} -2\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j},k\neq i}w_{ij}w_{jk}u_{i}u_{j}\\ &&+ 2\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j},k\neq i}w_{ij}w_{jk}u_{i}u_{k}+\cdots\\ &=&\sum\limits_{j\in \mathcal{N}_{i}}w_{ij}^{2}{u_{i}^{2}}-\!2\!\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j}}w_{ij}w_{jk}u_{i}u_{j} + 2\!\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j},k\neq i}w_{ij}w_{jk}u_{i}u_{k}+\cdots \end{array} $$

where the terms not involving \(u_{i}\) are again omitted. Adding \(E_{11}\) and \(E_{12}\), the terms containing \(u_{i}\) are

$$ \begin{array}{@{}rcl@{}} &&\left( \sum\limits_{j\in \mathcal{N}_{i}}w_{ij}^{2}+\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il}\right){u_{i}^{2}}\\ &&- 2\!\left( \sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il} + \sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j}}w_{ij}w_{jk}\right)\!u_{i}u_{j} +\! 2\!\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j},k\neq i}w_{ij}w_{jk}u_{i}u_{k}. \end{array} $$

By using the definition of graph bi-Laplacian, we obtain that E(u) is the following quadratic form

$$E(u)=u^{T}{{\Delta}_{w}^{2}} u.$$
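The identity \(E(u)=u^{T}{{\Delta}_{w}^{2}}u\) can be checked numerically on a small random graph. The sketch below assumes \({{\Delta}_{w}^{2}} = L^{2}\) with \(L = D - W\) the weighted graph Laplacian, which is consistent with the expansion above since \({\sum}_{j}w_{ij}(u_{j}-u_{i}) = -(Lu)_{i}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random symmetric non-negative weight matrix with zero diagonal
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W          # weighted graph Laplacian
A = L @ L                               # graph bi-Laplacian
u = rng.standard_normal(n)
# E(u) = sum_i ( sum_j w_ij (u_j - u_i) )^2, computed directly from the sums
E = sum((W[i] @ (u - u[i])) ** 2 for i in range(n))
assert np.isclose(E, u @ A @ u)
```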
Part (2):

According to the formulas (3)–(5), the main diagonal elements of \({{\Delta}_{w}^{2}}\) are positive since \(W\) is non-negative and the graph is connected. The sum of the \(i\)th row of the matrix \({{\Delta}_{w}^{2}}\) is

$$ \begin{array}{@{}rcl@{}}\sum\limits_{j = 1}^{n}({{\Delta}_{w}^{2}})_{ij} &=&\sum\limits_{j\in \mathcal{N}_{i}}w_{ij}^{2}+\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{l\in \mathcal{N}_{i}}w_{ij}w_{il} -\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j}}w_{ij}w_{jk}\\ &&-\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{i}}w_{ij}w_{ik} +\sum\limits_{j\in \mathcal{N}_{i}}\sum\limits_{k\in \mathcal{N}_{j}, k\neq i}w_{ij}w_{jk}.\hspace{2cm} \end{array} $$

The first, third, and fifth terms sum to zero, and the second and fourth terms sum to zero. Hence, the row sum is zero.
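The zero row sum can be confirmed numerically; the sketch below assumes \({{\Delta}_{w}^{2}} = L^{2}\) with \(L = D - W\) the weighted graph Laplacian:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
A = L @ L
# Every row of the bi-Laplacian sums to zero: A @ 1 = L (L @ 1) = L @ 0 = 0
assert np.allclose(A.sum(axis=1), 0)
```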

Part (3):

Assume \(j\in \mathcal{N}_{i}\). Since \(G\) is an undirected graph and \(W\) is symmetric, we have \(i\in \mathcal{N}_{j}\) and \(w_{ij}=w_{ji}\). By definition (4), we get

$$({{\Delta}_{w}^{2}})_{ji}= \sum\limits_{k\in \mathcal{N}_{i}}w_{ji}w_{ik}+ \sum\limits_{k\in \mathcal{N}_{j}}w_{ji}w_{jk}=({{\Delta}_{w}^{2}})_{ij}, j\in \mathcal{N}_{i}. $$

The symmetry of \(({{\Delta }_{w}^{2}})_{ik},k\in \mathcal {N}_{j}, j\in \mathcal {N}_{i}, k\neq i\) is obvious from definition (5). Hence, \({{\Delta }_{w}^{2}}\) is symmetric. The positive semi-definiteness is a direct consequence of Part (1), which shows that \(u^{T}{{\Delta }_{w}^{2}} u\ge 0 \) for all \(u\in \mathbb {R}^{n}\).

Part (4):

From equation (7), it is obvious that \({{\Delta }_{w}^{2}}\mathbf {1}= 0. \) Hence, 0 is an eigenvalue of \({{\Delta }_{w}^{2}}\). Since \({{\Delta }_{w}^{2}}\) is positive semi-definite, its eigenvalues are non-negative. So 0 is the smallest eigenvalue of \({{\Delta }_{w}^{2}}\).

Part (5):

It is a direct consequence of Parts (1), (3), and (4).

Part (6):

Assume \(u\) is an eigenvector with eigenvalue 0. Then \(E(u) = u^{T}{{\Delta}_{w}^{2}}u = 0\), and since \(E(u)\) is a sum of squares, each term must vanish:

$$\sum\limits_{j\in \mathcal{N}_{i}}w_{ij}(u_{j}-u_{i})= 0, \quad \forall i. $$

Or equivalently,

$$\sum\limits_{j\in \mathcal{N}_{i}}\tilde{w}_{ij}u_{j} = u_{i}, \quad \forall i, $$

where \(\tilde{w}_{ij} = \frac{w_{ij}}{{\sum}_{j\in \mathcal{N}_{i}}w_{ij}}\). Since \(w_{ij} \ge 0\), we have \(\tilde{w}_{ij}\ge 0\) and \({\sum}_{j\in \mathcal{N}_{i}}\tilde{w}_{ij}= 1\). If \(u\) attains its maximum \(u_{0}\) at some vertex \(i\), then we can easily derive that \(u_{j}=u_{0}\) for all \(j\in \mathcal{N}_{i}\). Since the graph is connected, it follows that \(u_{k} = u_{0}\) at every vertex \(k\). The same argument applies to the minimum. In other words, the eigenvectors of \({{\Delta}_{w}^{2}}\) corresponding to eigenvalue 0 are the constant multiples of \(\mathbf{1}\). Hence, the conclusion follows.
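Parts (3), (4), and (6) can be observed numerically on a small connected graph; the sketch assumes \({{\Delta}_{w}^{2}} = L^{2}\) with \(L = D - W\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
W = rng.random((n, n)) + 0.1            # strictly positive weights => connected
W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
A = L @ L
evals, evecs = np.linalg.eigh(A)        # ascending eigenvalues
assert evals[0] > -1e-10                # positive semi-definite (Part 3)
assert abs(evals[0]) < 1e-10            # 0 is the smallest eigenvalue (Part 4)
assert evals[1] > 1e-10                 # 0 is simple, graph connected (Part 6)
v = evecs[:, 0]
assert np.allclose(v, v[0])             # null eigenvector is constant (Part 6)
```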

Using the symmetry of \({{\Delta}_{w}^{2}}\), we can give another proof of Part (1). With the notation above, the partial derivative of \(E(u)\) with respect to \(u_{i}\) is

$$ \begin{array}{@{}rcl@{}} &&\frac{\partial E}{\partial u_{i}} = 2\left( \sum\limits_{l\in\mathcal{N}_{i}}w_{il}(u_{l}-u_{i})\right) \left( -\sum\limits_{j\in \mathcal{N}_{i}} w_{ij}\right)+ 2\sum\limits_{j\in\mathcal{N}_{i}}\left( \sum\limits_{k\in \mathcal{N}_{j}}w_{jk}(u_{k}-u_{j})\right)w_{ji}.\\ \end{array} $$

It is easy to check that

$$\frac{\partial E}{\partial u}= 2{{\Delta}_{w}^{2}}u. $$

On the other hand, since \({{\Delta}_{w}^{2}}\) is symmetric, we have

$$ \frac{\partial}{\partial u}\left( u^{T}{{\Delta}_{w}^{2}}u\right) = ({{\Delta}_{w}^{2}})^{T}u+{{\Delta}_{w}^{2}}u = 2{{\Delta}_{w}^{2}}u. $$

Hence, the equality (7) holds.

Appendix 2. Proof of Proposition 2

Part (1):

It can be proved similarly to Part (1) of Proposition 1.

Part (2):

From Part (2) of Proposition 1, the row sum of \({{\Delta}_{w}^{2}}\) equals 0. That is,

$$\sum\limits_{j = 1}^{n}({{\Delta}_{w}^{2}})_{ij}=\overline{D}_{ii}-\sum\limits_{j = 1}^{n}{\overline{W}_{ij}}= 0.$$

Then

$$\sum\limits_{j = 1}^{n}(\overline{{{\Delta}_{w}^{2}}})_{ij}= 1-\frac{{\sum}_{j = 1}^{n}{\overline{W}_{ij}}}{\overline{D}_{ii}}= 0.$$
Part (3):

Assume λ is an eigenvalue of \(\overline {{{\Delta }_{w}^{2}}}\) with eigenvector v, i.e.,

$$\overline{{{\Delta}_{w}^{2}}}v = \lambda v\Leftrightarrow\overline{D}^{-1}{{{\Delta}_{w}^{2}}}v = \lambda v.$$

Multiplying this eigenvalue equation with \(\overline {D}^{1/2}\) from the left yields

$$\overline{D}^{-1/2}{{{\Delta}_{w}^{2}}}v = \lambda \overline{D}^{1/2}v\Leftrightarrow\overline{D}^{-1/2}{{{\Delta}_{w}^{2}}}\overline{D}^{-1/2}\overline{D}^{1/2}v = \lambda \overline{D}^{1/2}v.$$

Letting \(\omega =\overline{D}^{1/2}v\), we get \(\overline{\overline{{{\Delta}_{w}^{2}}}}\omega =\lambda \omega\), and vice versa. Hence, the eigenvalues of \(\overline{\overline{{{\Delta}_{w}^{2}}}}\) and \(\overline{{{\Delta}_{w}^{2}}}\) are the same.
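This similarity argument can be checked numerically. The sketch below assumes \(\overline{D}\) is the diagonal part of \({{\Delta}_{w}^{2}}\), which is positive by Proposition 1, so both normalizations are well defined:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
W = rng.random((n, n)) + 0.1
W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
A = L @ L                                # bi-Laplacian
Dbar = np.diag(np.diag(A))               # its positive diagonal part
An_rw = np.linalg.solve(Dbar, A)         # random-walk normalization Dbar^-1 A
d = np.sqrt(np.diag(A))
An_sym = A / np.outer(d, d)              # symmetric form Dbar^-1/2 A Dbar^-1/2
# The two normalized matrices are similar, hence share eigenvalues
ev_rw = np.sort(np.linalg.eigvals(An_rw).real)
ev_sym = np.sort(np.linalg.eigvalsh(An_sym))
assert np.allclose(ev_rw, ev_sym)
```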

Part (4):

By Part (1), it is obvious that \(\overline {{{\Delta }_{w}^{2}}}\mathbf {1}= 0\).

Part (5):

The statement about \(\overline{\overline{{{\Delta}_{w}^{2}}}}\) can be derived from Part (1), and then the statement about \(\overline{{{\Delta}_{w}^{2}}}\) follows from Part (3).

Part (6):

Assume u is an eigenvector with eigenvalue 0. Then, by equation (8), we know that

$$\sum\limits_{j\in \mathcal{N}_{i}}w_{ij}\left( \frac{u_{i}}{\sqrt{\overline{D}_{ii}}} -\frac{u_{j}}{\sqrt{\overline{D}_{jj}}}\right)= 0, \quad \forall i.$$

Since the graph is connected, a similar argument to the proof of Part (6) in Proposition 1 shows that the following equality holds at all vertices \(k\):

$$\frac{u_{k}}{\sqrt{\overline{D}_{kk}}}=\text{const}, \quad \forall k.$$

It means the eigenvector of \(\overline{\overline{{{\Delta}_{w}^{2}}}}\) with eigenvalue 0 must be a multiple of \(\overline{D}^{1/2}\mathbf{1}\). Hence, the statement for \(\overline{\overline{{{\Delta}_{w}^{2}}}}\) holds, and the statement for \(\overline{{{\Delta}_{w}^{2}}}\) then follows from Part (3).

Appendix 3. Proof of Lemma 1

Assume \(x\) is a nonzero vector such that \(A_{KK}x = 0\), that is, \(D_{K}A{D_{K}^{T}}x = 0\). Then we have \(x^{T}D_{K}A{D_{K}^{T}}x = 0\). Denote \(y={D_{K}^{T}}x\), so that \(y^{T}Ay = 0\). For the symmetric matrix \(A\), we construct the polynomial

$$ \begin{array}{@{}rcl@{}} p(t)&=&(ty+Ay)^{T}A(ty+Ay)\\ &=&t^{2}y^{T}Ay+ 2ty^{T}A^{2}y+y^{T}A^{3}y\\ &=&2t\|Ay\|^{2}_{2}+y^{T}A^{3}y. \end{array} $$

The positive semi-definiteness of \(A\) ensures that \(p(t) \ge 0\) for all real \(t\). However, if \(\|Ay\|_{2} \neq 0\), then \(p(t) < 0\) for sufficiently large negative \(t\). We conclude that \(\|Ay\|_{2} = 0\), so \(Ay = 0\), i.e., \(A{D_{K}^{T}}x = 0\). By Proposition 1, the eigenvectors of \(A\) corresponding to eigenvalue 0 are the constant multiples of \(\mathbf{1}\). Then \({D_{K}^{T}}x = c\mathbf{1}\) for some constant \(c \neq 0\), since \(x \neq 0\) and \({D_{K}^{T}}\) has full column rank. This cannot hold, because \({D_{K}^{T}}x\) must contain zero entries, \(D_{K}\) being a subsampling matrix onto a proper subset of the vertices.
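Lemma 1 can be illustrated numerically: the bi-Laplacian itself is singular, yet its principal submatrix on a proper subset \(K\) of vertices (the known pixels) is positive definite. A sketch assuming \(A = {{\Delta}_{w}^{2}} = L^{2}\) on a connected graph:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
W = rng.random((n, n)) + 0.1               # positive weights => connected graph
W = (W + W.T) / 2; np.fill_diagonal(W, 0)
L = np.diag(W.sum(axis=1)) - W
A = L @ L                                  # graph bi-Laplacian, A @ 1 = 0
K = [0, 2, 5]                              # proper subset of vertex indices
A_KK = A[np.ix_(K, K)]                     # principal submatrix D_K A D_K^T
assert abs(np.linalg.eigvalsh(A)[0]) < 1e-8   # A itself is singular
assert np.linalg.eigvalsh(A_KK)[0] > 1e-8     # but A_KK is positive definite
```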

Appendix 4. Proof of Theorem 1

Note that \(\tilde{A}\) is block upper triangular with diagonal blocks \(A_{KK}\) and \(I\). Hence, \(\det{(\tilde{A})}=\det{(A_{KK})}\det{(I)}=\det{(A_{KK})}\neq 0\) by Lemma 1, which implies the uniqueness of the solution.
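The determinant identity for block upper triangular matrices used here can be illustrated with small hypothetical blocks:

```python
import numpy as np

rng = np.random.default_rng(5)
A_KK = rng.random((3, 3)) + np.eye(3)            # stand-in nonsingular block
B = rng.random((3, 4))
At = np.block([[A_KK, B],
               [np.zeros((4, 3)), np.eye(4)]])   # block upper triangular
# det of a block upper triangular matrix is the product of the diagonal blocks
assert np.isclose(np.linalg.det(At), np.linalg.det(A_KK))
```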


About this article


Cite this article

Li, F., Ng, M.K. Image colorization by using graph bi-Laplacian. Adv Comput Math 45, 1521–1549 (2019). https://doi.org/10.1007/s10444-019-09677-x

