
A Message-Passing Approach to Phase Retrieval of Sparse Signals

  • Chapter
Excursions in Harmonic Analysis, Volume 4

Part of the book series: Applied and Numerical Harmonic Analysis (ANHA)

Abstract

In phase retrieval, the goal is to recover a signal \(\boldsymbol{x} \in \mathbb{C}^{N}\) from the magnitudes of linear measurements \(\boldsymbol{Ax} \in \mathbb{C}^{M}\). While recent theory has established that M ≈ 4N intensity measurements are necessary and sufficient to recover generic \(\boldsymbol{x}\), there is great interest in reducing the number of measurements through the exploitation of sparse \(\boldsymbol{x}\), which is known as compressive phase retrieval. In this work, we detail a novel, probabilistic approach to compressive phase retrieval based on the generalized approximate message passing (GAMP) algorithm. We then present a numerical study of the proposed PR-GAMP algorithm, demonstrating its excellent phase-transition behavior, robustness to noise, and runtime. For example, to successfully recover K-sparse signals, approximately \(M \geq 2K\log _{2}(N/K)\) intensity measurements suffice when K ≪ N and \(\boldsymbol{A}\) has i.i.d. Gaussian entries. When recovering a 6k-sparse 65k-pixel grayscale image from 32k randomly masked and blurred Fourier intensity measurements, PR-GAMP achieved a 99% success rate with a median runtime of only 12.6 seconds. Compared to the recently proposed CPRL, sparse-Fienup, and GESPAR algorithms, experiments show that PR-GAMP has a superior phase transition and orders-of-magnitude faster runtimes as the problem dimensions increase.


Notes

  1.

    \(\boldsymbol{x}\) may represent the sparse transform coefficients of a non-sparse signal-of-interest \(\boldsymbol{s} =\boldsymbol{\varPsi x}\) in a sparsifying basis (or frame) \(\boldsymbol{\varPsi }\), in which case the intensity measurements would be \(\boldsymbol{y} = \vert \boldsymbol{\varPhi s} +\boldsymbol{ w}\vert\) and \(\boldsymbol{A} \triangleq \boldsymbol{\varPhi \varPsi }\).

  2.

    We previously described PR-GAMP in the conference paper [28] and the workshop presentation [29].

  3.

    Here and elsewhere, we use \(\boldsymbol{y}\) when referring to the  M measurements that are available for signal reconstruction. In the canonical (noisy) compressive sensing problem, the measurements take the form \(\boldsymbol{y} =\boldsymbol{ Ax} +\boldsymbol{ w}\), but in the (noisy) compressive phase retrieval problem, the measurements instead take the form \(\boldsymbol{y} = \vert \boldsymbol{Ax} +\boldsymbol{ w}\vert\).

  4.

    http://sourceforge.net/projects/gampmatlab/.

  5.

    PR-GAMP is part of the GAMPmatlab package at http://sourceforge.net/projects/gampmatlab/.

  6.

    http://users.isy.liu.se/rt/ohlsson/code/CPRL.zip.

  7.

    Since CPRL rarely gave NMSE < 10⁻⁶, we reduced the definition of “success” to NMSE < 10⁻⁴ for this subsection only.

  8.

    Although the complexity of GAMP is known to scale as O(MN) for this type of \(\boldsymbol{A}\), the values of M and N in Table 4 are too small for this scaling law to manifest.

  9.

    For GESPAR, we used the November 2013 version of the Matlab code provided by the authors at https://sites.google.com/site/yoavshechtman/resources/software.

  10.

    The sparse-Fienup algorithm from [21] requires \(\boldsymbol{A}^{\mathsf{H}}\boldsymbol{A}\) to be a (scaled) identity matrix. Although GESPAR can in principle handle generic \(\boldsymbol{A}\), the implementation provided by the authors is based on 1D and 2D Fourier \(\boldsymbol{A}\) and is not easily modified.

  11.

    Here, since we used only two masks, we ensured invertibility by constructing the diagonal of \(\boldsymbol{D}_{1}\) using exactly  N∕2 unit-valued entries positioned uniformly at random and constructing the diagonal of \(\boldsymbol{D}_{2}\) as its complement, so that \(\boldsymbol{D}_{1} +\boldsymbol{ D}_{2} =\boldsymbol{ I}\).

  12.

    Since each \(\boldsymbol{B}_{i}\) was a wide matrix, its nonzero band was wrapped from bottom to top when necessary.

  13.

    For notational brevity, the subscript “m” is omitted throughout this appendix.

  14.

    \(\mathcal{N}(z; a,A)\mathcal{N}(z; b,B)\! =\! \mathcal{N}\Big(z; \frac{ \frac{a}{A}+ \frac{b}{B}} { \frac{1} {A}+ \frac{1} {B}}, \frac{1} { \frac{1} {A}+ \frac{1} {B}}\Big)\mathcal{N}(a; b,A\! +\! B).\)
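The compressive-phase-retrieval measurement model described in Note 3, \(\boldsymbol{y} = \vert \boldsymbol{Ax} +\boldsymbol{ w}\vert\), can be sketched numerically. The sketch below uses only the Python standard library; the problem sizes and noise level are illustrative, not those used in the chapter's experiments:

```python
import math
import random

random.seed(0)
N, M, K = 64, 32, 4  # illustrative problem sizes (not the chapter's)

def cgauss():
    # sample of a standard circular complex Gaussian
    return complex(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)

# K-sparse complex signal x
x = [0j] * N
for n in random.sample(range(N), K):
    x[n] = cgauss()

# i.i.d. complex-Gaussian sensing matrix A and small AWGN w
A = [[cgauss() / math.sqrt(N) for _ in range(N)] for _ in range(M)]
w = [1e-3 * cgauss() for _ in range(M)]

# phase retrieval observes only the magnitudes: y_m = |a_m^H x + w_m|
y = [abs(sum(a.conjugate() * xn for a, xn in zip(row, x)) + wm)
     for row, wm in zip(A, w)]
```

Replacing the magnitude with the complex value \(\boldsymbol{Ax}+\boldsymbol{w}\) itself recovers the canonical compressive sensing model of Note 3.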

References

  1. J.R. Fienup, Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982)

  2. R.P. Millane, Recent advances in phase retrieval. Int. Soc. Opt. Eng. 6316 (2006)

  3. O. Bunk, A. Diaz, F. Pfeiffer, C. David, B. Schmitt, D.K. Satapathy, J.F. Veen, Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels. Acta Crystallogr. Sect. A Found. Crystallogr. 63(4), 306–314 (2007)

  4. R.W. Harrison, Phase problem in crystallography. J. Opt. Soc. Am. A 10(5), 1046–1055 (1993)

  5. R.P. Millane, Phase retrieval in crystallography and optics. J. Opt. Soc. Am. A 7, 394–411 (1990)

  6. A. Chai, M. Moscoso, G. Papanicolaou, Array imaging using intensity-only measurements. Inverse Prob. 27(1), 1–16 (2011)

  7. A. Walther, The question of phase retrieval in optics. Opt. Acta 10(1), 41–49 (1963)

  8. J.C. Dainty, J.R. Fienup, Phase retrieval and image reconstruction for astronomy, in Image Recovery: Theory and Application, ed. by H. Stark, ch. 7 (Academic Press, New York, 1987), pp. 231–275

  9. J. Miao, T. Ishikawa, Q. Shen, T. Earnest, Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes. Annu. Rev. Phys. Chem. 59, 387–410 (2008)

  10. R. Balan, P.G. Casazza, D. Edidin, On signal reconstruction without phase. Appl. Comput. Harmon. Anal. 20, 345–356 (2006)

  11. L. Demanet, V. Jugnon, Convex recovery from interferometric measurements, arXiv:1307.6864 (2013)

  12. J.V. Corbett, The Pauli problem, state reconstruction and quantum-real numbers. Rep. Math. Phys. 57(1) (2006)

  13. T. Heinosaari, L. Mazzarella, M.M. Wolf, Quantum tomography under prior information, arXiv:1109.5478 (2011)

  14. B.G. Bodmann, N. Hammen, Stable phase retrieval with low-redundancy frames, arXiv:1302.5487 (2013)

  15. S. Gazit, A. Szameit, Y.C. Eldar, M. Segev, Super-resolution and reconstruction of sparse sub-wavelength images. Opt. Express 17(26), 23920–23946 (2009)

  16. A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E.B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y.C. Eldar, O. Cohen, M. Segev, Sparsity-based single-shot subwavelength coherent diffractive imaging. Nat. Mater. 11, 455–459 (2012)

  17. S. Marchesini, Ab initio compressive phase retrieval, arXiv:0809.2006 (2008)

  18. Y. Shechtman, E. Small, Y. Lahini, M. Verbin, Y. Eldar, Y. Silberberg, M. Segev, Sparsity-based super-resolution and phase-retrieval in waveguide arrays. Opt. Express 21(20), 24015–24024 (2013)

  19. X. Li, V. Voroninski, Sparse signal recovery from quadratic measurements via convex programming, arXiv:1209.4785 (2012)

  20. M.L. Moravec, J.K. Romberg, R. Baraniuk, Compressive phase retrieval, in SPIE Conf. Series, vol. 6701, San Diego (2007)

  21. S. Mukherjee, C.S. Seelamantula, An iterative algorithm for phase retrieval with sparsity constraints: application to frequency domain optical coherence tomography, in Proc. IEEE Int. Conf. Acoust. Speech & Signal Process., Kyoto (2012), pp. 553–556

  22. H. Ohlsson, A.Y. Yang, R. Dong, S.S. Sastry, CPRL – an extension of compressive sensing to the phase retrieval problem, in Proc. Neural Inform. Process. Syst. Conf. (2012) (Full version at arXiv:1111.6323)

  23. E.J. Candès, T. Strohmer, V. Voroninski, PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66, 1241–1274 (2013)

  24. P. Netrapalli, P. Jain, S. Sanghavi, Phase retrieval using alternating minimization, in Proc. Neural Inform. Process. Syst. Conf. (2013) (See also arXiv:1306.0160)

  25. Y. Shechtman, A. Beck, Y.C. Eldar, GESPAR: efficient phase retrieval of sparse signals. IEEE Trans. Signal Process. 62, 928–938 (2014)

  26. C. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity (Dover, New York, 1998)

  27. S. Rangan, Generalized approximate message passing for estimation with random linear mixing, in Proc. IEEE Int. Symp. Inform. Thy., Saint Petersburg (2011), pp. 2168–2172 (Full version at arXiv:1010.5141)

  28. P. Schniter, S. Rangan, Compressive phase retrieval via generalized approximate message passing, in Proc. Allerton Conf. Commun. Control Comput., Monticello (2012)

  29. P. Schniter, Compressive phase retrieval via generalized approximate message passing, in February Fourier Talks (FFT) Workshop on Phaseless Reconstruction, College Park (2013)

  30. D.L. Donoho, A. Maleki, A. Montanari, Message passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. 106, 18914–18919 (2009)

  31. D.L. Donoho, A. Maleki, A. Montanari, Message passing algorithms for compressed sensing: I. Motivation and construction, in Proc. Inform. Theory Workshop, Cairo (2010), pp. 1–5

  32. J. Pearl, Probabilistic Reasoning in Intelligent Systems (Morgan Kaufman, San Mateo, 1988)

  33. F.R. Kschischang, B.J. Frey, H.-A. Loeliger, Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory 47, 498–519 (2001)

  34. B.J. Frey, D.J.C. MacKay, A revolution: belief propagation in graphs with cycles, in Proc. Neural Inform. Process. Syst. Conf., Denver (1997), pp. 479–485

  35. M. Bayati, A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. Inf. Theory 57, 764–785 (2011)

  36. M. Bayati, M. Lelarge, A. Montanari, Universality in polytope phase transitions and iterative algorithms, in Proc. IEEE Int. Symp. Inf. Theory, Boston (2012), pp. 1–5 (See also arXiv:1207.7321)

  37. S.O. Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. 24(1), 46–156 (1945)

  38. A. Dempster, N.M. Laird, D.B. Rubin, Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 39, 1–17 (1977)

  39. J.P. Vila, P. Schniter, Expectation-maximization Gaussian-mixture approximate message passing. IEEE Trans. Signal Process. 61, 4658–4672 (2013)

  40. G.F. Cooper, The computational complexity of probabilistic inference using Bayesian belief networks. Artif. Intell. 42, 393–405 (1990)

  41. P. Schniter, Turbo reconstruction of structured sparse signals, in Proc. Conf. Inform. Science & Syst., Princeton (2010), pp. 1–6

  42. M. Borgerding, P. Schniter, Generalized approximate message passing for the cosparse analysis model, arXiv:1312.3968 (2013)

  43. J. Vila, P. Schniter, An empirical-Bayes approach to recovering linearly constrained non-negative sparse signals, in Proc. IEEE Workshop Comp. Adv. Multi-Sensor Adaptive Process., Saint Martin (2013), pp. 5–8 (Full version at arXiv:1310.2806)

  44. S. Som, L.C. Potter, P. Schniter, On approximate message passing for reconstruction of non-uniformly sparse signals, in Proc. National Aerospace and Electronics Conf., Dayton (2010)

  45. S. Rangan, P. Schniter, A. Fletcher, On the convergence of generalized approximate message passing with arbitrary matrices, in Proc. IEEE Int. Symp. Inform. Thy., Honolulu (2014) (Full version at arXiv:1402.3210)

  46. E.J. Candès, X. Li, M. Soltanolkotabi, Phase retrieval from coded diffraction patterns, arXiv:1310.3240 (2013)

  47. G. Zheng, R. Horstmeyer, C. Yang, Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photon. 7, 739–745 (2013)

  48. M. Abramowitz, I.A. Stegun (eds.), Handbook of Mathematical Functions (Dover, New York, 1964)

  49. K.V. Mardia, P.E. Jupp, Directional Statistics (Wiley, New York, 2009)

  50. C. Robert, Modified Bessel functions and their applications in probability and statistics. Stat. Probab. Lett. 9, 155–161 (1990)


Author information

Correspondence to Philip Schniter.

Appendices

Appendix A: Output Thresholding Rules

In this appendix, we derive expressions (10) and (12), which are used to compute the functions \(g_{\mathrm{out},m}\) and \(g'_{\mathrm{out},m}\) defined in lines (D2) and (D3) of Table 1.

To facilitate the derivations in this appendix (see Note 13), we first rewrite \(p_{Y \vert Z}(y\vert z)\) in a form different from (8). In particular, recalling that—under our AWGN assumption—the noisy transform outputs \(u = z + w\) are conditionally distributed as \(p(u\vert z) = \mathcal{N}(u;z,\nu ^{w})\), we first transform \(u = ye^{j\theta }\) from rectangular to polar coordinates to obtain

$$\displaystyle{ p(y,\theta \vert z) = 1_{y\geq 0}1_{\theta \in [0,2\pi )}\,\mathcal{N}(ye^{j\theta };z,\nu ^{w})\,y, }$$
(21)

where  y is the Jacobian of the transformation, and then integrate out the unobserved phase \(\theta\) to obtain

$$\displaystyle{ p_{Y \vert Z}(y\vert z) = 1_{y\geq 0}\,y\int _{0}^{2\pi }\mathcal{N}(ye^{j\theta };z,\nu ^{w})\,d\theta. }$$
(22)

We begin by deriving the integration constant

$$\displaystyle\begin{array}{rcl} & & C(y,\nu ^{w},\widehat{p},\nu ^{p}) \triangleq \int _{ \mathbb{C}}p_{Y \vert Z}(y\vert z)\,\mathcal{N}(z;\widehat{p},\nu ^{p})dz \\ & & = y\,1_{y\geq 0}\int _{0}^{2\pi }\int _{ \mathbb{C}}\mathcal{N}(ye^{j\theta };z,\nu ^{w})\mathcal{N}(z;\widehat{p},\nu ^{p})dzd\theta {}\end{array}$$
(23)
$$\displaystyle\begin{array}{rcl} & & = y\,1_{y\geq 0}\int _{0}^{2\pi }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w} +\nu ^{p})d\theta,{}\end{array}$$
(24)

where we used the Gaussian-pdf multiplication rule (see Note 14) in (24). Noting the similarity between (24) and (22), the equivalence between (22) and (8) implies that

$$\displaystyle\begin{array}{rcl} C(y,\nu ^{w},\widehat{p},\nu ^{p}) = \frac{2y} {\nu ^{w} +\nu ^{p}}\exp \bigg(-\frac{y^{2} + \vert \widehat{p}\vert ^{2}} {\nu ^{w} +\nu ^{p}} \bigg)I_{0}\bigg( \frac{2y\vert \widehat{p}\vert } {\nu ^{w} +\nu ^{p}}\bigg)\,1_{y\geq 0}.& &{}\end{array}$$
(25)

In the sequel, we make the practical assumption that y > 0, allowing us to drop the indicator \(1_{y\geq 0}\) and invert C.
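As a sanity check, the closed form (25) can be compared against direct numerical \(\theta\)-integration of (24). The stdlib-only sketch below computes \(I_0\) from its integral representation; the test values are arbitrary:

```python
import cmath
import math

def cnorm_pdf(c, a, v):
    # circular complex Gaussian pdf N(c; a, v)
    return math.exp(-abs(c - a) ** 2 / v) / (math.pi * v)

def bessel_i0(x, n=3000):
    # I0(x) = (1/pi) * integral_0^pi exp(x cos t) dt  (midpoint rule)
    h = math.pi / n
    return sum(math.exp(x * math.cos((k + 0.5) * h)) for k in range(n)) * h / math.pi

y, nu_w, p_hat, nu_p = 1.3, 0.05, 0.8 + 0.4j, 0.1   # arbitrary test point
v = nu_w + nu_p

# closed form (25), with the y > 0 indicator dropped
C_closed = (2 * y / v) * math.exp(-(y**2 + abs(p_hat)**2) / v) \
           * bessel_i0(2 * y * abs(p_hat) / v)

# direct theta-integration of (24)
n = 4000
h = 2 * math.pi / n
C_num = y * sum(cnorm_pdf(y * cmath.exp(1j * (k + 0.5) * h), p_hat, v)
                for k in range(n)) * h

assert abs(C_closed - C_num) / C_closed < 1e-6
```

The midpoint rule is spectrally accurate here because both integrands are smooth and periodic.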

Next, we derive the conditional mean

$$\displaystyle\begin{array}{rcl} \mathrm{E}_{Z\vert Y,P}\{Z\vert y,\widehat{p};\nu ^{p}\} = C(y,\nu ^{w},\widehat{p},\nu ^{p})^{-1}\int _{ \mathbb{C}}z\,p_{Y \vert Z}(y\vert z;\nu ^{w})\mathcal{N}(z;\widehat{p},\nu ^{p})dz.& &{}\end{array}$$
(26)

Plugging (22) into (26) and applying the Gaussian-pdf multiplication rule,

$$\displaystyle\begin{array}{rcl} & & \mathrm{E}_{Z\vert Y,P}\{Z\vert y,\widehat{p};\nu ^{p}\} \\ & & = C^{-1}y\int _{ 0}^{2\pi }\int _{ \mathbb{C}}z\,\mathcal{N}(z;ye^{j\theta },\nu ^{w})\mathcal{N}(z;\widehat{p},\nu ^{p})dzd\theta {}\end{array}$$
(27)
$$\displaystyle\begin{array}{rcl} & & = C^{-1}y\int _{ 0}^{2\pi }\int _{ \mathbb{C}}z\,\mathcal{N}\bigg(z; \frac{ye^{j\theta }/\nu ^{w} +\widehat{ p}/\nu ^{p}} {1/\nu ^{w} + 1/\nu ^{p}}, \frac{1} {1/\nu ^{w} + 1/\nu ^{p}}\bigg) \\ & & \quad \times \mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})dzd\theta {}\end{array}$$
(28)
$$\displaystyle\begin{array}{rcl} & & = C^{-1}y\int _{ 0}^{2\pi }\frac{ye^{j\theta }/\nu ^{w} +\widehat{ p}/\nu ^{p}} {1/\nu ^{w} + 1/\nu ^{p}} \mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta {}\end{array}$$
(29)
$$\displaystyle\begin{array}{rcl} & & = \frac{y/\nu ^{w}} {1/\nu ^{w} + 1/\nu ^{p}}C^{-1}y\int _{ 0}^{2\pi }e^{j\theta }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta \\ & & \quad + \frac{\widehat{p}/\nu ^{p}} {1/\nu ^{w} + 1/\nu ^{p}}C^{-1}y\int _{ 0}^{2\pi }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta {}\end{array}$$
(30)
$$\displaystyle\begin{array}{rcl} = \frac{y} {\nu ^{w}/\nu ^{p} + 1}C^{-1}y\int _{ 0}^{2\pi }e^{j\theta }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta + \frac{\widehat{p}} {\nu ^{p}/\nu ^{w} + 1}.& &{}\end{array}$$
(31)

Expanding the \(\mathcal{N}\) term, the integral in (31) becomes

$$\displaystyle\begin{array}{rcl} & & \int _{0}^{2\pi }e^{j\theta }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta \\ & & = \frac{1} {\pi (\nu ^{w} +\nu ^{p})}\exp \bigg(-\frac{y^{2} + \vert \widehat{p}\vert ^{2}} {\nu ^{w} +\nu ^{p}} \bigg)\int _{0}^{2\pi }e^{j\theta }\exp \bigg( \frac{2y\vert \widehat{p}\vert } {\nu ^{w} +\nu ^{p}}\cos (\theta -\psi )\bigg)d\theta {}\end{array}$$
(32)
$$\displaystyle\begin{array}{rcl} & & = \frac{1} {\pi (\nu ^{w} +\nu ^{p})}\exp \bigg(-\frac{y^{2} + \vert \widehat{p}\vert ^{2}} {\nu ^{w} +\nu ^{p}} \bigg)e^{j\psi }\int _{ 0}^{2\pi }e^{j\theta '}\exp \bigg( \frac{2y\vert \widehat{p}\vert } {\nu ^{w} +\nu ^{p}}\cos (\theta ')\bigg)d\theta '{}\end{array}$$
(33)
$$\displaystyle\begin{array}{rcl} & & = \frac{2e^{j\psi }} {\nu ^{w} +\nu ^{p}}\exp \bigg(-\frac{y^{2} + \vert \widehat{p}\vert ^{2}} {\nu ^{w} +\nu ^{p}} \bigg)I_{1}\bigg( \frac{2y\vert \widehat{p}\vert } {\nu ^{w} +\nu ^{p}}\bigg),{}\end{array}$$
(34)

where \(\psi\) denotes the phase of \(\widehat{p}\), and where the integral in (33) was resolved using the expression in [48, 9.6.19]. Plugging (34) into (31) gives

$$\displaystyle\begin{array}{rcl} \mathrm{E}_{Z\vert Y,P}\{Z\vert y,\widehat{p};\nu ^{p}\}& & = \frac{\widehat{p}} {\nu ^{p}/\nu ^{w} + 1} + \frac{ye^{j\psi }} {\nu ^{w}/\nu ^{p} + 1} \frac{I_{1}\big(\frac{2y\vert \widehat{p}\vert } {\nu ^{w}+\nu ^{p}} \big)} {I_{0}\big(\frac{2y\vert \widehat{p}\vert } {\nu ^{w}+\nu ^{p}} \big)},{}\end{array}$$
(35)

which agrees with (10).
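For intuition, the closed-form posterior mean (35) can be implemented directly and checked against the one-dimensional \(\theta\)-integral form (29). In this sketch, \(I_0\) and \(I_1\) are computed from their integral representations, and the test point is arbitrary:

```python
import cmath
import math

def i0(x, n=3000):
    h = math.pi / n
    return sum(math.exp(x * math.cos((k + 0.5) * h)) for k in range(n)) * h / math.pi

def i1(x, n=3000):
    h = math.pi / n
    return sum(math.exp(x * math.cos((k + 0.5) * h)) * math.cos((k + 0.5) * h)
               for k in range(n)) * h / math.pi

def cond_mean(y, p_hat, nu_p, nu_w):
    # closed-form posterior mean from (35)
    rho = 2 * y * abs(p_hat) / (nu_w + nu_p)
    psi = cmath.phase(p_hat)
    return (p_hat / (nu_p / nu_w + 1)
            + (y * cmath.exp(1j * psi) / (nu_w / nu_p + 1)) * i1(rho) / i0(rho))

# numerical check against the theta-integral form (29)
y, nu_w, nu_p, p_hat = 1.1, 0.08, 0.2, 0.5 - 0.3j
v = nu_w + nu_p
n = 4000
h = 2 * math.pi / n
num = den = 0j
for k in range(n):
    th = (k + 0.5) * h
    pdf = math.exp(-abs(y * cmath.exp(1j * th) - p_hat) ** 2 / v) / (math.pi * v)
    g = (y * cmath.exp(1j * th) / nu_w + p_hat / nu_p) / (1 / nu_w + 1 / nu_p)
    num += g * pdf * h
    den += pdf * h

assert abs(cond_mean(y, p_hat, nu_p, nu_w) - num / den) < 1e-8
```

In a practical implementation the ratio \(I_1/I_0\) would be computed with exponentially scaled Bessel routines to avoid overflow at large arguments; the integral forms above are for verification only.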

Finally, we derive the conditional covariance

$$\displaystyle\begin{array}{rcl} \mathrm{var}_{Z\vert Y,P}\{Z\vert y,\widehat{p};\nu ^{p}\}& & = C(y,\nu ^{w},\widehat{p},\nu ^{p})^{-1}\int _{ \mathbb{C}}\vert z\vert ^{2}\,p_{ Y \vert Z}(y\vert z;\nu ^{w})\mathcal{N}(z;\widehat{p},\nu ^{p})dz \\ & & \quad -\vert \mathrm{E}_{Z\vert Y,P}\{Z\vert y,\widehat{p};\nu ^{p}\}\vert ^{2}. {}\end{array}$$
(36)

Focusing on the first term in (36), if we plug in (22) and apply the Gaussian-pdf multiplication rule, we get

$$\displaystyle\begin{array}{rcl} & & C(y,\nu ^{w},\widehat{p},\nu ^{p})^{-1}\int _{ \mathbb{C}}\vert z\vert ^{2}\,p_{ Y \vert Z}(y\vert z;\nu ^{w})\mathcal{N}(z;\widehat{p},\nu ^{p})dz \\ & & = C^{-1}y\int _{ 0}^{2\pi }\int _{ \mathbb{C}}\vert z\vert ^{2}\,\mathcal{N}\bigg(z; \frac{ye^{j\theta }/\nu ^{w} +\widehat{ p}/\nu ^{p}} {1/\nu ^{w} + 1/\nu ^{p}}, \frac{1} {1/\nu ^{w} + 1/\nu ^{p}}\bigg)dz \\ & & \quad \times \mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta {}\end{array}$$
(37)
$$\displaystyle\begin{array}{rcl} & & = C^{-1}y\int _{ 0}^{2\pi }\bigg(\big\vert \frac{ye^{j\theta }/\nu ^{w} +\widehat{ p}/\nu ^{p}} {1/\nu ^{w} + 1/\nu ^{p}} \big\vert ^{2} + \frac{1} {1/\nu ^{w} + 1/\nu ^{p}}\bigg)\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta {}\end{array}$$
(38)
$$\displaystyle\begin{array}{rcl} & & = C^{-1}y\int _{ 0}^{2\pi }\frac{\vert y\vert ^{2}/(\nu ^{w})^{2} + \vert \widehat{p}\vert ^{2}/(\nu ^{p})^{2} + 2y\vert \widehat{p}\vert /(\nu ^{w}\nu ^{p})\mathrm{Re}\{e^{j(\theta -\psi )}\}} {(1/\nu ^{w} + 1/\nu ^{p})^{2}} \\ & & \quad \times \mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta + \frac{1} {1/\nu ^{w} + 1/\nu ^{p}} {}\end{array}$$
(39)
$$\displaystyle\begin{array}{rcl} & & = \frac{\vert y\vert ^{2}/(\nu ^{w})^{2} + \vert \widehat{p}\vert ^{2}/(\nu ^{p})^{2}} {(1/\nu ^{w} + 1/\nu ^{p})^{2}} + \frac{1} {1/\nu ^{w} + 1/\nu ^{p}} \\ & & \quad + \frac{2y\vert \widehat{p}\vert /(\nu ^{w}\nu ^{p})} {(1/\nu ^{w} + 1/\nu ^{p})^{2}}C^{-1}y\mathrm{Re}\bigg\{e^{-j\psi }\int _{ 0}^{2\pi }e^{j\theta }\mathcal{N}(ye^{j\theta };\widehat{p},\nu ^{w}\! +\!\nu ^{p})d\theta \bigg\}{}\end{array}$$
(40)
$$\displaystyle\begin{array}{rcl} & & = \frac{\vert y\vert ^{2}/(\nu ^{w})^{2} + \vert \widehat{p}\vert ^{2}/(\nu ^{p})^{2}} {(1/\nu ^{w} + 1/\nu ^{p})^{2}} + \frac{1} {1/\nu ^{w} + 1/\nu ^{p}} \\ & & \quad + \frac{2y\vert \widehat{p}\vert /(\nu ^{w}\nu ^{p})} {(1/\nu ^{w} + 1/\nu ^{p})^{2}}C^{-1}y \frac{2} {\nu ^{w} +\nu ^{p}}\exp \bigg(-\frac{y^{2} + \vert \widehat{p}\vert ^{2}} {\nu ^{w} +\nu ^{p}} \bigg)I_{1}\bigg( \frac{2y\vert \widehat{p}\vert } {\nu ^{w} +\nu ^{p}}\bigg){}\end{array}$$
(41)
$$\displaystyle\begin{array}{rcl} & & = \frac{\vert y\vert ^{2}/(\nu ^{w})^{2} + \vert \widehat{p}\vert ^{2}/(\nu ^{p})^{2}} {(1/\nu ^{w} + 1/\nu ^{p})^{2}} + \frac{1} {1/\nu ^{w} + 1/\nu ^{p}} + \frac{2y\vert \widehat{p}\vert /(\nu ^{w}\nu ^{p})} {(1/\nu ^{w} + 1/\nu ^{p})^{2}} \frac{I_{1}\big(\frac{2y\vert \widehat{p}\vert } {\nu ^{w}+\nu ^{p}} \big)} {I_{0}\big(\frac{2y\vert \widehat{p}\vert } {\nu ^{w}+\nu ^{p}} \big)},{}\end{array}$$
(42)

where (41) used (34) and (42) used (25). By plugging (42) back into (36), we obtain the expression given in (12).

Appendix B: EM Update for Noise Variance

Noting that

$$\displaystyle\begin{array}{rcl} \ln p(\boldsymbol{y},\boldsymbol{x};\nu ^{w})& & =\ln p(\boldsymbol{y}\vert \boldsymbol{x};\nu ^{w}) +\ln p(\boldsymbol{x};\nu ^{w}) {}\end{array}$$
(43)
$$\displaystyle\begin{array}{rcl} & & =\sum _{ m=1}^{M}\ln p_{ Y \vert Z}(y_{m}\vert \boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x};\nu ^{w}) + \text{const} {}\end{array}$$
(44)
$$\displaystyle\begin{array}{rcl}& & =\sum _{ m=1}^{M}\ln \Big(y_{ m}\int _{0}^{2\pi }\mathcal{N}(y_{ m}e^{j\theta _{m} };\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x},\nu ^{w})d\theta _{ m}\Big) + \text{const},{}\end{array}$$
(45)

where (45) used the expression for \(p_{Y \vert Z}\) from (22), we have

$$\displaystyle\begin{array}{rcl} & & \mathrm{E}\big\{\ln p(\boldsymbol{y},\boldsymbol{x};\nu ^{w})\big\vert \boldsymbol{y};\widehat{\nu ^{w}}[i]\big\} \\ & & \ =\int _{\mathbb{C}^{N}}p(\boldsymbol{x}\vert \boldsymbol{y};\widehat{\nu ^{w}}[i])\sum _{ m=1}^{M}\ln \Big(\int _{ 0}^{2\pi }\mathcal{N}(y_{ m}e^{j\theta _{m} };\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x},\nu ^{w})d\theta _{ m}\Big)d\boldsymbol{x}.{}\end{array}$$
(46)

To circumvent the high-dimensional integral in (46), we use the same large system limit approximation used in the derivation of GAMP [27]: for sufficiently dense \(\boldsymbol{A}\), as \(N \rightarrow \infty\), the central limit theorem (CLT) suggests that \(\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x} = z_{m}\) will become Gaussian. In particular, when \(\boldsymbol{x} \sim p(\boldsymbol{x}\vert \boldsymbol{y};\widehat{\nu ^{w}}[i])\), the CLT suggests that \(\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x} \sim \mathcal{N}(\widehat{z}_{m},\nu _{m}^{z})\), where

$$\displaystyle\begin{array}{rcl} \widehat{z}_{m}[i]& \triangleq & \sum _{n=1}^{N}a_{ mn}\widehat{x}_{n}[i],{}\end{array}$$
(47)
$$\displaystyle\begin{array}{rcl}\nu _{m}^{z}[i]& \triangleq & \sum _{ n=1}^{N}\vert a_{ mn}\vert ^{2}\nu _{ n}^{x}[i],{}\end{array}$$
(48)

such that \(\widehat{x}_{n}[i]\) and \(\nu _{n}^{x}[i]\) are the mean and variance of the marginal posterior pdf \(p(x_{n}\vert \boldsymbol{y};\widehat{\nu ^{w}}[i])\). In this case,

$$\displaystyle\begin{array}{rcl} & & \mathrm{E}\big\{\ln p(\boldsymbol{y},\boldsymbol{x};\nu ^{w})\big\vert \boldsymbol{y};\widehat{\nu ^{w}}[i]\big\} \\ & & =\sum _{ m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i])\,\ln \int _{ 0}^{2\pi }\mathcal{N}(y_{ m}e^{j\theta _{m} };z_{m},\nu ^{w})d\theta _{ m}dz_{m}.{}\end{array}$$
(49)
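The CLT quantities (47) and (48) are just a matrix-vector product and a squared-magnitude weighting; a minimal sketch with hypothetical toy values:

```python
def z_moments(A, x_hat, nu_x):
    # (47): z_hat_m = sum_n a_mn * x_hat_n
    z_hat = [sum(a * xh for a, xh in zip(row, x_hat)) for row in A]
    # (48): nu_z_m = sum_n |a_mn|^2 * nu_x_n
    nu_z = [sum(abs(a) ** 2 * nx for a, nx in zip(row, nu_x)) for row in A]
    return z_hat, nu_z

# hypothetical posterior moments for a 2x2 example
A = [[1 + 0j, 0.5j], [0.5 + 0j, -1j]]
x_hat = [0.2 + 0.1j, -0.3j]
nu_x = [0.01, 0.02]
z_hat, nu_z = z_moments(A, x_hat, nu_x)
```

Note that the variances \(\nu _{m}^{z}[i]\) depend only on the magnitudes of the entries of \(\boldsymbol{A}\), which is what makes the fast-operator implementations of GAMP possible.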

From (14), we see that any solution \(\widehat{\nu ^{w}}[i\! +\! 1]> 0\) is necessarily a value of \(\nu ^{w}\) that zeros the derivative of the expected log-pdf. Thus, using the expected log-pdf expression from (49),

$$\displaystyle\begin{array}{rcl} 0& & =\sum _{ m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i])\frac{\int _{0}^{2\pi } \frac{\partial } {\partial \nu ^{w}}\mathcal{N}(y_{m}e^{j\theta _{m}};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{m}} {\int _{0}^{2\pi }\mathcal{N}(y_{m}e^{j\theta _{m}'};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{m}'} dz_{m}.{}\end{array}$$
(50)

Plugging the derivative expression (see [39])

$$\displaystyle\begin{array}{rcl} & & \frac{\partial } {\partial \nu ^{w}}\mathcal{N}(y_{m}e^{j\theta _{m} };z_{m},\widehat{\nu ^{w}}[i\! +\! 1]) \\ & & = \frac{\mathcal{N}(y_{m}e^{j\theta _{m}};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])} {\widehat{\nu ^{w}}[i\! +\! 1]^{2}} \big(\vert y_{m}e^{j\theta _{m} } - z_{m}\vert ^{2} -\widehat{\nu ^{w}}[i\! +\! 1]\big),{}\end{array}$$
(51)

into (50) and multiplying both sides by \(\widehat{\nu ^{w}}[i\! +\! 1]^{2}\), we find

$$\displaystyle\begin{array}{rcl} \widehat{\nu ^{w}}[i\! +\! 1]& & = \frac{1} {M}\sum _{m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i]) \\ & & \quad \times \frac{\int _{0}^{2\pi }\vert y_{m}e^{j\theta _{m}} - z_{m}\vert ^{2}\mathcal{N}(y_{m}e^{j\theta _{m}};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{m}} {\int _{0}^{2\pi }\mathcal{N}(y_{m}e^{j\theta _{m}'};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{m}'} dz_{m}{}\end{array}$$
(52)
$$\displaystyle\begin{array}{rcl} & & = \frac{1} {M}\sum _{m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i]) \\ & & \quad \times \int _{0}^{2\pi }\vert y_{ m}e^{j\theta _{m} } - z_{m}\vert ^{2}p(\theta _{ m};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{ m}dz_{m}{}\end{array}$$
(53)

with the newly defined pdf

$$\displaystyle\begin{array}{rcl} p(\theta _{m};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])& \triangleq & \frac{\mathcal{N}(y_{m}e^{j\theta _{m}};z_{ m},\widehat{\nu ^{w}}[i\! +\! 1])} {\int _{0}^{2\pi }\mathcal{N}(y_{m}e^{j\theta _{m}'};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])d\theta _{m}'}{}\end{array}$$
(54)
$$\displaystyle\begin{array}{rcl} & \propto & \exp \Big(-\frac{\vert z_{m} - y_{m}e^{j\theta _{m}}\vert ^{2}} {\widehat{\nu ^{w}}[i\! +\! 1]} \Big) {}\end{array}$$
(55)
$$\displaystyle\begin{array}{rcl}& \propto & \exp \big(\kappa _{m}\cos (\theta _{m} -\phi _{m})\big)\text{ for }\kappa _{m} \triangleq \frac{2\vert z_{m}\vert y_{m}} {\widehat{\nu ^{w}}[i\! +\! 1]},{}\end{array}$$
(56)

where \(\phi _{m}\) denotes the phase of \(z_{m}\) (recall (5)). The proportionality (56) identifies this pdf as a von Mises distribution [49], which can be stated in normalized form as

$$\displaystyle\begin{array}{rcl} p(\theta _{m};z_{m},\widehat{\nu ^{w}}[i\! +\! 1])& & = \frac{\exp (\kappa _{m}\cos (\theta _{m} -\phi _{m}))} {2\pi I_{0}(\kappa _{m})}.{}\end{array}$$
(57)
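The von Mises form (57) can be verified numerically: the density integrates to one, and its mean resultant \(\mathrm{E}\{\cos (\theta _{m} -\phi _{m})\}\) equals the Bessel ratio \(I_{1}(\kappa )/I_{0}(\kappa ) = R_{0}(\kappa )\) from (13). A stdlib-only check with arbitrary \(\kappa\) and \(\phi\):

```python
import math

def i0(x, n=3000):
    # I0 via its integral representation (midpoint rule)
    h = math.pi / n
    return sum(math.exp(x * math.cos((k + 0.5) * h)) for k in range(n)) * h / math.pi

def i1(x, n=3000):
    # I1 via its integral representation (midpoint rule)
    h = math.pi / n
    return sum(math.exp(x * math.cos((k + 0.5) * h)) * math.cos((k + 0.5) * h)
               for k in range(n)) * h / math.pi

kappa, phi = 3.0, 0.7   # arbitrary concentration and mean direction
n = 4000
h = 2 * math.pi / n
Z = 2 * math.pi * i0(kappa)   # von Mises normalizer from (57)

mass = sum(math.exp(kappa * math.cos((k + 0.5) * h - phi)) for k in range(n)) * h / Z
mean_cos = sum(math.cos((k + 0.5) * h - phi)
               * math.exp(kappa * math.cos((k + 0.5) * h - phi))
               for k in range(n)) * h / Z

assert abs(mass - 1.0) < 1e-9                          # (57) is a valid pdf
assert abs(mean_cos - i1(kappa) / i0(kappa)) < 1e-9    # equals R_0(kappa)
```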

Expanding the quadratic in (53) and plugging in (57), we get

$$\displaystyle\begin{array}{rcl} \widehat{\nu ^{w}}[i\! +\! 1]& & = \frac{1} {M}\sum _{m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i])\bigg(y_{ m}^{2} + \vert z_{ m}\vert ^{2} \\ & & \quad - 2y_{m}\vert z_{m}\vert \int _{0}^{2\pi }\cos (\theta _{ m} -\phi _{m})\,\frac{\exp (\kappa _{m}\cos (\theta _{m} -\phi _{m}))} {2\pi I_{0}(\kappa _{m})} d\theta _{m}\bigg)dz_{m} {}\end{array}$$
(58)
$$\displaystyle\begin{array}{rcl} & & = \frac{1} {M}\sum _{m=1}^{M}\int _{ \mathbb{C}}\mathcal{N}(z_{m};\widehat{z}_{m}[i],\nu _{m}^{z}[i]) \\ & & \quad \times \bigg (y_{m}^{2} + \vert z_{ m}\vert ^{2} - 2y_{ m}\vert z_{m}\vert R_{0}\bigg(\frac{2\vert z_{m}\vert y_{m}} {\widehat{\nu ^{w}}[i\! +\! 1]} \bigg)\bigg)dz_{m},{}\end{array}$$
(59)

where \(R_{0}(\cdot )\) is the modified Bessel function ratio defined in (13) and (59) follows from [48, 9.6.19]. To proceed further, we make use of the expansion \(R_{0}(\kappa ) = 1 -\frac{1} {2\kappa } - \frac{1} {8\kappa ^{2}} - \frac{1} {8\kappa ^{3}} + o(\kappa ^{-3})\) from [50, Lemma 5] to justify the high-SNR approximation

$$\displaystyle\begin{array}{rcl} R_{0}(\kappa ) \approx 1 -\frac{1} {2\kappa },& &{}\end{array}$$
(60)

which, when applied to (59), yields

$$\displaystyle\begin{array}{rcl} \widehat{\nu ^{w}}[i\! +\! 1] \approx \frac{2} {M}\sum _{m=1}^{M}\int _{ \mathbb{C}}\big(y_{m} -\vert z_{m}\vert \big)^{2}\mathcal{N}(z_{ m};\widehat{z}_{m}[i],\nu _{m}^{z}[i])dz_{ m}.& &{}\end{array}$$
(61)

Although (61) can be reduced to an expression that involves the mean of a Rician distribution, our empirical experience suggests that it suffices to assume \(\nu _{m}^{z}[i] \approx 0\) in (61), after which we obtain the much simpler expression given in (15).
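Under these approximations (\(R_{0}(\kappa )\approx 1 -\frac{1}{2\kappa }\) and \(\nu _{m}^{z}[i]\approx 0\)), the EM update collapses to a simple residual statistic. A sketch, with hypothetical values for the magnitudes \(y_{m}\) and GAMP outputs \(\widehat{z}_{m}\):

```python
def em_noise_update(y, z_hat):
    # high-SNR EM update: nu_w ≈ (2/M) * sum_m (y_m - |z_hat_m|)^2
    M = len(y)
    return (2.0 / M) * sum((ym - abs(zm)) ** 2 for ym, zm in zip(y, z_hat))

# hypothetical measured magnitudes and estimated transform outputs
y = [1.0, 2.0, 0.5]
z_hat = [0.9 + 0j, 2.1j, 0.45 - 0.1j]
nu_w = em_noise_update(y, z_hat)
```

In words: the noise variance is re-estimated from the mismatch between the measured magnitudes and the magnitudes of the current transform-output estimates.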


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Schniter, P., Rangan, S. (2015). A Message-Passing Approach to Phase Retrieval of Sparse Signals. In: Balan, R., Begué, M., Benedetto, J., Czaja, W., Okoudjou, K. (eds) Excursions in Harmonic Analysis, Volume 4. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-20188-7_7
