Abstract
In phase retrieval, the goal is to recover a signal \(\boldsymbol{x} \in \mathbb{C}^{N}\) from the magnitudes of linear measurements \(\boldsymbol{Ax} \in \mathbb{C}^{M}\). While recent theory has established that M ≈ 4N intensity measurements are necessary and sufficient to recover generic \(\boldsymbol{x}\), there is great interest in reducing the number of measurements by exploiting sparsity in \(\boldsymbol{x}\), an approach known as compressive phase retrieval. In this work, we detail a novel, probabilistic approach to compressive phase retrieval based on the generalized approximate message passing (GAMP) algorithm. We then present a numerical study of the proposed PR-GAMP algorithm, demonstrating its excellent phase-transition behavior, robustness to noise, and runtime. For example, to successfully recover K-sparse signals, approximately \(M \geq 2K\log _{2}(N/K)\) intensity measurements suffice when K ≪ N and \(\boldsymbol{A}\) has i.i.d. Gaussian entries. When recovering a 6k-sparse 65k-pixel grayscale image from 32k randomly masked and blurred Fourier intensity measurements, PR-GAMP achieved a 99% success rate with a median runtime of only 12.6 seconds. Compared to the recently proposed CPRL, sparse-Fienup, and GESPAR algorithms, experiments show that PR-GAMP has a superior phase transition and orders-of-magnitude faster runtimes as the problem dimensions increase.
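The measurement model underlying the abstract, \(\boldsymbol{y} = \vert \boldsymbol{Ax} +\boldsymbol{ w}\vert\) with K-sparse \(\boldsymbol{x}\) and i.i.d. Gaussian \(\boldsymbol{A}\), can be sketched in a few lines of Python. The dimensions, noise variance, and scaling below are illustrative choices of ours, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 128, 8  # signal length, measurements, sparsity (toy sizes)

# K-sparse complex signal x
x = np.zeros(N, dtype=complex)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# i.i.d. complex Gaussian sensing matrix A
A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)

# noisy intensity (magnitude) measurements: the phase of Ax + w is lost
nu_w = 1e-4  # assumed noise variance
w = np.sqrt(nu_w / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = np.abs(A @ x + w)
```

Recovering \(\boldsymbol{x}\) from \(\boldsymbol{y}\) and \(\boldsymbol{A}\) alone is the compressive phase retrieval problem that PR-GAMP addresses.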
Notes
- 1. \(\boldsymbol{x}\) may represent the sparse transform coefficients of a non-sparse signal-of-interest \(\boldsymbol{s} =\boldsymbol{\varPsi x}\) in a sparsifying basis (or frame) \(\boldsymbol{\varPsi }\), in which case the intensity measurements would be \(\boldsymbol{y} = \vert \boldsymbol{\varPhi s} +\boldsymbol{ w}\vert\) and \(\boldsymbol{A} \triangleq \boldsymbol{\varPhi \varPsi }\).
- 2.
- 3. Here and elsewhere, we use \(\boldsymbol{y}\) when referring to the M measurements that are available for signal reconstruction. In the canonical (noisy) compressive sensing problem, the measurements take the form \(\boldsymbol{y} =\boldsymbol{ Ax} +\boldsymbol{ w}\), but in the (noisy) compressive phase retrieval problem, the measurements instead take the form \(\boldsymbol{y} = \vert \boldsymbol{Ax} +\boldsymbol{ w}\vert\).
- 4.
- 5. PR-GAMP is part of the GAMPmatlab package at http://sourceforge.net/projects/gampmatlab/.
- 6.
- 7. Since CPRL rarely gave NMSE < \(10^{-6}\), we relaxed the definition of "success" to NMSE < \(10^{-4}\) for this subsection only.
- 8. Although the complexity of GAMP is known to scale as O(MN) for this type of \(\boldsymbol{A}\), the values of M and N in Table 4 are too small for this scaling law to manifest.
- 9. For GESPAR, we used the November 2013 version of the Matlab code provided by the authors at https://sites.google.com/site/yoavshechtman/resources/software.
- 10. The sparse-Fienup algorithm from [21] requires \(\boldsymbol{A}^{\mathsf{H}}\boldsymbol{A}\) to be a (scaled) identity matrix. Although GESPAR can in principle handle generic \(\boldsymbol{A}\), the implementation provided by the authors is based on 1D and 2D Fourier \(\boldsymbol{A}\) and is not easily modified.
- 11. Here, since we used only two masks, we ensured invertibility by constructing the diagonal of \(\boldsymbol{D}_{1}\) using exactly N/2 unit-valued entries positioned uniformly at random and constructing the diagonal of \(\boldsymbol{D}_{2}\) as its complement, so that \(\boldsymbol{D}_{1} +\boldsymbol{ D}_{2} =\boldsymbol{ I}\).
- 12. Since each \(\boldsymbol{B}_{i}\) was a wide matrix, its nonzero band was wrapped from bottom to top when necessary.
- 13. For notational brevity, the subscript "m" is omitted throughout this appendix.
- 14. \(\mathcal{N}(z; a,A)\,\mathcal{N}(z; b,B) = \mathcal{N}\Big(z;\, \frac{a/A + b/B}{1/A + 1/B},\, \frac{1}{1/A + 1/B}\Big)\,\mathcal{N}(a; b, A+B)\).
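The Gaussian-pdf multiplication rule in Note 14 can be sanity-checked numerically. The check below uses real scalar Gaussians with arbitrary test values; the circular complex case used in the chapter is analogous:

```python
import math

def gauss(z, m, v):
    """Real scalar Gaussian pdf N(z; m, v) with mean m and variance v."""
    return math.exp(-(z - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# arbitrary test point and parameters
z, a, A, b, B = 0.3, 1.0, 2.0, -0.5, 0.7

lhs = gauss(z, a, A) * gauss(z, b, B)
m = (a / A + b / B) / (1 / A + 1 / B)  # combined mean
v = 1 / (1 / A + 1 / B)                # combined variance
rhs = gauss(z, m, v) * gauss(a, b, A + B)

assert abs(lhs - rhs) < 1e-12  # identity holds to machine precision
```

The identity is exact, so the two sides agree up to floating-point roundoff for any choice of parameters.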
References
J.R. Fienup, Phase retrieval algorithms: a comparison. Appl. Opt. 21, 2758–2769 (1982)
R.P. Millane, Recent advances in phase retrieval. Proc. SPIE 6316 (2006)
O. Bunk, A. Diaz, F. Pfeiffer, C. David, B. Schmitt, D.K. Satapathy, J.F. Veen, Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels. Acta Crystallogr. Sect. A Found. Crystallogr. 63(4), 306–314 (2007)
R.W. Harrison, Phase problem in crystallography. J. Opt. Soc. Am. A 10(5), 1046–1055 (1993)
R.P. Millane, Phase retrieval in crystallography and optics. J. Opt. Soc. Am. A 7, 394–411 (1990)
A. Chai, M. Moscoso, G. Papanicolaou, Array imaging using intensity-only measurements. Inverse Prob. 27(1), 1–16 (2011)
A. Walther, The question of phase retrieval in optics. Opt. Acta 10(1), 41–49 (1963)
J.C. Dainty, J.R. Fienup, Phase retrieval and image reconstruction for astronomy, in Image Recovery: Theory and Application, ed. by H. Stark, ch. 7 (Academic Press, New York, 1987), pp. 231–275
J. Miao, T. Ishikawa, Q. Shen, T. Earnest, Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes. Annu. Rev. Phys. Chem. 59, 387–410 (2008)
R. Balan, P.G. Casazza, D. Edidin, On signal reconstruction without phase. Appl. Comput. Harmon. Anal. 20, 345–356 (2006)
L. Demanet, V. Jugnon, Convex recovery from interferometric measurements, arXiv:1307.6864 (2013)
J.V. Corbett, The Pauli problem, state reconstruction and quantum-real numbers. Rep. Math. Phys. 57(1) (2006)
T. Heinosaari, L. Mazzarella, M.M. Wolf, Quantum tomography under prior information, arXiv:1109.5478 (2011)
B.G. Bodmann, N. Hammen, Stable phase retrieval with low-redundancy frames, arXiv:1302.5487 (2013)
S. Gazit, A. Szameit, Y.C. Eldar, M. Segev, Super-resolution and reconstruction of sparse sub-wavelength images. Opt. Express, 17(26), 23920–23946 (2009)
A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E.B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y.C. Eldar, O. Cohen, M. Segev, Sparsity-based single-shot subwavelength coherent diffractive imaging. Nat. Mater. 11, 455–459 (2012)
S. Marchesini, Ab initio compressive phase retrieval, arXiv:0809.2006 (2008)
Y. Shechtman, E. Small, Y. Lahini, M. Verbin, Y. Eldar, Y. Silberberg, M. Segev, Sparsity-based super-resolution and phase-retrieval in waveguide arrays. Opt. Express 21(20), 24015–24024 (2013)
X. Li, V. Voroninski, Sparse signal recovery from quadratic measurements via convex programming, arXiv:1209.4785 (2012)
M.L. Moravec, J.K. Romberg, R. Baraniuk, Compressive phase retrieval, in SPIE Conf. Series, vol. 6701, San Diego (2007)
S. Mukherjee, C.S. Seelamantula, An iterative algorithm for phase retrieval with sparsity constraints: application to frequency domain optical coherence tomography, in Proc. IEEE Int. Conf. Acoust. Speech & Signal Process., Kyoto (2012), pp. 553–556
H. Ohlsson, A.Y. Yang, R. Dong, S.S. Sastry, CPRL – an extension of compressive sensing to the phase retrieval problem, in Proc. Neural Inform. Process. Syst. Conf. (2012) (Full version at arXiv:1111.6323)
E.J. Candès, T. Strohmer, V. Voroninski, PhaseLift: exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math. 66, 1241–1274 (2013)
P. Netrapalli, P. Jain, S. Sanghavi, Phase retrieval using alternating minimization, in Proc. Neural Inform. Process. Syst. Conf. (2013) (See also arXiv:1306.0160)
Y. Shechtman, A. Beck, Y.C. Eldar, GESPAR: efficient phase retrieval of sparse signals. IEEE Trans. Signal Process. 62, 928–938 (2014)
C. Papadimitriou, K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity (Dover, New York, 1998)
S. Rangan, Generalized approximate message passing for estimation with random linear mixing, in Proc. IEEE Int. Symp. Inform. Thy., Saint Petersburg (2011), pp. 2168–2172. (Full version at arXiv:1010.5141)
P. Schniter, S. Rangan, Compressive phase retrieval via generalized approximate message passing, in Proc. Allerton Conf. Commun. Control Comput., Monticello (2012)
P. Schniter, Compressive phase retrieval via generalized approximate message passing, in February Fourier Talks (FFT) Workshop on Phaseless Reconstruction, College Park (2013)
D.L. Donoho, A. Maleki, A. Montanari, Message passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. 106, 18914–18919 (2009)
D.L. Donoho, A. Maleki, A. Montanari, Message passing algorithms for compressed sensing: I. Motivation and construction, in Proc. Inform. Theory Workshop, Cairo (2010), pp. 1–5
J. Pearl, Probabilistic Reasoning in Intelligent Systems (Morgan Kaufman, San Mateo, 1988)
F.R. Kschischang, B.J. Frey, H.-A. Loeliger, Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory 47, 498–519 (2001)
B.J. Frey, D.J.C. MacKay, A revolution: belief propagation in graphs with cycles, in Proc. Neural Inform. Process. Syst. Conf., Denver (1997), pp. 479–485
M. Bayati, A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. Inf. Theory 57, 764–785 (2011)
M. Bayati, M. Lelarge, A. Montanari, Universality in polytope phase transitions and iterative algorithms, in Proc. IEEE Int. Symp. Inf. Theory, Boston, (2012), pp. 1–5 (see also arXiv:1207.7321)
S.O. Rice, Mathematical analysis of random noise. Bell Syst. Tech. J. 24(1), 46–156 (1945)
A. Dempster, N.M. Laird, D.B. Rubin, Maximum-likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. 39, 1–17 (1977)
J.P. Vila, P. Schniter, Expectation-maximization Gaussian-mixture approximate message passing. IEEE Trans. Signal Process. 61, 4658–4672 (2013)
G.F. Cooper, The computational complexity of probabilistic inference using Bayesian belief networks. Artif. Intell. 42, 393–405 (1990)
P. Schniter, Turbo reconstruction of structured sparse signals, in Proc. Conf. Inform. Science & Syst., Princeton (2010), pp. 1–6
M. Borgerding, P. Schniter, Generalized approximate message passing for the cosparse analysis model, arXiv:1312.3968 (2013)
J. Vila, P. Schniter, An empirical-Bayes approach to recovering linearly constrained non-negative sparse signals, in Proc. IEEE Workshop Comp. Adv. Multi-Sensor Adaptive Process., Saint Martin (2013), pp. 5–8 (Full version at arXiv:1310.2806)
S. Som, L.C. Potter, P. Schniter, On approximate message passing for reconstruction of non-uniformly sparse signals, in Proc. National Aerospace and Electronics Conf., Dayton (2010)
S. Rangan, P. Schniter, A. Fletcher, On the convergence of generalized approximate message passing with arbitrary matrices, in Proc. IEEE Int. Symp. Inform. Thy., Honolulu (2014) (Full version at arXiv:1402.3210)
E.J. Candès, X. Li, M. Soltanolkotabi, Phase retrieval from coded diffraction patterns, arXiv:1310.3240 (2013)
G. Zheng, R. Horstmeyer, C. Yang, Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photon. 7, 739–745 (2013)
M. Abramowitz, I.A. Stegun, eds., Handbook of Mathematical Functions (Dover, New York, 1964)
K.V. Mardia, P.E. Jupp, Directional Statistics (Wiley, New York, 2009)
C. Robert, Modified Bessel functions and their applications in probability and statistics. Stat. Probab. Lett. 9, 155–161 (1990)
Appendices
Appendix A: Output Thresholding Rules
In this appendix, we derive expressions (10) and (12), which are used to compute the functions \(g_{\text{out},m}\) and \(g'_{\text{out},m}\) defined in lines (D2) and (D3) of Table 1.
To facilitate the derivations in this appendix (see Note 13), we first rewrite \(p_{Y|Z}(y\vert z)\) in a form different from (8). In particular, recalling that, under our AWGN assumption, the noisy transform outputs \(u = z + w\) are conditionally distributed as \(p(u\vert z) = \mathcal{N}(u;z,\nu ^{w})\), we first transform \(u = ye^{j\theta }\) from rectangular to polar coordinates to obtain
where y is the Jacobian of the transformation, and then integrate out the unobserved phase \(\theta\) to obtain
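The displayed equations are missing from this rendering. Based on the surrounding text (the Jacobian y from the polar change of variables, and marginalization of the phase \(\theta\)) and the standard circular complex AWGN model, the two lost displays presumably read:

```latex
p_{Y,\Theta|Z}(y,\theta \,|\, z)
  = y\,\mathcal{N}\!\left(y e^{j\theta};\, z,\, \nu^{w}\right) 1_{y \ge 0}\,,
\qquad
p_{Y|Z}(y \,|\, z)
  = \int_{0}^{2\pi} y\,\mathcal{N}\!\left(y e^{j\theta};\, z,\, \nu^{w}\right) d\theta
  = \frac{2y}{\nu^{w}}
    \exp\!\left(-\frac{y^{2} + |z|^{2}}{\nu^{w}}\right)
    I_{0}\!\left(\frac{2y|z|}{\nu^{w}}\right) 1_{y \ge 0}\,,
```

where \(I_0\) is the zeroth-order modified Bessel function of the first kind, and the last equality uses \(\int_0^{2\pi} e^{\kappa\cos(\theta-\phi)}\,d\theta = 2\pi I_0(\kappa)\).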
We begin by deriving the integration constant
where we used the Gaussian-pdf multiplication rule (see Note 14) in (24). Noting the similarity between (24) and (22), the equivalence between (22) and (8) implies that
In the sequel, we make the practical assumption that y > 0, allowing us to drop the indicator \(1_{y\geq 0}\) and invert C.
Next, we derive the conditional mean
Plugging (22) into (26) and applying the Gaussian-pdf multiplication rule,
Expanding the \(\mathcal{N}\) term, the integral in (31) becomes
where \(\psi\) denotes the phase of \(\widehat{p}\), and where the integral in (33) was resolved using the expression in [48, 9.6.19]. Plugging (34) into (31) gives
which agrees with (10).
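The identity from [48, 9.6.19] invoked above is \(I_0(x) = \frac{1}{2\pi}\int_0^{2\pi} e^{x\cos\theta}\,d\theta\). A quick numerical spot-check in Python (the test value of x is arbitrary; the periodic Riemann sum converges rapidly for smooth periodic integrands):

```python
import numpy as np

# Abramowitz & Stegun 9.6.19: I_0(x) = (1/(2*pi)) * integral_0^{2pi} exp(x cos t) dt
x = 3.7
theta = np.linspace(0.0, 2 * np.pi, 20000, endpoint=False)
integral = np.exp(x * np.cos(theta)).mean()  # mean over a period = (1/(2*pi)) * integral

assert abs(integral - np.i0(x)) < 1e-5 * np.i0(x)  # matches numpy's I_0
```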
Finally, we derive the conditional covariance
Focusing on the first term in (36), if we plug in (22) and apply the Gaussian-pdf multiplication rule, we get
where (41) used (34) and (42) used (25). By plugging (42) back into (36), we obtain the expression given in (12).
Appendix B: EM Update for Noise Variance
Noting that
where (45) used the expression for \(p_{Y|Z}\) from (22), we have
To circumvent the high-dimensional integral in (46), we use the same large system limit approximation used in the derivation of GAMP [27]: for sufficiently dense \(\boldsymbol{A}\), as \(N \rightarrow \infty\), the central limit theorem (CLT) suggests that \(\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x} = z_{m}\) will become Gaussian. In particular, when \(\boldsymbol{x} \sim p(\boldsymbol{x}\vert \boldsymbol{y};\widehat{\nu ^{w}}[i])\), the CLT suggests that \(\boldsymbol{a}_{m}^{\mathsf{H}}\boldsymbol{x} \sim \mathcal{N}(\widehat{z}_{m},\nu _{m}^{z})\), where
such that \(\widehat{x}_{n}[i]\) and \(\nu_{n}^{x}[i]\) are the mean and variance of the marginal posterior pdf \(p(x_{n}\vert \boldsymbol{y};\widehat{\nu ^{w}}[i])\). In this case,
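The CLT approximation \(\boldsymbol{a}_m^{\mathsf{H}}\boldsymbol{x} \sim \mathcal{N}(\widehat{z}_m,\nu_m^z)\) can be checked empirically: under independent marginals, the exact first two moments are \(\widehat{z}_m = \boldsymbol{a}_m^{\mathsf{H}}\widehat{\boldsymbol{x}}\) and \(\nu_m^z = \sum_n \vert a_{mn}\vert^2 \nu_n^x\). The sketch below uses toy posterior means and variances of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 20000  # signal length and Monte Carlo draws (toy sizes)

a = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * N)  # one row of A
x_hat = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # toy posterior means
nu_x = 0.1 * np.ones(N)                                       # toy posterior variances

# draw x with independent circular complex entries of mean x_hat[n], variance nu_x[n]
pert = np.sqrt(nu_x / 2) * (rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N)))
z = (x_hat + pert) @ a.conj()  # z = a^H x for each draw

z_hat = a.conj() @ x_hat                 # predicted mean of z
nu_z = np.sum(np.abs(a) ** 2 * nu_x)     # predicted variance of z
```

The empirical mean and variance of `z` across the draws match `z_hat` and `nu_z` to within Monte Carlo error, consistent with the Gaussian approximation used in the derivation.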
From (14), we see that any solution \(\widehat{\nu ^{w}}[i+1] > 0\) is necessarily a value of \(\nu^{w}\) that zeros the derivative of the expected log-pdf. Thus, using the expected log-pdf expression from (49),
Plugging the derivative expression (see [39])
into (50) and multiplying both sides by \(\widehat{\nu ^{w}}[i+1]^{2}\), we find
with the newly defined pdf
where \(\phi_m\) is the phase of \(z_m\) (recall (5)). The proportionality (56) identifies this pdf as a von Mises distribution [49], which can be stated in normalized form as
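The von Mises density [49] takes the form \(p(\theta) = e^{\kappa\cos(\theta-\mu)}/(2\pi I_0(\kappa))\) and integrates to one over a period. A quick numerical check (parameter values arbitrary):

```python
import numpy as np

# von Mises pdf: p(theta) = exp(kappa * cos(theta - mu)) / (2*pi * I_0(kappa))
kappa, mu = 2.5, 0.8
theta = np.linspace(0.0, 2 * np.pi, 10000, endpoint=False)
pdf = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))

total = pdf.mean() * 2 * np.pi  # periodic Riemann sum of the integral of p over [0, 2*pi)
assert abs(total - 1.0) < 1e-5  # normalizes to one
```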
Expanding the quadratic in (53) and plugging in (57), we get
where \(R_0(\cdot)\) is the modified Bessel function ratio defined in (13), and (59) follows from [48, 9.6.19]. To proceed further, we make use of the expansion \(R_{0}(\kappa ) = 1 -\frac{1} {2\kappa } - \frac{1} {8\kappa ^{2}} - \frac{1} {8\kappa ^{3}} + o(\kappa ^{-3})\) from [50, Lemma 5] to justify the high-SNR approximation
which, when applied to (59), yields
Although (61) can be reduced to an expression involving the mean of a Rician distribution, our empirical experience suggests that it suffices to assume \(\nu_{m}^{z}[i]\approx 0\) in (61), after which we obtain the much simpler expression given in (15).
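The expansion of the Bessel ratio \(R_0(\kappa) = I_1(\kappa)/I_0(\kappa)\) from [50, Lemma 5] can be verified numerically. Below, \(I_0\) and \(I_1\) are computed from their integral representations; a common scale factor \(e^{-\kappa}\) is applied to the integrands to avoid overflow and cancels in the ratio:

```python
import numpy as np

def R0(kappa):
    """Bessel ratio I_1(kappa)/I_0(kappa) via integral representations,
    I_n(k) = (1/(2*pi)) * integral_0^{2pi} exp(k cos t) cos(n t) dt."""
    t = np.linspace(0.0, 2 * np.pi, 100000, endpoint=False)
    w = np.exp(kappa * (np.cos(t) - 1.0))  # exp(-kappa) scaling cancels in the ratio
    return np.sum(w * np.cos(t)) / np.sum(w)

kappa = 50.0
approx = 1 - 1 / (2 * kappa) - 1 / (8 * kappa**2) - 1 / (8 * kappa**3)  # [50, Lemma 5]
assert abs(R0(kappa) - approx) < 1e-5  # expansion is accurate at large kappa
```

At large \(\kappa\) (high SNR) the truncated expansion agrees with the exact ratio to well beyond the accuracy needed in (60).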
© 2015 Springer International Publishing Switzerland
Schniter, P., Rangan, S. (2015). A Message-Passing Approach to Phase Retrieval of Sparse Signals. In: Balan, R., Begué, M., Benedetto, J., Czaja, W., Okoudjou, K. (eds) Excursions in Harmonic Analysis, Volume 4. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. https://doi.org/10.1007/978-3-319-20188-7_7
Print ISBN: 978-3-319-20187-0
Online ISBN: 978-3-319-20188-7