On the Strong Convergence of Forward-Backward Splitting in Reconstructing Jointly Sparse Signals

Abstract

We consider the problem of reconstructing an infinite set of sparse, finite-dimensional vectors that share a common sparsity pattern from incomplete measurements. This is in contrast to the work of Daubechies et al. (Commun. Pure Appl. Math. 57(11), 1413–1457, 2004), where the single vector signal can be infinite-dimensional, and of Fornasier and Rauhut (SIAM J. Numer. Anal. 46(2), 577–613, 2008), which extends the former to the joint sparse recovery of a finite number of infinite-dimensional vectors. In our case, to account for the joint sparsity and promote the coupling of nonvanishing components, we employ a convex relaxation approach with the mixed-norm penalty ℓ2,1. This paper discusses the computation of solutions of linear inverse problems under this relaxation by a forward-backward splitting algorithm. However, since the solution matrix possesses infinitely many columns, the arguments of Daubechies et al. no longer apply. We therefore establish new strong convergence results for the algorithm, in particular when the set of jointly sparse vectors is infinite.
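
To make the setting concrete, the following Python/NumPy snippet is a minimal sketch of the forward-backward iteration for the ℓ2,1-regularized least-squares problem described above; it is not the paper's implementation. The infinite family of jointly sparse vectors is truncated to finitely many columns for the demonstration, and the measurement matrix, regularization weight lam, step size, and iteration count are illustrative assumptions. The backward step is the proximal map of the ℓ2,1 norm, which soft-thresholds each row of the solution matrix as a unit.

```python
import numpy as np

def prox_l21(X, tau):
    # Proximal map of tau * ||X||_{2,1}, where ||X||_{2,1} = sum_i ||X[i, :]||_2:
    # each row is shrunk toward zero as a single unit (joint soft thresholding).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0) * X

def forward_backward_l21(A, Y, lam, step, n_iter=500):
    # Forward-backward splitting for min_X 0.5 * ||A X - Y||_F^2 + lam * ||X||_{2,1}.
    # A: (m, N) measurement matrix; Y: (m, K) measurements, one column per signal.
    # K is finite here; the paper's analysis concerns an infinite family of columns.
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ X - Y)                    # forward (gradient) step
        X = prox_l21(X - step * grad, step * lam)   # backward (proximal) step
    return X

# Hypothetical usage: recover 3 jointly 5-sparse signals in R^100 from 40 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
X_true = np.zeros((100, 3))
X_true[rng.choice(100, size=5, replace=False), :] = rng.standard_normal((5, 3))
X_hat = forward_backward_l21(A, A @ X_true, lam=0.01,
                             step=1.0 / np.linalg.norm(A, ord=2) ** 2)
```

Rows whose ℓ2 norm falls below the threshold are zeroed jointly across all columns, which is how the penalty couples the nonvanishing components; the step size is kept below 2 divided by the squared spectral norm of A, the standard condition for convergence of the iteration.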

References

  1. Adcock, B.: Infinite-dimensional compressed sensing and function interpolation. Found. Comput. Math. 18(3), 661–701 (2018)

  2. Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137(1-2), 91–129 (2013)

  3. Baraniuk, R., Cevher, V., Duarte, M., Hegde, C.: Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982–2001 (2010)

  4. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 1st edn. Springer, New York (2011)

  5. Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext. Springer, New York (2010)

  6. Bruck Jr., R.E.: An iterative solution of a variational inequality for certain monotone operators in Hilbert space. Bull. Am. Math. Soc. 81(5), 890–892 (1975)

  7. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  8. Chen, G.H.-G., Rockafellar, R.T.: Convergence rates in forward–backward splitting. SIAM J. Optim. 7(2), 421–444 (1997)

  9. Chen, J., Huo, X.: Theoretical results on sparse representations of multiple-measurement vectors. IEEE Trans. Signal Process. 54(12), 4634–4643 (2006)

  10. Chkifa, A., Cohen, A., Schwab, C.: Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs. J. Math. Pures Appl. 103(2), 400–428 (2015)

  11. Chkifa, A., Dexter, N., Tran, H., Webster, C.: Polynomial approximation via compressed sensing of high-dimensional functions on lower sets. Math. Comp. 87(311), 1415–1450 (2018)

  12. Cohen, A., DeVore, R., Schwab, C.: Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs. Anal. Appl. 9(1), 11–47 (2011)

  13. Combettes, P., Pesquet, J.: Proximal thresholding algorithm for minimization over orthonormal bases. SIAM J. Optim. 18(4), 1351–1376 (2008)

  14. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5-6), 475–504 (2004)

  15. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

  16. Cotter, S., Rao, B., Engan, K., Kreutz-Delgado, K.: Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Trans. Signal Process. 53(7), 2477–2488 (2005)

  17. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004)

  18. Davies, M., Eldar, Y.: Rank awareness in joint sparse recovery. IEEE Trans. Inf. Theory 58(2), 1135–1146 (2012)

  19. Deng, W., Yin, W., Zhang, Y.: Group sparse optimization by alternating direction method. In: De Ville, D.V., Goyal, V.K., Papadakis, M. (eds.) Wavelets and Sparsity XV, vol. 8858, pp. 242–256. International Society for Optics and Photonics, SPIE (2013)

  20. Dexter, N., Tran, H., Webster, C.: A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs. ESAIM Math. Model. Numer. Anal. 53, 2025–2045 (2019)

  21. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  22. Duarte, M.F., Sarvotham, S., Baron, D., Wakin, M.B., Baraniuk, R.G.: Distributed compressed sensing of jointly sparse signals. In: Conference Record of the Thirty-Ninth Asilomar Conference on Signals, Systems and Computers, pp. 1537–1541 (2005)

  23. Eldar, Y., Kuppinger, P., Bölcskei, H.: Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58(6), 3042–3054 (2010)

  24. Eldar, Y., Mishali, M.: Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 55(11), 5302–5316 (2009)

  25. Eldar, Y., Rauhut, H.: Average case analysis of multichannel sparse recovery using convex relaxation. IEEE Trans. Inf. Theory 56(1), 505–519 (2010)

  26. Erickson, S., Sabatti, C.: Empirical Bayes estimation of a sparse vector of gene expression changes. Stat. Appl. Genet. Mol. Biol. 4(1), 1–25 (2005)

  27. Fadili, J., Malick, J., Peyré, G.: Sensitivity analysis for mirror-stratifiable convex functions. SIAM J. Optim. 28(4), 2975–3000 (2018)

  28. Fornasier, M., Rauhut, H.: Recovery algorithms for vector-valued data with joint sparsity constraints. SIAM J. Numer. Anal. 46(2), 577–613 (2008)

  29. Garrigos, G., Rosasco, L., Villa, S.: Thresholding gradient methods in Hilbert spaces: support identification and linear convergence. ESAIM Control Optim. Calc. Var. 26, 28 (2020)

  30. Goldstein, A.A.: Convex programming in Hilbert space. Bull. Am. Math. Soc. 70(5), 709–710 (1964)

  31. Gorodnitsky, I.F., George, J.S., Rao, B.D.: Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroencephalogr. Clin. Neurophysiol. 95(4), 231–251 (1995)

  32. Gorodnitsky, I.F., Rao, B.D.: Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 45(3), 600–616 (1997)

  33. Gribonval, R., Rauhut, H., Schnass, K., Vandergheynst, P.: Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms. J. Fourier Anal. Appl. 14(5), 655–687 (2008)

  34. Hale, E., Yin, W., Zhang, Y.: Fixed-point continuation for ℓ1-minimization: methodology and convergence. SIAM J. Optim. 19(3), 1107–1130 (2008)

  35. Koppel, A., Warnell, G., Stump, E., Ribeiro, A.: Parsimonious online learning with kernels via sparse projections in function space. J. Mach. Learn. Res. 20(3), 1–44 (2019)

  36. Lee, K., Bresler, Y., Junge, M.: Subspace methods for joint sparse recovery. IEEE Trans. Inf. Theory 58(6), 3613–3641 (2012)

  37. Liang, J., Fadili, J., Peyré, G.: Activity identification and local linear convergence of forward–backward-type methods. SIAM J. Optim. 27(1), 408–437 (2017)

  38. Mishali, M., Eldar, Y.C.: Reduce and boost: recovering arbitrary sets of jointly sparse vectors. IEEE Trans. Signal Process. 56(10), 4692–4702 (2008)

  39. Nutini, J., Schmidt, M., Hare, W.: “Active-set complexity” of proximal gradient: how long does it take to find the sparsity pattern? Optim. Lett. 13(4), 645–655 (2018)

  40. Parvaresh, F., Vikalo, H., Misra, S., Hassibi, B.: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Topics Signal Process. 2(3), 275–285 (2008)

  41. Petrosyan, A., Tran, H., Webster, C.G.: Reconstruction of jointly sparse vectors via manifold optimization. Appl. Numer. Math. 144, 140–150 (2019)

  42. Phillips, J.W., Leahy, R.M., Mosher, J.C.: MEG-based imaging of focal neuronal current sources. IEEE Trans. Med. Imaging 16(3), 338–348 (1997)

  43. Qin, Z., Goldfarb, D.: Structured sparsity via alternating direction methods. J. Mach. Learn. Res. 13, 1435–1468 (2012)

  44. Rauhut, H., Ward, R.: Sparse Legendre expansions via ℓ1-minimization. J. Approx. Theory 164(5), 517–533 (2012)

  45. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis, vol. 317. Springer Science & Business Media, Berlin (2009)

  46. Stojnic, M., Parvaresh, F., Hassibi, B.: On the reconstruction of block-sparse signals with an optimal number of measurements. IEEE Trans. Signal Process. 57(8), 3075–3085 (2009)

  47. Tran, H., Webster, C.G., Zhang, G.: Analysis of quasi-optimal polynomial approximations for parameterized PDEs with deterministic and stochastic coefficients. Numer. Math. 137(2), 451–493 (2017)

  48. Tropp, J.: Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Process. 86(3), 589–602 (2006)

  49. Tropp, J., Gilbert, A., Strauss, M.: Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Process. 86(3), 572–588 (2006)

  50. van den Berg, E., Friedlander, M.: Theoretical and empirical results for recovery from multiple measurements. IEEE Trans. Inf. Theory 56(5), 2516–2527 (2010)

  51. Wakin, M.B., Sarvotham, S., Duarte, M.F., Baron, D., Baraniuk, R.G.: Recovery of jointly sparse signals from few random projections. In: Proc. Workshop Neural Inf. Process. Syst. (NIPS), Vancouver, BC, Canada, pp. 1433–1440 (2005)

Acknowledgements

The first author acknowledges the support of the Pacific Institute for the Mathematical Sciences (PIMS). The second and third authors acknowledge support from: the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program, under contracts and awards ERKJ314, ERKJ331, and ERKJ345, and the Scientific Discovery through Advanced Computing (SciDAC) program through the FASTMath Institute under Contract No. DE-AC02-05CH11231; and the Laboratory Directed Research and Development program at Oak Ridge National Laboratory, which is operated by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

Author information

Corresponding author

Correspondence to Nick Dexter.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Dexter, N., Tran, H. & Webster, C.G. On the Strong Convergence of Forward-Backward Splitting in Reconstructing Jointly Sparse Signals. Set-Valued Var. Anal 30, 543–557 (2022). https://doi.org/10.1007/s11228-021-00603-2
