Abstract
The contrast and resolution of images obtained with optical microscopes can be improved by deconvolution and by computational fusion of multiple views of the same sample, but these methods are computationally expensive for large datasets. Here we describe theoretical and practical advances in algorithm and software design that result in image-processing times tenfold to several thousandfold faster than with previous methods. First, we show that an ‘unmatched back projector’ accelerates deconvolution relative to the classic Richardson–Lucy algorithm by at least tenfold. Second, three-dimensional image-based registration on a graphics processing unit is 10- to 100-fold faster than CPU-based processing. Third, deep learning can provide further acceleration, particularly for deconvolution with spatially varying point spread functions. We illustrate our methods on diverse samples spanning subcellular to millimeter spatial scales, including single cells, embryos and cleared tissue. Finally, we demonstrate performance gains on recently developed microscopes with improved spatial resolution, including dual-view cleared-tissue light-sheet microscopes and reflective lattice light-sheet microscopes.
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Code availability
The code used in this study is available as Supplementary Software. A code description and several test datasets are also included. Users can also download the code and updates from GitHub at https://github.com/eguomin/regDeconProject; https://github.com/eguomin/diSPIMFusion; https://github.com/eguomin/microImageLib.
References
Barrett, H. H. & Myers, K. J. Foundations of Image Science (John Wiley and Sons, 2004).
Sarder, P. & Nehorai, A. Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Process. Mag. 23, 32–45 (2006).
Richardson, W. H. Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62, 55–59 (1972).
Lucy, L. B. An iterative technique for the rectification of observed distributions. Astron. J. 79, 745–754 (1974).
Ingaramo, M. et al. Richardson–Lucy deconvolution as a general tool for combining images with complementary strengths. ChemPhysChem 15, 794–800 (2014).
Gustafsson, M. G. L. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microsc. 198, 82–87 (2000).
Wu, Y. & Shroff, H. Faster, sharper, and deeper: structured illumination microscopy for biological imaging. Nat. Methods 15, 1011–1019 (2018).
Temerinac-Ott, M. et al. Multiview deblurring for 3-D images from light-sheet-based fluorescence microscopy. IEEE Trans. Image Process. 21, 1863–1873 (2012).
Wu, Y. et al. Spatially isotropic four-dimensional imaging with dual-view plane illumination microscopy. Nat. Biotechnol. 31, 1032–1038 (2013).
Preibisch, S. et al. Efficient Bayesian-based multiview deconvolution. Nat. Methods 11, 645–648 (2014).
Wu, Y. et al. Simultaneous multiview capture and fusion improves spatial resolution in wide-field and light-sheet microscopy. Optica 3, 897–910 (2016).
Chhetri, R. K. et al. Whole-animal functional and developmental imaging with isotropic spatial resolution. Nat. Methods 12, 1171–1178 (2015).
Wu, Y. et al. Reflective imaging improves spatiotemporal resolution and collection efficiency in light sheet microscopy. Nat. Commun. 8, 1452 (2017).
Zeng, G. L. & Gullberg, G. T. Unmatched projector/backprojector pairs in an iterative reconstruction algorithm. IEEE Trans. Med. Imaging 19, 548–555 (2000).
York, A. G. et al. Instant super-resolution imaging in live cells and embryos via analog image processing. Nat. Methods 10, 1122–1126 (2013).
Walton, T. et al. The Bicoid class homeodomain factors ceh-36/OTX and unc-30/PITX cooperate in C. elegans embryonic progenitor cells to regulate robust development. PLoS Genet. 11, e1005003 (2015).
Bao, Z. et al. Automated cell lineage tracing in Caenorhabditis elegans. Proc. Natl Acad. Sci. USA 103, 2707–2712 (2006).
Yamaguchi, Y. & Miura, M. Programmed cell death in neurodevelopment. Dev. Cell 32, 478–490 (2015).
Yeo, W. & Gautier, J. Early neural cell death: dying to become neurons. Dev. Biol. 274, 233–244 (2004).
Heiman, M. G. & Shaham, S. DEX-1 and DYF-7 establish sensory dendrite length by anchoring dendritic tips during cell migration. Cell 137, 344–355 (2009).
Costa, G. et al. Asymmetric division coordinates collective cell migration in angiogenesis. Nat. Cell Biol. 18, 1292–1301 (2016).
Shah, P. K. et al. An in toto approach to dissecting cellular interactions in complex tissues. Dev. Cell 43, 530–540 (2017).
Preibisch, S., Saalfeld, S., Schindelin, J. & Tomancak, P. Software for bead-based registration of selective plane illumination microscopy data. Nat. Methods 7, 418–419 (2010).
Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. elastix: a toolbox for intensity based medical image registration. IEEE Trans. Med. Imaging 29, 196–205 (2010).
Modat, M. et al. Global image registration using a symmetric block-matching approach. J. Med. Imaging 1, 024003 (2014).
Haas, P. & Gilmour, D. Chemokine signaling mediates self-organizing tissue migration in the zebrafish lateral line. Dev. Cell 10, 673–680 (2006).
Nicovich, P. R. et al. Multimodal cell type correspondence by intersectional mFISH in intact tissues. Preprint at bioRxiv https://doi.org/10.1101/525451 (2019).
Glaser, A. K. et al. Multi-immersion open-top light-sheet microscope for high-throughput imaging of cleared tissues. Nat. Commun. 10, 1–8 (2019).
Renier, N. et al. Mapping of brain activity by automated volume analysis of immediate early genes. Cell 165, 1789–1802 (2016).
Renier, N. et al. iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging. Cell 159, 896–910 (2014).
Kumar, A. et al. Using stage- and slit-scanning to improve contrast and optical sectioning in dual-view inverted light-sheet microscopy (diSPIM). Biol. Bull. 231, 26–39 (2016).
Avram, S. K. W. et al. NMDA receptor in vasopressin 1b neurons is not required for short-term social memory, object memory or aggression. Front. Behav. Neurosci. 13, 218 (2019).
Chakraborty, T. et al. Light-sheet microscopy with isotropic, sub-micron resolution and solvent-independent large-scale imaging. Preprint at bioRxiv https://doi.org/10.1101/605493 (2019).
Chen, B. C. et al. Lattice light-sheet microscopy: imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014).
LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).
Krizhevsky, A., Sutskever, I. & Hinton, G. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (eds. Pereira, F. et al.) 1097–1105 (Curran Associates, 2012).
He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).
Girshick, R. B., Donahue, J., Darrell, T. & Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition 580–587 (IEEE, 2014).
Xie, J., Xu, L. & Chen, E. Image denoising and inpainting with deep neural networks. In Advances in Neural Information Processing Systems 25 (eds Pereira, F. et al.) 341–349 (Curran Associates, 2012).
Dong, C., Loy, C. C., He, K. & Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38, 295–307 (2015).
Xu, L., Ren, J. S., Liu, C. & Jia, J. Deep convolutional neural network for image deconvolution. In Advances in Neural Information Processing Systems 27 (eds Ghahramani, Z. et al.) 1790–1798 (Curran Associates, 2014).
Huang, G., Liu, Z., van der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition 4700–4708 (IEEE, 2017).
Weigert, M., Royer, L., Jug, F. & Myers, G. Isotropic reconstruction of 3D fluorescence microscopy images using convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention (eds Descoteaux, M. et al.) 126–134 (Springer, 2017).
Prevedel, R. et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 11, 727–730 (2014).
Costantini, L. M. et al. A palette of fluorescent proteins optimized for diverse cellular environments. Nat. Commun. 6, 7670 (2015).
Pauls, S., Geldmacher-Voss, B. & Campos-Ortega, J. A. A zebrafish histone variant H2A.F/Z and a transgenic H2A.F/Z:GFP fusion protein for in vivo studies of embryonic development. Dev. Genes Evol. 211, 603–610 (2001).
Mapp, O. M., Wanner, S. J., Rohrschneider, M. R. & Prince, V. E. Prickle1b mediates interpretation of migratory cues during zebrafish facial branchiomotor neuron migration. Dev. Dyn. 239, 1596–1608 (2010).
Kucenas, S. et al. CNS-derived glia ensheath peripheral nerves and mediate motor root development. Nat. Neurosci. 11, 143–151 (2008).
Kimmel, C. B., Ballard, W. W., Kimmel, S. R., Ullmann, B. & Schilling, T. F. Stages of embryonic development of the zebrafish. Dev. Dyn. 203, 253–310 (1995).
Kaufmann, A., Mickoleit, M., Weber, M. & Huisken, J. Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope. Development 139, 3242–3247 (2012).
Kumar, A. et al. Dual-view plane illumination microscopy for rapid and spatially isotropic imaging. Nat. Protoc. 9, 2555–2573 (2014).
Duncan, L. H. et al. Isotropic light-sheet microscopy and automated cell lineage analyses to catalogue Caenorhabditis elegans embryogenesis with subcellular resolution. J. Vis. Exp. https://doi.org/10.3791/59533 (2019).
Edelstein, A. D. et al. Advanced methods of microscope control using μManager software. J. Biol. Methods 1, e11 (2014).
Ardiel, E. L. et al. Visualizing calcium flux in freely moving nematode embryos. Biophys. J. 112, 1975–1983 (2017).
Nern, A., Pfeiffer, B. D. & Rubin, G. M. Optimized tools for multicolor stochastic labeling reveal diverse stereotyped cell arrangements in the fly visual system. Proc. Natl Acad. Sci. USA 112, E2967–E2976 (2015).
Madisen, L. et al. A robust and high-throughput Cre reporting and characterization system for the whole mouse brain. Nat. Neurosci. 13, 133–140 (2010).
Hudson, H. M. & Larkin, R. S. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imaging 13, 601–609 (1994).
Preibisch, S., Saalfeld, S. & Tomancak, P. Globally optimal stitching of tiled 3D microscopic image acquisitions. Bioinformatics 25, 1463–1465 (2009).
Hörl, D. et al. BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat. Methods 16, 870–874 (2019).
Legland, D., Arganda-Carreras, I. & Andrey, P. MorphoLibJ: integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics 32, 3532–3534 (2016).
Pietzsch, T., Preibisch, S., Tomancak, P. & Saalfeld, S. ImgLib2—generic image processing in Java. Bioinformatics 28, 3009–3011 (2012).
Miura, K., Rueden, C., Hiner, M., Schindelin, J. & Rietdorf, J. ImageJ plugin CorrectBleach v2.0.2. Zenodo https://doi.org/10.5281/zenodo.30769 (2014).
Guo, M. et al. Single-shot super-resolution total internal reflection fluorescence microscopy. Nat. Methods 15, 425–428 (2018).
Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).
Acknowledgements
We thank O. Schwartz and the Biological Imaging Section (RTB/NIAID/NIH) for supplying the confocal microscope platform and providing technical assistance with experiments, W.S. Young (NIMH) for providing early access to the V1b mouse line, J. Daniels (ASI) for advice on integrating the multi-immersion objectives into our cleared-tissue diSPIM, M. Anthony (ASI) for providing CAD drawings of our diSPIM assembly, J. Shaw (Bitplane) for help with Imaris and the neurite tracing plugin, A. Lauziere for his feedback and discussion on the neural network portion of the work, R. Christensen for testing aspects of the registration and deep learning pipelines, E. Ardiel for helping us to acquire the embryonic GCaMP3 muscle data with reflective diSPIM, N. Stuurman for advice in developing ImageJ-compatible software, R. Heintzmann for his critical evaluation of our methods and suggestions on improving the clarity of our manuscript, and H. Eden and G. Patterson for valuable feedback on the manuscript. This research was supported by the intramural research programs of the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Allergy and Infectious Diseases, the National Institute of Arthritis and Musculoskeletal and Skin Diseases, the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Mental Health and the National Cancer Institute within the National Institutes of Health, the National Natural Science Foundation of China (61525106, 61427807, U1809204) and the National Key Technology Research and Development Program of China (2017YFE0104000, 2016YFC1300302). V.P. and A.B. acknowledge support by the National Center for Advancing Translational Sciences of the National Institutes of Health through grant number UL1-TR000430, NSF award 1528911 (to V.P.) and NSF Graduate Research Fellowship Program grant number DGE-1144082 (to A.B.). A.U. 
acknowledges support from NSF awards PHY-1607645 and PHY-1806903. P.L.R. acknowledges support from NIH R01EB107293. H.S., D.C.-R. and P.L.R. acknowledge the Whitman and Fellows program at MBL for providing funding and space for discussions valuable to this work. D.C.-R., R.I., A.S., W.A.M. and Z.B. were supported by NIH grant number R24-OD016474, L.H.D. was supported by a Diversity Supplement to R24-OD016474 and M.W.M. was supported by F32-NS098616. Z.B. additionally acknowledges support via NIH grant number R01-GM097576 and the MSK Cancer Center Support/Core Grant (P30-CA008748). A.S. is additionally supported by grant 2019-198110 (5022) from the Chan Zuckerberg Initiative and the Silicon Valley Community Foundation. J.C.W. also acknowledges support from the Chan Zuckerberg Initiative.
Author information
Authors and Affiliations
Contributions
Conceived the project: M.G., Y.L., H.L., Y.W., H.S. Designed experiments: M.G., Y.L., Y.S., T.L., D.D.N., M.W.M., L.H.D., I.R.-S., D.G., A.B., J.C., H.V., V.P., D.C.-R., Y.W., H.S. Performed experiments: M.G., Y.L., Y.S., T.L., D.D.N., M.W.M., L.H.D., I.R.-S., D.G., A.B., J.C., H.V., S.G., T.B.U., Y.W. Prepared samples: Y.S., T.L., D.D.N., M.W.M., L.H.D., R.I., I.R.-S., A.B., J.C., H.V., T.B.U., Y.W. Built instrumentation: T.L., H.V., Y.W. Developed and tested deep learning algorithms/software: Y.L., H.L., Y.W. Developed new registration and deconvolution algorithms/software: M.G., Y.L., P.L.R., Y.W. Recognized link between medical imaging algorithms and improved deconvolution: P.L.R. Tested new registration and deconvolution algorithms/software: M.G., W.A.M., Y.W. Developed and tested big data pipeline: M.G., Y.S., Y.W. Contributed lineaging/segmentation software and expertise: D.D.N., A.S., Z.B. Contributed samples: C.M.A., M.H., A.B.C. Wrote manuscript: M.G., Y.L., Y.S., P.L.R., Y.W., H.S. with input from all authors. All authors inspected data and contributed to the drafting of the manuscript. Supervised research: V.P., J.C.W., C.M.A., M.H., W.A.M., A.B.C., A.U., T.B.U., Z.B., D.C.-R., P.L.R., H.L., Y.W., H.S. Directed research: H.S.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Supplementary Information
Supplementary Figs. 1–20, Supplementary Tables 1–7 and Supplementary Notes 1–4
Supplementary Software
Source code and user manual.
About this article
Cite this article
Guo, M., Li, Y., Su, Y. et al. Rapid image deconvolution and multiview fusion for optical microscopy. Nat Biotechnol 38, 1337–1346 (2020). https://doi.org/10.1038/s41587-020-0560-x
This article is cited by
- Noise learning of instruments for high-contrast, high-resolution and fast hyperspectral microscopy and nanoscopy. Nature Communications (2024)
- Transgenic Tg(Kcnj10-ZsGreen) fluorescent reporter mice allow visualization of intermediate cells in the stria vascularis. Scientific Reports (2024)
- Live-cell imaging powered by computation. Nature Reviews Molecular Cell Biology (2024)
- Pro-regenerative biomaterials recruit immunoregulatory dendritic cells after traumatic injury. Nature Materials (2024)
- Efficient 3D light-sheet imaging of very large-scale optically cleared human brain and prostate tissue samples. Communications Biology (2023)