Article

Self-Gated Respiratory Motion Rejection for Optoacoustic Tomography

by Avihai Ron 1,2, Neda Davoudi 3,4, Xosé Luís Deán-Ben 3,5 and Daniel Razansky 1,2,3,5,*,†
1 Institute for Biological and Medical Imaging, Helmholtz Center Munich, 85764 Neuherberg, Germany
2 Faculty of Medicine, Technical University of Munich, 81765 Munich, Germany
3 Institute for Biomedical Engineering and Department of Information Technology and Electrical Engineering, ETH Zurich, 8093 Zurich, Switzerland
4 Department of Informatics, Technical University of Munich, 85748 Garching, Germany
5 Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, 8057 Zurich, Switzerland
* Author to whom correspondence should be addressed.
† Current Address: Institute for Biomedical Engineering, University of Zurich and ETH Zurich, HIT E42.1, Wolfgang-Pauli-Strasse 27, CH 8093 Zurich, Switzerland.
Appl. Sci. 2019, 9(13), 2737; https://doi.org/10.3390/app9132737
Submission received: 29 May 2019 / Revised: 21 June 2019 / Accepted: 3 July 2019 / Published: 6 July 2019
(This article belongs to the Special Issue Photoacoustic Tomography (PAT))

Abstract: Respiratory motion in living organisms is known to result in image blurring and loss of resolution, chiefly due to the lengthy acquisition times of most imaging methods. Optoacoustic tomography can effectively eliminate in vivo motion artifacts owing to its inherent capacity for collecting image data from the entire imaged region following a single nanoseconds-duration laser pulse. However, multi-frame image analysis is often essential in applications relying on spectroscopic data acquisition or in scanning-based systems. Efficient methods to correct for image distortions due to motion are therefore imperative. Herein, we demonstrate that efficient motion rejection in optoacoustic tomography can readily be accomplished by frame clustering during image acquisition, thus averting excessive data acquisition and post-processing. The algorithm’s efficiency for two- and three-dimensional imaging was validated with experimental whole-body mouse data acquired by spiral volumetric optoacoustic tomography (SVOT) and full-ring cross-sectional imaging scanners.

1. Introduction

Motion during signal acquisition is known to result in image blurring and can further hinder proper registration of images acquired by different modalities [1,2,3,4]. Respiratory motion compensation in tomographic imaging is often based on gated acquisition assisted by physiological triggers, e.g., an electrocardiogram (ECG) signal. In prospective gating, the data is acquired during a limited time window when minimal or no motion occurs. Alternatively, retrospective gating correlates the acquired images with physiological triggers during post-processing [5]. More advanced retrospective approaches are based on self-gated methods, where the physiological trigger is extracted from the image data itself [6,7,8]. Another solution consists of tracking specific points and subsequently correcting with rigid-body transformations [9]. In some parts of the body, such as the thoracic region, non-rigid motion additionally occurs; thus, more sophisticated models are generally required to estimate and correct for the effects of respiratory motion [10].
High-frame-rate imaging modalities can avoid motion artifacts if only sub-pixel displacements occur during the effective image integration time. In particular, optoacoustic tomography (OAT) can render 2D and 3D images via excitation of an entire volume with a single laser pulse [11]. This corresponds to an effective integration time on the order of the pulse duration, typically a few nanoseconds. In this way, tissue motion can be “frozen” much more efficiently than in most other imaging modalities. OAT has found applicability in biological studies demanding high-frame-rate imaging, such as characterization of cardiac dynamics [12], mapping of neuronal activity [13], monitoring hemodynamic patterns in tumors [14] or visualization of freely-behaving animals [15]. Moreover, real-time imaging has been paramount in the successful translation of OAT to render motion-free images acquired in a handheld mode [16]. While motion correction in OAT might not be relevant for images rendered with a single laser pulse, acquisition of multiple frames is still required in many applications, e.g., for rendering volumetric data from multiple cross-sections or for extending the effective field of view (FOV) of a given imaging system [17]. Multiple frames are also required for multi-spectral optoacoustic tomography (MSOT) applications, where mapping of intrinsic tissue chromophores or extrinsically administered agents is achieved via spectral or temporal unmixing [18,19,20,21]. Cardiac and breathing motion can readily be captured by OAT systems running at frame rates of tens of hertz [22,23], and several approaches have been suggested to mitigate motion artifacts in applications involving multi-frame data analysis. For instance, respiratory motion gating was suggested by simultaneously capturing the animal’s respiratory waveforms [24]. Motion correction was alternatively performed with 3D rigid-body transformations [25] and with free-form deformation models [26].
Models of body motion have also been suggested for other types of scanning-based systems [27,28]. Additionally, motion suppression could be achieved by reducing the delay between consecutive pulsed light excitations [29,30], which requires dedicated laser systems.
In this work, we demonstrate that motion rejection in OAT can effectively be performed on-the-fly, before image reconstruction. The suggested approach clusters a sequence of OAT frames directly from the raw time-resolved signals, without computationally and memory-intensive post-processing. This represents an important advantage over other known approaches operating in the image domain [31].

2. Materials and Methods

2.1. Pre-Reconstruction Motion Rejection Approach

The algorithm suggested in this work aims at motion rejection in OAT systems based on multi-frame acquisition of time-resolved pressure signals with transducer arrays. Figure 1a schematically depicts two examples of transducer array configurations for 2D and 3D imaging, which are described in more detail in the following sections. The acquired signals are generally arranged into so-called sinograms, where every sinogram represents a single frame (Figure 1b). At a fixed transducer position, k frames (sinograms) are acquired. These frames consist of matrices whose rows represent the m time samples of each signal and whose columns correspond to the n transducer elements (channels) of the array. Step 1 of the algorithm rearranges the k frames of the sequence into columns of a 2D matrix with m × n rows and k columns, representing the entire sequence of frames at a fixed transducer position (Figure 1c). In the experiments performed, the number of frames acquired at each array position was chosen to adequately capture a complete breathing cycle. Step 2 calculates the matrix of correlation coefficients between all pairs of frames (MATLAB (Mathworks Inc., Natick, MA, USA) function ‘corrcoef’). An example of the calculated correlation coefficients is displayed in Figure 1d. At a fixed transducer position, temporal decorrelation is expected to be chiefly caused by respiratory motion. Step 3 then clusters the frames by applying the k-means method with two clusters to the correlation coefficient matrix (Figure 1e, MATLAB function ‘kmeans’). In Step 4, the two clusters are labeled as static or motion frames based on prior knowledge of the characteristic physiology of the animal under the specific anesthesia; as a rule, motion frames are fewer than static frames. Notably, when scanning at multiple transducer positions, Steps 1–4 are repeated for each transducer position.
As an example, Figure 1f displays a comparison of the 3D views of a reconstructed image from a single position of the spherical array, as obtained from the averaged selected-frames and from the averaged rejected-frames.
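Steps 1–4 can be sketched compactly in code. The following Python/NumPy version is a hedged illustration of the pipeline, not the authors’ MATLAB implementation: `np.corrcoef` stands in for MATLAB’s ‘corrcoef’, a minimal deterministic two-cluster k-means stands in for the ‘kmeans’ call, and the rule that the larger cluster holds the static frames encodes the physiological prior described above.

```python
import numpy as np

def _kmeans2(X, iters=50):
    """Minimal two-cluster k-means (a stand-in for MATLAB's 'kmeans'),
    deterministically initialized with the two most dissimilar rows."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmax(D), D.shape)
    centers = X[[i, j]].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dist, axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels

def select_static_frames(frames):
    """Flag the static (motion-free) frames in a (k, m, n) stack of
    sinograms: k frames, m time samples, n transducer channels."""
    k = frames.shape[0]
    cols = frames.reshape(k, -1).T           # Step 1: (m*n, k) matrix
    corr = np.corrcoef(cols, rowvar=False)   # Step 2: k x k correlation coefficients
    labels = _kmeans2(corr)                  # Step 3: two-cluster k-means
    static = np.argmax(np.bincount(labels))  # Step 4: larger cluster = static
    return labels == static
```

Averaging `frames[mask]` then yields the motion-rejected sinogram for that transducer position; the whole selection operates on raw signals, so no image needs to be reconstructed first.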

2.2. Spiral Volumetric Optoacoustic Tomography

The spiral volumetric optoacoustic tomography (SVOT) scanner is schematically depicted in Figure 1a (top). A detailed description of the system is available elsewhere [32]. Briefly, a spherical ultrasound array of piezocomposite elements (Imasonics SaS, Voray, France) is mounted on motorized rotation and translation stages and scanned around the animal following a spiral trajectory. The array consists of 256 elements with a central frequency of 4 MHz and a −6 dB bandwidth of ~100%, arranged on a hemispherical surface with an angular coverage of 90°. The excitation light beam is guided via a fiber bundle (CeramOptec GmbH, Bonn, Germany) through a cylindrical aperture in the center of the sphere. SVOT enables imaging of the entire mouse with a nearly isotropic 3D spatial resolution in the 200 μm range [31]. In the experiments, light excitation was provided by a short-pulsed laser (<10 ns pulse duration, 25 mJ per-pulse energy, up to 100 Hz pulse repetition frequency) based on an optical parametric oscillator (OPO) crystal (Innolas GmbH, Krailling, Germany). The pulse repetition frequency of the laser was set to 25 Hz and the wavelength was maintained at 800 nm, corresponding to an isosbestic point of hemoglobin. The array was scanned over 17 angular positions separated by 15° (total angular coverage of 240° in the azimuthal direction) and 30 vertical positions separated by 2 mm (total scanning length of 58 mm; a full-body scan requires approximately 10 min). Fifty frames were captured at each position of the array, and all signals were simultaneously digitized at 40 megasamples per second with a custom-made data acquisition system (DAQ, Falkenstein Mikrosysteme GmbH, Taufkirchen, Germany) triggered by the Q-switch output of the laser. The acquired data was transmitted to a PC via Ethernet.

2.3. Cross-Sectional Optoacoustic Tomography with a Ring Array

The system layout is depicted in Figure 1a (bottom), while its detailed description is available in [33]. Briefly, the ultrasound array (Imasonics SaS, Voray, France) consists of an 80 mm diameter ring with 512 individual ultrasound detection elements having a 5 MHz central frequency and a −6 dB bandwidth of ~80%. Each element is cylindrically focused at a distance of 38 mm to selectively capture signals from the imaged cross-section. In the experiments, light excitation was provided by a short-pulsed (<10 ns pulse duration, wavelength of 1064 nm, ~100 mJ per-pulse energy, 15 Hz pulse repetition frequency) Nd:YAG laser (Spectra Physics, Santa Clara, CA, USA). The laser beam was guided with a fiber bundle (CeramOptec GmbH, Bonn, Germany) having 12 output arms placed around the circumference of the ring transducer with an angular separation of 60° between the arms. Much like in the SVOT system, the signals detected by all array elements were simultaneously digitized at 40 megasamples per second using a custom-made DAQ (Falkenstein Mikrosysteme GmbH, Taufkirchen, Germany) triggered by the Q-switch output of the laser. The data was transmitted to a PC via Ethernet. In total, 100 frames were recorded with the array positioned at two distinct regions of the mouse.

2.4. Image Reconstruction and Processing

In both scanning systems, the acquired signals were band-pass filtered (cut-off frequencies of 0.25–6 MHz for SVOT and 0.5–8 MHz for cross-sectional OAT) and deconvolved with the impulse response of the array elements before reconstruction. For SVOT, tomographic reconstructions of single volumes (15 × 15 × 15 mm³) at each scanning position of the spherical transducer array were performed with a 3D back-projection algorithm [34,35]. The volumetric images reconstructed at every transducer position were stitched together to render images with a larger field of view (whole-body scale). For cross-sectional OAT, the same back-projection algorithm was modified to account for the heterogeneous distribution of the speed of sound in the mouse versus the coupling medium (water) [36]. For this, an initial image was first reconstructed by considering a uniform speed of sound corresponding to that of water (determined from the measured water temperature). The animal’s surface was then manually segmented, and the reconstruction was fine-tuned by assigning a different speed of sound to the segmented tissue volume in order to optimize image quality. The processing was executed with custom MATLAB code. The universal image quality index (QI) was calculated for the resulting images. QI is an objective image quality measure that combines three factors: loss of correlation, luminance distortion and contrast distortion; a detailed description and an efficient MATLAB implementation were reported in [37].
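For reference, the QI of [37] has the compact closed form Q = 4·σ_xy·x̄·ȳ / ((σ_x² + σ_y²)(x̄² + ȳ²)). The sketch below is a simplified global (whole-image) version of that formula; the implementation reported in [37] applies it in a sliding window and averages the local values.

```python
import numpy as np

def universal_qi(x, y):
    """Global universal image quality index (Wang & Bovik, 2002).

    Combines loss of correlation, luminance distortion and contrast
    distortion into one value in [-1, 1]; 1 means identical images."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```

Identical images yield Q = 1, while a pure contrast change (e.g., y = 2x) lowers Q even though the structure is unchanged, which is what makes the index sensitive to the edge distortions discussed in the Results.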

2.5. Mouse Experiments

All in vivo animal experiments were performed in full compliance with the institutional guidelines of the Helmholtz Center Munich and with approval by the Government District of Upper Bavaria. Hairless NOD.SCID mice (Envigo, Rossdorf, Germany) were anesthetized with isoflurane. For both imaging systems, a custom-made holder was used to vertically fix the mice in a stationary position, with the fore and hind paws attached to the holder during the experiments. The mice were immersed in the water tank with the head kept above water. The temperature of the water tank was maintained at 34 °C with a feedback-controlled heating stick. A breathing mask with a mouth clamp was used to fix the head in an upright position and to supply anesthesia and oxygen. During the measurements, the anesthesia level was kept at ~2% isoflurane.

3. Results

3.1. Spiral Volumetric Optoacoustic Tomography

A whole-body (neck to hind paws) SVOT image, reconstructed using the frames selected with the proposed motion rejection approach, is displayed in Figure 2a. In these experiments, an average of 32% of the frames were rejected per transducer position. The effectiveness of the motion rejection approach is demonstrated by analyzing three specific regions (dashed squares in Figure 2a). Specifically, we compare the image combining all the frames with the image obtained by averaging the selected (static) frames and the image obtained by averaging the rejected (motion) frames. Note that the red square partially captures the thoracic region. A small vessel, clearly visible in the image rendered from the selected frames (Figure 2b, red arrow), cannot be resolved in the image reconstructed using all the frames. Furthermore, the rejected-frames image features a clear motion artifact in the form of a ‘double vessel’ (red arrow), which conceals the small vessel. The green square captures the region around the liver. A vertical vessel appears regular and complete in the selected-frames image (red arrow), whereas the same vessel appears disrupted in the all-frames image. Here too, the rejected-frames image discloses the artifact responsible for distorting the all-frames image. Finally, the blue square captures part of the abdomen. Small vessels are clearly better resolved in the selected-frames image (red arrows). Notably, several structures appear blurred (yellow arrows) in the rejected-frames images with respect to the selected-frames images. A comparison between amplitude profiles of the structures labeled by dashed yellow lines in Figure 2b further emphasizes the effectiveness of the motion rejection algorithm (Figure 2c), with the signal amplitude typically improved by 10% to 30% in the selected-frames images. Likewise, fine details appear more prominent in the selected-frames images, as evinced by additional fine peaks in the amplitude profiles.

3.2. Cross-Sectional Optoacoustic Tomography

The effectiveness of the algorithm in cross-sectional OAT was tested by comparing the selected- and rejected-frames images taken from two distinct regions of the animal (Figure 3a). In total, 20% and 31% of the frames were rejected in the top and bottom cross-sections, respectively. The rejected-frames images reveal smearing artifacts caused by breathing motion, evident across the entire mouse cross-section. Fine structures (red arrows) within the abdominal space appear blurred in the rejected-frames image. Moreover, some superficial structures appear artificially ‘doubled’ (yellow arrows) in the rejected-frames images. Minor differences were observed in the all-frames images (data not shown) with respect to the selected-frames images. Likewise, amplitude profiles from selected structures (dashed yellow line in Figure 3a) are increased by ~10% in the selected-frames images (Figure 3b). The calculated QI clearly reveals distortions at the boundaries of major structures, located mostly superficially (Figure 3c).

4. Discussion

The presented results demonstrate that motion rejection in OAT can effectively be accomplished prior to image reconstruction. This represents a significant advantage with respect to previously reported motion rejection approaches based on auto-correlation of a sequence of reconstructed images [31], which are afflicted with excessive memory and post-processing requirements. The suggested method was successfully validated with data acquired by two- and three-dimensional imaging systems. However, motion rejection was more effective for volumetric SVOT scans. In particular, SVOT images benefited from both an amplitude increase of 10% to 30% and an improvement in the visibility of fine details, whereas images from the cross-sectional imaging system yielded a lower amplitude increase (~10%) and only minor improvement in the visibility of structures. The reduced performance of motion rejection in cross-sectional imaging may be ascribed to the fact that breathing-associated movements are not limited to a single plane, while mainly in-plane motion is detected in the signals. Yet, although the differences between the selected- and all-frames cross-sectional images were minor, it was possible to quantify them by utilizing QI-based distortion measures. Notably, such distortion artifacts affect almost exclusively the edges of large structures. Although standard frame averaging in cross-sectional imaging may yield qualitatively comparable results, the reliable rejection of 20% to 31% of motion-affected frames by the algorithm may prove crucial for quantitative analyses of high-resolution data, e.g., involving spectral unmixing of fine structures.
It is also important to take into account that breathing characteristics may differ from one animal to another due to age, health, size, sex or strain. All these factors affect the resilience of the animal to the experimental setup, the feasible depth of anesthesia and the overall duration of the experiment [38]. It was previously reported that mice under 2% isoflurane anesthesia have an average respiratory rate of 44 ± 9 breaths/min [39], where the breathing rhythm is characterized by pauses between breaths longer than the breaths themselves. As a result, the majority of the frames are static, i.e., not affected by motion. Herein, we relied on such prior knowledge of the characteristic respiratory rate and breathing rhythm to establish a rejection criterion for the clustered motion (rejected) frames. Likewise, other criteria independent of these factors may alternatively be implemented.
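These respiration statistics also make the observed rejection rates plausible. The back-of-the-envelope estimate below is not from the paper; the effective motion duration per breath is an assumed, hypothetical value, yet it yields a motion-frame fraction in line with the reported 20–32%.

```python
# Expected fraction of motion-affected frames from the breathing statistics.
breaths_per_min = 44        # mean respiratory rate under 2% isoflurane [39]
motion_per_breath_s = 0.3   # ASSUMED effective motion duration per breath

motion_fraction = breaths_per_min / 60.0 * motion_per_breath_s
print(f"expected motion-frame fraction: {motion_fraction:.0%}")  # prints: expected motion-frame fraction: 22%
```

Because this fraction is a minority, the larger k-means cluster can safely be labeled as the static one, which is precisely the rejection criterion adopted here.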
In conclusion, the developed motion rejection methodology can benefit numerous optoacoustic imaging methods relying on multi-frame image analysis, such as scanning-based tomography or spectroscopic imaging systems like the MSOT. It may also find applicability in handheld clinical imaging [40,41], where motion can hinder accurate signal quantification and interpretation of longitudinal and spectroscopic data.

Author Contributions

Conceptualization, A.R., X.L.D.-B. and D.R.; methodology, A.R.; software, A.R.; validation, A.R. and N.D.; formal analysis, A.R.; investigation, A.R.; resources, D.R.; data curation, A.R. and X.L.D.-B.; writing—original draft preparation, A.R., X.L.D.-B. and D.R.; writing—review and editing, A.R., X.L.D.-B. and D.R.; visualization, A.R.; supervision, D.R.; project administration, D.R.; funding acquisition, D.R.

Funding

This research received no external funding.

Acknowledgments

The authors wish to thank M. Reiss for his support with the measurements and handling of animals.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nehmeh, S.A.; Erdi, Y.E. Effect of respiratory gating on quantifying PET images of lung cancer. J. Nucl. Med. 2002, 43, 876–881.
  2. Chi, P.-C.M.; Mawlawi, O. Effects of respiration-averaged computed tomography on positron emission tomography/computed tomography quantification and its potential impact on gross tumor volume delineation. Int. J. Radiat. Oncol. Biol. Phys. 2008, 71, 890–899.
  3. Liu, C.; Pierce, L.A., II. The impact of respiratory motion on tumor quantification and delineation in static PET/CT imaging. Phys. Med. Biol. 2009, 54, 7345.
  4. Nehrke, K.; Bornert, P. Free-breathing cardiac MR imaging: Study of implications of respiratory motion—Initial results. Radiology 2001, 220, 810–815.
  5. Heijman, E.; de Graaf, W. Comparison between prospective and retrospective triggering for mouse cardiac MRI. NMR Biomed. 2007, 20, 439–447.
  6. Zaitsev, M.; Maclaren, J. Motion artifacts in MRI: A complex problem with many partial solutions. J. Magn. Reson. Imaging 2015, 42, 887–901.
  7. Sureshbabu, W.; Mawlawi, O. PET/CT imaging artifacts. J. Nucl. Med. Technol. 2005, 33, 156–161.
  8. Nehmeh, S.A.; Erdi, Y.E. Respiratory Motion in Positron Emission Tomography/Computed Tomography: A Review; Elsevier: Amsterdam, The Netherlands, 2008; pp. 167–176.
  9. Maclaren, J.; Herbst, M. Prospective motion correction in brain imaging: A review. Magn. Reson. Med. 2013, 69, 621–636.
  10. McClelland, J.R.; Hawkes, D.J. Respiratory motion models: A review. Med. Image Anal. 2013, 17, 19–42.
  11. Deán-Ben, X.; Gottschalk, S. Advanced optoacoustic methods for multiscale imaging of in vivo dynamics. Chem. Soc. Rev. 2017, 46, 2158–2198.
  12. Lin, H.-C.A.; Déan-Ben, X.L. Characterization of Cardiac Dynamics in an Acute Myocardial Infarction Model by Four-Dimensional Optoacoustic and Magnetic Resonance Imaging. Theranostics 2017, 7, 4470.
  13. Gottschalk, S.; Degtyaruk, O. Rapid volumetric optoacoustic imaging of neural dynamics across the mouse brain. Nat. Biomed. Eng. 2019, 3, 392–401.
  14. Ron, A.; Deán-Ben, X.L. Volumetric optoacoustic imaging unveils high-resolution patterns of acute and cyclic hypoxia in a murine model of breast cancer. Cancer Res. 2019.
  15. Özbek, A.; Deán-Ben, X.L. Optoacoustic imaging at kilohertz volumetric frame rates. Optica 2018, 5, 857–863.
  16. Neuschmelting, V.; Burton, N.C. Performance of a multispectral optoacoustic tomography (MSOT) system equipped with 2D vs. 3D handheld probes for potential clinical translation. Photoacoustics 2016, 4, 1–10.
  17. Deán-Ben, X.L.; López-Schier, H. Optoacoustic micro-tomography at 100 volumes per second. Sci. Rep. 2017, 7, 6850.
  18. Ron, A.; Deán-Ben, X.L. Characterization of Brown Adipose Tissue in a Diabetic Mouse Model with Spiral Volumetric Optoacoustic Tomography. Mol. Imaging Biol. 2018.
  19. Taruttis, A.; Ntziachristos, V. Advances in real-time multispectral optoacoustic imaging and its applications. Nat. Photonics 2015, 9, 219.
  20. Deán-Ben, X.L.; Stiel, A.C. Light fluence normalization in turbid tissues via temporally unmixed multispectral optoacoustic tomography. Opt. Lett. 2015, 40, 4691–4694.
  21. Yao, J.; Kaberniuk, A.A. Multiscale photoacoustic tomography using reversibly switchable bacterial phytochrome as a near-infrared photochromic probe. Nat. Methods 2016, 13, 67.
  22. Wang, L.; Maslov, K.I. Video-rate functional photoacoustic microscopy at depths. J. Biomed. Opt. 2012, 17, 106007.
  23. Taruttis, A.; Claussen, J. Motion clustering for deblurring multispectral optoacoustic tomography images of the mouse heart. J. Biomed. Opt. 2012, 17, 016009.
  24. Xia, J.; Chen, W. Retrospective respiration-gated whole-body photoacoustic computed tomography of mice. J. Biomed. Opt. 2014, 19, 016003.
  25. Gottschalk, S.; Fehm, T.F. Correlation between volumetric oxygenation responses and electrophysiology identifies deep thalamocortical activity during epileptic seizures. Neurophotonics 2016, 4, 011007.
  26. Toi, M.; Asao, Y. Visualization of tumor-related blood vessels in human breast by photoacoustic imaging system with a hemispherical detector array. Sci. Rep. 2017, 7, 41970.
  27. Schwarz, M.; Garzorz-Stark, N. Motion correction in optoacoustic mesoscopy. Sci. Rep. 2017, 7, 10386.
  28. Chung, J.; Nguyen, L. Motion estimation and correction in photoacoustic tomographic reconstruction. SIAM J. Imaging Sci. 2017, 10, 216–242.
  29. Deán-Ben, X.L.; Bay, E. Functional optoacoustic imaging of moving objects using microsecond-delay acquisition of multispectral three-dimensional tomographic data. Sci. Rep. 2014, 4, 5878.
  30. Märk, J.; Wagener, A. Photoacoustic pump-probe tomography of fluorophores in vivo using interleaved image acquisition for motion suppression. Sci. Rep. 2017, 7, 40496.
  31. Fehm, T.F.; Deán-Ben, X.L. In vivo whole-body optoacoustic scanner with real-time volumetric imaging capacity. Optica 2016, 3, 1153–1159.
  32. Deán-Ben, X.L.; Fehm, T.F. Spiral volumetric optoacoustic tomography visualizes multi-scale dynamics in mice. Light Sci. Appl. 2017, 6, e16247.
  33. Merčep, E.; Herraiz, J.L. Transmission–reflection optoacoustic ultrasound (TROPUS) computed tomography of small animals. Light Sci. Appl. 2019, 8, 18.
  34. Xu, M.; Wang, L.V. Universal back-projection algorithm for photoacoustic computed tomography. Phys. Rev. E 2005, 71, 016706.
  35. Ozbek, A.; Deán-Ben, X. Realtime Parallel Back-Projection Algorithm for Three-Dimensional Optoacoustic Imaging Devices; Optical Society of America: Washington, DC, USA, 2013; p. 88000I.
  36. Deán-Ben, X.L.; Özbek, A. Accounting for speed of sound variations in volumetric hand-held optoacoustic imaging. Front. Optoelectron. 2017, 10, 280–286.
  37. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
  38. Gargiulo, S.; Greco, A. Mice anesthesia, analgesia, and care, Part II: Anesthetic considerations in preclinical imaging studies. ILAR J. 2012, 53, E70–E81.
  39. Kober, F.; Iltis, I. Cine-MRI assessment of cardiac function in mice anesthetized with ketamine/xylazine and isoflurane. Magn. Reson. Mater. Phys. Biol. Med. 2004, 17, 157–161.
  40. Diot, G.; Metz, S. Multi-Spectral Optoacoustic Tomography (MSOT) of human breast cancer. Clin. Cancer Res. 2017, 23, 6912–6922.
  41. Reber, J.; Willershäuser, M. Non-invasive Measurement of Brown Fat Metabolism Based on Optoacoustic Imaging of Hemoglobin Gradients. Cell Metab. 2018, 27, 689–701.
Figure 1. A schematic diagram of the steps involved in the motion rejection algorithm. (A) Two- and three-dimensional scanning systems, (top) spiral volumetric optoacoustic tomography (SVOT) based on a spherical array of transducers and (bottom) cross-sectional optoacoustic tomography based on a full-ring array of cylindrically focused transducers. (B) Sequence of frames (sinograms) acquired at a single position of the scanner. (C) Rearrangement of the data corresponding to the entire sequence into a single matrix. (D) Correlation coefficients of the autocorrelation matrix of the columns in (C). (E) K-means clustering of the correlation coefficients matrix into two groups, namely, selected (static) frames and rejected (motion) frames. (F) Volumetric image of a blood vessel reconstructed with data from the selected versus the rejected frames.
Figure 2. Motion rejection results for spiral volumetric optoacoustic tomography (SVOT). (A) Sagittal maximum intensity projection (MIP) of a volumetric image of the mouse reconstructed with the selected frames (scale bar: 1 cm). (B) Zoom-in of the three regions marked in red, green and blue in (A). Each image is reconstructed with (left) all frames, (center) the selected (static) frames and (right) the rejected (motion) frames (scale bar: 1 mm). Structural differences are marked (yellow and red arrows). (C) Amplitude profiles along the yellow dashed lines in (B) for images reconstructed from all frames (dashed lines) versus selected frames (solid lines).
Figure 3. Motion rejection results for cross-sectional imaging with the ring array system. (A) Reconstructed transverse slices of a mouse for two different locations rendered by considering the selected (left) versus rejected (right) frames (scale bar: 1 cm). Distorted structures are marked (red and yellow arrows). (B) Amplitude profiles (along the yellow dashed lines in (A)) for the images rendered with all (dashed line) versus selected (solid line) frames. (C) Distortion-based QI of the difference between the selected- and all-frames images (1 = high similarity; −1 = low similarity).
