
Visual-Assisted Probe Movement Guidance for Obstetric Ultrasound Scanning Using Landmark Retrieval

  • Conference paper
  • In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (MICCAI 2021)

Abstract

Automated ultrasound (US) probe movement guidance is desirable for assisting inexperienced human operators during obstetric US scanning. In this paper, we present a new visual-assisted probe movement technique that uses automated landmark retrieval for assistive obstetric US scanning. First, a set of landmarks is constructed uniformly around a virtual 3D fetal model. Then, during obstetric scanning, a deep neural network (DNN) model locates the nearest landmark through a descriptor search between the current observation and the landmarks. The global position cues are visualised in real time on a monitor to assist the human operator with probe movement. A Transformer-VLAD network is proposed to learn a global descriptor that represents each US image. Because this retrieval formulation avoids deep pose-parameter regression, it improves the generalization ability of the network. To avoid prohibitively expensive human annotation, anchor-positive-negative US image pairs are constructed automatically through a KD-tree search over 3D probe positions. This yields an end-to-end network trained in a self-supervised way through contrastive learning.
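The abstract combines three ingredients: a learned global descriptor per US frame, self-supervised triplet mining from tracked 3D probe positions, and nearest-landmark retrieval at scan time. The following minimal sketch illustrates how those pieces fit together; it is not the authors' implementation. The toy CNN encoder (standing in for the paper's Transformer-VLAD network), the positive/negative distance radii, the descriptor dimension, and the triplet-margin settings are all illustrative assumptions.

```python
# Minimal sketch of KD-tree triplet mining, contrastive descriptor training,
# and nearest-landmark retrieval, as described in the abstract.
# All hyperparameters and the toy encoder are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.spatial import cKDTree

def mine_triplets(probe_positions, pos_radius=10.0, neg_radius=50.0):
    """Build anchor-positive-negative index triplets from 3D probe positions.

    Frames whose probes were physically close count as positives, distant
    frames as negatives, so no human annotation is required.
    """
    tree = cKDTree(probe_positions)
    n = len(probe_positions)
    triplets = []
    for anchor, p in enumerate(probe_positions):
        near = tree.query_ball_point(p, r=pos_radius)
        positives = [i for i in near if i != anchor]
        far = set(range(n)) - set(tree.query_ball_point(p, r=neg_radius))
        if positives and far:
            triplets.append((anchor, positives[0], next(iter(far))))
    return triplets

class ToyDescriptorNet(nn.Module):
    """Tiny CNN producing an L2-normalised global descriptor per US frame.

    A stand-in for the paper's Transformer-VLAD network.
    """
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.backbone(x), dim=1)

net = ToyDescriptorNet()
loss_fn = nn.TripletMarginLoss(margin=0.2)       # contrastive objective
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

positions = np.random.rand(200, 3) * 100.0       # toy 3D probe positions (mm)
images = torch.randn(200, 1, 64, 64)             # toy US frames

# Self-supervised training: pull anchor/positive descriptors together,
# push anchor/negative descriptors apart.
for a, p, n_idx in mine_triplets(positions)[:32]:
    d_a, d_p, d_n = net(images[a:a+1]), net(images[p:p+1]), net(images[n_idx:n_idx+1])
    loss = loss_fn(d_a, d_p, d_n)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Inference: retrieve the nearest landmark for the current observation
# by searching over precomputed landmark descriptors.
with torch.no_grad():
    landmark_descs = net(images[:50])            # descriptors of landmark views
    query_desc = net(images[51:52])              # current US observation
    nearest = torch.cdist(query_desc, landmark_descs).argmin().item()
print(f"nearest landmark index: {nearest}")
```

In practice, the landmark descriptors would be precomputed once from the views around the virtual 3D fetal model, so the scan-time cost reduces to a single forward pass plus a descriptor search, which is what allows the position cues to be visualised in real time.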


Notes

  1. https://www.intelligentultrasound.com/scantrainer/


Acknowledgement

This work was funded by the ERC (ERC-ADG-2015 694581, project PULSE), the EPSRC (EP/MO13774/1, EP/R013853/1), and the NIHR Biomedical Research Centre funding scheme.

Author information

Correspondence to Cheng Zhao.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF, 80 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhao, C., Droste, R., Drukker, L., Papageorghiou, A.T., Noble, J.A. (2021). Visual-Assisted Probe Movement Guidance for Obstetric Ultrasound Scanning Using Landmark Retrieval. In: de Bruijne, M., et al. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. Lecture Notes in Computer Science, vol. 12908. Springer, Cham. https://doi.org/10.1007/978-3-030-87237-3_64


  • DOI: https://doi.org/10.1007/978-3-030-87237-3_64

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87236-6

  • Online ISBN: 978-3-030-87237-3

  • eBook Packages: Computer Science, Computer Science (R0)
