
Overview of the HECKTOR Challenge at MICCAI 2021: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT Images

  • Conference paper
  • In: Head and Neck Tumor Segmentation and Outcome Prediction (HECKTOR 2021)

Abstract

This paper presents an overview of the second edition of the HEad and neCK TumOR (HECKTOR) challenge, organized as a satellite event of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge is composed of three tasks related to the automatic analysis of PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the automatic segmentation of the H&N primary Gross Tumor Volume (GTVt) in FDG-PET/CT images. Task 2 is the automatic prediction of Progression Free Survival (PFS) from the same FDG-PET/CT. Finally, Task 3 is the same as Task 2, with ground truth GTVt annotations provided to the participants. The data were collected from six centers for a total of 325 images, split into 224 training and 101 testing cases. Interest in the challenge was reflected in the strong participation, with 103 registered teams and 448 result submissions. The best methods obtained a Dice Similarity Coefficient (DSC) of 0.7591 in the first task, and a Concordance index (C-index) of 0.7196 and 0.6978 in Tasks 2 and 3, respectively. In all tasks, simplicity of the approach was found to be key to ensuring generalization performance. The comparison of the PFS prediction performance in Tasks 2 and 3 suggests that providing the GTVt contour was not crucial to achieve the best results, which indicates that fully automatic methods can be used. This potentially obviates the need for GTVt contouring, opening avenues for reproducible and large-scale radiomics studies including thousands of potential subjects.

V. Andrearczyk and V. Oreiller—Equal contribution.

M. Hatt and A. Depeursinge—Equal contribution.


Notes

  1. https://www.aicrowd.com/challenges/miccai-2021-hecktor, as of October 2021.

  2. https://portal.fli-iam.irisa.fr/petseg-challenge/overview#_ftn1, as of October 2020.

  3. The target cohort refers to the subjects from whom the data would be acquired in the final biomedical application. It is mentioned for additional information as suggested in BIAS, although all data provided for the challenge are part of the challenge cohort.

  4. The challenge cohort refers to the subjects from whom the challenge data were acquired.

  5. For simplicity, these centers were renamed CHGJ and CHMR during the challenge.

  6. https://mim-cloud.appspot.com/ as of December 2021.

  7. github.com/voreille/hecktor, as of December 2021.

  8. github.com/voreille/hecktor, as of December 2021.

  9. https://www.aicrowd.com/challenges/miccai-2021-hecktor/leaderboards?challenge_leaderboard_extra_id=667&challenge_round_id=879.

  10. www.aicrowd.com/challenges/hecktor.

  11. https://www.aicrowd.com/challenges/miccai-2021-hecktor#results-submission-format.

  12. github.com/voreille/hecktor.

References

  1. An, C., Chen, H., Wang, L.: A coarse-to-fine framework for head and neck tumor segmentation in CT and PET images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 50–57. Springer, Cham (2022)

  2. Andrearczyk, V., et al.: Multi-task deep segmentation and radiomics for automatic prognosis in head and neck cancer. In: Rekik, I., Adeli, E., Park, S.H., Schnabel, J. (eds.) PRIME 2021. LNCS, vol. 12928, pp. 147–156. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87602-9_14

  3. Andrearczyk, V., Oreiller, V., Depeursinge, A.: Oropharynx detection in PET-CT for tumor segmentation. In: Irish Machine Vision and Image Processing (2020)

  4. Andrearczyk, V., et al.: Overview of the HECKTOR challenge at MICCAI 2020: automatic head and neck tumor segmentation in PET/CT. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds.) HECKTOR 2020. LNCS, vol. 12603, pp. 1–21. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67194-5_1

  5. Andrearczyk, V., et al.: Automatic segmentation of head and neck tumors and nodal metastases in PET-CT scans. In: International Conference on Medical Imaging with Deep Learning (MIDL) (2020)

  6. Ashrafinia, S.: Quantitative nuclear medicine imaging using advanced image reconstruction and radiomics. Ph.D. thesis, The Johns Hopkins University (2019)

  7. Atul Mali, S., et al.: Making radiomics more reproducible across scanner and imaging protocol variations: a review of harmonization methods. J. Pers. Med. 11(9), 842 (2021)

  8. Bourigault, E., McGowan, D.R., Mehranian, A., Papiez, B.W.: Multimodal PET/CT tumour segmentation and prediction of progression-free survival using a full-scale UNet with attention. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 189–201. Springer, Cham (2022)

  9. Castelli, J., et al.: PET-based prognostic survival model after radiotherapy for head and neck cancer. Eur. J. Nucl. Med. Mol. Imaging 46(3), 638–649 (2018). https://doi.org/10.1007/s00259-018-4134-9

  10. Chen, L., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. CoRR abs/1606.00915 (2016)

  11. Cho, M., Choi, Y., Hwang, D., Yie, S.Y., Kim, H., Lee, J.S.: Multimodal spatial attention network for automatic head and neck tumor segmentation in FDG-PET and CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 75–82. Springer, Cham (2022)

  12. Choe, J., et al.: Deep learning-based image conversion of CT reconstruction kernels improves radiomics reproducibility for pulmonary nodule. Radiology 292(2), 365–373 (2019)

  13. Da-ano, R., et al.: Performance comparison of modified ComBat for harmonization of radiomic features for multicentric studies. Sci. Rep. 10(1), 102488 (2020)

  14. Davidson-Pilon, C.: lifelines: survival analysis in Python. J. Open Source Softw. 4(40), 1317 (2019)

  15. De Biase, A., et al.: Skip-SCSE multi-scale attention and co-learning method for oropharyngeal tumor segmentation on multi-modal PET-CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 109–120. Springer, Cham (2022)

  16. Fatan, M., Hosseinzadeh, M., Askari, D., Sheykhi, H., Rezaeijo, S.M., Salmanpoor, M.R.: Fusion-based head and neck tumor segmentation and survival prediction using robust deep learning techniques and advanced hybrid machine learning systems. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 211–223. Springer, Cham (2022)

  17. Fontaine, P., et al.: Cleaning radiotherapy contours for radiomics studies, is it worth it? A head and neck cancer study. Clin. Transl. Radiat. Oncol. 33, 153–158 (2022)

  18. Fontaine, P., et al.: Fully automatic head and neck cancer prognosis prediction in PET/CT. In: Syeda-Mahmood, T., et al. (eds.) ML-CDS 2021. LNCS, vol. 13050, pp. 59–68. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-89847-2_6

  19. Foster, B., Bagci, U., Mansoor, A., Xu, Z., Mollura, D.J.: A review on segmentation of positron emission tomography images. Comput. Biol. Med. 50, 76–96 (2014)

  20. Ghimire, K., Chen, Q., Feng, X.: Head and neck tumor segmentation with deeply-supervised 3D UNet and progression-free survival prediction with linear model. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 141–149. Springer, Cham (2022)

  21. Gudi, S., et al.: Interobserver variability in the delineation of gross tumour volume and specified organs-at-risk during IMRT for head and neck cancers and the impact of FDG-PET/CT on such variability at the primary site. J. Med. Imaging Radiat. Sci. 48(2), 184–192 (2017)

  22. Harrell, F.E., Califf, R.M., Pryor, D.B., Lee, K.L., Rosati, R.A.: Evaluating the yield of medical tests. JAMA 247(18), 2543–2546 (1982)

  23. Hatt, M., et al.: The first MICCAI challenge on PET tumor segmentation. Med. Image Anal. 44, 177–195 (2018)

  24. Hatt, M., Le Rest, C.C., Turzo, A., Roux, C., Visvikis, D.: A fuzzy locally adaptive Bayesian segmentation approach for volume determination in PET. IEEE Trans. Med. Imaging 28(6), 881–893 (2009)

  25. Hatt, M., et al.: Classification and evaluation strategies of auto-segmentation approaches for PET: report of AAPM task group No. 211. Med. Phys. 44(6), e1–e42 (2017)

  26. Huynh, B.N., Ren, J., Groendahl, A.R., Tomic, O., Korreman, S.S., Futsaether, C.M.: Comparing deep learning and conventional machine learning for outcome prediction of head and neck cancer in PET/CT. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 318–326. Springer, Cham (2022)

  27. Iantsen, A., Visvikis, D., Hatt, M.: Squeeze-and-excitation normalization for automated delineation of head and neck primary tumors in combined PET and CT images. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds.) HECKTOR 2020. LNCS, vol. 12603, pp. 37–43. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67194-5_4

  28. Isensee, F., Jaeger, P.F., Kohl, S.A., Petersen, J., Maier-Hein, K.H.: nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18(2), 203–211 (2021)

  29. Juanco-Müller, Á.V., Mota, J.F.C., Goatman, K., Hoogendoorn, C.: Deep supervoxel segmentation for survival analysis in head and neck cancer patients. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 257–265. Springer, Cham (2022)

  30. Kim, B., Ye, J.C.: Mumford-Shah loss functional for image segmentation with deep learning. IEEE Trans. Image Process. 29, 1856–1866 (2019)

  31. Kuijf, H.J., et al.: Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge. IEEE Trans. Med. Imaging 38(11), 2556–2568 (2019)

  32. Kumar, A., Fulham, M., Feng, D., Kim, J.: Co-learning feature fusion maps from PET-CT images of lung cancer. IEEE Trans. Med. Imaging 39, 204–217 (2019)

  33. Lang, D.M., Peeken, J.C., Combs, S.E., Wilkens, J.J., Bartzsch, S.: Deep learning based GTV delineation and progression free survival risk score prediction for head and neck cancer patients. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 150–159. Springer, Cham (2022)

  34. Lee, J., Kang, J., Shin, E.Y., Kim, R.E.Y., Lee, M.: Dual-path connected CNN for tumor segmentation of combined PET-CT images and application to survival risk prediction. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 248–256. Springer, Cham (2022)

  35. Leseur, J., et al.: Pre- and per-treatment 18F-FDG PET/CT parameters to predict recurrence and survival in cervical cancer. Radiother. Oncol. J. Eur. Soc. Ther. Radiol. Oncol. 120(3), 512–518 (2016)

  36. Li, L., Zhao, X., Lu, W., Tan, S.: Deep learning for variational multimodality tumor segmentation in PET/CT. Neurocomputing 392, 277–295 (2019)

  37. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)

  38. Liu, T., Su, Y., Zhang, J., Wei, T., Xiao, Z.: 3D U-net applied to simple attention module for head and neck tumor segmentation in PET and CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 50–57. Springer, Cham (2022)

  39. Liu, Z., et al.: Automatic segmentation of clinical target volume used for post-modified radical mastectomy radiotherapy with a convolutional neural network. Front. Oncol. 10, 3268 (2020)

  40. Lu, J., Lei, W., Gu, R., Wang, G.: Priori and posteriori attention for generalizing head and neck tumors segmentation. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 134–140. Springer, Cham (2022)

  41. Ma, B., et al.: Self-supervised multi-modality image feature extraction for the progression free survival prediction in head and neck cancer. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 202–210. Springer, Cham (2022)

  42. Maier-Hein, L., et al.: Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9(1), 1–13 (2018)

  43. Maier-Hein, L., et al.: BIAS: transparent reporting of biomedical image analysis challenges. Med. Image Anal. 66, 101796 (2020)

  44. Martinez-Larraz, A., Asenjo, J.M., Rodríguez, B.A.: PET/CT head and neck tumor segmentation and progression free survival prediction using deep and machine learning techniques. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 168–178. Springer, Cham (2022)

  45. Meng, M., Peng, Y., Bi, L., Kim, J.: Multi-task deep learning for joint tumor segmentation and outcome prediction in head and neck cancer. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 160–167. Springer, Cham (2022)

  46. Moe, Y.M., et al.: Deep learning for automatic tumour segmentation in PET/CT images of patients with head and neck cancers. In: Medical Imaging with Deep Learning (2019)

  47. Murugesan, G.K., et al.: Head and neck primary tumor segmentation using deep neural networks and adaptive ensembling. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 224–235. Springer, Cham (2022)

  48. Myronenko, A.: 3D MRI brain tumor segmentation using autoencoder regularization. In: Crimi, A., Bakas, S., Kuijf, H., Keyvan, F., Reyes, M., van Walsum, T. (eds.) BrainLes 2018. LNCS, vol. 11384, pp. 311–320. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11726-9_28

  49. Naser, M.A., et al.: Head and neck cancer primary tumor auto segmentation using model ensembling of deep learning in PET-CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 121–133. Springer, Cham (2022)

  50. Naser, M.A., et al.: Progression free survival prediction for head and neck cancer using deep learning based on clinical and PET-CT imaging data. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 287–299. Springer, Cham (2022)

  51. Oreiller, V., et al.: Head and neck tumor segmentation in PET/CT: the HECKTOR challenge. Med. Image Anal. 77, 102336 (2021)

  52. Qayyum, A., Benzinou, A., Mazher, M., Abdel-Nasser, M., Puig, D.: Automatic segmentation of head and neck (H&N) primary tumors in PET and CT images using 3D-Inception-ResNet model. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 58–67. Springer, Cham (2022)

  53. Ren, J., Huynh, B.N., Groendahl, A.R., Tomic, O., Futsaether, C.M., Korreman, S.S.: PET normalizations to improve deep learning auto-segmentation of head and neck in 3D PET/CT. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 83–91. Springer, Cham (2022)

  54. Saeed, N., Al Majzoub, R., Sobirov, I., Yaqub, M.: An ensemble approach for patient prognosis of head and neck tumor using multimodal data. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 278–286. Springer, Cham (2022)

  55. Salmanpour, M.R., Hajianfar, G., Rezaeijo, S.M., Ghaemi, M., Rahmim, A.: Advanced automatic segmentation of tumors and survival prediction in head and neck cancer. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 202–210. Springer, Cham (2022)

  56. Sepehri, S., Tankyevych, O., Iantsen, A., Visvikis, D., Cheze Le Rest, C., Hatt, M.: Accurate tumor delineation vs. rough volume of interest analysis for 18F-FDG PET/CT radiomic-based prognostic modeling in non-small cell lung cancer. Front. Oncol. 292(2), 365–373 (2021)

  57. Starke, S., Thalmeier, D., Steinbach, P., Piraud, M.: A hybrid radiomics approach to modeling progression-free survival in head and neck cancers. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 266–277. Springer, Cham (2022)

  58. Vallières, M., et al.: Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer. Sci. Rep. 7(1), 1–14 (2017)

  59. Wahid, K.A., et al.: Combining tumor segmentation masks with PET/CT images and clinical data in a deep learning framework for improved prognostic prediction in head and neck squamous cell carcinoma. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 300–307. Springer, Cham (2022)

  60. Wang, G., Huang, Z., Shen, H., Hu, Z.: The head and neck tumor segmentation in PET/CT based on multi-channel attention network. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 38–49. Springer, Cham (2022)

  61. Wang, J., Peng, Y., Guo, Y., Li, D., Sun, J.: CCUT-Net: pixel-wise global context channel attention UT-Net for head and neck tumor segmentation. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 318–326. Springer, Cham (2022)

  62. Xie, H., Zhang, X., Ma, S., Liu, Y., Wang, X.: Preoperative differentiation of uterine sarcoma from leiomyoma: comparison of three models based on different segmentation volumes using radiomics. Mol. Imaging Biol. 21(6), 1157–64 (2019)

  63. Xie, J., Peng, Y.: The head and neck tumor segmentation based on 3D U-Net. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 92–98. Springer, Cham (2022)

  64. Xu, L., et al.: Automated whole-body bone lesion detection for multiple myeloma on 68Ga-pentixafor PET/CT imaging using deep learning methods. Contrast Media Mol. Imaging (2018)

  65. Xue, Z., et al.: Multi-modal co-learning for liver lesion segmentation on PET-CT images. IEEE Trans. Med. Imaging 40, 3531–3542 (2021)

  66. Yousefirizi, F., et al.: Segmentation and risk score prediction of head and neck cancers in PET/CT volumes with 3D U-Net and Cox proportional hazard neural networks. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 236–247. Springer, Cham (2022)

  67. Yousefirizi, F., Rahmim, A.: GAN-based bi-modal segmentation using Mumford-Shah loss: application to head and neck tumors in PET-CT images. In: Andrearczyk, V., Oreiller, V., Depeursinge, A. (eds.) HECKTOR 2020. LNCS, vol. 12603, pp. 99–108. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-67194-5_11

  68. Yuan, Y., Adabi, S., Wang, X.: Automatic head and neck tumor segmentation and progression free survival analysis on PET/CT images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds.) HECKTOR 2021. LNCS, vol. 13209, pp. 179–188. Springer, Cham (2022)

  69. Zhao, X., Li, L., Lu, W., Tan, S.: Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys. Med. Biol. 64(1), 015011 (2018)

  70. Zhong, Z., et al.: 3D fully convolutional networks for co-segmentation of tumors on PET-CT images. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 228–231. IEEE (2018)


Acknowledgments

The organizers thank all the teams for their participation and valuable work. This challenge and the winner prizes were sponsored by Siemens Healthineers Switzerland, Bioemtech Greece and Aquilab France (500€ each, for Tasks 1, 2 and 3, respectively). The software used to centralise the quality control of the GTVt regions was MIM (MIM Software Inc., Cleveland, OH), which kindly supported the challenge via free licences. This work was also partially supported by the Swiss National Science Foundation (SNSF, grant 205320_179069) and the Swiss Personalized Health Network (SPHN, via the IMAGINE and QA4IQI projects).


Appendices

Appendix 1: Challenge Information

In this appendix, we list important information about the challenge as suggested in the BIAS guidelines [43].

Challenge Name  

HEad and neCK TumOR segmentation and outcome prediction challenge (HECKTOR) 2021

Organizing Team  

(Authors of this paper) Vincent Andrearczyk, Valentin Oreiller, Sarah Boughdad, Catherine Cheze Le Rest, Hesham Elhalawani, Mario Jreige, John O. Prior, Martin Vallières, Dimitris Visvikis, Mathieu Hatt and Adrien Depeursinge

Life Cycle Type  

A fixed submission deadline was set for the challenge results.

Challenge Venue and Platform  

The challenge is associated with MICCAI 2021. Information on the challenge is available on the website, together with the link to download the data, the submission platform and the leaderboard (footnote 10).

Participation Policies

  (a) Task 1: Algorithms producing fully-automatic segmentation of the test cases were allowed. Tasks 2 and 3: Algorithms producing fully-automatic PFS risk score prediction of the test cases were allowed.

  (b) The data used to train algorithms were not restricted. If external data (private or public) were used, participants were asked to also report results using only the HECKTOR data.

  (c) Members of the organizers’ institutes could participate in the challenge but were not eligible for awards.

  (d) Task 1: The award was 500 euros, sponsored by Siemens Healthineers Switzerland. Task 2: The award was 500 euros, sponsored by Aquilab. Task 3: The award was 500 euros, sponsored by Bioemtech.

  (e) Policy for results announcement: The results were made available on the AIcrowd leaderboard, and the best three results of each task were announced publicly. Once participants submitted their results on the test set via the challenge website, they were considered fully vested in the challenge: their performance results (without identifying the participant, unless permission was granted) could become part of any presentations, publications, or subsequent analyses derived from the challenge, at the discretion of the organizers.

  (f) Publication policy: This overview paper was written by members of the organizing team. The participating teams were encouraged to submit a paper describing their method. Participants may publish their results separately elsewhere, provided they cite this overview paper; in that case, no embargo applies.

Submission Method  

Submission instructions are available on the website (footnote 11) and are reported in the following. Task 1: Results should be provided as a single binary mask per patient (1 in the predicted GTVt) in .nii.gz format. The resolution of this mask should be the same as the original CT resolution, and the volume cropped using the provided bounding boxes. Participants should take care to save NIfTI volumes with the correct pixel spacing and origin with respect to the original reference frame. The NIfTI files should be named [PatientID].nii.gz, matching the patient names (e.g. CHUV001.nii.gz), and placed in a folder. This folder should be zipped before submission. If results are submitted without cropping and/or resampling, we will employ nearest-neighbor interpolation, given that the coordinate system is provided.
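The naming and packaging constraints above are easy to check programmatically before uploading. The sketch below uses only the Python standard library; the file-name pattern and the `package_masks` helper are hypothetical illustrations for this write-up, not part of the official evaluation code (which handled resampling and coordinate checks separately).

```python
# Sketch: validate Task 1 result file names and zip the folder for submission.
# NAME_RE and package_masks are illustrative, not from the challenge kit.
import re
import zipfile
from pathlib import Path

NAME_RE = re.compile(r"^[A-Z]{3,4}\d{3}\.nii\.gz$")  # e.g. CHUV001.nii.gz

def package_masks(mask_dir: str, archive: str) -> list:
    """Check that every mask is named [PatientID].nii.gz, then zip the folder."""
    paths = sorted(Path(mask_dir).glob("*.nii.gz"))
    bad = [p.name for p in paths if not NAME_RE.match(p.name)]
    if bad:
        raise ValueError("unexpected file names: %s" % bad)
    with zipfile.ZipFile(archive, "w") as zf:
        for p in paths:
            zf.write(p, arcname=p.name)  # flat archive, one mask per patient
    return [p.name for p in paths]
```

Saving the masks themselves with the correct spacing and origin would typically be done with a NIfTI library such as nibabel or SimpleITK, copying the affine from the original CT.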

Task 2: Results should be submitted as a CSV file containing the patient ID as “PatientID” and the output of the model (continuous) as “Prediction”. An individual output should be anti-concordant with the PFS in days (i.e., the model should output a predicted risk score).
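To make the required format and the anti-concordance requirement concrete, the following sketch writes the two-column CSV and evaluates a set of risk scores with Harrell's C-index [22] on toy data. This is not the official evaluation script (which relied on established implementations such as lifelines [14]); a higher risk score should go with a shorter PFS.

```python
# Sketch: Task 2 submission CSV plus a pairwise C-index check (toy data).
import csv

def write_predictions(rows, path):
    """rows: iterable of (patient_id, risk_score); higher risk = shorter expected PFS."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["PatientID", "Prediction"])
        w.writerows(rows)

def concordance(pfs_days, risk, events):
    """Harrell's C: fraction of comparable pairs where the higher risk score
    accompanies the shorter PFS (a pair is comparable only if the shorter
    time corresponds to an observed event, not a censored one)."""
    num = den = 0.0
    n = len(pfs_days)
    for i in range(n):
        if not events[i]:
            continue
        for j in range(n):
            if pfs_days[i] < pfs_days[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5  # tied scores count half
    return num / den
```

With perfectly anti-concordant scores (shortest PFS gets the highest risk), this returns 1.0; a submission accidentally concordant with PFS would score near 0 and should simply be negated.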

Task 3: For this task, the developed methods were evaluated on the testing set by the organizers, who ran them within a Docker container provided by the participants. Practically, the method should process one patient at a time. It should take three NIfTI files as inputs (file 1: the PET image; file 2: the CT image; file 3: the provided ground-truth segmentation mask; all three files have the same dimensions, and the ground-truth mask contains only two values: 0 for the background, 1 for the tumor), and should output the predicted risk score produced by the model.
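A minimal container entry point consistent with this interface might look as follows. The risk model here is a deliberately trivial placeholder (maximum PET intensity inside the provided GTVt mask), not a method from the challenge, and the use of nibabel for NIfTI loading is an assumption about the container image; a real submission would substitute its own trained model.

```python
# Sketch: Task 3 entry point — three NIfTI inputs, one risk score on stdout.
# predict_risk is a hypothetical placeholder, NOT an actual challenge method.
import sys

def predict_risk(pet_values, mask_values):
    """Toy risk score: maximum PET intensity inside the ground-truth GTVt mask."""
    inside = [p for p, m in zip(pet_values, mask_values) if m == 1]
    if not inside:
        raise ValueError("empty GTVt mask")
    return max(inside)

if __name__ == "__main__" and len(sys.argv) == 4:
    import nibabel as nib  # assumed available inside the container image
    pet, ct, mask = (nib.load(p).get_fdata() for p in sys.argv[1:4])
    # ct is loaded but unused by this placeholder; all three volumes
    # share the same dimensions per the submission instructions.
    print(predict_risk(pet.ravel().tolist(), mask.ravel().tolist()))
```

The organizers would then invoke the container once per test patient, e.g. with the PET, CT and mask paths as the three command-line arguments.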

Participants were allowed five valid submissions per task. The best result was reported for each task/team. For a team submitting multiple runs to Task 1, the best result was determined as the highest-ranking result within these runs (see ranking description in Sect. 3.1).

Challenge Schedule  

The schedule of the challenge, including modifications, is reported in the following.

  • the release date of the training cases: June 04 2021

  • the release date of the test cases: Aug. 06 2021

  • the submission date(s): opens Sept. 01 2021, closes Sept. 14 2021 (extended from Sept. 10) (23:59 UTC-10)

  • paper abstract submission deadline: Sept. 15 2021 (23:59 UTC-10)

  • full paper submission deadline: Sept. 17 2021 (23:59 UTC-10)

  • the release date of the ranking: Sept. 27 2021

  • associated workshop days: Sept. 27 2021

Ethics Approval  

Montreal: CHUM, CHUS, HGJ, HMR data (training): The ethics approval was granted by the Research Ethics Committee of McGill University Health Center (Protocol Number: MM-JGH-CR15-50).

Lausanne: CHUV data (testing): The ethics approval was obtained from the Commission cantonale (VD) d’éthique de la recherche sur l’être humain (CER-VD) with protocol number: 2018-01513.

Poitiers: CHUP data (partly training and testing): The fully anonymized data originate from patients who consented to the use of their data for research purposes.

Data Usage Agreement  

The participants had to fill out and sign an end-user-agreement in order to be granted access to the data. The form can be found under the Resources tab of the HECKTOR website.

Code Availability  

The evaluation software was made available on our GitHub page (footnote 12). The participating teams decided whether they wanted to disclose their code (they were encouraged to do so).

Conflict of Interest  

No conflict of interest applies. Funding is specified in the acknowledgments. Only the organizers had access to the test cases’ ground-truth contours.

Author contributions  

Vincent Andrearczyk:

Design of the tasks and of the challenge, writing of the proposal, development of baseline algorithms, development of the AIcrowd website, writing of the overview paper, organization of the challenge event, organization of the submission and reviewing process of the participants’ papers.

Valentin Oreiller:

Design of the tasks and of the challenge, writing of the proposal, development of the AIcrowd website, development of the evaluation code, writing of the overview paper, organization of the challenge event, organization of the submission and reviewing process of the papers.

Sarah Boughdad:

Design of the tasks and of the challenge, annotations.

Catherine Cheze Le Rest:

Design of the tasks and of the challenge, annotations.

Hesham Elhalawani:

Design of the tasks and of the challenge, annotations.

Mario Jreige:

Design of the tasks and of the challenge, quality control and annotations, revision of the paper and accepted the last version of the submitted paper.

John O. Prior:

Design of the tasks and of the challenge, revision of the paper and accepted the last version of the submitted paper.

Martin Vallières:

Design of the tasks and of the challenge, provided the initial data and annotations for the training set [58], revision of the paper and accepted the last version of the submitted paper.

Dimitris Visvikis:

Design of the task and challenge.

Mathieu Hatt:

Design of the tasks and of the challenge, writing of the proposal, writing of the overview paper, organization of the challenge event.

Adrien Depeursinge:

Design of the tasks and of the challenge, writing of the proposal, writing of the overview paper, organization of the challenge event.

Appendix 2: Image Acquisition Details

HGJ: For the PET portion of the FDG-PET/CT scan, a median of 584 MBq (range: 368–715) was injected intravenously. After a 90-min uptake period of rest, patients were imaged with the PET/CT imaging system. Imaging acquisition of the head and neck was performed using multiple bed positions with a median of 300 s (range: 180–420) per bed position. Attenuation corrected images were reconstructed using an ordered subset expectation maximization (OSEM) iterative algorithm and a span (axial mash) of 5. The FDG-PET slice thickness resolution was 3.27 mm for all patients and the median in-plane resolution was 3.52 \(\times \) 3.52 mm\(^2\) (range: 3.52–4.69). For the CT portion of the FDG-PET/CT scan, an energy of 140 kVp with an exposure of 12 mAs was used. The CT slice thickness resolution was 3.75 mm and the median in-plane resolution was 0.98 \(\times \) 0.98 mm\(^2\) for all patients.

CHUS: For the PET portion of the FDG-PET/CT scan, a median of 325 MBq (range: 165–517) was injected intravenously. After a 90-min uptake period of rest, patients were imaged with the PET/CT imaging system. Imaging acquisition of the head and neck was performed using multiple bed positions with a median of 150 s (range: 120–151) per bed position. Attenuation corrected images were reconstructed using a LOR-RAMLA iterative algorithm. The FDG-PET slice thickness resolution was 4 mm and the median in-plane resolution was \(4\times 4\,\mathrm{mm}^2\) for all patients. For the CT portion of the FDG-PET/CT scan, a median energy of 140 kVp (range: 12–140) with a median exposure of 210 mAs (range: 43–250) was used. The median CT slice thickness resolution was 3 mm (range: 2–5) and the median in-plane resolution was 1.17 \(\times \) 1.17 mm\(^2\) (range: 0.68–1.17).

HMR: For the PET portion of the FDG-PET/CT scan, a median of 475 MBq (range: 227–859) was injected intravenously. After a 90-min uptake period of rest, patients were imaged with the PET/CT imaging system. Imaging acquisition of the head and neck was performed using multiple bed positions with a median of 360 s (range: 120–360) per bed position. Attenuation corrected images were reconstructed using an ordered subset expectation maximization (OSEM) iterative algorithm and a median span (axial mash) of 5 (range: 3–5). The FDG-PET slice thickness resolution was 3.27 mm for all patients and the median in-plane resolution was 3.52 \(\times \) 3.52 mm\(^2\) (range: 3.52–5.47). For the CT portion of the FDG-PET/CT scan, a median energy of 140 kVp (range: 120–140) with a median exposure of 11 mAs (range: 5–16) was used. The CT slice thickness resolution was 3.75 mm for all patients and the median in-plane resolution was 0.98 \(\times \) 0.98 mm\(^2\) (range: 0.98–1.37).

CHUM: For the PET portion of the FDG-PET/CT scan, a median of 315 MBq (range: 199–3182) was injected intravenously. After a 90-min uptake period of rest, patients were imaged with the PET/CT imaging system. Imaging acquisition of the head and neck was performed using multiple bed positions with a median of 300 s (range: 120–420) per bed position. Attenuation corrected images were reconstructed using an ordered subset expectation maximization (OSEM) iterative algorithm and a median span (axial mash) of 3 (range: 3–5). The median FDG-PET slice thickness resolution was 4 mm (range: 3.27–4) and the median in-plane resolution was 4 \(\times \) 4 mm\(^2\) (range: 3.52–5.47). For the CT portion of the FDG-PET/CT scan, a median energy of 120 kVp (range: 120–140) with a median exposure of 350 mAs (range: 5–350) was used. The median CT slice thickness resolution was 1.5 mm (range: 1.5–3.75) and the median in-plane resolution was 0.98 \(\times \) 0.98 mm\(^2\) (range: 0.98–1.37).

CHUV: Patients fasted for at least 4 h before the injection of 4 MBq/kg of 18F-FDG (Flucis). Blood glucose levels were checked before the injection. If not contraindicated, intravenous contrast agents were administered before CT scanning. After a 60-min uptake period of rest, patients were imaged with the PET/CT imaging system. First, a CT scan (120 kV, 80 mA, 0.8-s rotation time, 3.75 mm slice thickness) was acquired from the base of the skull to the mid-thigh. PET scanning was performed immediately after the CT, covering the same extent (3 min per bed position). PET images were reconstructed with an ordered-subset expectation maximization (OSEM) iterative algorithm (two iterations, 28 subsets) using fully 3D reconstruction (Discovery ST). CT data were used for attenuation correction.

CHUP: PET/CT acquisition began after 6 h of fasting and \(60\pm 5\) min after injection of 3 MBq/kg of 18F-FDG (\(421\pm 98\) MBq, range 220–695 MBq). Non-contrast-enhanced, non-respiratory-gated (free breathing) CT images were acquired for attenuation correction (120 kVp, Care Dose® current modulation system) with an in-plane resolution of \(0.853\times 0.853\,\mathrm{mm}^2\) and a 5 mm slice thickness. PET data were acquired using a routine protocol of 2.5 min per bed position, and images were reconstructed using CT-based attenuation correction and the OSEM-TrueX-TOF algorithm (with time-of-flight and spatial resolution modeling, 3 iterations and 21 subsets, 5 mm 3D Gaussian post-filtering, voxel size \(4\times 4\times 4\,\mathrm{mm}^3\)).
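The injected activities and uptake periods listed above are the quantities needed to normalize PET intensities into standardized uptake values (SUV), a common preprocessing step for radiomics and segmentation pipelines. As a minimal illustrative sketch (not part of any participant's method; the function names and the example values are ours), body-weight SUV with decay correction of the injected dose can be computed as:

```python
import math

# Physical half-life of fluorine-18, in minutes.
F18_HALF_LIFE_MIN = 109.77

def decay_corrected_dose(injected_mbq: float, uptake_min: float) -> float:
    """Activity (MBq) remaining at scan time after radioactive decay."""
    return injected_mbq * 2.0 ** (-uptake_min / F18_HALF_LIFE_MIN)

def suv_body_weight(voxel_kbq_per_ml: float, injected_mbq: float,
                    uptake_min: float, weight_kg: float) -> float:
    """Body-weight SUV: tissue concentration / (decay-corrected dose / weight).

    voxel_kbq_per_ml: measured activity concentration in a voxel (kBq/mL).
    Assumes 1 g of tissue is approximately 1 mL.
    """
    dose_kbq = decay_corrected_dose(injected_mbq, uptake_min) * 1000.0  # MBq -> kBq
    weight_g = weight_kg * 1000.0  # kg -> g (~mL)
    return voxel_kbq_per_ml / (dose_kbq / weight_g)
```

For example, with the HGJ median injected activity of 584 MBq and the 90-min uptake period reported above, roughly 57% of the injected activity remains at scan time.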


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Andrearczyk, V. et al. (2022). Overview of the HECKTOR Challenge at MICCAI 2021: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT Images. In: Andrearczyk, V., Oreiller, V., Hatt, M., Depeursinge, A. (eds) Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2021. Lecture Notes in Computer Science, vol 13209. Springer, Cham. https://doi.org/10.1007/978-3-030-98253-9_1

  • Print ISBN: 978-3-030-98252-2

  • Online ISBN: 978-3-030-98253-9
