Article

Deep Neural Networks for Dental Implant System Classification

Shintaro Sukegawa, Kazumasa Yoshii, Takeshi Hara, Katsusuke Yamashita, Keisuke Nakano, Norio Yamamoto, Hitoshi Nagatsuka and Yoshihiko Furuki
1 Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, 1-2-1, Asahi-machi, Takamatsu, Kagawa 760-8557, Japan
2 Department of Oral Pathology and Medicine, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama 700-8558, Japan
3 Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu, Gifu 501-1193, Japan
4 Polytechnic Center Kagawa, 2-4-3, Hananomiya-cho, Takamatsu, Kagawa 761-8063, Japan
5 Department of Orthopaedic Surgery, Kagawa Prefectural Central Hospital, Takamatsu, Kagawa 760-8557, Japan
* Author to whom correspondence should be addressed.
Biomolecules 2020, 10(7), 984; https://doi.org/10.3390/biom10070984
Submission received: 30 May 2020 / Revised: 27 June 2020 / Accepted: 29 June 2020 / Published: 1 July 2020
(This article belongs to the Special Issue Application of Artificial Intelligence for Medical Research)

Abstract:
In this study, we used panoramic X-ray images to classify dental implant brands and to clarify the classification accuracy achievable with deep convolutional neural networks (CNNs) and transfer-learning strategies. For objective labeling, 8859 implant images covering 11 implant systems were extracted from digital panoramic radiographs of patients who underwent dental implant treatment at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2019. Five deep CNN models (specifically, a basic CNN with three convolutional layers, transfer-learning VGG16 and VGG19 models, and finely tuned VGG16 and VGG19 models) were evaluated for implant classification. Among the five models, the finely tuned VGG16 model exhibited the highest implant classification performance; the finely tuned VGG19 was second best, followed by the transfer-learning VGG16. We confirmed that the finely tuned VGG16 and VGG19 CNNs could accurately classify the 11 types of dental implant systems from panoramic X-ray images.

1. Introduction

Osseointegration is the direct structural and functional connection between living bone and the surface of a load-bearing artificial implant. In dentistry, such implants provide promising alternatives for prosthetic restoration [1]. The capability to provide dental implants has revolutionized dental practices worldwide, improving the lives of many patients. The widespread use of dental implants is supported by technological innovations that improve long-term prognoses and mitigate the risks posed by poor alveolar bone conditions. Such innovations include new implant surface textures [2,3] and shapes (e.g., threading [4,5] and platforms [6]), as well as alveolar ridge augmentation and sinus lift surgeries as pre-implant procedures for cases of alveolar bone atrophy [7,8,9]. The growing demand for dental implants has led many manufacturers to enter the industry. Since the year 2000, more than 220 implant brands have been available on the worldwide market [10], and the variety continues to grow.
Implants consist of fixtures, abutments, and superstructures that vary in style, structure, and required tools, rendering the classification of implant brands difficult. For example, a manufacturer's proprietary prosthesis-fixing screws directly influence implant maintenance (e.g., retightening to counter loosening) [11]. Thus, accurate identification of the implant brand is important. The types of dental implants and screws used change over time, and different types of implants are often placed in a single patient by different dentists. Identification becomes even harder when information must be shared across countries or regions. With panoramic radiography, it is possible to obtain information related to the jawbone and teeth in one image [12], and such images often provide the information needed to identify a patient's implant brand(s). However, doing so requires a significant amount of human effort and experience, and no automated method has yet been proposed for identifying implant brands from panoramic radiographs [13]. Clinically, such a capability would be useful.
Artificial intelligence (AI) refers to intelligent, machine-based algorithms that mimic human neurological processes. In recent decades, AI has made significant progress in enabling machines to automatically process and categorize complex data [14]. In particular, convolutional neural networks (CNNs), the current core model of artificial neural networks and deep learning, provide computer-vision capabilities [15], including medical image classification. CNN-based computer vision has produced impressive diagnostic and predictive results in radiology and pathology research and has potential for meeting dental implant-recognition needs [16,17]. AI and deep-learning techniques have already been used to support dentistry [18,19].
However, collecting a large amount of image data from clinics can be difficult, and training CNNs on insufficient data can lead to overfitting. For this reason, transfer-learning and fine-tuning techniques have been adopted in recent years. In transfer learning, a previously trained model is used as a feature extractor with its weight data left unchanged, whereas in fine tuning, some of the pre-trained weights are retrained on the new data. Both are powerful methods for training deep CNNs without overfitting, even when the target dataset is smaller than the base dataset [20].
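To make the distinction concrete, the following minimal Keras sketch builds a VGG16-based classifier either as a frozen feature extractor (transfer learning) or with its last convolutional block unfrozen (fine tuning). This is a sketch of a typical setup, not the exact code used in this study; the classifier-head width (256 units) is our assumption.

```python
# Minimal sketch of transfer learning vs. fine tuning with Keras' bundled
# VGG16 weights; not the authors' exact code.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

def build_vgg16_classifier(num_classes=11, fine_tune=False):
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    if fine_tune:
        # Fine tuning: retrain only the last convolutional block (block5).
        for layer in base.layers:
            layer.trainable = layer.name.startswith("block5")
    else:
        # Transfer learning: keep every pre-trained weight frozen.
        base.trainable = False
    x = layers.Flatten()(base.output)
    x = layers.Dense(256, activation="relu")(x)  # assumed head width
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, outputs)
```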
In this study, we assessed the accuracy of classifying dental implant brands from digital panoramic X-ray images via deep convolutional network transfer-learning and fine-tuning strategies.

2. Patients and Methods

2.1. Study Design

We leveraged a dataset of segmented panoramic radiographs. Several different CNNs were used to classify dental implant brands based on patient panoramic radiographs and ground truth data. We then examined the classification accuracy of the CNN models using the performance metrics described below.

2.2. Performance Metrics

Our primary performance variable was classification accuracy, which corresponds to the proportion of correct classifications. As a secondary analysis, we visualized the image regions on which each CNN focused. The accuracy, precision, recall, receiver operating characteristic (ROC) curve, and F1 score, which consider the relationship between the data's positive dental implant labels and those assigned by a classifier, were calculated on the testing dataset from a confusion matrix as follows (Equations (1)–(4)):
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN} \tag{1}$$

$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{2}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{3}$$

$$\mathrm{F1\ Score} = \frac{2 \times (\mathrm{Recall} \times \mathrm{Precision})}{\mathrm{Recall} + \mathrm{Precision}} \tag{4}$$
where TP is true positive, FP is false positive, FN is false negative, and TN is true negative. The area under the ROC curve (AUC) was also calculated.
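For illustration, Equations (1)–(4) and the one-vs-rest AUC for a single implant class can be computed from test-set predictions as in the sketch below; the function names are ours, and scikit-learn is assumed for the confusion matrix and AUC.

```python
# Sketch of Equations (1)-(4) and per-class AUC in a one-vs-rest fashion.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def per_class_metrics(y_true, y_pred, positive_class):
    # Collapse the multi-class problem to "this implant system vs. the rest".
    tn, fp, fn, tp = confusion_matrix(y_true == positive_class,
                                      y_pred == positive_class).ravel()
    accuracy = (tp + tn) / (tp + fp + fn + tn)            # Equation (1)
    precision = tp / (tp + fp)                            # Equation (2)
    recall = tp / (tp + fn)                               # Equation (3)
    f1 = 2 * (recall * precision) / (recall + precision)  # Equation (4)
    return accuracy, precision, recall, f1

def per_class_auc(y_true, y_score, positive_class):
    # AUC from the softmax column of the positive class.
    return roc_auc_score(y_true == positive_class, y_score[:, positive_class])
```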

2.3. Ethics Statement

This study was approved by the institutional review board (IRB) of Kagawa Prefectural Central Hospital (Approval No. 849). The IRB waived the need for individual informed consent. Thus, written/verbal informed consent was not obtained from any participant because this study featured a non-interventional retrospective design, and all data were analyzed anonymously.

2.4. Data Preprocessing

Anonymized dental implant radiographic image datasets, acquired between January 2005 and December 2019, were obtained from the picture archiving and communication system of Kagawa Prefectural Central Hospital (HOPE Dr ABLE-GX, FUJITSU Co., Tokyo, Japan) and were classified and labeled based on electronic medical records and the dental implant usage ledger of our department. Digital panoramic dental radiographs, collected using an AZ3000CMR unit (ASAHIROENTGEN IND. Co., Ltd., Kyoto, Japan), were exported as portable network graphics (PNG) images; each panoramic radiograph measured 2964 × 1464 pixels. From a collection of 6513 selected digital panoramic dental radiographs, a dataset of 8859 image segments, each manually cropped around a single dental implant, was synthesized, with implant images cropped from each radiograph as needed. The 11 systems mainly used at Kagawa Prefectural Central Hospital were selected as the dental implants targeted in this study; the types of systems and corresponding numbers of images are shown in Table 1. The following 11 dental implant systems were included:
  • Full OSSEOTITE 4.0: Full OSSEOTITE Tapered Certain (Zimmer Biomet, Florida, USA); diameter of 4 mm; lengths of 8.5, 10, 11, and 11.5 mm.
  • Astra EV 4.2: Astra Tech Implant System OsseoSpeed EV (Dentsply IH AB, Molndal, Sweden); diameter of 4.2 mm; lengths of 9 and 11 mm.
  • Astra TX 4.0: Astra Tech Implant System OsseoSpeed TX (Dentsply IH AB, Molndal, Sweden); diameter of 4 mm; lengths of 8, 9, and 11 mm.
  • Astra TX 4.5: Astra Tech Implant System OsseoSpeed TX (Dentsply IH AB, Molndal, Sweden); diameter of 4.5 mm; lengths of 9 and 11 mm.
  • Astra MicroThread 4.0: Astra Tech Implant System MicroThread (Dentsply IH AB, Molndal, Sweden); diameter of 4 mm; lengths of 8, 9, and 11 mm.
  • Astra MicroThread 4.5: Astra Tech Implant System MicroThread (Dentsply IH AB, Molndal, Sweden); diameter of 4.5 mm; lengths of 9 and 11 mm.
  • Brånemark Mk III 4.0: Brånemark System Mk III TiUnite (Nobelbiocare, Göteborg, Sweden); diameter of 4 mm; lengths of 8.5, 10, and 11.5 mm.
  • FINESIA 4.2: FINESIA BL HA TP (KYOCERA Co., Kyoto, Japan); diameter of 4.2 mm; lengths of 8 and 10 mm.
  • Replace Select Tapered 4.3: Replace Select Tapered (Nobelbiocare, Göteborg, Sweden); diameter of 4.3 mm; lengths of 8, 10, and 11.5 mm.
  • Nobel Replace CC 4.3: NobelReplace Conical Connection (Nobelbiocare, Göteborg, Sweden); diameter of 4.3 mm; lengths of 8, 10, and 11.5 mm.
  • Straumann Tissue 4.1: Standard Plus Tissue Level implants (Straumann Group, Basel, Switzerland); diameter of 4.1 mm; lengths of 8 and 10 mm.
These dental implant data included implant fixtures, healing abutments, provisional settings, and final prostheses. As preparation before analysis, we used Photoshop Element (Adobe Systems, Inc., San Jose, CA, USA) to manipulate the images so that all dental implant fixtures would fit (see Figure 1 and Figure 2).
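As a hedged sketch of this preprocessing step, the snippet below assumes the cropped PNG segments are stored in one sub-directory per implant system and resizes every crop to the 224 × 224 input of VGG-style networks; the directory layout and names are illustrative, not the hospital's actual storage.

```python
# Illustrative loading of cropped implant segments; paths are assumptions.
from pathlib import Path
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def load_dataset(data_dir=Path("cropped_implants"), target_size=(224, 224)):
    images, labels = [], []
    # One sub-directory per implant system, e.g. cropped_implants/AstraTX40/
    class_names = sorted(p.name for p in data_dir.iterdir() if p.is_dir())
    for label, name in enumerate(class_names):
        for png in sorted((data_dir / name).glob("*.png")):
            img = load_img(png, target_size=target_size)  # resample the crop
            images.append(img_to_array(img) / 255.0)      # scale to [0, 1]
            labels.append(label)
    return np.stack(images), np.array(labels), class_names
```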

2.5. Convolutional Neural Network

We used three CNN structures: a basic CNN with three convolutional layers, VGG16, and VGG19. Transfer learning and fine tuning were each performed for VGG16 and VGG19. VGG16, introduced by the Visual Geometry Group at Oxford University, has 16 weighted layers: 13 convolutional layers and 3 fully connected layers [21]. This network was trained on over 1 million images in 1000 classes with more than 370,000 iterations to calibrate 138 million weight parameters. Notably, the VGG networks placed first in localization and second in classification at the Large Scale Visual Recognition Challenge in 2014, a global image-recognition competition. VGG19 has 19 layers with weights; 16 of the 19 layers are convolutional and are divided into five blocks by max pooling layers [21]. The learning rate was set to 0.001 for the basic CNN and the transfer-learning VGG16/VGG19 models and to 0.0001 for the fine-tuned VGG16/VGG19 models.
In total, we employed five CNN study groups as follows (Figure 3):
  • Basic CNN model with three convolutional layers (basic CNN; see the sketch after this list)
  • Transfer-learning VGG16 model with pre-trained weights (VGG16 transfer)
  • Transfer-learning and fine-tuning VGG16 model with pre-trained weights (VGG16 fine tuning)
  • Transfer-learning VGG19 model with pre-trained weights (VGG19 transfer)
  • Transfer learning and fine-tuning VGG19 model with pre-trained weights (VGG19 fine tuning)
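The basic CNN baseline could look like the following sketch, consistent with the three-convolutional-layer description above; the filter counts and dense width are our assumptions, as the paper does not specify them. The four VGG groups are built as in the Introduction sketch, with fine_tune toggled per group.

```python
# Assumed shape of the three-convolutional-layer baseline ("basic CNN").
from tensorflow.keras import layers, models

def build_basic_cnn(num_classes=11, input_shape=(224, 224, 3)):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # conv layer 1 (assumed 32 filters)
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # conv layer 2
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),  # conv layer 3
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```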
The datasets were split at the patient image level into 75% training and 25% testing for the different stages of learning performed in this study. The optimization algorithm was Adam for the basic CNN and momentum SGD for the four VGG models. The training dataset was shuffled and divided into mini-batches of 128 images for each epoch, and the maximum number of iterations (epochs) was set to 700 based on the behavior of the validation loss. To evaluate the performance of the current method, fourfold cross-validation was used; this cross-validation also guards against overfitting and supports generalization. This process was repeated for each architecture (i.e., basic CNN, VGG16/VGG19 transfer, and VGG16/VGG19 fine tuning). All models were trained and evaluated on a 64-bit Ubuntu 16.04.5 LTS operating system with 31.4 GB of memory and an NVIDIA GeForce GTX TITAN X graphics processing unit. Deep-learning models were built, trained, and used for prediction with the Keras library (https://keras.io) and the TensorFlow [22] back-end engine.
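A minimal sketch of this training protocol, assuming in-memory arrays X and y from the preprocessing step, might look as follows; the momentum value (0.9) and the absence of early stopping are our assumptions.

```python
# Sketch of the 75/25 split plus fourfold cross-validation described above.
from sklearn.model_selection import StratifiedKFold, train_test_split
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.utils import to_categorical

def cross_validate(build_fn, X, y, num_classes=11, lr=0.001, use_sgd=False):
    accuracies = []
    for train_idx, val_idx in StratifiedKFold(n_splits=4, shuffle=True).split(X, y):
        model = build_fn(num_classes)
        opt = SGD(learning_rate=lr, momentum=0.9) if use_sgd else Adam(learning_rate=lr)
        model.compile(optimizer=opt, loss="categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[train_idx], to_categorical(y[train_idx], num_classes),
                  batch_size=128, epochs=700,  # capped at 700 epochs, as in the text
                  validation_data=(X[val_idx],
                                   to_categorical(y[val_idx], num_classes)))
        accuracies.append(model.evaluate(
            X[val_idx], to_categorical(y[val_idx], num_classes))[1])
    return accuracies

# Usage: hold out 25% for testing, then cross-validate on the remainder, e.g.
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, stratify=y)
# cross_validate(build_basic_cnn, X_train, y_train)               # Adam, lr=0.001
# cross_validate(lambda n: build_vgg16_classifier(n, fine_tune=True),
#                X_train, y_train, lr=0.0001, use_sgd=True)       # momentum SGD
```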

2.6. Model Visualization

CNN model visualization helps to clarify the most relevant features used for classification. To identify potentially correct classifications based on incorrect features, and to gain some intuition into the classification process, we identified the image pixels most relevant for classification using gradient-weighted class activation mapping (Grad-CAM) [23]. The resulting visualizations are heatmaps of the gradients, with "hotter" colors representing the regions of greater importance for classification. In this study, Grad-CAM heatmaps were computed from the final convolutional layer.
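A compact sketch of Grad-CAM over the final convolutional layer is given below; "block5_conv3" is the name of VGG16's last convolutional layer, and the corresponding layer name would be substituted for the other models. This is our reading of the standard algorithm [23], not the authors' exact implementation.

```python
# Grad-CAM heatmap from the final convolutional layer of a Keras model.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, class_index, conv_layer_name="block5_conv3"):
    # Model that exposes both the feature maps and the class scores.
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])  # add a batch axis
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)     # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pool gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                         # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # [0, 1] heatmap to overlay
```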

3. Results

3.1. Classification Performance

The CNN models used in this study were trained by minimizing the cross-entropy loss on the selected training image dataset. The image classification performance of each of the five CNN models tested in this study is shown in Table 2.
The finely tuned VGG16 model with pretrained weights achieved the best performance on all metrics, including recall, precision, accuracy, and F-measure. The next best performer was the finely tuned VGG19 model; the difference between the two was small, with the finely tuned VGG16 slightly ahead. These were followed by the VGG16 with transfer learning, the VGG19 with transfer learning, and the basic CNN. The F1 score per CNN model for each dental implant classification is shown in Table 3.
The finely tuned VGG16 performed best in 9 of the 11 dental implant categories, and the finely tuned VGG19 performed best in the remaining two. Straumann Tissue 4.1 implants were classified well by all models, whereas Astra TX 4.5 and Astra MicroThread 4.5 implants were classified slightly less accurately overall than the others.
Table 4 shows the dental implant classification performance of all tested models in terms of AUC. The fine-tuned VGG16 and VGG19 showed high AUCs, and the basic CNN had the lowest AUC for all dental implants. The ROC curves are shown in Figure S1.

3.2. Visualization of Model Classification

Figure 4 shows images of the 11 types of dental implants classified by each CNN model, visualized using Grad-CAM. The finely tuned VGG16 and VGG19, as well as the transfer-learning VGG16 and VGG19, highlighted identification areas that could be used to distinguish similar images, whereas the basic CNN indicated only the outline of the implant as its identification area. Our results show that the discriminative regions covered not only the implant fixture itself but also its entire circumference. In particular, we observed that both finely tuned CNNs discriminated using the whole fixture rather than only part of it. Visualization images for each dental implant system are shown in Figure S2.

4. Discussion

We demonstrated that the five CNNs surveyed were able to classify 11 dental implant systems extracted from panoramic X-ray images with high accuracy, despite mixed conditions across implant-treatment stages. Grad-CAM visualizations for each network also clarified which characteristics of each implant fixture the convolutional layers relied on. These results represent an important step toward classifying dental implant brands from panoramic radiographs via deep learning.
By applying appropriate transfer learning and fine tuning to pre-trained deep CNN architectures, we were able to perform image classification with high accuracy using relatively small image datasets. The classification performance of the basic CNN with only three convolutional layers was the worst. We thus infer that CNN models with few convolutional layers have limited learning capacity for a roughly 10-class image classification task on a small dataset. The results of this study also showed that fine tuning some convolutional blocks in the deep CNN layers can improve image classification performance. Generally, deep CNN models pretrained on large natural-image datasets are good general-purpose image classifiers but are not immediately effective for specialized tasks, such as medical imaging. Our findings show that when a particular convolutional block of a deep CNN model is finely tuned, the network becomes more specialized for a particular classification task [24]. This is an important finding that demonstrates the usefulness of fine tuning in medical imaging.
A disadvantage of CNNs is that they are black boxes that cannot by themselves explain what they have learned or the grounds for their decisions [25]. Therefore, feature visualization using Grad-CAM was applied. This process helps humans understand which features or areas of an image are used for classification decisions [26]. Our visualization results were also interesting. Eleven types of implants, in various states (e.g., implant fixtures alone, fixtures with abutments, and fixtures with superstructures), were the subject of this classification. Dentists typically identify fixtures from the conditions of the implants and identify brands from their morphologies, and, as we have shown, CNNs can perform the same feature extractions. However, we found cases in which some implants were classified by treating the entire background as a feature (see the Astra MicroThread 4.0 results). Notably, despite the different feature extraction processes, there was no difference in classification accuracy; features around the implant can thus help characterize the overall morphology of the fixture.
The main advantage of panoramic radiography is the ability to detect tooth- and jaw-related objects simultaneously [27]. Despite the plethora of images available, few studies [19,28,29,30,31] have applied CNNs to their classification and diagnosis. Studies that used panoramic radiographs often involved diseases related to the jawbone [28,29,31] and the maxillary sinus [19]. Because panoramic radiographs exhibit different distortions depending on the region photographed, periapical radiographs have generally been used for diagnosis, and CNNs have been applied to tooth-related classifications and diagnoses on such images [32,33]. The same has been true for the classification of dental implants using CNNs [13]. Our study found that CNNs using panoramic images achieve results comparable to the diagnostic accuracy of CNN-based dental implant classification using periapical radiographs [13]. These results will contribute to the accuracy of CNN-based classification diagnoses by increasing, via preprocessing, the number of usable images.
Compatibility differs between dental implant systems [34]. Some systems are not compatible with other brands, whereas others are broadly compatible. As mentioned, these factors directly affect the maintenance of implant prostheses. Patient implant maintenance must continue for as long as the device remains in the patient's oral cavity. The implant systems examined in this study are those in use as of 2020. It is important to accumulate current data and to apply the learned network to the next generation of devices. Even when information on discontinued implant systems is difficult to obtain, dentists need to be prepared to respond to such systems by drawing on the implant data accumulated thus far.
We showed that deep neural networks are suitable for classifying the included dental implants, and there is potential to apply them to more implant systems [10] and to setups with different image-acquisition devices. The major dental implant systems and radiography devices differ across regions of the world. Therefore, it is first necessary to create an accurate database for each region and to build accurate deep neural network classifiers on those databases. We hope that cross-sectional studies by other institutions around the world will help build a stronger CNN-based dental implant classification method.
This study had three limitations. The first was the narrow selection of CNN models: only the commonly used VGG16 and VGG19 were employed. Deep-learning architectures with deeper or wider layers or with modified stratification methods (e.g., ResNet and CapsNet) are being continuously developed [35], so these and other CNN models should be studied in the future. Second, the X-ray images used for classification in this study were all taken with the same panoramic X-ray unit; different equipment produces different image quality and magnification. In future work, the results should therefore be validated in a large cross-sectional study involving various panoramic radiographs and image qualities. Third, we evaluated image segments manually cropped from panoramic radiographs. Creating a network that can detect implants in uncropped panoramic images, or that can detect multiple implants simultaneously, would be a valuable direction for future research.

5. Conclusions

In our study, we demonstrated that the deep CNNs surveyed were able to classify 11 dental implant systems extracted from digital panoramic X-ray images with high accuracy, despite mixed conditions across implant-treatment stages. In particular, the finely tuned VGG16 and VGG19 CNNs showed excellent classification performance. Grad-CAM visualizations for each network also clarified which characteristics of each implant fixture the convolutional layers relied on. These results will play an important role in determining dental implant brands from panoramic radiographs using deep learning.

Supplementary Materials

The following are available online at https://www.mdpi.com/2218-273X/10/7/984/s1, Figure S1: Mean ROC curves of each CNN model for the 11 types of dental implant classification, Figure S2: Visualization of dental implant classification.

Author Contributions

Conceptualization, S.S. and T.H.; methodology, S.S. and T.H.; software, T.H. and K.Y. (Kazumasa Yoshii); validation, S.S. and K.Y. (Katsusuke Yamashita); formal analysis, S.S. and K.Y. (Kazumasa Yoshii); investigation, S.S. and K.Y. (Katsusuke Yamashita); data curation, S.S. and K.Y. (Kazumasa Yoshii); writing—original draft preparation, S.S.; writing—review and editing, T.H., K.N., H.N., N.Y., and Y.F.; visualization, K.Y. (Kazumasa Yoshii); supervision, T.H.; project administration, Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JSPS KAKENHI, grant number JP19K19158.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schimmel, M.; Srinivasan, M.; McKenna, G.; Müller, F. Effect of advanced age and/or systemic medical conditions on dental implant survival: A systematic review and meta-analysis. Clin. Oral Implants Res. 2018, 29, 311–330.
  2. Mastrangelo, F.; Quaresima, R.; Abundo, R.; Spagnuolo, G.; Marenzi, G. Esthetic and physical changes of innovative titanium surface properties obtained with laser technology. Materials (Basel) 2020, 13, 66.
  3. Saghiri, M.A.; Asatourian, A.; Kazerani, H.; Gutmann, J.L.; Morgano, S.M. Effect of thermocycling on the surface texture and release of titanium particles from titanium alloy (Ti6Al4V) plates and dental implants: An in vitro study. J. Prosthet. Dent. 2020.
  4. Guarnieri, R.; Di Nardo, D.; Di Giorgio, G.; Miccoli, G.; Testarelli, L. Clinical and radiographics results at 3 years of RCT with split-mouth design of submerged vs. nonsubmerged single laser-microgrooved implants in posterior areas. Int. J. Implant Dent. 2019, 5, 44.
  5. Makary, C.; Menhall, A.; Zammarie, C.; Lombardi, T.; Lee, S.Y.; Stacchi, C.; Park, K.B. Primary stability optimization by using fixtures with different thread depth according to bone density: A clinical prospective study on early loaded implants. Materials (Basel) 2019, 12, 2398.
  6. Farronato, D.; Pasini, P.M.; Manfredini, M.; Scognamiglio, C.; Orsina, A.A.; Farronato, M. Influence of the implant-abutment connection on the ratio between height and thickness of tissues at the buccal zenith: A randomized controlled trial on 188 implants placed in 104 patients. BMC Oral Health 2020, 20, 1–11.
  7. Sukegawa, S.; Kawai, H.; Nakano, K.; Kanno, T.; Takabatake, K.; Nagatsuka, H.; Furuki, Y. Feasible advantage of bioactive/bioresorbable devices made of forged composites of hydroxyapatite particles and poly-L-lactide in alveolar bone augmentation: A preliminary study. Int. J. Med. Sci. 2019, 16, 311–317.
  8. Sukegawa, S.; Kawai, H.; Nakano, K.; Takabatake, K.; Kanno, T.; Nagatsuka, H.; Furuki, Y. Advantage of alveolar ridge augmentation with bioactive/bioresorbable screws made of composites of unsintered hydroxyapatite and poly-L-lactide. Materials (Basel) 2019, 12, 3681.
  9. Meloni, S.M.; Lumbau, A.; Spano, G.; Baldoni, E.; Pisano, M.; Tullio, A.; Tallarico, M. Sinus augmentation grafting with anorganic bovine bone versus 50% autologous bone mixed with 50% anorganic bovine bone: 5 years after loading results from a randomised controlled trial. Int. J. Oral Implantol. 2019, 12, 483–492.
  10. Jokstad, A.; Braegger, U.; Brunski, J.B.; Carr, A.B.; Naert, I.; Wennerberg, A. Quality of dental implants. Int. Dent. J. 2003, 53, 409–443.
  11. Lee, K.-Y.; Shin, K.S.; Jung, J.-H.; Cho, H.-W.; Kwon, K.-H.; Kim, Y.-L. Clinical study on screw loosening in dental implant prostheses: A 6-year retrospective study. J. Korean Assoc. Oral Maxillofac. Surg. 2020, 46, 133–142.
  12. Molander, B. Panoramic radiography in dental diagnostics. Swed. Dent. J. Suppl. 1996, 119, 1–26.
  13. Kim, J.-E.; Nam, N.-E.; Shim, J.-S.; Jung, Y.-H.; Cho, B.-H.; Hwang, J.J. Transfer learning via deep neural networks for implant fixture system classification using periapical radiographs. J. Clin. Med. 2020, 9, 1117.
  14. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 530–531.
  15. Suzuki, K. Overview of deep learning in medical imaging. Radiol. Phys. Technol. 2017, 10, 257–273.
  16. Coudray, N.; Ocampo, P.S.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.L.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567.
  17. Fischer, A.M.; Yacoub, B.; Savage, R.H.; Martinez, J.D.; Wichmann, J.L.; Sahbaee, P.; Grbic, S.; Varga-Szemes, A.; Schoepf, U.J. Machine learning/deep neuronal network: Routine application in chest computed tomography and workflow considerations. J. Thorac. Imaging 2020, 35, S21–S27.
  18. Onishi, Y.; Teramoto, A.; Tsujimoto, M.; Tsukamoto, T.; Saito, K.; Toyama, H.; Imaizumi, K.; Fujita, H. Investigation of pulmonary nodule classification using multi-scale residual network enhanced with 3DGAN-synthesized volumes. Radiol. Phys. Technol. 2020.
  19. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019, 35, 301–307.
  20. Lee, K.-S.; Jung, S.-K.; Ryu, J.-J.; Shin, S.-W.; Choi, J. Evaluation of transfer learning with deep convolutional neural networks for screening osteoporosis in dental panoramic radiographs. J. Clin. Med. 2020, 9, 392.
  21. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  22. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467.
  23. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
  24. Nogueira, K.; Penatti, O.A.B.; dos Santos, J.A. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognit. 2017, 61, 539–556.
  25. Francolini, G.; Desideri, I.; Stocchi, G.; Salvestrini, V.; Ciccone, L.P.; Garlatti, P.; Loi, M.; Livi, L. Artificial intelligence in radiotherapy: State of the art and future directions. Med. Oncol. 2020, 37, 50.
  26. Schwendicke, F.; Elhennawy, K.; Paris, S.; Friebertshäuser, P.; Krois, J. Deep learning for caries lesion detection in near-infrared light transillumination images: A pilot study. J. Dent. 2020, 92, 103260.
  27. Ribeiro, A.; Keat, R.; Khalid, S.; Ariyaratnam, S.; Makwana, M.; do Pranto, M.; Albuquerque, R.; Monteiro, L. Prevalence of calcifications in soft tissues visible on a dental pantomogram: A retrospective analysis. J. Stomatol. Oral Maxillofac. Surg. 2018, 119, 369–374.
  28. Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Muramatsu, C.; Fukuda, M.; Kise, Y.; Nozawa, M.; Kuwada, C.; Fujita, H.; Katsumata, A.; et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 128, 424–430.
  29. Lee, J.-S.; Adhikari, S.; Liu, L.; Jeong, H.-G.; Kim, H.; Yoon, S.-J. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: A preliminary study. Dentomaxillofac. Radiol. 2019, 48, 20170344.
  30. Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep learning for the radiographic detection of periodontal bone loss. Sci. Rep. 2019, 9, 1–6.
  31. Chu, P.; Bo, C.; Liang, X.; Yang, J.; Megalooikonomou, V.; Yang, F.; Huang, B.; Li, X.; Ling, H. Using octuplet Siamese network for osteoporosis analysis on dental panoramic radiographs. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 2579–2582.
  32. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J. Periodontal Implant Sci. 2018, 48, 114–123.
  33. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111.
  34. Karl, M.; Irastorza-Landa, A. In vitro characterization of original and nonoriginal implant abutments. Int. J. Oral Maxillofac. Implants 2018, 33, 1229–1239.
  35. Wang, A.; Wang, M.; Wu, H.; Jiang, K.; Iwahori, Y. A novel LiDAR data classification algorithm combined CapsNet with ResNet. Sensors (Switzerland) 2020, 20, 1151.
Figure 1. Cropping of dental implant imagery to include single fixtures.
Figure 2. Eleven types of dental implant systems cropped from panoramic radiographs. The images of each system include implant fixtures, dental implants with healing abutments, dental implants with provisional settings, and implants with final prostheses.
Figure 3. Schematic of the five convolutional neural network (CNN) architectures.
Figure 4. Example of the class activation maps of the five CNN networks for the eleven dental implant systems.
Table 1. Types of dental implant systems and corresponding number of images.

| Dental Implant System | Full OSSEOTITE 4.0 | Astra EV 4.2 | Astra TX 4.0 | Astra MicroThread 4.0 | Astra MicroThread 4.5 | Astra TX 4.5 |
|---|---|---|---|---|---|---|
| Company | Biomet | Dentsply | Dentsply | Dentsply | Dentsply | Dentsply |
| Diameter (mm) | 4.0 | 4.2 | 4.0 | 4.0 | 4.5 | 4.5 |
| Length (mm) | 8.5, 10.0, 11.0, 11.5 | 8.0, 9.0, 11.0 | 8.0, 9.0, 11.0 | 8.0, 9.0, 11.0 | 9.0, 11.0 | 9.0, 11.0 |
| Number of images | 427 | 425 | 2521 | 1088 | 698 | 387 |
| Implant fixture | 278 | 201 | 1416 | 512 | 332 | 226 |
| Implants with healing abutment | 25 | 152 | 506 | 156 | 80 | 94 |
| Prostheses | 124 | 72 | 599 | 420 | 286 | 67 |

| Dental Implant System | Brånemark Mk III 4.0 | FINESIA 4.2 | Replace Select Tapered 4.3 | Nobel CC 4.3 | Straumann Tissue 4.1 |
|---|---|---|---|---|---|
| Company | Nobelbiocare | KYOCERA | Nobelbiocare | Nobelbiocare | Straumann |
| Diameter (mm) | 4.0 | 4.2 | 4.3 | 4.3 | 4.1 |
| Length (mm) | 8.5, 10.0, 11.5 | 8.0, 10.0 | 8.0, 10.0, 11.5 | 8.0, 10.0, 11.5 | 8.0, 10.0 |
| Number of images | 423 | 233 | 486 | 1681 | 490 |
| Implant fixture | 255 | 105 | 202 | 1073 | 199 |
| Implants with healing abutment | 146 | 101 | 145 | 155 | 211 |
| Prostheses | 22 | 27 | 139 | 453 | 80 |
Table 2. Dental implant classification accuracy of CNN models.

| Model | Recall | Precision | Accuracy | F-measure |
|---|---|---|---|---|
| Basic CNN | 0.802 | 0.842 | 0.860 | 0.819 |
| VGG16-transfer | 0.864 | 0.888 | 0.899 | 0.874 |
| VGG16-fine tuning | 0.907 | 0.928 | 0.935 | 0.916 |
| VGG19-transfer | 0.840 | 0.873 | 0.880 | 0.853 |
| VGG19-fine tuning | 0.894 | 0.913 | 0.927 | 0.902 |
Table 3. Dental implant classification performance by F1 score.

| Model | Full OSSEOTITE 4.0 | Astra EV 4.2 | Astra TX 4.5 | Astra MicroThread 4.0 | Astra MicroThread 4.5 | Astra TX 4.0 |
|---|---|---|---|---|---|---|
| Basic CNN | 0.849 | 0.701 | 0.658 | 0.778 | 0.746 | 0.930 |
| VGG16-transfer | 0.899 | 0.799 | 0.739 | 0.879 | 0.815 | 0.938 |
| VGG16-fine tuning | 0.955 | 0.860 | 0.770 | 0.928 | 0.866 | 0.969 |
| VGG19-transfer | 0.874 | 0.765 | 0.705 | 0.837 | 0.819 | 0.918 |
| VGG19-fine tuning | 0.953 | 0.831 | 0.740 | 0.917 | 0.890 | 0.961 |

| Model | Brånemark Mk III 4.0 | FINESIA 4.2 | Replace Select Tapered 4.3 | Nobel CC 4.3 | Straumann Tissue 4.1 |
|---|---|---|---|---|---|
| Basic CNN | 0.871 | 0.831 | 0.805 | 0.933 | 0.905 |
| VGG16-transfer | 0.910 | 0.931 | 0.801 | 0.944 | 0.962 |
| VGG16-fine tuning | 0.935 | 0.966 | 0.876 | 0.969 | 0.986 |
| VGG19-transfer | 0.879 | 0.898 | 0.797 | 0.921 | 0.970 |
| VGG19-fine tuning | 0.940 | 0.915 | 0.836 | 0.961 | 0.983 |
Table 4. Dental implant classification performance by the area under the receiver operating characteristic curve (AUC).

| Model | Full OSSEOTITE 4.0 | Astra EV 4.2 | Astra TX 4.5 | Astra MicroThread 4.0 | Astra MicroThread 4.5 | Astra TX 4.0 |
|---|---|---|---|---|---|---|
| Basic CNN | 0.986 | 0.959 | 0.958 | 0.978 | 0.969 | 0.986 |
| VGG16-transfer | 0.999 | 0.991 | 0.987 | 0.996 | 0.993 | 0.998 |
| VGG16-fine tuning | 0.997 | 0.979 | 0.981 | 0.989 | 0.987 | 0.994 |
| VGG19-transfer | 0.999 | 0.992 | 0.987 | 0.995 | 0.992 | 0.998 |
| VGG19-fine tuning | 0.993 | 0.975 | 0.980 | 0.987 | 0.984 | 0.991 |

| Model | Brånemark Mk III 4.0 | FINESIA 4.2 | Replace Select Tapered 4.3 | Nobel CC 4.3 | Straumann Tissue 4.1 |
|---|---|---|---|---|---|
| Basic CNN | 0.988 | 0.994 | 0.981 | 0.993 | 0.993 |
| VGG16-transfer | 0.998 | 0.999 | 0.995 | 0.998 | 1.000 |
| VGG16-fine tuning | 0.996 | 0.999 | 0.984 | 0.996 | 0.999 |
| VGG19-transfer | 0.997 | 1.000 | 0.990 | 0.998 | 1.000 |
| VGG19-fine tuning | 0.997 | 0.997 | 0.981 | 0.993 | 0.999 |
