1 Introduction

At the end of December 2019, a virus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was discovered (Chan et al. 2020; Guan et al. 2020; Deng 2020). The disease caused by the virus is called COVID-19, and its symptoms are shown in Table 1. In the following months, the virus spread rapidly across the country and evolved into a serious national public health crisis. On January 30, 2020, the World Health Organization (WHO) declared the global coronavirus outbreak a public health emergency of international concern (McKibbin and Fernando 2020; Timeline of WHO’s response to COVID-19. 2020). To date (January 15, 2021), 191 countries and regions have reported cases of the novel coronavirus, with a total of 93,129,104 confirmed cases and 1,994,440 deaths. The numbers of infections and deaths in China and the top 10 infected countries are shown in Fig. 1 (Coronavirus Resource Center 2021).

Table 1 Symptoms related to the novel coronavirus (COVID-19) (Huang et al. 2020a)
Fig. 1 Number of infections and deaths in the top 10 infected countries and China (January 15, 2021)

As can be seen from Fig. 1, the virus spreads widely and has infected a large number of people. At the same time, as shown in Fig. 2, the virus has many potential natural hosts, intermediate hosts, and final hosts (Tomar and Gupta 2020). Currently, many scientists and researchers are exploring various technologies to suppress the coronavirus. Among these, artificial intelligence has played an important role in the fight against the epidemic because it offers several advantages (Park et al. 2020; Vaishya et al. 2020; Bansal et al. 2020). First, AI methods are built on large amounts of data, and the global outbreak of the coronavirus has generated exactly such data. Second, AI is highly efficient and its accuracy is easy to verify, which saves considerable time in epidemic prevention. Finally, AI has a wide range of applications, such as medical image processing, disease tracking, outcome prediction, computational biology, and medicine (Kumar et al. 2020).

Fig. 2 How COVID-19 is spread (Tomar and Gupta 2020)

According to McCall (2020), many scholars agree on the role of AI in epidemic prevention and control. Rasheed et al. (2020) surveyed many AI techniques for suppressing COVID-19 to help virologists fight the disease. Albahri et al. (2020) discussed the application methods and challenges of AI in COVID-19 medical imaging from the perspective of evaluation and benchmarking, emphasizing the importance of classification standards. Jalaber et al. (2020) analyzed symptoms based on CT images of the lungs and explained the application of AI in medical imaging. Ryan et al. (2020) conducted a retrospective study on mortality prediction models for COVID-19 and other pneumonia and concluded that ML algorithms are effective tools that can predict patient mortality 72 h in advance. Tayarani-N (2020) summarized and classified in detail the AI methods used to suppress COVID-19.

There are thus many review articles on AI and COVID-19, most of which are single-technology reviews or summaries of multiple methods. In addition, our survey shows that AI is still mainly used to address problems directly related to COVID-19, while many peripheral issues, such as healthcare conditions and healthcare systems, are neglected (Gironi et al. 2021; Bragazzi et al. 2020a; Damiani et al. 2021a, 2021b). Damiani et al. (2021c) studied mask use and facial dermatitis during COVID-19 to promote awareness of mask safety among the general population. Cristaudo et al. (2020) noted a higher incidence of irritant dermatitis during the COVID-19 pandemic due to the use of soap and water or alcohol-based sanitizers. These problems cannot be ignored, and how to achieve continuous innovation and improvement of healthcare conditions and healthcare systems through AI is also worth considering.

Based on the above analysis, we believe it is necessary to systematically review the role of AI in epidemic prevention and control from the perspective of overall application. Given the diversity of AI technologies (especially combinations of techniques), we cannot list every method. More important are scholars' research logic and research directions, which not only allow readers to grasp the latest research fields and methods but also make it easier to discover existing problems and new research directions from a macro perspective. Accordingly, the motivation and contributions of this article are as follows:

1. A more comprehensive review of the application of artificial intelligence technology in the suppression of COVID-19;

2. An account, grounded in comprehensive literature research and practical perspectives, of the advantages and challenges of artificial intelligence technology in epidemic prevention and control;

3. A discussion of the development direction of artificial intelligence in the fight against COVID-19 in light of other existing and emerging technologies.

The rest of the article is arranged as follows: the second part gives a general overview of the paper. The third part reviews and summarizes the application of AI technology in restraining COVID-19 across different fields. The fourth part presents the main advantages and challenges of current AI technology based on the literature research, and the fifth part puts forward development suggestions. Finally, the sixth part summarizes the full text.

In this paper, literature retrieval was conducted through six channels: Web of Science, ScienceDirect, IEEE Xplore Digital Library, China National Knowledge Infrastructure, Ei Compendex, and Google Scholar. First, we read the literature retrieved with broad keywords such as COVID-19 and artificial intelligence (AI) and derived the article framework shown in Fig. 3. We then classified and retrieved further articles and wrote the remaining sections.

Fig. 3 Overall framework of the article

2 Article overview

In order to better understand the application of artificial intelligence technology in restraining COVID-19, we give the overall framework of this paper, as shown in Fig. 3:

As shown in Fig. 3, this paper can be divided into three aspects: literature research, literature summary, and development suggestions.

Through literature research, existing studies can be grouped into three parts: prediction, symptom recognition, and development. The main role of prediction is to forecast unknown changes based on existing data. Propagation prediction and survival prediction focus on the spread of the epidemic and the survival rate of patients, which supports timely decisions by governments and hospitals. Drug prediction is a quick and effective way to find drugs that can suppress COVID-19 among existing drugs. Symptom recognition is mainly based on medical images (such as CT images and X-rays), using AI to distinguish COVID-19 from other lung diseases; its main advantage is that it can greatly improve the efficiency of medical workers, and it has been studied extensively. The research content of development is very broad; in this paper, we summarize drug development, vaccine development, and application development. Among these, applying AI to drug development and vaccine development is more difficult because it takes a great deal of time, while application development is easier to achieve.

After classifying the fields of literature research, we summarize the advantages and challenges of existing AI technologies in fighting COVID-19. The biggest advantage of AI lies in its strong processing capacity, which can accelerate diagnosis by medical staff and save medical resources. At the same time, AI produces highly accurate results and good visualizations. However, AI also faces certain challenges. Because there are many lung diseases, its ability to classify and identify diseases is still insufficient, and drug development also remains difficult.

Generally speaking, artificial intelligence technology is developing rapidly, but any single method has limitations. The combined application of AI and other technologies may become an effective way to inhibit COVID-19. In the following sections, we provide a detailed discussion and literature survey. Through this paper, we hope to contribute to epidemic prevention work worldwide.

3 Literature research

3.1 Prediction

According to the literature (Giamberardino and Iacoviello 2020), in the nineteenth century it took about 18 months for an infectious disease to spread throughout the world; in recent years, it can take less than 36 h, which is shorter than the incubation period of most diseases. Moreover, nearly 400 million people travel to another country or region every year, which undoubtedly accelerates the spread of the virus. The global outbreak of COVID-19 is a non-linear, dynamic process, and areas that may be infected face great pressure on both governments and the public (Tobias 2020; Liestol and Andersen 2002). Therefore, the means to predict the spread of the disease are extremely important: early detection and isolating patients from healthy individuals are the best means of disease prevention (Ai et al. 2020). At present, there are three common prediction directions: propagation prediction, drug prediction, and survival prediction.

3.1.1 Spread prediction

At present, the epidemic has not been brought under control in many countries. If technical means can be used to predict the development trend of COVID-19, then developing relevant epidemic prevention measures early will reduce the number of deaths. Fortunately, big data can monitor disease outbreaks in real time, and all kinds of data on COVID-19 are widely available (Bragazzi et al. 2020b). Combined with artificial intelligence, it is therefore possible to predict the spread of COVID-19 to a certain extent.

Many scholars have addressed this issue. Among them, using an LSTM network to predict the spread of the coronavirus is one of the most popular methods, because LSTMs handle longer sequences better than ordinary RNNs. Chimmula et al. (2020) established a long short-term memory (LSTM) network; they innovatively incorporated the recovery rate into the model and made short-term predictions for Canada with an accuracy of 93.4%. Tomar and Gupta (2020) used LSTM and curve-fitting methods to estimate the numbers of recovered cases, daily positive cases, and deaths, and proposed corresponding preventive measures. Zheng (2020) and Ayyoubzadeh et al. (2020) also used LSTMs to predict the spread of the coronavirus. Over time, many other AI methods for predicting the spread of COVID-19 have emerged, as shown in Table 2.
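As a concrete illustration, the following is a minimal sketch of LSTM-based case forecasting in Keras, not any cited author's exact model; the input file, window length, and network size are illustrative assumptions.

```python
# Minimal sketch: one-step-ahead forecasting of daily case counts with an
# LSTM. "daily_cases.csv" (one count per line) is a hypothetical input file.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=7):
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)   # (samples, lookback, 1)

cases = np.loadtxt("daily_cases.csv")
scale = cases.max()
X, y = make_windows(cases / scale)               # normalize to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

next_day = float(model.predict(X[-1:])[0, 0]) * scale   # tomorrow's estimate
print(f"forecast for the next day: {next_day:.0f} cases")
```

In practice, studies such as Chimmula et al. (2020) add further inputs (for example, recovery rates) and evaluate the forecasts on held-out days.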

Table 2 Other models for predicting the spread of COVID-19

3.1.2 Drug prediction

Currently, developing a new drug or vaccine can take a long time. Generally speaking, drug development requires five stages: discovery and development, pre-clinical research, clinical research, FDA review, and FDA post-market safety monitoring and development. Drug reuse, by contrast, requires only four steps: compound identification, compound acquisition, clinical research, and FDA post-market safety monitoring and development (Xue et al. 2018); the development and repurposing process is shown in Fig. 4.

Fig. 4 Drug development and reuse process (Xue et al. 2018)

The drug prediction proposed in this paper means reusing old drugs to explore new therapies, which is faster, more economical, and lower-risk. In many papers, applying AI to predict drugs for treating COVID-19 has become an important method (Alimadadi et al. 2020), and many scholars have researched this area. Ke et al. (2020) established an AI system based on two different data sets to predict drugs that may affect the coronavirus; after several rounds of AI learning and prediction, they identified 80 marketed drugs with potential. Ge et al. (2020) used a deep-learning-based relation extraction method called BERE (Hong et al. 2019) to search large-scale literature for potential drug candidates against SARS-CoV-2. They computed and verified in the laboratory that a poly ADP-ribose polymerase 1 (PARP1) inhibitor, CVL218, currently in a phase I clinical trial, may be repurposed to treat COVID-19. Before we finished writing this paper, scholars had found a variety of drugs that can inhibit COVID-19 using AI; a summary is shown in Table 3.

Table 3 Available drugs predicted by artificial intelligence technology

In Table 3, we only summarize drugs with known names; many other research institutions are also working in this field. In conclusion, using AI to quickly identify useful existing drugs is a principal way to combat COVID-19 and can save a great deal of time.
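To make the literature-mining idea concrete, here is a deliberately crude stand-in for systems like BERE (which is a deep relation-extraction model): it merely counts abstracts in which a drug name co-occurs with the virus, as a first-pass ranking signal. The abstracts and scores are toy data.

```python
# Toy co-mention ranking, a crude stand-in for deep relation extraction.
from collections import Counter

abstracts = [
    "CVL218 shows inhibitory activity against SARS-CoV-2 in vitro.",
    "Remdesivir was evaluated against SARS-CoV-2 replication.",
    "Aspirin reduces fever in influenza patients.",
]
drugs = ["CVL218", "Remdesivir", "Aspirin"]

scores = Counter()
for text in abstracts:
    lowered = text.lower()
    for drug in drugs:
        if drug.lower() in lowered and "sars-cov-2" in lowered:
            scores[drug] += 1

print(scores.most_common())   # drugs most often co-mentioned with the virus
```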

3.1.3 Survival prediction

With the novel coronavirus rapidly spreading around the world, the ability to triage frail patients is critical, but routinely used severity scoring systems often underestimate risk, so a clinical model for predicting mortality is essential. Besides, studies (Hultstrom et al. 2020; Renu and Prasanna 2020; Zhu et al. 2020) have shown that the new coronavirus adversely affects kidney function, the liver, and other organs. Therefore, whether the virus still affects a patient's life after recovery is also of great concern. At present, many scholars have predicted patient survival; in this section, we call this survival prediction.

There are many studies on survival prediction. Xie et al. (2020) developed and internally validated a multivariate logistic regression model to predict the in-hospital mortality of COVID-19-positive patients; nine variables were considered in model development, including age, biomarkers, and comorbidities. This study provides a new predictive model to identify patients with COVID-19. Caramelo et al. (2020) conducted a similar study, proposing a method to determine the risk of each specific patient based on patient characteristics; the results showed that age was the variable representing the highest risk of death from COVID-19 and that men were more likely to die from the disease. Yan et al. (2020a) used a database of blood samples from 404 infected patients in Wuhan, China, to determine key predictive biomarkers of disease severity to support decision-making and logistics planning in the healthcare system. They selected three biomarkers (lactate dehydrogenase (LDH), lymphocytes, and high-sensitivity C-reactive protein (hs-CRP)) in the machine learning process to predict the survival of individual patients with an accuracy of more than 90%.
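The following sketch shows the general shape of such a mortality model; the feature set (age plus the three biomarkers above) and file names are assumptions for illustration, not Xie et al.'s or Yan et al.'s published pipelines.

```python
# Minimal sketch: logistic regression for in-hospital mortality prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical design matrix: columns = [age, LDH, lymphocyte %, hs-CRP].
X = np.loadtxt("patients.csv", delimiter=",")
y = np.loadtxt("outcomes.csv")                 # 1 = died, 0 = survived

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```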

Although research on survival prediction cannot directly combat COVID-19 like drugs, nor can it fundamentally solve COVID-19, it can allow us to see the severity of the virus infection and make corresponding policies early. More importantly, in this process, factors related to COVID-19 infection can be found (Caramelo et al. 2020).

3.1.4 Other prediction directions


Among the many papers we collected, in addition to the above three prediction categories, there are some other prediction studies. Although many specialized issues are beyond the scope of this paper, the breadth of AI applications can also be seen in this work.

1. Protein structure prediction

Proteins are large, complex molecules that are essential for life, and what any given protein can do depends on its unique 3D structure. But determining the 3D shape of a protein from its genetic sequence is a complex task that has challenged scientists for decades. Thanks to the rapid fall in the cost of gene sequencing, data in the field of genomics are now abundant. Senior et al. (2020a) proposed two new methods for protein structure prediction. The first is based on techniques commonly used in structural biology: it repeatedly replaces fragments of a protein structure with new fragments and uses a trained neural network to continually improve the score of the proposed structure. The second uses gradient descent to make small incremental improvements, yielding highly accurate structures; a toy illustration of this second idea is sketched below. Related studies on protein structure prediction include Senior et al. (2020b), Kryshtafovych et al. (2019), and Schaarschmidt et al. (2018).
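The following toy sketch illustrates only the gradient-descent idea; the quadratic "potential" is a stand-in for the learned potential in the original work, and all dimensions are arbitrary.

```python
# Toy gradient-descent refinement over backbone torsion angles. The
# quadratic energy below is a placeholder for a learned potential.
import torch

n_residues = 64
angles = torch.zeros(n_residues, 2, requires_grad=True)   # (phi, psi) pairs
target = torch.randn(n_residues, 2)                        # stand-in optimum

opt = torch.optim.Adam([angles], lr=0.05)
for step in range(500):
    opt.zero_grad()
    energy = ((angles - target) ** 2).sum()   # placeholder potential
    energy.backward()
    opt.step()

print("final energy:", float(((angles - target) ** 2).sum()))
```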

2. Machine learning based on media channels

In today's society, with the continuous development of the Internet, social media can provide fast, effective, and low-cost means to monitor public health at a large scale, saving time and improving efficiency in collecting information on infectious diseases. Choi et al. (2017) collected data from 153 news media outlets in South Korea and used natural language processing to understand the public's response to a national virus outbreak; the results can help alleviate unnecessary public fear and over-reaction to future infectious diseases. Shen et al. (2020) collected and analyzed COVID-19-related posts on Weibo and used machine learning methods for identification and prediction. Public social media data can thus be used effectively to predict infection cases and respond in time.
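A minimal sketch of this kind of text classification follows; the posts and labels are toy data, not the corpora used in the cited studies.

```python
# Toy text classifier: flag posts that may be COVID-related using TF-IDF
# features and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

posts = [
    "fever and dry cough for three days",
    "great football match tonight",
    "lost my sense of smell this week",
    "new phone arrived, unboxing later",
]
labels = [1, 0, 1, 0]   # 1 = possibly COVID-related (toy labels)

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(posts, labels)
print(clf.predict(["persistent cough and mild fever"]))
```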

3.2 Symptom recognition

For the screening of COVID-19, reverse transcription-polymerase chain reaction (RT-PCR) has always been regarded as the primary standard, but equipment shortages and strict requirements on the testing environment limit rapid, accurate screening of suspected cases (Fan et al. 2020). As important supplements to RT-PCR, X-ray and computed tomography (CT) have also proven effective in current diagnosis (Rubin et al. 2020; Shi et al. 2020). To help doctors improve the efficiency and accuracy of image recognition, AI has important applications in this area.

3.2.1 CT recognition

Some scholars (McCall 2020) have described using AI to process large numbers of CT images, which can quickly detect the lesions caused by COVID-19: reading a CT scan manually may take up to 15 min, whereas AI can complete it in 10 s. Research on this topic is now extensive. Based on our literature research, the recognition of CT images is generally inseparable from neural networks, and we can roughly divide it into three research directions: (a) comparative studies of different neural networks; (b) improving a pre-trained neural network; (c) building a neural network.


(a) A comparative study of different neural networks

In this regard, the research of Ardakani et al. (2020) is representative. They collected 510 COVID-19 and 510 non-COVID-19 CT images and resized them to 60 × 60 pixel grayscale images. As shown in Fig. 5, the study used 10 classic pre-trained convolutional neural networks: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception. To evaluate network performance, they selected five indexes: accuracy, sensitivity, specificity, PPV, and NPV.

Fig. 5 Ten pre-trained network architectures used by Ardakani et al. (2020)

Among the recognition results of these 10 networks, ResNet-101 and Xception performed best, with the highest accuracy. Ardakani et al. also had radiologists read the same CT data, but their accuracy was far lower than that of the neural networks. In a subsequent study, Singh et al. (2020) used a similar methodology, but instead of comparing multiple CNN models, they compared CNN, ANN, and ANFIS models; the results showed that the CNN model was better for CT recognition. A minimal sketch of this comparison protocol is given below.
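The sketch below shows the shape of such a comparison, fine-tuning several ImageNet-pretrained backbones on the same data and recording accuracy. The random arrays stand in for a real CT data set, the input size is enlarged to 96 × 96 RGB to satisfy the backbones' minimum input sizes, and a single epoch is used for brevity.

```python
# Sketch of a backbone-comparison protocol (not the authors' exact code).
import numpy as np
import tensorflow as tf

X = np.random.rand(32, 96, 96, 3).astype("float32")   # stand-in CT batch
y = np.random.randint(0, 2, size=32)                   # 1 = COVID, 0 = other

candidates = {
    "ResNet101": tf.keras.applications.ResNet101,
    "Xception": tf.keras.applications.Xception,
    "MobileNetV2": tf.keras.applications.MobileNetV2,
}

results = {}
for name, backbone_fn in candidates.items():
    base = backbone_fn(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(96, 96, 3))
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(X, y, epochs=1, verbose=0)        # real work: many epochs
    results[name] = hist.history["accuracy"][-1]

print(results)   # compare accuracy (plus sensitivity, specificity, etc.)
```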


(b) Improving the pre-trained neural network

At present, the main research direction is to improve a pre-trained neural network and then recognize CT images. This kind of research typically takes an efficient neural network as the main framework and then optimizes its parameters with new methods or combines other techniques with the framework. The study of Pathak et al. (2020) is a good example: they discussed the ResNet-50 network and gave its specific framework. The network extracts latent features of CT images, and transfer learning is then used to adjust the initial parameters of each layer, making the optimized results more accurate; a minimal sketch of this idea follows. We also summarize the main frameworks and improvement methods of other researchers in Table 4.
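A minimal two-phase transfer-learning sketch in this spirit follows; the data are random stand-ins, and the schedule (freeze the backbone, then unfreeze it with a small learning rate) is a common recipe rather than Pathak et al.'s exact procedure.

```python
# Transfer learning with a pretrained ResNet-50: train a new head first,
# then fine-tune the whole backbone gently.
import numpy as np
import tensorflow as tf

X = np.random.rand(16, 224, 224, 3).astype("float32")  # stand-in CT slices
y = np.random.randint(0, 2, size=16)

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      pooling="avg")
base.trainable = False                                  # phase 1: head only
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)

base.trainable = True                                   # phase 2: fine-tune
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)
```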

Table 4 Some existing main frameworks and improvement methods

(c) Building a neural network

Among all the papers we investigated, few build and train a neural network model from scratch to identify symptoms in CT images. In the literature on X-ray recognition, the research of Stephen et al. (2019) follows this approach (note: although this is X-ray image recognition, the research route is very similar to that used for CT recognition with AI).

Stephen et al. built a convolutional neural network model from scratch, extracted features from a given chest X-ray data set, and classified them to determine whether a person is infected with pneumonia. The model framework is shown in Fig. 6, and a simplified sketch follows.

Fig. 6 The neural network model framework proposed by Stephen et al. (2019)
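The sketch below conveys the flavor of such a from-scratch model; the layer sizes are illustrative assumptions, not the published architecture.

```python
# A small from-scratch CNN for binary chest X-ray classification.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(200, 200, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),                    # regularize the model
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pneumonia vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```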

3.2.2 X-ray recognition

The principle of X-ray imaging is partly the same as that of CT: both use X-rays to form images. The difference is that an X-ray projects a three-dimensional object onto a single plane, producing an overlapping image, whereas CT images the object slice by slice to observe the structure of each layer. In comparison, CT scans offer more diagnostic detail than X-rays, but X-rays are relatively cheap and have a wider range of medical applications. CT and X-ray images are shown in Fig. 7.

Fig. 7 CT and X-ray images

Our literature research shows that X-ray recognition technology is very similar to CT image recognition technology, and most of the methods mentioned in the literature are related to deep learning. Even when scholars apply a pre-trained model framework, they make related improvements or compare multiple models. For example, Togacar et al. (2020) used the Social Mimic optimization method to process the feature sets obtained by deep learning models (MobileNetV2, SqueezeNet) and used a support vector machine (SVM) to combine and classify the effective features. Abiyev and Ma'aitah (2018) considered the application of various neural networks such as CNN, BPNN, and CPNN to the monitoring of chest diseases. We therefore do not discuss each model in detail here; instead, we compile a statistical survey of many papers in Table 5.

Table 5 Research on symptom recognition based on X-ray

In short, AI can provide good solutions for both CT image recognition and X-ray recognition. Across the databases we surveyed, many models achieve high-accuracy recognition results. We cannot discuss every method here, but the research methods generally follow the solution process shown in Fig. 8.

Fig. 8 Schematic diagram of the research route

3.3 Development

Although few results have so far been achieved in AI-based drug development and vaccine development for COVID-19, many scholars are working on these problems. In addition, applications based on AI should not be ignored; they can help curb the spread of COVID-19 in many ways, such as patient tracking and data collection.

3.3.1 Drug development

The use of AI for drug development is relatively recent, and progress has not been smooth. In 2014 and 2015, much of the pharmaceutical industry was skeptical about the early progress of deep learning in drug development (Zhavoronkov 2018); it was not until 2017 that many pharmaceutical companies began to cooperate with AI startups and academia to initiate internal R&D projects. At present, the productivity of the pharmaceutical industry is declining, and the rapid development of artificial intelligence (AI) can accelerate and improve pharmaceutical research and development. With the global outbreak of COVID-19, many scientists now propose using AI for drug research and development.

Through related literature research, we found that few COVID-19 drugs have yet been developed through AI, because developing a new drug usually takes several years (Mak and Pichika 2019). Many scholars believe the best strategy is drug repositioning (that is, finding new uses for old drugs) (Ashburn and Thor 2014; Pushpakom et al. 2019; Zhou et al. 2020), which is discussed in detail in Sect. 3.1.2. However, Ho's report (Dean 2020) proposed an AI-based combination therapy, which we believe can serve as a rapid drug development method: in simple terms, a combination of drugs to treat COVID-19. Since the dose combinations span a large search space, AI can quickly optimize the drug combination. At the same time, AI-driven combination therapy design may avoid unsatisfactory clinical treatments and unnecessary patient mortality and morbidity, and may even play a role in preventing drug shortages.
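To illustrate the optimization step, the toy sketch below fits a quadratic response surface to a handful of measured dose pairs and picks the combination the surrogate predicts as most effective; all doses and efficacy values are invented for illustration.

```python
# Toy AI-guided combination dosing: quadratic surrogate + grid search.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

doses = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])  # drugs A, B
efficacy = np.array([0.0, 0.4, 0.3, 0.5, 0.7])   # hypothetical assay readout

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(doses, efficacy)

grid = np.mgrid[0:1:21j, 0:1:21j].reshape(2, -1).T   # candidate dose pairs
best = grid[surrogate.predict(grid).argmax()]
print("predicted best (dose A, dose B):", best)
```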

3.3.2 Vaccine development

Existing evidence indicates that applying artificial intelligence and systems biology to vaccine design and development can transform healthcare, accelerate clinical trials, and reduce the cost and time of drug development (Russo et al. 2020). Vaccine design strategies take into account how protective antibodies work and why specific antigens may or may not elicit a broadly protective response (Anasir and Poh 2019; Kaufmann et al. 2014). However, because different pathogens use different strategies to evade protective immune responses, there is no single "best" solution for designing immunologic drugs for any disease (Fraga et al. 2018; Thakur et al. 2019). Today, artificial intelligence (AI) and systems biology for smart vaccine design are becoming more powerful and can be used to accelerate new vaccine candidates through development, design general vaccine prototypes, and drive and predict the immune system's response. Our literature research found almost no work using AI to complete the development of a COVID-19 vaccine, so we only mention this direction briefly.

3.3.3 Application development

To prevent the continued spread of COVID-19, many scholars are exploring treatments, and related application development is another technical means. AI and deep learning applications can generally serve as auxiliary tools for the treatment and diagnosis of COVID-19 (Liang et al. 2019; Rao and Diamond 2020), speeding up disease detection and providing relevant health data (Neill 2013; Arabasadi et al. 2017; Bastawrous and Armstrong 2013; Paolotti et al. 2014). Rao and Vazquez (2020) put forward a good suggestion: they recommend using mobile-phone-based online surveys to collect data for the preliminary screening and diagnosis of COVID-19-infected individuals. Thousands of data points can be processed by the AI framework, which then classifies respondents as at no risk, medium risk, or high risk of infection.
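A toy version of such survey triage is sketched below; the features, labels, and model are invented for illustration and are not Rao and Vazquez's actual framework.

```python
# Toy triage of survey responses into no/medium/high infection risk.
from sklearn.tree import DecisionTreeClassifier

# Columns: fever, cough, contact_with_case, travel_history (each 0 or 1).
X = [[0, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1]]
y = [0, 1, 2, 1]   # 0 = no risk, 1 = medium, 2 = high (toy labels)

triage = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(triage.predict([[1, 0, 1, 0]]))   # classify a new respondent
```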

The above mobile-phone-based AI framework can improve the efficiency of identifying infected individuals and is the main application at present. There are also many other directions for AI application development. To maintain the mental health of the public, isolated patients, and frontline medical workers, Qiu et al. (2020) established a comprehensive system for monitoring the mental state of crowds based on various technological means, such as AI-based online emergency psychological intervention, community science communication and social bond enhancement, virtual reality, and neuromodulation. They are developing an AI-based interface to answer most questions online and provide targeted intervention guidance, with scientific knowledge related to COVID-19 made available to the public in a timely manner. There are many similar studies: for example, Wang et al. (2020b) developed a WeChat mini-program that analyzes data from all users and tracks the close contacts of all patients, so that potential sources of infection can be detected and isolated as soon as possible. AI has further applications in the development of software and programs against COVID-19 (Yanisky-Ravid and Jin 2020; Zhang et al. 2020a; Belfiore et al. 2020; Pu et al. 2020), as shown in Fig. 9.

Fig. 9 Software or program development of artificial intelligence technology in the process of fighting against COVID-19

In the sections above, we introduced the main application fields of AI in epidemic prevention and control. Given the breadth of the discussion and the complexity and variety of current AI models, we did not analyze each in detail; however, we summarize the corresponding method classification in Table 6:

Table 6 Method classification of artificial intelligence technology in the process of inhibiting COVID-19

It can be seen from Table 6 that neural networks are the most widely used methods in research on COVID-19 suppression, mainly because the large amount of research data establishes a good application foundation for them. Neural networks, especially convolutional neural networks, also have advantages in image recognition, which is why they are used most widely in symptom recognition. In addition to the above methods, we summarize a large number of research data sets in the appendix, which we hope will be helpful to other scholars.

4 Advantages and challenges

4.1 Main advantage


In the preceding sections, we surveyed the main applications of current AI in suppressing COVID-19. Its applications are extensive, mainly due to the following advantages:

1. Processing capacity and processing speed

At present, the number of people infected with COVID-19 is increasing worldwide. Taking symptom recognition as an example, during an outbreak a large number of patients, including suspected, confirmed, and follow-up cases, need chest CT examinations to observe the changes and severity of pneumonia, which imposes a huge burden on professional medical staff; the shortage of such staff is itself a major challenge. On chest CT, small areas of ground-glass opacity (GGO) can be found in high-resolution CT (HRCT) images. However, in the initial stage of COVID-19, the lung manifestations on chest CT may show small, subpleural, and peripheral GGO (Pan et al. 2020), which takes more time and energy to read. This poses great challenges for radiologists, who must handle a heavy workload while maintaining high diagnostic accuracy. At the same time, radiologists' visual fatigue increases the risk of missing small lesions (Wang et al. 2020a).

AI can overcome these difficulties. Prediction and symptom recognition require large amounts of data for training and validation, and AI's fast processing speed can undoubtedly save doctors a great deal of time. In drug development, even though applying AI still takes considerable time, it can greatly shorten development compared with traditional methods (Liang et al. 2020; Simoes et al. 2020).


2. Higher accuracy

High accuracy is an important hallmark of AI, especially deep learning. In the prediction and symptom recognition sections above, almost every study depends on high accuracy. However, reported accuracies generally refer to the training and validation sets; in prediction, future outcomes remain uncertain, but high-accuracy predictions can still provide useful guidance.

In the study of Ardakani et al. (2020), neural networks were used to classify a data set, doctors read the same data set, and the accuracy rates were compared, as shown in Fig. 10.

Fig. 10 Radar chart of diagnostic indicators for radiologists and different networks (Ardakani et al. 2020)


3. Good visualization

Visualization is mainly reflected in symptom recognition. For example, class activation mapping (CAM) is a general technique commonly used for visualization: it produces a heat map of the class activations generated for an input image and has been applied in many papers. Harrison et al. (2020) performed class activation mapping on the CT data set they studied; as shown in Fig. 11, after neural network recognition the CT image can be focused on abnormal areas.

Fig. 11 a–h Images show representative slices corresponding to gradient-weighted class activation mapping images on the test set (Harrison et al. 2020)

As Fig. 11 shows, after the image is processed by the corresponding model, the abnormal areas of the lung (especially the red regions in the figure) are displayed in different colors. The benefit is that this saves time and energy by suggesting candidate lesion areas to the radiologist. Similar studies have been conducted by Li et al. (2020) and Yan et al. (2020b); a minimal Grad-CAM sketch follows.
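For concreteness, here is a minimal Grad-CAM sketch, not Harrison et al.'s pipeline: it uses an off-the-shelf ImageNet ResNet-50 and a random array standing in for a CT slice, weighting the last convolutional feature maps by the gradient of the top class score.

```python
# Minimal Grad-CAM: highlight the regions driving a CNN's prediction.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")
grad_model = tf.keras.Model(
    model.input,
    [model.get_layer("conv5_block3_out").output, model.output],
)

img = np.random.rand(1, 224, 224, 3).astype("float32")   # stand-in slice
with tf.GradientTape() as tape:
    fmaps, preds = grad_model(img)
    class_idx = int(tf.argmax(preds[0]))
    score = preds[:, class_idx]

grads = tape.gradient(score, fmaps)
weights = tf.reduce_mean(grads, axis=(1, 2))          # channel importances
cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", fmaps, weights))[0]
cam /= tf.reduce_max(cam) + 1e-8                      # heat map in [0, 1]
print(cam.shape)   # upsample and overlay on the image for display
```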


4. Saving resources

At present, the number of people infected with COVID-19 worldwide has exceeded 36 million (Coronavirus Resource Center 2020), and so many confirmed cases cannot be handled without COVID-19 testing tools. Normally, nucleic acid testing is considered the fundamental basis for diagnosis. However, the strict transportation and storage requirements for COVID-19 nucleic acid kits may pose insurmountable challenges to many existing transportation and hospital facilities in a crisis (Yu et al. 2020). The stage of disease development and the method of sample collection also affect the results of nucleic acid testing (Wang et al. 2020d). Therefore, an efficient AI recognition system for quickly diagnosing COVID-19 patients would greatly improve doctors' efficiency while reducing the use of virus kits and saving resources.

4.2 Challenges


Although we have introduced the main applications and advantages of AI in suppressing the coronavirus, the literature research also reveals many challenges.

1. Forecast accuracy of artificial intelligence technology

Even though the prediction accuracy of most models in existing research is quite high, accuracy is still worth discussing, and improving the prediction accuracy of AI remains a driving force and direction of much research (Park et al. 2020). Patel et al. (2019) demonstrated a deep learning model for diagnostic imaging in which a radiologist can intervene at key checkpoints where the algorithm struggles.

To suppress the outbreak of COVID-19, improvements must address existing practical problems. Missed infection cases are an important problem, and false-negative cases make it more complicated (Zhang et al. 2020b). Vaid et al. (2020) paid attention to this problem early; their model improved on a VGG-19 model trained on ImageNet and produced 3 false positives and 1 false negative on the corresponding data set. However, our literature research found few results on using AI to identify false negatives or false positives, which most scholars do not take into account in symptom recognition. Recently, many asymptomatic infected persons have appeared worldwide, and Huang et al. (2020b) showed that individuals infected with SARS-CoV-2 may be highly contagious in the absence of symptoms, so it is necessary to improve the accuracy of COVID-19 symptom recognition.


2. The importance and limitations of data

Data are extremely important for building AI models; in particular, neural network construction requires large amounts of data for training and validation. In the era of modern information technology, data can be acquired through various channels, which has indeed greatly helped AI in the fight against the epidemic. However, the novel coronavirus broke out in late 2019, and on 11 March 2020 the World Health Organization (WHO) declared it a pandemic (Rajaraman and Antani 2020); the data collection period has been short, and the data likely contain a large amount of invalid entries. We also collected information from Johns Hopkins on the current worldwide spread of the epidemic, as shown in Fig. 12a–d.

Fig. 12 COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (Coronavirus Resource Center 2021)

According to Fig. 12, current COVID-19 infections are concentrated mainly in North America, South America, and Europe, while the numbers of infections in Asia and Africa are relatively small, and incidence and fatality rates differ across regions. This shows that the spread of the new coronavirus differs from region to region. Even if data from different places can be pooled for model building in symptom recognition, it should be difficult to establish a general model for spread prediction.


3. Inadequate ability to classify diseases

In current research, many scholars have visualized the results of lung image recognition, but because the activation heat map shows similar colors in abnormal areas, different lung diseases may produce similar-looking results. This was noted in the study by Yan et al. (2020b), who compared COVID-19 with common pneumonia, as shown in Fig. 13; it is difficult to distinguish COVID-19-infected subjects from the images identified by the AI model alone.

Fig. 13 AI system recognition results (COVID-19: coronavirus disease 2019; CP: common pneumonia) (Yan et al. 2020b)

Regarding such issues, Li et al. (2020) also mentioned that CT may show imaging features similar to other types of pneumonia, which are difficult to distinguish. Note that this problem does not conflict with the good visualization effect mentioned above: even though the AI model cannot accurately determine the type of disease, it can still help radiologists find abnormal areas in CT or X-ray images.


4. Challenges of artificial intelligence technology in drug development

At present, AI has great application prospects in drug development, and many scholars are using it in this area. Zhavoronkov (2018), for example, systematically summarized the drug development process and the potential of deep learning in it, collecting many corresponding models. Ekert et al. (2020) also gave a positive evaluation of AI in drug R&D, arguing that the combination of computational modeling, artificial intelligence and machine learning (AI/ML), and complex in vitro models (CIVM) will grow in the future. For now, however, drug development for COVID-19 still poses great challenges.

Zhavoronkov (2020) noted that it takes 5–6 years and millions of dollars for drugs developed with modern AI to be accepted by the pharmaceutical industry. The global outbreak of COVID-19 began only months ago; economic problems aside, the development timeline alone poses great difficulties. At the same time, developing and validating advanced AI systems costs a great deal of money, and relevant business organizations may consider it risky.

5 Direction of development


Although artificial intelligence technology has significant advantages, any single research method has limitations. Ozdemir (2020) expressed in his research the hope that articles would be written about related technologies and tools such as artificial intelligence, blockchain, and the Internet of Things. In the course of our research, we found few papers on the joint application of AI and other technologies to suppress COVID-19, so these can serve as new research directions.

1. Artificial intelligence technology and the Internet of Things

The Internet of Things refers to connecting objects to the network through information-sensing equipment according to agreed protocols, so that objects exchange information and communicate through information media to realize intelligent identification, positioning, tracking, supervision, and other functions. In the fight against COVID-19, wearable personal devices are being used to collect valuable data such as body temperature and geographic location. The combined application of IoT and AI was discussed by Mohamed et al. (2020b), as shown in Fig. 14.

Fig. 14 A COVID-19 monitoring and control model based on the Internet of Things (Mohamed et al. 2020b)

In Fig. 14, human health conditions, including body temperature, blood pressure, heart rate, and respiration, are monitored by sensors and sent to a cloud platform for processing and analysis. An AI-supported expert system classifies the condition; if a suspected or confirmed case is diagnosed, a doctor gives a treatment plan based on severity. In short, it is very difficult to identify infected people in a population (Krishna 2020); the IoT is mainly used for population monitoring, with sensors transmitting collected data over the network and AI analyzing them, which improves the efficiency of symptom identification. A sketch of the sensing side of such a system follows.
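As a concrete illustration of the sensing side, the sketch below has a simulated wearable publish vitals to an MQTT broker, from which a cloud AI service could consume them. The topic name and simulated readings are assumptions; test.mosquitto.org is a public test broker.

```python
# Simulated wearable publishing vitals over MQTT for cloud-side analysis.
import json
import random
import time

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("test.mosquitto.org", 1883)   # public test broker

for _ in range(3):                           # a few readings for illustration
    vitals = {
        "device_id": "wearable-001",
        "temperature_c": round(random.gauss(36.8, 0.4), 1),
        "heart_rate_bpm": random.randint(60, 100),
        "timestamp": time.time(),
    }
    client.publish("health/vitals", json.dumps(vitals))
    time.sleep(1)

client.disconnect()
```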


2. Artificial intelligence and cloud computing

Cloud computing is a kind of distributed computing in which the network "cloud" breaks a huge data-processing task into countless small tasks, processes and analyzes them on a system of multiple servers, and returns the combined results to users (a toy illustration follows). Tuli et al. (2020a) deployed an improved ML model on a cloud computing platform; in the cloud-based environment, government hospitals and medical centers continuously send various data such as patient information, weather conditions, and health facilities. Compared with the baseline method, cloud computing improves real-time epidemic prediction and shows higher prediction accuracy. Cloud computing is also characterized by fast computation and low consumption (Tuli et al. 2020b).
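The toy below illustrates the "many small tasks" idea on a single machine: a large aggregation is split across worker processes and the partial results are combined, which is what a cloud platform does at data-center scale.

```python
# Toy fan-out/fan-in: aggregate regional case counts in parallel workers.
from multiprocessing import Pool

def count_cases(region_data):                 # one "small program"
    return sum(region_data)

if __name__ == "__main__":
    regions = [[12, 30, 7], [5, 5], [40, 2, 2, 9]]   # stand-in daily counts
    with Pool(processes=3) as pool:
        totals = pool.map(count_cases, regions)      # fan out
    print("global total:", sum(totals))              # fan in
```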


3. Intelligent robots

The intelligent robots discussed here are robots that support AI and can collect information about the surrounding environment without additional assistance (Mohamed et al. 2020b). The outbreak of COVID-19 has promoted the use of intelligent robots in healthcare, mainly because of their many advantages (Clipper et al. 2019; Smith 2020; Azevedo 2020; Bonnie 2020): (a) they can communicate with people and help patients stabilize their emotions; (b) they can replace medical staff in contacting patients, protecting doctors while reducing the chance of virus spread; and (c) they can provide many humanized services, including body temperature monitoring, cleaning, and disinfection. Since the underlying technologies lie outside our field, we cannot discuss them in detail; here we mainly point out a development direction.


4. Drone technology

Drone technology has a wide range of uses in suppressing COVID-19, including surveillance, alerting, thermal scanning, drug delivery, food supply, and so on; the collection, concentration, and analysis of data in these applications is a major challenge. Recently, Kumar et al. (2021) proposed a system for dealing with the COVID-19 pandemic in different scenarios based on drones and AI. As shown in Fig. 15, this architecture supports drone data collection, data sharing, data analysis, and other functions.

Fig. 15 Architecture for drone-based COVID-19 monitoring, control, and analytics in the smart healthcare system (Kumar et al. 2021)

In their study, eight algorithms involved in the system were given, and six cases were used to simulate and verify the system. The results showed that the system can give good suggestions, including determining disinfection areas and calculating the distance between people. The Australian Department of Defence is also exploring a drone-based COVID-19 health and respiratory surveillance platform for monitoring infectious diseases and respiratory conditions. In short, the combination of AI and drone technology is of great research significance in the fight against COVID-19.


5. Blockchain technology

Originating with Bitcoin (Mashamba-Thompson and Crayton 2020), blockchain is a new application mode of distributed data storage, point-to-point transmission, consensus mechanisms, encryption algorithms, and other computer technologies. Blockchain has five important characteristics: decentralization, openness, independence, security, and anonymity (Yaqoob et al. 2019). During the COVID-19 pandemic, combining blockchain with AI can further improve existing epidemic prevention and control capacity.

The main advantage of blockchain in epidemic prevention and control is that it can ensure the reliability of data. Much current data is collected from hospitals, the public, or the media without protection, so COVID-19 data may be modified. Transaction data recorded in a blockchain (such as COVID-19 data) cannot be modified or changed by any entity (Pham et al. 2020; Jabarulla and Lee 2021). In addition, the blockchain is distributed: thanks to decentralization, it continues to work even when individual parties fail. This traceability and decentralization are key features of blockchain that cannot be found in other traditional security technologies (Garg et al. 2020; Baz et al. 2022). In conclusion, these characteristics of blockchain contribute to correct data collection and promote the reliability analysis of COVID-19; a toy illustration of tamper evidence follows.
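The toy chain below shows why such records are tamper-evident: each block stores the hash of its predecessor, so altering any earlier record invalidates every later link. The case counts are invented.

```python
# Toy hash chain: tampering with one block breaks all downstream links.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, record in enumerate(["region A: 120 cases", "region A: 134 cases"], 1):
    chain.append({"index": i, "data": record, "prev": block_hash(chain[-1])})

chain[1]["data"] = "region A: 5 cases"               # attempted tampering...
print(chain[2]["prev"] == block_hash(chain[1]))      # ...False: chain broken
```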

At present, data reliability is a major disadvantage of AI against COVID-19, and blockchain technology can solve this problem; combining blockchain helps to establish the corresponding AI models. At present, there are still few such case studies.

6 Conclusion

Through extensive literature research, this article has introduced the main applications of AI in suppressing the coronavirus from three main aspects: prediction, symptom recognition, and development. AI plays a huge role in suppressing the coronavirus, but no single model or AI technique has an absolute advantage in any given aspect. From the summary of the literature, it can be seen that:

1. At present, COVID-19 is still spreading, and using AI to predict the epidemic and identify symptoms is an effective preventive measure. Across the models we investigated, AI can achieve more than 93% accuracy in the propagation prediction of COVID-19 and more than 91% accuracy in symptom recognition.

2. Survey results show that developing a new drug takes around 12 years, so drug prediction has important advantages over drug development. AI can speed up the identification of targeted drugs to suppress COVID-19.

3. Although AI is fast and accurate in symptom recognition, it still has shortcomings, such as insufficient disease classification ability and misjudgment of symptoms (such as false-negative patients), and the role of doctors remains dominant.

4. We recommend combining AI with multiple technologies to suppress the spread of COVID-19. For example, combining IoT with AI can realize services such as remote monitoring, data collection, and telemedicine; combining AI with cloud computing can improve the accuracy of results and quickly provide medical solutions.

7 Appendix

7.1 Research data set

Data sources | Chief application | References
https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html | Propagation prediction | Chen et al. (2020a)
https://github.com/CSSEGISandData/COVID-19 | Propagation prediction | Dong et al. (2020)
https://github.com/JieYingWu/COVID-19_US_County-level_Summaries | Propagation prediction | Killeen et al. (2020)
https://github.com/aakhmetz/COVID19SerialInterval | Propagation prediction | Nishiura et al. (2020)
https://github.com/MeyersLabUTexas/COVID-19 | Propagation prediction | Du et al. (2020)
https://github.com/WellsRC/Coronavirus-2019 | Propagation prediction | Wells et al. (2020)
http://www.mdpi.com/2077-0383/9/2/601/s1 | Propagation prediction | Anzai et al. (2020)
https://github.com/adamkucharski/2020-ncov/ | Propagation prediction | Kucharski et al. (2020)
https://github.com/Emergent-Epidemics/covid19_npi_china | Propagation prediction | Kraemer et al. (2020)
https://github.com/ImperialCollegeLondon/covid19model/releases/tag/v1.0 | Propagation prediction |
https://github.com/cheongsa/Coronavirus-COVID-19-statistics-in-China | Propagation/survival prediction | Liu et al. (2020)
https://github.com/beoutbreakprepared/nCoV2019/tree/master/latest_data | Survival prediction | Xu et al. (2020)
https://covid19.who.int/ | Survival prediction | NazSindhu et al. (2021)
https://www.worldometers.info/coronavirus/country/china/ | Survival prediction | NazSindhu et al. (2021)
https://doi.org/10.17632/38nyzyt9bz.1 | Drug prediction | Mutlu et al. (2020)
https://github.com/ben-aaron188/covid19worry | Measuring emotions | Kleinberg, van der Vegt, and Mozes (2020)
https://arxiv.org/abs/2003.11597 | X-ray recognition | Cohen et al. (2020)
https://data.mendeley.com/datasets/pysxmjpkr4/1 | Vaccine/social survey | Riham et al. (2020)
https://github.com/mtogacar/COVID_19 | X-ray recognition | Togacar et al. (2020)
https://github.com/drkhan107/CoroNet | X-ray recognition | Khan et al. (2020)
https://github.com/muhammedtalo/COVID-19 | X-ray recognition | Ozturk et al. (2020)
https://github.com/ShahinSHH/COVID-CAPS | X-ray recognition | Afshar et al. (2020)
https://github.com/lindawangg/COVID-Net | X-ray recognition | Wang and Wong (2020)
https://github.com/ieee8023/covid-chestxray-dataset | X-ray recognition | Apostolopoulos and Mpesiana (2020)
https://arxiv.org/abs/2003.13865 | CT recognition | Yang et al. (2020b)
https://github.com/ieee8023/covid-chestxray-dataset | CT and X-ray recognition | Cohen et al. (2020)
https://github.com/UCSD-AI4H/COVID-CT | CT recognition | Zhao et al. (2020)
https://zenodo.org/record/3757476 | CT recognition | Jun et al. (2020)
https://coronacases.org/ | CT recognition |
http://medicalsegmentation.com/covid19/ | CT recognition |
https://github.com/thepanacealab/covid19_twitter | Social media | Banda et al. (2021)
https://github.com/echen102/COVID-19-TweetIDs | Social media | Chen et al. (2020b)
https://github.com/SarahAlqurashi/COVID-19-Arabic-Tweets-Dataset | Social media | Alqurashi et al. (2020)
https://github.com/narcisoyu/Institutional-and-news-media-tweet-dataset-for-COVID-19-social-science-research | Social media | Yu (2020)