New designs, applications, technologies and ever-heavier investment in AI continue to appear every day. However, the sudden and severe worldwide impact of COVID-19 has propelled the adoption of AI to an unprecedented level, because AI helps fight the pandemic by enabling one or more of the following possibilities: (1) autonomous everything, (2) pervasive knowledge, (3) assistive technology and (4) rational decision support. The deployment of AI into these aspects may be bold and experimental, but it is proceeding in full force, at all scales, unanimously and swiftly, where it might otherwise have taken years. Just as the coronavirus changed our lifestyle, so have these four AI-enabled aspects; the result is almost a revolution in how quickly technologies are developed and adopted. The following showcases a series of examples of technologies infused with AI to deliver one or more of the four benefits, with the aim of fighting the coronavirus and, of course, saving lives. In particular, the examples show how AI as a technological enabler enhances existing processes to fulfil one or more of the four benefits.

2.1 Infection Risk Identification

As the first line of defence against the COVID-19 pandemic, in-home risk assessment is a protocol by which anybody can check, for themselves or for somebody else at home, whether they may have contracted the coronavirus through some basic tests. The assessment involves a dialogue of questions which the subject answers in a questionnaire, based on how they feel and where they have visited. The responses are then passed to medical experts for analysis, who decide the infection risk level of that person. Using ICT and AI, however, this assessment can be fully digitized.

A mobile app is being developed by the Laboratory for Theory and Mathematical Modelling in the Division of Infectious Diseases at Augusta University [1] that allows users to carry out the risk assessment themselves at home. AI is applied to replace the human expert judgement in deciding the risk level based on the answers received from the mobile app. The app queries the user for information relevant to possible coronavirus infection, such as common symptoms (fever, headache, dry cough, breathing difficulty and fatigue) and their duration and severity, travel history, work and residential information and demographics. Some sample screenshots of such a mobile app are shown in Fig. 2.1 as an example.

Fig. 2.1

Illustration of a mobile app which checks the well-being of the user to identify infection

The information is processed by an AI algorithm which computes the risk level and classifies the user into one of the following groups: high risk, moderate risk, low risk, no risk, etc. Although it is not known exactly which AI algorithm is used in any particular mobile app (probably a commercial secret, especially for non-government organizations), the logic behind it is usually a set of decision rules. These decision rules take a form similar to those presented in Fig. 2.2. The rules can be predefined by the developer and updated by the vendor, learnt over time by AI, or a hybrid of expert tuning and automated machine learning. In machine learning, one of the main disciplines of AI, this is a typical classification task solved by supervised learning, where historical samples are used to induce a representative model that captures the mapping between the attributes and the prediction classes. Supervised learning algorithms [2] for building a classification model range from the simple Bayesian network, decision tree and support vector machine to sophisticated neural networks and deep learning, just to name a few. Once the decision rules are induced from the classification model, they are ready to route a new survey sample, input through the app, to one specific class. A series of conditional tests is performed at the intermediate nodes of the decision rules over the input survey data, and an outcome is generated at the end of the decision rules, which is fed back to the user via the app.
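To make the idea concrete, the following is a minimal scikit-learn sketch of such a supervised classifier: questionnaire answers go in, a risk class comes out, and the induced decision tree can be printed as a set of decision rules like those in Fig. 2.2. The feature encoding and training rows are illustrative assumptions, not the actual data of any deployed app.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: fever(0/1), dry_cough(0/1), breathing_difficulty(0/1),
#          days_of_symptoms, visited_outbreak_area(0/1) -- illustrative only.
X = [
    [1, 1, 1, 5, 1],
    [1, 1, 0, 3, 0],
    [0, 1, 0, 2, 0],
    [0, 0, 0, 0, 0],
    [1, 0, 1, 7, 1],
    [0, 0, 0, 1, 0],
]
y = ["high", "moderate", "low", "no", "high", "no"]

# Induce a classification model from historical samples (supervised learning).
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned tree is itself a set of decision rules, as in Fig. 2.2.
print(export_text(clf, feature_names=[
    "fever", "dry_cough", "breathing_difficulty", "days", "visited_area"]))

# Route a new survey sample to one of the risk classes.
print(clf.predict([[1, 1, 0, 4, 1]]))
```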

Fig. 2.2

Example of decision rules generated by an AI algorithm

Beyond individually classifying users into risk groups based on their submitted information, AI plays a bigger role at the back-end server. The information collected from many users is pooled together and pieced into a full picture of the regional virus outbreak using big data analytics. Useful insights, such as the epicentres, how the virus circulated and propagated, and the risk levels in each suburb, can be derived from this collective information. Clusters of hotspots could be identified from the densities of infected cases. Technically, collecting and analysing this information is feasible, assuming privacy concerns are taken care of (e.g. by anonymizing the data). Three challenges need to be conquered here: (1) smart AI algorithms which automatically identify the past and current high-risk areas and possibly predict the next nearby regions, while also computing the risk levels and the propagation rate; (2) the cloud processing and big data infrastructure required to support thousands of users' information submissions and requests for the latest collective results; and (3) visualization of large-scale geographical data.

In this scenario, AI algorithms related to spatial–temporal modelling [3] are useful because the epidemic indeed spreads and travels in time, from town to town and city to city. The risk assessment data collected from a population of users are perfect ingredients for big data analytics that embraces spatial–temporal modelling and prediction. In particular, a classical measure called Moran's I index [4], which computes spatial autocorrelation by simultaneously considering both feature values and their locations, has been popularly used for predicting the risk levels of neighbouring areas. Moran's I index is named after an Australian statistician, Emeritus Professor Patrick Alfred Pierce Moran. Moran showed that spatial autocorrelation can be modelled as correlation among features in close proximity in space, extending beyond one, two or three dimensions to arbitrarily large multi-dimensional and multi-directional spaces. The Moran's I index for spatial autocorrelation is given as

$$ I = \frac{n}{S_{0}} \times \frac{\sum_{i = 1}^{n} \sum_{j = 1}^{n} w_{i,j} \, z_{i} \, z_{j}}{\sum_{i = 1}^{n} z_{i}^{2}}, $$
(2.1)

where $z_{i}$ is the deviation of an attribute for feature i from its mean ($x_{i} - \bar{X}$), $w_{i,j}$ is the spatial weight between features i and j, which here is the number of confirmed cases at the [i, j] coordinates of a 2D space, n is the total number of features (confirmed cases) and $S_{0}$ is the aggregate of all the spatial weights:

$$ S_{0} = \sum_{i = 1}^{n} \sum_{j = 1}^{n} w_{i,j} $$
(2.2)

The $z_{I}$-score for the statistic is computed as

$$ z_{I} = \frac{I - E\left[ I \right]}{\sqrt{V\left[ I \right]}}, $$
(2.3)

where

$$ E\left[ I \right] = \frac{-1}{n - 1} $$
(2.4)
$$ V\left[ I \right] = E\left[ I^{2} \right] - E\left[ I \right]^{2}. $$
(2.5)
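As a worked illustration of Eqs. (2.1)–(2.5), the following NumPy sketch computes Moran's I and its z-score; the variance term uses the standard normality-assumption expansion of Eq. (2.5). The case counts and adjacency weights are illustrative values only.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I (Eq. 2.1) and its z-score (Eq. 2.3)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                              # z_i = x_i - X_bar
    s0 = w.sum()                                  # Eq. 2.2
    i_stat = (n / s0) * (z @ w @ z) / (z @ z)     # Eq. 2.1
    e_i = -1.0 / (n - 1)                          # Eq. 2.4
    # V[I] = E[I^2] - E[I]^2 (Eq. 2.5), expanded under the normality assumption:
    s1 = 0.5 * ((w + w.T) ** 2).sum()
    s2 = ((w.sum(axis=0) + w.sum(axis=1)) ** 2).sum()
    v_i = (n**2 * s1 - n * s2 + 3 * s0**2) / ((n**2 - 1) * s0**2) - e_i**2
    return i_stat, (i_stat - e_i) / np.sqrt(v_i)  # Eq. 2.3

cases = [12, 30, 5, 2]              # confirmed cases in four areas (illustrative)
adjacency = [[0, 1, 0, 0],          # w_{i,j}: 1 if the two areas are adjacent
             [1, 0, 1, 0],
             [0, 1, 0, 1],
             [0, 0, 1, 0]]
print(morans_i(cases, adjacency))
```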

The 3D concept of Moran's I theory has been widely used for modelling the autocorrelation of feature values (which can be interpreted as numbers of confirmed cases) across a two-dimensional space or map, plus the time dimension. Figure 2.3 shows an example of the 3D concept in Moran's I index, where the features, in terms of numbers of confirmed cases depicted in different colours, are projected in time as heights of the columns on a 2D map.

Fig. 2.3

Illustration of how the seasonality of virus outbreaks is incorporated into the spatial–temporal computation model [5]

In the literature, many prediction models based on Moran's I theory have been formulated for predicting the spread of a virus outbreak. Moran's I index is often used in conjunction with AI algorithms, such as clustering, which automatically merges similar data objects into segments or clusters, and time-series forecasting, which projects future values by modelling the current trend. Figure 2.4 shows the risk levels of nearby regions as an example, illustrating how potential neighbouring areas of the epicentre can be computed by Moran's I index and a clustering algorithm. Readers are referred to [5] for more details.
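As a minimal sketch of the clustering step, the snippet below groups high-risk case locations into spatial clusters with DBSCAN from scikit-learn; the coordinates and the neighbourhood radius are illustrative assumptions, standing in for real anonymized app submissions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (lat, lon) of cases flagged as high risk (illustrative values only)
coords = np.array([
    [22.54, 114.05], [22.55, 114.06], [22.56, 114.04],   # dense pocket A
    [23.12, 113.26], [23.13, 113.27],                     # dense pocket B
    [24.48, 118.09],                                      # isolated case
])

# eps = neighbourhood radius (degrees); min_samples = minimum cluster size
labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(coords)

for cid in sorted(set(labels)):
    members = coords[labels == cid]
    tag = "noise" if cid == -1 else f"cluster {cid}"
    print(tag, "size:", len(members), "centre:", members.mean(axis=0).round(2))
```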

Fig. 2.4

Clusters of predicted next outbreak locations with the size of cluster representing the severity

2.2 Smart Screening for High Body Temperature

A mobile app with AI that assesses the risk level of users at home or anywhere else is a thin line of defence, relying on the honesty of users answering the questions truthfully. In some busy public places, such as airports, train stations, office buildings, schools and hospitals, mass screening to detect visitors who carry symptoms of the COVID-19 disease is necessary. High body temperature is one of the most common symptoms of COVID-19. Decades ago, traditional thermal scanning technology checked people one by one, moving sequentially in a queue and stopping in front of the camera for a second or so, for accurate detection. More recently, Infrared Thermal Image Scanner (ITIS) technology has been widely deployed at border controls for the mass screening of travellers for fever symptoms. ITIS was put to the test by a team of researchers from the University of Otago, Dunedin, New Zealand, who measured its front-of-face performance at airports. It was found that ITIS performed moderately well in detecting fever, with an area under the Receiver Operating Characteristic (ROC) curve of 0.86 (95% confidence interval 0.75–0.97) [6].

A new generation of screening is needed to speed up the mass screening process and improve the accuracy of fever detection. The state-of-the-art ITIS technologies are often equipped with AI functions which automatically pinpoint each human face in a crowd and focus on just the right facial point for measuring body temperature. This not only saves time by filtering out the unwanted regions of the whole image, but also allows those small areas of interest to be analysed more closely. Accuracy is therefore improved, and the false alarm rate reduced, given appropriate optical equipment and AI functions. For example, a cup of hot coffee held in a traveller's hand will be excluded from the body temperature measurement. In this way, the screening system is able to handle mass detection over a crowd of walking travellers. Figure 2.5 shows a snapshot of a thermal image captured by ITIS over multiple targets. High speed and strong processing power are needed when many people on the move must be measured simultaneously.

Fig. 2.5

A snapshot of a thermal image by ITIS empowered by AI for multi-target monitoring

In a nutshell, the core of this AI function is rapid detection and tracking in thermal infrared imagery, powered by computer vision algorithms. The algorithms are largely divided into two groups: detection, which identifies the areas of interest (human faces) against a background, and tracking, which follows their movement until they leave the view, so as to avoid triggering multiple detections of the same person. Detecting body temperature on a human face requires outlining the human body shape from the background and then finding the face area, which is relatively the warmest surface because it is not covered by clothes. To this end, thresholding techniques and the extraction of human body shape information from images [7, 8] are useful. To enhance accuracy, some research works are geared towards a hybrid use of thermal imagery and visual imagery of the same scene [9].
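A simplified OpenCV sketch of the thresholding step is shown below: it keeps only the regions of a thermal frame warmer than a fever threshold and outlines them. The synthetic frame and the sensor calibration constants are illustrative assumptions; a real ITIS would supply calibrated imagery.

```python
import cv2
import numpy as np

# Stand-in for a calibrated single-channel thermal frame (pixel value maps
# linearly to temperature; the calibration constants below are assumptions).
thermal = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

FEVER_C = 38.0
SCALE, OFFSET = 0.1, 20.0                       # assumed: temp_C = px*SCALE + OFFSET
threshold_px = int((FEVER_C - OFFSET) / SCALE)  # fever threshold in pixel units

# Keep only regions warmer than the threshold, then outline their shapes.
_, mask = cv2.threshold(thermal, threshold_px, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) > 100:                # ignore tiny warm spots
        x, y, w, h = cv2.boundingRect(c)
        print(f"warm region at ({x},{y}) size {w}x{h}")
```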

On the other hand, tracking is about locating the trajectory of a moving person over time. One common approach starts with an initial face detection and then repeats detection over time by recognizing nearby detection areas. The underlying AI algorithms fall into two broad categories: appearance-based methods and point-based methods. The former group induces a model of the tracked object which is continuously updated as the object moves. This can be done based on the contour [10], in so-called active contour methods [11], coupled with an energy minimization function for shape matching, or based on a template, which is a frame outlined from the object at the initial detection. Template tracking is then a matter of finding the region of the scene that best matches the frame and updating the template iteratively, as the appearance may gradually change over time [12]. For example, the displacement between the object and the fixed angle of the camera distorts the template away from the original. In some probabilistic template-based tracking models, predicting where the next frame is most likely to be located, based on the trajectory, helps speed up the search for the next matching frame [13].

The other group of methods, namely point-based methods, uses pixels or points in a scene to represent the current coordinates of the moving object with reference to the coordinates of other objects in the scene. For multi-target tracking, several popular filters can be used, ranging from the simple to the sophisticated, such as the linear Kalman filter, the Global Nearest Neighbour filter and the Joint Probabilistic Data Association filter, just to name a few. The concept is to associate each detection in each temporal frame (frames are generated at high speed by high-resolution thermal and/or visual cameras) with a trajectory, while the trajectory is monitored and remembered along the way. Readers are referred to [14] for more information about multi-target tracking and detecting anomalies in body temperature by ITIS. In general, multi-target detection and tracking is a broad topic. Many research endeavours have been committed to it, and they can be categorized according to the taxonomy shown in Fig. 2.6. Research is still ongoing worldwide, with the prime objective of increasing screening accuracy, coverage and speed.
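Below is a minimal constant-velocity linear Kalman filter sketch illustrating the predict-and-correct cycle that point-based trackers build on; the state is (x, y, vx, vy) and measurements are noisy (x, y) detections. All noise parameters and detections are illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = np.eye(4) * 0.01                        # process noise (assumed)
R = np.eye(2) * 1.0                         # measurement noise (assumed)

x = np.zeros(4)                             # initial state (x, y, vx, vy)
P = np.eye(4) * 10.0                        # initial uncertainty

def kalman_step(x, P, z):
    # Predict where the target will be in the next frame...
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # ...then correct the prediction with the actual detection z = (px, py).
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in ([1.0, 1.1], [2.1, 1.9], [3.0, 3.2]):   # noisy face detections
    x, P = kalman_step(x, P, np.array(z))
    print("estimated position:", x[:2].round(2))
```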

Fig. 2.6

Taxonomy of multi-target detection and tracking for mass thermal screening

2.3 Deep Learning and Radiological Image Analysis

Checking body temperature and risk assessment by questionnaire are only the entry level of COVID-19 diagnosis. By far the most common diagnostic pathway for COVID-19 currently is collecting and analysing an RNA specimen of body fluid from the testee. The principle is to observe the outcome of adding a special enzyme called Reverse Transcriptase, as used in the Reverse Transcription Polymerase Chain Reaction (RT-PCR), to the RNA specimen, producing double-stranded DNA. Under fluorescent dye, a tester can tell that the test is positive for COVID-19 when the DNA multiplies upon adding the nucleotides. If no multiplication of the DNA is observed, the testee is free from COVID-19 infection. Based on this RT-PCR principle, a quick paper-kit test taking about half an hour has been made possible for frontline healthcare staff. The paper kit is a mini incubator for observing the binding of the viral proteins of a sputum specimen to antibodies. However, the accuracy of such paper test kits can be as low as 60%, while the false alarm rate remains high [15].

An alternative method for COVID-19 testing is medical imaging-based diagnosis, which is more accurate in general but requires expensive equipment operated by radiologists. A Computerized Tomography (CT) scan of a patient's chest is a series of X-ray images of cross sections of the lung at different depths. Should the patient be infected with COVID-19, the CT scan of the lung reveals tissue irregularities. The anomalies are subtle and hard to distinguish between those of COVID-19 and those of viral pneumonia. The characteristics of COVID-19 as they appear on CT scans include, but are not limited to, air space consolidation, a crazy-paving appearance, peripheral ground-glass opacities, traction bronchiectasis and bronchovascular thickening [15]. Overall, CT-based COVID-19 diagnosis is found to be better than the RT-PCR test, with a sensitivity between 80 and 90% [16].

AI is an epitome of how leading-edge technology can help analyse CT scans of COVID-19 patients. It is accepted worldwide that AI is a good assistant to CT assessment, not only for COVID-19 but also in the diagnosis of many cancers. Accordingly, prototypes of AI algorithms, especially deep learning with its excellent ability for non-linear modelling, have been built by a number of research teams to aid COVID-19 detection on chest CT scans. A research lab funded by Alibaba claims that their AI-based image recognition tool achieves an accuracy level of 96% in distinguishing pneumonia caused by COVID-19 from that of other causes [17]. A recent medical report by Gozes, O. et al. claims that their new thoracic AI algorithm, powered by deep learning over CT scans, reaches 98.2% sensitivity and 92.2% specificity, and is fast as well [18].

How exactly does AI, in the form of deep learning, help improve CT scan diagnosis for COVID-19 patients? AI will neither replace the radiologist when it comes to operating a professional CT scanner nor take over the final decision in confirming the diagnostic verdict. However, AI has been applied for decades as a supplementary decision support tool, used by radiologists for fast analysis in addition to experts' visual inspection. The AI diagnosis module usually takes as input a set of CT scans, loaded manually or semi-automatically from the workstation that connects to the imaging peripheral. The output is often a diagnosis report indicating the likelihood of each possible outcome, based on the results computed by the AI algorithm. The diagnosis report is then used as a reference for the doctors to make a final decision.

This computerized AI process works as shown in Fig. 2.7. Firstly, the multi-sliced spiral set of CT scans is loaded into a computer where the AI software is installed. The raw images are pre-processed, mainly by subtracting the background, correcting the orientation of the images and placing the lung cross-sectional image from each scan in a prominent position showing the lesions and organs. Other data transformations may apply, as some researchers prefer to use image/signal processing filters to improve the clarity of the image. In the second step, a data segmentation algorithm identifies and distinguishes the areas of interest in the lung image. Extracting the so-called salient features is one of the most important parts of the process; the efficacy of machine learning, and hence the accuracy of the final output, depends very much on good quality image feeds prior to the neural network. In the earlier days, image segmentation was a separate process requiring edge detection and data clustering methods. Areas of interest were manually highlighted on the CT scan images, cropping the shapes and the textures within, which were then fed into a machine learning model for recognition and classification. Since the turn of the millennium, deep learning has become a promising approach for its ability to effectively mark out and segment the areas of interest from the CT scans. Each area of interest is evaluated for the likelihood of belonging to the COVID-19 class or something else. A final merger then decides, based on the likelihoods of all the regions collectively, whether the patient is suffering from the COVID-19 disease or another pneumonia. This could be implemented by probabilistic reasoning tools such as a Bayesian network, which generates a final outcome with an associated probability. All the possible outcomes and their probabilities serve as a good reference for doctors to reach final conclusions in a timely manner.
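As a small, hypothetical example of the pre-processing step (step 1 of Fig. 2.7), the snippet below windows a raw CT slice, given in Hounsfield units, to the lung-relevant density range and normalizes it for the network; the window bounds are common choices, and the random slice merely stands in for real DICOM data.

```python
import numpy as np

def preprocess_slice(hu_slice, lo=-1000.0, hi=400.0):
    """Window a CT slice (Hounsfield units) to the lung range, scale to [0, 1]."""
    windowed = np.clip(hu_slice, lo, hi)   # discard densities outside the window
    return (windowed - lo) / (hi - lo)     # normalize for the neural network

raw = np.random.uniform(-2000, 2000, size=(512, 512))   # stand-in for a DICOM slice
x = preprocess_slice(raw)
print(x.min(), x.max())                                  # values now within [0, 1]
```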

Fig. 2.7

Computer-aided COVID-19 diagnosis from CT scan

How does deep learning help in the process of Fig. 2.7? There have been many hypes and praises about the efficacy of deep learning, commonly realized as the Convolutional Neural Network (CNN). CNN is one of the latest variants of the neural network, inheriting the basic neural structure: neurons and their connectivity across intermediate layers, learnable weights associated with each link, an activation function and a bias. A neuron receives input values from its preceding links, multiplies these values with their weights, applies an activation function over the weighted sum and responds with a new value which is passed on to the subsequent layer of neurons. The number of neurons required at the input layer is equal to the number of pixels in a CT scan, which is typically in DICOM format. DICOM is the abbreviation of Digital Imaging and Communications in Medicine, commonly used for archiving and transferring high-resolution medical images among Picture Archiving and Communication Systems (PACS).

Behind the input layer there are several intermediate layers of neurons called feature maps. The feature maps are commonly known as convolutional layers because they convolve the outputs from the preceding layers via some filters. The purpose of convolution is to extract important features; the convolution proceeds through several layers of feature maps, filtering out the unimportant features and retaining the important ones at each layer. The filters are collectively known as kernels; their size is chosen by the designer. As shown in Fig. 2.8, subsampling and convolution happen across the kernels, condensing the whole CT scan image to specific areas of interest from the input layer to the later layers. This is achieved by each neuron in a feature map responding to an interesting area of the previous feature map. How is interestingness defined? It depends on the types and number of convolutional filters chosen; usually the filters magnify the salient features over a large area and suppress the ordinary areas. After a feature map is convoluted and subsampled, the condensed output is known as an activation map, in which the effect of applying the filter has been highlighted. Between the activation maps, an activation function controls, in a non-linear fashion, how much or how little activation passes through, based on the weights at the links between neurons across successive feature maps. Near the output there is a pooling layer designed to reduce the dimensionality of the activation maps, preventing the dimensionality from becoming too large. This can be done by either a max-pooling or an average-pooling strategy. At the end, high-level abstractions are obtained from the fully connected set of kernels (feature maps) and pooling layers. The key features are the essential details that characterize each area of interest, and they are quality ingredients for supervised learning. The whole CNN then continues to run until it reaches some equilibrium. The process is iterative, similar to the learning cycles of a backpropagation neural network: the weights at the neuron links are continuously updated as training samples are fed in, until no further error reduction can be observed or the difference between two successive cycles falls below a predefined threshold.
Readers are referred to [19] for more information, as there are many variants of CNN; almost every week a new or modified model is proposed and published in academic journals.

Fig. 2.8

A typical CNN structure for COVID-19 diagnosis from CT scan
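To ground the description, here is a minimal PyTorch sketch following the structure of Fig. 2.8: convolutional feature maps with ReLU activations and max-pooling (subsampling), followed by fully connected layers producing per-class likelihoods for a CT slice. The layer sizes, the 256 × 256 input resolution and the three output classes are illustrative assumptions, not a published architecture.

```python
import torch
import torch.nn as nn

class CovidCTNet(nn.Module):
    def __init__(self, n_classes=3):   # e.g. COVID-19 / other pneumonia / normal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution -> feature maps
            nn.ReLU(),                                   # non-linear activation
            nn.MaxPool2d(2),                             # subsampling (pooling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                 # fully connected layers
            nn.Flatten(),
            nn.Linear(32 * 64 * 64, 128),                # assumes 256x256 input slices
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CovidCTNet()
slices = torch.randn(4, 1, 256, 256)         # four dummy grayscale CT slices
probs = torch.softmax(model(slices), dim=1)  # per-class likelihoods for the report
print(probs.shape)                           # torch.Size([4, 3])
# In training, backpropagation would iteratively update the weights until the
# error between successive cycles falls below a threshold, as described above.
```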

Since January 2020, medical staff at the radiology department of Zhongnan Hospital, China, have been using AI software to screen for typical or partial visual signs of the pneumonia manifested by COVID-19 in CT scans. The speed and convenience gained from the AI software relieve frontline healthcare workers as well as hospital radiologists from their overloaded duties. The speed-up helps not only in diagnosis; the efficiency extends to deciding who to isolate, who is confirmed, what treatments are appropriate, etc. Even the slightest speed-up means a lot in saving lives, as the hospitals in Wuhan were quickly overwhelmed, like by a tsunami, by an increasingly large number of patients when the virus hit hard in January and February. The AI software alone, however, does not confirm whether a person has contracted the disease, but it offers a good indication, with reasonable accuracy, of a pneumonia condition recognized from the lung images of CT scans. The indication is useful for a diagnosis that is then followed up with other lab tests and further observations. InferVision is one of the pioneer developers of AI-based COVID-19 diagnosis software; its product was used intensively by 34 hospitals in China and examined lung CT images from over 32,000 suspected cases. The processing time was reported to have been reduced from typically 15 min to 3 min. InferVision, a Beijing company backed by Sequoia Capital, is an example of how AI software was deployed at such a large scale at the soonest possible time since the early days of the outbreak. A screenshot of the AI-based COVID-19 software is shown in Fig. 2.9. It can be seen that the GUI is carefully designed with a futuristic, calming blue and black theme which causes minimal distraction; the output of the AI diagnosis is abstracted and stands out in the centre, conveying critical information to the users at an easy glance, while secondary information and details are at the sides. Allowing medical staff at crucial times to stay focused, calm and well informed by the AI diagnosis is very important. On 11 March 2020, it was reported that the InferVision AI diagnosis software had been exported to Japan, as part of a global effort to help medical staff with screening and to stop the virus spreading as early as possible.

Fig. 2.9

The GUI of an AI software for COVID-19 diagnosis (image courtesy of Beijing InferVision Technology Co., Ltd.)

This is another example of technology adoption at a critical time, when it is far from ideal to develop and test AI software in such a short period. But the mourning of COVID-19 victims and the snowballing death toll indeed forced researchers and developers to push ahead with the best efforts they could manage, even as a contingency plan, with great urgency. It is believed that future work on further enhancing the AI algorithms, in terms of speed and accuracy, is ongoing. Other possible enhancements to InferVision's GUI would be NLP and remote gesture recognition, which could further speed up human–computer interaction.

2.4 AI-Driven Unmanned Technologies

AI has so far helped in health risk assessment, detection of potential patients at gantries and fast radiological diagnosis in hospitals, in the form of AI software running on a workstation or smartphone, separating the sick from the healthy. At this time of crisis, how possible is it that AI will augment or even replace healthcare workers or caregivers for the sake of reducing human contact? As a safety measure of social distancing, less human contact means lower infection risk for human workers. The time of COVID-19 spells out a sad irony: no community or government alone can fight this invisible enemy of the century, yet borders are sealed and physical teamwork is impossible; healthcare workers know best the importance of social distancing, but their duties put them at great risk by working most closely with the patients and the suspected; elders need interaction and emotional support the most during this gloomy pandemic, but they are the priority group to be strictly isolated because they are the most vulnerable to the virus; and police officers should be patrolling against looting and burglary, but they are tasked with chasing every civilian to stay at home. All these tasks have one thing in common which must be either eliminated or minimized at all costs—human interaction. One apparent solution is robotics, which has a long history of providing humans with automation and the ability to perform tasks in all walks of life, varying from ATMs and supermarket kiosks to sophisticated surgical assistance on operating tables. In this section, the applications of AI-powered robotics in hospitals, at home and in public places are explored.

2.4.1 Robotics with AI at Hospital

Hospitals can be hotspots of contagion due to frequent visits by confirmed or suspected patients. Regular sanitization is of utmost importance to ensure the safety of workers, patients and visitors. Robots are the best candidates for this risky job because they are machines and naturally immune to virus infection. Similar to the floor-cleaning bots which have been commercially available as consumer products, autonomous robots can do more than cleaning in hospitals. Robots have been deployed in hospitals during the COVID-19 pandemic for the following tasks, where humans are susceptible to contagion (Fig. 2.10).

Fig. 2.10

Mobile UV lamp working in action killing germs in a hospital

(1) Robots equipped with germicidal lamps are able to roam around indoors while disinfecting surfaces with a specific type of UV light—UV-C radiation, also known as Ultraviolet Germicidal Irradiation (UVGI). UV-C/UVGI has been commonly used in waste treatment plants and laboratories for hospital-grade disinfection. The unique features enabled by AI include self-navigation, computer vision, shortest-path optimization, maximization of cleaning coverage, object avoidance, etc. (a minimal path-finding sketch is given after this list).

(2) Caregiver robots that mimic the behaviours of nurses and health workers in performing basic housekeeping operations, for example, food and drug deliveries, waste collection, measuring patients' vital signs and serving meals. A humanoid serving robot called Amigo was prototyped by RoboEarth, a European-funded project developed in collaboration with five European universities. Amigo is able to perform simple tasks for a patient, like handing over a glass of drink with its pincer hand, while being aware of the objects and the patient around it. Though far from being able to accomplish sophisticated tasks like a human nurse interacting with patients, robots are not affected by fatigue from heavy workload and, more importantly, by virus infection, making them perfect machines for long hours of field work amid contagion, between battery charges. Due to the emergency of the COVID-19 outbreak and the sky-rocketing death toll, the Spanish authorities committed to purchasing four robots able to automate testing processes with suspected patients. These robots, designed to perform COVID-19 tests and simple vital-sign monitoring without sophisticated abilities for complex interaction with patients, can increase COVID-19 testing from 20,000 a day to 80,000 with the aid of AI algorithms. The four robots indeed help reduce the risk of exposing frontline human healthcare workers to the virus. Their AI functions would include those mentioned above, plus localization, fine-tuning of motor movements (e.g. picking up a glass and handing it gently to a patient) and perhaps specialized motor skills pertaining to COVID-19 tests, such as throat swabbing in cooperation with computer vision (Fig. 2.11).

    Fig. 2.11

    Patient-care robots: Amigo robot in the lab at Eindhoven Technical University (image courtesy of Bart van Overbeeke/Tech United Eindhoven). A robot providing interactive dialogues and information to ICU patients in an Italian hospital (image courtesy of Associated Press)

(3) A blood sampling robot called the Automated Venipuncture Device (AVD) was created by a joint research team of Rutgers University and Robert Wood Johnson University Hospital, in response to the overwhelming workload of medical staff, especially nurses, since the outbreak of the coronavirus. The new device can automatically locate the best insertion point on a vein and draw a blood sample quickly and accurately—on par with, or better than, a human. It is known that drawing blood from obviously visible veins, called palpable veins, is relatively easy and usually takes just one attempt, even for rookie nurses. Failing the first attempt means a second needle insertion or more, adding extra pain for the patients and consuming the nurses' precious time. During the COVID-19 crisis, the AVD has been called to the frontline, aiding nurses in taking blood samples efficiently. The efficiency is reported in experimental results published in [20]: a success rate of 87% was achieved over 31 volunteers, in comparison to manual insertion by humans at success rates between 27% and 60%. Humans show relatively large variation, as their success depends on the experience of the nurse, the difficulty of access to the veins, the lighting, the condition of the skin and muscles, etc.

However, the AVD is still a machine prototype, though its development has been sped up by the pandemic to relieve the workloads of medical personnel. There is still room for improvement in the accuracy rate, especially for cases involving difficult-to-access veins and unstable environments, for example, an ambulance ride or airborne hospital transport. The underlying technologies of the AVD are threefold. Firstly, it needs a robotic arm with a precision-engineered system of motors which positions and inserts a needle at the right spot, with the right force, at the right time on the arm. Secondly, the placement of the needle is guided by the combined signals of Near-Infrared (NIR) light and ultrasound imaging, and an AI algorithm which chooses the most suitable vein and the best part of the vein for cannulation. The location is selected by the algorithm based on 3D reconstructions of the imaging signals from both NIR and ultrasound, and the depth of insertion is carefully calculated from the 3D images, so that once the needle pierces the skin it can accurately and swiftly penetrate right into the centre of the vein lumen. Thirdly, the AVD is an integrated system that combines the imaging and AI functions mentioned above; it has built-in refrigerated sample storage and a centrifuge which analyses blood samples and generates reports on the spot (Fig. 2.12).
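Purely as a hypothetical illustration of the vein-selection logic described above, the snippet below ranks candidate vein segments (as might be reconstructed from the NIR and ultrasound signals) by diameter, depth and straightness; the fields, weights and values are invented for illustration and do not reflect the AVD's actual algorithm.

```python
# Hypothetical candidate vein segments from a 3D reconstruction (all values invented)
candidates = [
    {"id": "v1", "diameter_mm": 2.8, "depth_mm": 3.0, "straightness": 0.9},
    {"id": "v2", "diameter_mm": 1.6, "depth_mm": 6.5, "straightness": 0.7},
    {"id": "v3", "diameter_mm": 3.1, "depth_mm": 9.0, "straightness": 0.8},
]

def score(v):
    # Prefer wide, shallow, straight segments; penalize depth (weights assumed).
    return 2.0 * v["diameter_mm"] - 0.5 * v["depth_mm"] + 3.0 * v["straightness"]

best = max(candidates, key=score)
print("target vein:", best["id"], "score:", round(score(best), 2))
```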

Fig. 2.12

AVD prototype that draws and analyses blood with a built-in centrifuge (image courtesy of Rutgers)

(4) Robots at the hospital triage—Cruzr is a model of service robot developed by the Chinese company UBTECH and deployed during the COVID-19 epidemic in the People's Hospital of Shenzhen. It is designed for high efficiency, for example, real-time tracking of 200 patients' body temperatures per minute; nurses are notified immediately should anybody be detected as feverish. Cruzr and its team are now working in the Royal Brisbane and Women's Hospital and the Princess Alexandra Hospital as service assistants at triage, directing patients to different sections. The robots are equipped with AI for working autonomously and independently, connected as a team to a 5G cloud network in the hospital. Human control is totally spared, without any need to command the robots. A new batch of Cruzr robots will be working at a hospital in Melbourne in April 2020. They are part of the medical crew, without getting tired or infected during COVID-19. When they were working in Shenzhen back in January and February this year, they were dispatched to spray disinfectant inside and around the hospital areas in self-driving mode (Fig. 2.13).

    Fig. 2.13

    Cruzr working in action—a patrolling, b serving patients at triage, c information kiosk, and d spraying disinfectant. (image courtesy of Current Affair)
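The path-finding sketch promised in item (1) is given below: a minimal breadth-first search on a grid floor plan, standing in for the shortest-path optimization that a disinfection robot would run (real systems use richer maps and planners such as A*). The grid, start and goal are illustrative.

```python
from collections import deque

grid = [            # 0 = free cell, 1 = obstacle (illustrative floor plan)
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Breadth-first search: returns the shortest list of cells from start to goal."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected moves
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None   # no route exists

print(shortest_path((0, 0), (3, 3)))
```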

2.4.2 Robotics with AI at Home

Fear arose as the epidemic first emerged, causing much panic and changing everybody's lifestyle in this time of crisis. The outbreak of COVID-19 has grown into a global pandemic claiming hundreds of thousands of lives. The governments of many countries ordered social distancing and stay-home lockdown curfews as a means to slow down the spread of the virus. At the same time, fear of death has been instilled in everybody's mind; news of mass infection and death has suddenly overwhelmed us from almost every country around the world. In this darkest period, going from bad to worse, we were told to stay home, isolated, cutting off physical contact with our social circles. This psychological trauma has impacted us all, especially the elderly, who are the most vulnerable to the virus. Under self-isolation, the elderly may suffer from loneliness, fear and depression (Fig. 2.14).

Fig. 2.14

Robear—a robot bear designed as both an assistive machine and an NLP chatbot by Riken, a Japanese research institute

A research team from Heriot-Watt University, led by Professor Oliver Lemon, has dedicated itself to designing a pioneering robot for accompanying elders at home, something particularly useful during the outbreak of the coronavirus. The main feature of the robot is its conversational AI for human–robot interaction. Put simply, it is a speech interface built into a machine which can converse with elders naturally, the way a human does. A specific name for this type of robot is Socially Assistive Robots (SARs), which are capable of caregiving and performing simple tasks in addition to conversing with elders. SARs indeed relieve loneliness and stress, thereby improving elders' psychological well-being with 24/7 companionship. The techniques under the hood of SARs are an integration of computer vision, human–robot interaction that can be optimized by machine learning algorithms, localization, and human activity recognition and analysis—through which a SAR robot can know what the elder is doing, detect whether he/she is in danger and figure out what assistance he/she may need. To make the robot more human-like, Natural Language Processing (NLP) is applied for speech, and for empathy a branch of AI called Emotional Intelligence (EI) is exploited. Emotional intelligence is the capability to understand the elder's emotions from facial recognition, to choose the appropriate responding emotions by sensing the elder's tone and the words he/she uses in the dialogue, and to understand the effect a conversation has on the elder. Alana, a conversational AI system that grew out of the Amazon Alexa Prize challenge, is one such social conversational agent. A Beijing-based company called Turing specializes in designing chatbots, collections of NLP software programmes that can be embedded into any humanoid or robot shell, giving it the ability to chat naturally and interactively with the elderly.
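As a deliberately toy sketch of the dialogue loop such companions build on, the snippet below maps keywords in an elder's utterance to canned responses; real conversational AI such as Alana uses trained NLP models rather than keyword tables, so everything here is an illustrative stand-in.

```python
# Toy intent table: keyword -> canned response (illustrative only)
RESPONSES = {
    "lonely": "I'm here with you. Would you like to hear some music?",
    "medicine": "Your next dose is scheduled for 6 pm. Shall I remind you?",
    "news": "Here is today's headline summary...",
}

def reply(utterance: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    text = utterance.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Tell me more about that."

print(reply("I feel a bit lonely today"))
```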

2.4.3 Robotics with AI at Public Places

Flying drones, or Unmanned Aerial Vehicles (UAVs), are being used during the pandemic to partially replace human activities in the air, trying to meet the needs of societal activities. As a general rule of thumb, any human, even a law enforcer, is put at risk of exposure to the virus when he has to carry out field duties outdoors. Although nobody is allowed to roam the streets, certain orders and operations must still take place—these activities include, but are not limited to, patrolling to detect anybody who violates the social distancing rules, delivering food and essential items, providing virtual tours of places of interest in lieu of human visits, and even walking dogs on their owners' behalf. If all these activities had to be undertaken by merely mechanical flying drones, certain AI functions would have to be added to breathe life into those machines and make them intelligent. The following is a list of examples where UAVs have been tasked to serve homebound residents during COVID-19.

Lockdown patrol—under the lockdown decree, pockets of people still irresponsibly roam the streets, ignoring the law and neglecting their infection risk. Authorities deploy squads of police officers to patrol public places, warning offenders or issuing fines. More effective than human patrols, aerial surveillance using UAVs detects, through a bird's-eye view, any human presence in streets that are supposed to be empty. Patrol drones are not something totally new; the fundamental functions of UAVs, which enable them to fly specific scheduled routes, are upgraded with extra functions useful for COVID-19. For instance, the UAV needs to tell whether the detected humans are alone, in a pair, a trio or a gang; whether a person is wearing a mask; whether a person is showing signs of illness; and whether people are getting too close to each other for social distancing (a simple pairwise distance check is sketched below). All these new COVID-19-specific functions need to be programmed with AI algorithms. The Chinese government was the pioneer in using UAVs coupled with thermal cameras to spot, from the sky, pedestrians who may be sick. This technique proved successful in scanning crowds to pick out potential COVID-19 carriers. Variants of such surveillance drones have emerged with the add-on capability of two-way audio—the drone can listen to sounds and allows a law enforcer to speak through it to the target people remotely. Other add-ons are navigation by lidar, object recognition, human activity recognition, video analytics, autonomous flying abilities, obstacle avoidance, etc. Although AI is the centrepiece of these UAV applications, humans are still needed in some scenarios. For instance, AI and computer vision may detect crowds congregating in violation of the social distancing rule; a human operator may then need to speak through the loudspeaker to verbally persuade the crowd to disperse. The tone and the skills of negotiation, sometimes with a sense of humour, are still better handled by humans. British police also adopted UAVs for chasing crowds off the streets during COVID-19, but took it a little further by photographing violators and shaming them on Twitter, tweeting out their faces and the times and places of violation for public display (Fig. 2.15).
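The sketch below shows the pairwise social-distancing check mentioned above: given ground-plane coordinates (in metres) of people detected in an aerial frame, it flags any pair closer than an allowed distance. The positions and the 1.5 m threshold are illustrative assumptions.

```python
import itertools
import math

MIN_DISTANCE_M = 1.5                                  # assumed distancing rule

# Ground-plane (x, y) positions of detected people, in metres (illustrative)
detections = [(2.0, 3.0), (2.8, 3.4), (10.0, 1.0)]

# Check every pair of detections and flag those standing too close together.
for (i, a), (j, b) in itertools.combinations(enumerate(detections), 2):
    d = math.dist(a, b)
    if d < MIN_DISTANCE_M:
        print(f"violation: persons {i} and {j} are {d:.2f} m apart")
```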

Fig. 2.15

UAV warning pedestrians, as law enforcement, to wear masks in Beijing during the COVID-19 lockdown

Other than enforcing social distancing, some UAVs, nicknamed 'tour drones', are tasked with benevolent purposes, offering unique video recording and broadcasting functions and wide connectivity. Tour drones fly around well-known city centres, attractions and major transport hubs which used to be frequented by travellers. The drones capture eerie scenes in high-resolution video and broadcast them in real time to viewers isolated at home. This serves as a good cabin-fever remedy for anybody suffering from claustrophobic distress; the open view of quiet places, including those that used to be crowded or over-crowded, provides a serene sensation that soothes restlessness and irritability. DJDrones is one of the pioneers in providing free footage of virtual tours over quarantined cities that can be watched online by anybody. Other benevolent and practical applications of UAVs during COVID-19 are the delivery of food, medicine and essential supplies, even toilet paper! (Fig. 2.16).

Fig. 2.16

UAVs are used for benevolent purposes during COVID-19 lockdown; a virtual tour of deserted streets in Chicago; b delivering medical samples; c delivering toilet paper in time of urgency and d delivering car keys in car rental business

2.4.4 AI Beneath the Surface of Robots

Unmanned technologies have a long history of being powered by AI since their inception. Intelligent control by AI has largely transformed mechanical automation into autonomous machines which embrace all the advantages of machinery (24/7 operation, precision, freedom from biohazard, etc.), with AI adding an extra dimension of delicate functions. This 'intelligence' comes from local knowledge, gained through online analytics and sensing of the working environment, and from global knowledge, which offers deep insights from a macro-view by processing a massive amount of so-called big data. COVID-19 has served as a catalyst along the timeline of fusing AI into robotics, accelerating the pace of using the best of both as a solace in the time of crisis. It is anticipated that heavy investment and funding will continue to enhance the functionality of AI robots in the near future: current prototypes will become better, the latest prototypes will mature, and new hybrid uses of AI functions on robots will be attempted and tested. As a review, Table 2.1 summarizes some prominent AI functions, and their references, that have been applied in AI robotics during the COVID-19 crisis. The robotic applications share common AI functions among them. Ground robots generally require AI algorithms for localization [21], path finding [22] and computer vision [23] for autonomous navigation [24]. Medical robots have higher requirements in image processing [25] for precision. UAVs require navigation capabilities on par with those of ground robots, but with additional advances, such as balancing in the midst of wind turbulence [26].

Table 2.1 Prominent AI functions found across robotic operations during COVID-19 lockdown