Article

Discrete Human Activity Recognition and Fall Detection by Combining FMCW RADAR Data of Heterogeneous Environments for Independent Assistive Living

1
Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
2
School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK
3
School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
4
Department of Science and Technology, College of Ranyah, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
5
Faculty of Science, Northern Border University, Arar 91431, Saudi Arabia
6
School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley PA1 2BE, UK
7
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
8
James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
*
Author to whom correspondence should be addressed.
Electronics 2021, 10(18), 2237; https://doi.org/10.3390/electronics10182237
Submission received: 13 August 2021 / Revised: 8 September 2021 / Accepted: 10 September 2021 / Published: 12 September 2021

Abstract

Human activity monitoring is essential for a variety of applications in many fields, particularly healthcare. The goal of this research work is to develop a system that can effectively detect falls/collapses and classify other discrete daily living activities such as sitting, standing, walking, drinking, and bending. For this paper, a publicly accessible dataset is employed, captured at various geographical locations using a 5.8 GHz Frequency-Modulated Continuous-Wave (FMCW) RADAR. A total of ninety-nine participants, including young and elderly individuals, took part in the experimental campaign. During data acquisition, each aforementioned activity was recorded for 5–10 s. From the obtained data, we generated micro-Doppler signatures using the short-time Fourier transform in MATLAB. Subsequently, the micro-Doppler signatures were validated, trained, and tested using a state-of-the-art deep learning algorithm called Residual Neural Network (ResNet). The ResNet classifier was developed in Python and is used in this study to classify six distinct human activities. Furthermore, the metrics used to analyse the trained model's performance are precision, recall, F1-score, classification accuracy, and the confusion matrix. To test the resilience of the proposed method, two separate experiments were carried out: the trained ResNet models were tested on subject-independent scenarios and on unseen data of the above-mentioned human activities from diverse geographical spaces. The experimental results show that ResNet detected falls and the remaining daily living activities with decent accuracy.

1. Introduction

Radio Detection and Ranging (RADAR)-based technologies are often associated with military and defence applications, including the detection and tracking of aircraft and ships. While travelling, many of us have probably seen massive RADARs near airport runways spinning to monitor the surrounding area for incoming and departing aircraft [1]. Nevertheless, in recent years, RADAR-based technology has attracted the attention of a wide range of disciplines outside of military and air traffic control [2]. Automotive RADAR, for instance, is a relatively recent use of RADAR sensing technology that aids cars in navigating around other modes of transportation and objects [3,4,5]. Recently, RADAR-based technology has been proposed in the healthcare field to track everyday living activities at home and to monitor patients' vital signs, including breathing rate and heart rate [6,7,8]. RADAR technology is also used in human gesture recognition systems to monitor and identify complex movements made by individuals to interact with objects without pressing buttons or touching screens [9,10,11].
RADAR is no longer just of interest to a limited group of specialists and users but also to a wide variety of students, researchers, and businesses [12]. RADAR sensing spans a broad range of abilities and disciplines, from the design of frequency-dependent microprocessors and components to electromagnetic wave propagation to RADAR signal processing through smart Artificial Intelligence (AI) techniques [13]. As a consequence, researchers will very likely be required to deal with aspects of RADAR sensing as part of the design and development of a bigger system, whether it is smart vehicles, smart homes, smart phones, smart factory, or an array of RADAR sensors for future healthcare applications [14,15].
This article focuses on the healthcare application of RADAR systems, one of the domains most distinct from the conventional defence-oriented technologies. The integration of RADAR sensing and other technological innovations into healthcare services is attributable to the changing social and personal healthcare requirements linked to the global ageing population. The World Health Organisation (WHO) and the United Nations recently projected that by the year 2050, over a third of the world's population will be elderly (aged 65 and above), and according to the United Kingdom's Office for National Statistics, the country's population over the age of 85 is expected to nearly double in the next two decades [16,17]. As populations age, people are more likely to develop several chronic health conditions (also known as multimorbidity) and to experience critical, life-threatening events such as collapses or falls [18,19,20].
Due to the emergence of AI and advanced sensing techniques [21,22,23,24,25], there has been a lot of interest in utilising cutting-edge techniques to offer effective healthcare in private home settings [26,27,28]. Two primary aims are involved here. The first is maintaining the privacy and autonomy of individuals in their own familiar surroundings, as well as preventing hospitalisations and the disruption of established routines and daily habits, both of which are critical to individuals' well-being. The second is encouraging a proactive approach to healthcare in which technology can offer ongoing, reliable monitoring and early detection of subtle signs linked to deteriorating health, rather than responding only when severe health conditions arise.
A question that commonly arises is: what can RADAR technology provide in the healthcare domain? Recent research has shown that RADAR-based approaches are reliable for detecting the presence of individuals, tracking their movements in a given space, and characterising several body movements, ranging from large-scale movements (such as of the limbs or head) to small-scale movements such as motions of the chest and abdomen while breathing [29,30,31]. In regard to healthcare, research involving RADAR technology has mainly evolved in two ways: firstly, utilising RADAR systems and acquired data to estimate and monitor vital signs such as heart rate; secondly, observing daily activity routines and examining the regularities in them, building on the certainty that individuals practise everyday living activities such as food intake, preparation, or personal hygiene. The latter also involves detecting abnormalities in the regular daily pattern and predicting any possible risk, in particular a fall, which requires an immediate response.
In this paper, we have utilised a Frequency-Modulated Continuous-Wave (FMCW) RADAR system with Micro-Doppler (MD) signatures to capture fall/collapse especially for elderly people [32,33]. Other than this, to discriminate from fall activity, we also captured daily living human activities such as sitting, standing, walking, drinking, and bending. This research article offers a deep learning-based approach to timely recognise the aforementioned recurring everyday human activities. Figure 1 presents a complete framework of the proposed fall recognition scheme.
The remainder of this paper is organised as follows: Section 2 presents the state-of-the-art literature on contactless technology for healthcare domains. The information about the proposed system model is given in Section 3. The experimental findings are presented in Section 4, and lastly, Section 5 provides closing remarks as well as future research studies.

2. Literature Review

This section examines state-of-the-art research in different types of human activity recognition as well as applications of Machine Learning (ML). According to the articles [34,35], a variety of human activities were observed and recorded while the participants wore wearable sensors such as an accelerometer. Once the data for the distinct activities were gathered, they were processed with cutting-edge ML algorithms, for instance, Support Vector Machine (SVM), k-Nearest Neighbours (K-NN), and an ensemble learning approach called Random Forest (RF). The SVM classifier produced the leading results with a score of 91.5 per cent.
The research of [36,37] employed the FMCW RADAR scheme to acquire data on falls and related activities such as running, walking, jumping, stepping, and squatting from multiple subjects. The temporal changes, cross-sections, and Doppler were measured using the FMCW RADAR system. Subsequently, the data were processed via a cross-validation technique with the K-NN algorithm, yielding a 95.5 per cent accuracy score. This research reveals how frequency variations in wireless signals can be utilised to identify distinct human activities. Similar studies on multi-channel extraction were published in [38,39].
The authors in [40] utilised a standard dataset of fourteen indoor activities. The data were gathered with the use of a triaxial accelerometer sensor. Distinguishing static and dynamic activities was among the objectives of the study. After data wrangling, the RF algorithm was used to carry out a classification task. The static activities received a higher accuracy score of 92.16 per cent, whereas the dynamic activities received 80.0 per cent, giving an overall result of 85.17 per cent.
The study in [41] identified five distinct arm motions using Channel State Information (CSI) on Wi-Fi Orthogonal Frequency-Division Multiplexing (OFDM) signals. Different arm motions were performed by participants while standing between a computer and a Wi-Fi router transmitting wireless signals. The CSI was then recorded and the data were trained through the deep learning algorithm. The Long Short-Term Memory approach was selected since it achieved an accuracy rate of 96 per cent. Authors in [42,43,44] performed a similar study in terms of healthcare applications.
In [45], the authors utilised the CSI information to identify a particular individual. Different subjects walked through two devices while the data in the form of CSI were sent and stored. Throughout the experiment, the CSI data were acquired when an individual walked across the radio-frequency signals. Afterwards, the data were fed into the ML classifiers: Decision Trees (DT) and RF. The research study discovered that when only two individuals participated in the binary classification task, the algorithms performed better.
In [46], wearable smart watches were utilised to track the motions of table tennis players. The smart watch collected data on how the participants moved the table tennis racket in eight distinct motions, for instance, forehanded flick, forehanded attack, backhanded flick, and so forth. Following that, seven ML algorithms were employed to analyse the data, including DT, RF, SVM, and K-NN. With an accuracy score of 97.80 per cent, the RF approach was determined to be the dominant classifier in this study.

3. System Model

All conventional RADAR signal processing approaches employed for healthcare systems aim to describe the signatures of concern in three primary aspects. First, time: how the subject's position and location change over the course of an observation. Second, range: the physical distance between the target and the RADAR. Third, velocity: the large or small motions of the target, determined by assessing the induced frequency change and the Doppler effect. These three major characteristics of RADAR technology are often referred to collectively as the "RADAR cube" [47,48]. With the growing adoption of compact RADAR technology with multiple receiver channels, driven mainly by the automotive industry, a fourth related dimension has recently been added: the angle of arrival, or angular orientation, which can be deduced by comparing the signal received across the various receiving channels.

3.1. Frequency-Modulated Continuous-Wave (FMCW) RADAR

The FMCW RADAR is a type of sensor that, like a basic Continuous-Wave (CW) RADAR, emits continuous transmission power. Unlike CW RADAR, FMCW RADAR may vary its operating frequency during observations, meaning that the broadcast signal is modulated in frequency or in phase. It is these frequency shifts that make runtime (time-of-flight) measurements feasible. Basic CW RADAR systems without frequency modulation have the drawback of being unable to estimate target range, since they lack the timing mark required to precisely clock the transmit and receive cycle and convert it to range. Such a time reference for measuring the distance to stationary objects can be created by modulating the frequency of the broadcast signal. In this technique, a signal is transmitted that periodically rises or falls in frequency. When an echoed signal is received, its frequency profile arrives delayed by Δt, the change in runtime, as in the pulse RADAR approach. With pulse RADAR, however, the runtime must be measured directly, whereas in FMCW RADAR the frequency differences between the emitted and received signals are assessed instead [49].
The fundamental characteristics of FMCW RADAR are as follows. First, the ability to assess both the range of the target and its relative velocity simultaneously. Second, safety owing to the absence of high-power pulse radiation. Third, the ability to measure very short ranges; the minimum measurable range is comparable to the transmitted wavelength. Fourth, signal processing is performed at a low frequency after mixing, which considerably simplifies the processing circuits. Fifth, high precision of range measurement.
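To make the range measurement concrete, the sketch below (in Python, not the authors' code) computes the range implied by a measured beat frequency, together with the range resolution, using the 400 MHz sweep bandwidth quoted later in the paper; the sweep time and the example beat frequency are assumed values for illustration.

```python
# Illustrative sketch (not from the original study): mapping an FMCW beat
# frequency to target range. delta_T is an assumed sweep time.
c = 3e8          # speed of light (m/s)
delta_F = 400e6  # sweep bandwidth (Hz), matching the radar used in this study
delta_T = 1e-3   # chirp sweep time (s) -- assumption

def beat_to_range(f_beat_hz: float) -> float:
    """Range d implied by the beat frequency f_b = (2 * delta_F / (c * delta_T)) * d."""
    return f_beat_hz * c * delta_T / (2 * delta_F)

range_resolution = c / (2 * delta_F)  # 0.375 m for a 400 MHz sweep
print(beat_to_range(8000.0))          # an 8 kHz beat corresponds to 3.0 m
print(range_resolution)
```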
The structure of the FMCW RADAR system is illustrated in Figure 2a. This system comprises a waveform generator (WG), voltage-controlled oscillator (VCO), transmission antenna (Tx), frequency mixer (FM), receiving antenna (Rx), low-pass filter (LPF), analogue-to-digital converter (ADC), and digital signal processor (DSP). The WG produces the baseband FMCW RADAR signal, whose frequency varies over time as presented in Figure 2b. A single waveform over one period is also known as a "chirp signal".
In this setup, the baseband signal at the output of the ADC can be stated as

$$x[n] = \sum_{l=1}^{L} \alpha_l \exp\!\left\{ j 2\pi \left[ \left( \frac{2\Delta F}{c\,\Delta T}\, d_l + \frac{2 f_c}{c}\, v_l \right) n T_s + \frac{2 f_c}{c}\, d_l \right] \right\}$$

where $l$ ($l = 1, 2, \ldots, L$) denotes the object's index and $\alpha_l$ is the baseband signal's amplitude. $v_l$ stands for the $l$th object's relative velocity, and $d_l$ is the distance to the $l$th object. As shown in Figure 2b, $\Delta T$ is each chirp's sweep time and $\Delta F$ represents the bandwidth. Lastly, a single baseband chirp signal is sampled $N$ times in the ADC with sampling period $T_s$ [51].
In the frequency domain, we can acquire the baseband signal by applying the Fourier transform to Equation (1), which can be stated as

$$X[k] = \sum_{n=0}^{N-1} x[n] \exp\!\left( -j 2\pi \frac{n}{N} k \right)$$

where $k$ ($k = 0, 1, \ldots, K-1$) denotes the frequency index. The spectrogram can be obtained by collecting the frequency-domain baseband signal, and it depicts the variation in an object's range with respect to time. Stated alternatively, the frequency-domain outcome of the FMCW RADAR scheme can be regarded as the range domain. The gathered spectrogram of the baseband signal over $N_p$ intervals can be stated as
$$\mathbf{X}_{N_p} = \left[ \mathbf{X}^{(1)}, \mathbf{X}^{(2)}, \ldots, \mathbf{X}^{(N_p)} \right]$$

where $\mathbf{X}^{(i)} = \left[ X^{(i)}[0],\, X^{(i)}[1],\, \ldots,\, X^{(i)}[K-1] \right]^{T}$ denotes the frequency-domain baseband signal as a vector, and $i$ signifies the interval's index.
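The following minimal sketch (an illustration under assumed parameter values, not the authors' processing chain) simulates Equation (1) for a single target and applies Equations (2) and (3) to build the range-time map:

```python
import numpy as np

# Single-target simulation of Equation (1), per-chirp FFTs (Equation (2)),
# stacked into the range-time map of Equation (3). All parameter values are
# illustrative assumptions, not the study's settings.
c, f_c = 3e8, 5.8e9                   # speed of light, carrier frequency (Hz)
delta_F, delta_T = 400e6, 1e-3        # sweep bandwidth (Hz) and sweep time (s)
N, N_p = 256, 64                      # samples per chirp, number of chirps
T_s = delta_T / N                     # ADC sampling period within a chirp
d0, v = 2.0, 0.5                      # initial range (m), radial velocity (m/s)

n = np.arange(N)
frames = []
for i in range(N_p):
    d = d0 + v * i * delta_T          # target range at the i-th chirp
    phase = 2 * np.pi * ((2 * delta_F * d / (c * delta_T)
                          + 2 * f_c * v / c) * n * T_s
                         + 2 * f_c * d / c)
    x = np.exp(1j * phase)            # Equation (1) with L = 1, alpha_1 = 1
    frames.append(np.fft.fft(x))      # Equation (2): range (frequency) domain
X_Np = np.column_stack(frames)        # Equation (3): K x N_p range-time map
print(np.argmax(np.abs(X_Np[:, 0])))  # peak bin encodes the target's range
```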
Moreover, Figure 3 depicts the FMCW RADAR operating principle, in which radio-frequency signals are sent and received as they come into contact with any target within range. Every motion of the human body generates a unique MD signature that can be used to differentiate between diverse daily activities.
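To give a sense of scale for these signatures, the Doppler shift induced by a body segment moving at radial velocity $v$ is $f_D = 2 v f_c / c$. At the 5.8 GHz carrier used here, a limb moving at 1 m/s yields

$$f_D = \frac{2 \times 1\ \text{m/s} \times 5.8 \times 10^{9}\ \text{Hz}}{3 \times 10^{8}\ \text{m/s}} \approx 38.7\ \text{Hz},$$

so a fast fall (at several m/s) traces a much steeper, stronger MD signature than slow activities such as sitting or bending.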

3.2. Residual Neural Network (ResNet) for Classification

In the past, ML-based approaches have been effectively applied to a variety of classification problems [52,53,54,55,56]. In this work, we used a deep learning-based scheme called ResNet to identify different human activities and detect falls from the generated spectrograms. Training such a deep neural network requires skip connections: the signal feeding into a layer is also added to the output of a layer located higher up the stack. The goal of training a deep neural network is to make it model a target function h(x). When the input is linked to the network's output, for example by summing through a skip connection, the network is constrained to model f(x) = h(x) − x instead of h(x) [57]. This procedure is characterised as residual learning. Moreover, for training the ResNet algorithm, the optimal parameters were obtained using a grid search cross-validation technique with CV = 5. This technique uses the fit-and-score principle to obtain suitable hyperparameters for training the ML models. The primary hyperparameters procured for this study are listed below; a minimal model sketch using them follows the list.
  • epochs = 15 (Experiment 1) / 50 (Experiment 2)
  • activation = ReLU
  • optimizer = Adam
  • loss = categorical cross-entropy
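The paper does not specify the exact ResNet topology, so the following is a minimal, hedged sketch of a residual (skip-connection) block and classifier head in TensorFlow/Keras using the hyperparameters above; the input shape and layer widths are assumptions.

```python
from tensorflow.keras import layers, models

# Minimal residual block: the conv path learns the residual f(x) = h(x) - x,
# and the skip-connection addition restores h(x) = f(x) + x.
def residual_block(x, filters):
    shortcut = x                                    # skip connection carries the input forward
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:               # match channel count for the addition
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([y, shortcut])                 # the skip connection
    return layers.Activation("relu")(y)

inputs = layers.Input(shape=(128, 128, 3))          # assumed spectrogram image size
x = residual_block(inputs, 32)
x = layers.MaxPooling2D()(x)
x = residual_block(x, 64)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(6, activation="softmax")(x)  # six activity classes
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=15)            # 15 epochs (Experiment 1), 50 (Experiment 2)
```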

4. Experimental Results and Discussion

4.1. Data Collection

Every body motion creates a distinct pattern in the MD signatures that can be utilised to distinguish between various sorts of human body motions. When the FMCW RADAR encounters any activity or motion, such as falling, the radio-frequency signal is broadcast and received within range. In this paper, we employed the available dataset from the recently completed project "Intelligent Radio-Frequency Sensing for Falls and Health Prediction (INSHEP)", supported by the Engineering and Physical Sciences Research Council (EPSRC) [58]; the dataset is available at http://researchdata.gla.ac.uk/848/ (accessed on 13 June 2021).
Figure 4 shows the various spaces/rooms where experiments were conducted to acquire data. As shown in the figure, the datasets were obtained using the FMCW RADAR operating at 5.8 GHz (C-band) over 400 MHz bandwidth with +18 dBm output power. An antenna with a gain of +17 dBi was attached to the FMCW RADAR. A total of 99 participants, ranging in age from 21 to 99 years, took part in the whole experimental study across the different spaces. The data for the walking activity were collected for 10 s, and for the rest of the activities, the data were recorded for 5 s. Each activity was repeated three times. Lastly, MD signatures were generated from the acquired data utilising the short-time Fourier transform.
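The MD signatures were generated in MATLAB; as a rough equivalent, the sketch below shows how a short-time Fourier transform can be applied in Python with SciPy. The slow-time sampling rate and window settings are assumptions for illustration, with a synthetic Doppler tone standing in for a real range-bin signal.

```python
import numpy as np
from scipy import signal

# Hedged sketch of MD-signature generation via STFT. `baseband` stands in for
# a slow-time radar signal from a selected range bin; fs, window length, and
# overlap are assumed values, not the study's MATLAB settings.
fs = 1000                                   # assumed slow-time sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                 # 5 s recording, as for the non-walking activities
baseband = np.exp(1j * 2 * np.pi * 40 * t)  # toy signal: constant ~40 Hz Doppler tone

f, tt, Zxx = signal.stft(baseband, fs=fs, window="hann",
                         nperseg=256, noverlap=192, return_onesided=False)
spectrogram_db = 20 * np.log10(np.abs(Zxx) + 1e-12)  # MD signature in dB
print(spectrogram_db.shape)                 # (frequency bins, time frames)
```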
In this paper, we have performed simulations on six distinct human activities, which were recorded by FMCW RADAR technology in various locations. The recorded activities are: (1) falling on a safe surface, (2) sitting on a chair, (3) standing up from a chair, (4) walking back and forth, (5) drinking from a cup while standing, and (6) bending/leaning down to pick up an item. To gather data, each person was instructed to repeat the same activity three times. Following that, spectrograms of each activity were generated from the acquired data. Samples of the spectrograms are shown in Figure 5.

4.2. Experimental Results

The ResNet method utilised in this study to classify the six human activities was developed in Python, making extensive use of the TensorFlow and NumPy libraries. The performance of the trained models was evaluated using the following metrics: precision, recall, F1-score, classification accuracy, the confusion matrix, and model accuracy/loss against the number of epochs. Classification accuracy in this research study can be expressed as the fraction of human activities accurately recognised out of the total number of human activities.
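As a hedged illustration (the paper does not state its exact evaluation code), all of these metrics can be computed directly with scikit-learn; the label lists below are placeholders for the test labels and ResNet predictions:

```python
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix)

# Placeholder labels standing in for the test set and the model's predictions.
y_true = ["falling", "falling", "walking", "sitting", "bending", "drinking"]
y_pred = ["falling", "falling", "walking", "sitting", "drinking", "drinking"]

print(accuracy_score(y_true, y_pred))        # fraction of activities recognised correctly
print(confusion_matrix(y_true, y_pred))      # per-class hits and misclassifications
print(classification_report(y_true, y_pred, zero_division=0))  # precision/recall/F1 per class
```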
In this research work, two different experimental studies were performed using the acquired MD signatures of different human activities. In the first experiment, exclusively space 1 data were used for simulations, while in the second experiment, data from various spaces were merged. Both experiments were carried out in a subject-independent scenario, meaning that the training and testing datasets come from different participants. This reveals the generalisation ability of the ResNet classifier, which is significant for ML-based approaches. Moreover, the reason for merging data from different spaces in the second experiment was to train an environment-aware ML model that is more reliable and robust. In total, 1026 MD signatures were used to conduct both experimental studies. These experiments are discussed in detail in the following subsections, after a brief sketch of the subject-independent split.
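One way to enforce such a subject-independent split, offered here as an assumption rather than the authors' documented procedure, is to group the MD signatures by participant ID so that no subject appears in both sets; the arrays below are placeholders:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Placeholder arrays standing in for the 1026 MD signatures, their activity
# labels (6 classes), and the participant each signature came from.
X = np.random.rand(1026, 64)                           # e.g. flattened spectrogram features
y = np.random.randint(0, 6, size=1026)                 # activity labels
participant_ids = np.random.randint(0, 99, size=1026)  # subject IDs

# Grouping by participant guarantees no subject appears in both sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.34, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=participant_ids))
X_train, X_test, y_train, y_test = X[train_idx], X[test_idx], y[train_idx], y[test_idx]
assert set(participant_ids[train_idx]).isdisjoint(participant_ids[test_idx])
```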

4.2.1. Experiment 1

This study comprises six human activities; to conduct the first experiment, 6 × 40 (≈66%) MD signatures were used for training and 6 × 20 (≈34%) for testing. Overall, 360 MD signatures were employed in the first experiment. Given the size of the acquired dataset, the number of epochs to train the model was set at 15. Once the model was trained, its performance was assessed using several metrics typically used for ML. Figure 6a presents the confusion matrix of the first experiment. As can be noted, most activities, including falling, have no misclassifications, except bending and standing, which have three and two misclassified points, respectively. Figure 6b presents the ResNet classifier accuracy, whereas Figure 6c presents the loss against the number of epochs. As the number of epochs increased, the ML model achieved an accuracy rate between 0.9 and 1.0, and the model loss was recorded between 0.1 and 0.3. Moreover, the full classification report of the ResNet algorithm is exhibited in Table 1. As can be noted under Experiment 1, the majority of the activity classes showed a precision, recall, and F1-score of 100%, leading to an overall accuracy of 96%.

4.2.2. Experiment 2

The second experiment was conducted to further test the robustness of the ResNet classifier. To this end, the space 1 MD signatures employed in the first experiment were randomly merged with MD signatures from the other spaces. In this experiment, 6 × 71 (≈64%) MD signatures were used for training and 6 × 40 (≈36%) for testing; overall, 666 MD signatures were exploited. As the dataset size increased for the second experiment, the number of epochs used to train the model was raised from 15 to 50. Figure 7a exhibits the confusion matrix of the second experiment. As can be observed, the targeted activity (falling) was detected by the classifier with 100% accuracy, whereas the remaining activities have a few misclassifications. This is because data from several spaces were arbitrarily merged in order to construct a more intricate ML model. Furthermore, Figure 7b shows the ResNet model accuracy, and Figure 7c the model loss. Over the epochs, the model accuracy and loss were recorded consistently between 0.9 and 1.0 and between 0.1 and 0.2, respectively. Lastly, Table 1 lists the complete classification report of all the activities trained through the ResNet algorithm. As can be seen for Experiment 2, the falling activity class showed a precision of 100%, a recall of 90%, and an F1-score of 95%, while the walking activity class attained a precision, recall, and F1-score of 100% in both experiments. However, the overall accuracy dropped from 96% to 85%, since the remaining activities (bending, drinking, sitting, and standing) showed slightly lower percentages. The reasons are that these classes have highly similar data points and that the second experiment was designed to be more intricate.

5. Conclusions and Future Work

This research aimed to exploit an existing Frequency-Modulated Continuous-Wave (FMCW) RADAR dataset to detect and monitor activities of daily life. The data were collected in various spaces from 99 participants (young and elderly). In this paper, we presented preliminary findings for a scheme that utilises the FMCW RADAR to recognise several human activities, including falling, sitting, standing, walking, drinking, and bending. Participants were asked to perform the six aforementioned daily living activities in various geographic locations as part of the experimental investigation. The micro-Doppler signatures from the RADAR system were employed as primary data and subsequently validated, trained, and tested using ResNet. ResNet is a prominent deep learning technique that helps minimise the overfitting/underfitting problem through a unique skip-connection approach. Two different experiments were conducted in this work to investigate the robustness of the applied scheme. The trained deep learning models were tested on unseen data of discrete human activities; the unseen data were intended to be subject-independent as well as geographically diverse. The experimental findings revealed that ResNet detected the falling activity with 100% accuracy in both experiments. Moreover, in Experiment 1 and Experiment 2, the overall accuracy was 96% and 85%, respectively. In the Experiment 2 findings, the accuracy dropped slightly due to the additional complexity of entirely unseen data from various geographical spaces along with subject-independent instances. In future work, we aim to experiment with different cutting-edge wireless sensing techniques, such as software-defined radio, as well as advanced deep learning algorithms, such as generative adversarial networks, to enhance the performance of the proposed scheme.

Author Contributions

Conceptualization, U.S., S.A.S., J.A. and Q.H.A.; formal analysis, U.S., S.Y.S., S.A.S., J.A., A.A.A., T.A., N.R., A.A. and Q.H.A.; funding acquisition, S.A.S.; investigation, S.A.S., J.A., A.A.A., T.A., N.R., A.A. and Q.H.A.; methodology, U.S.; project administration, S.A.S.; resources, S.Y.S.; software, U.S.; supervision, S.A.S.; validation, J.A., A.A.A., T.A., N.R. and A.A.; writing—original draft preparation, U.S.; writing—review and editing, U.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Taif University, Taif, Saudi Arabia, through the Taif University Research Grant under Project TURSP-2020/277.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zyweck, A.; Bogner, R.E. Radar target classification of commercial aircraft. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 598–606. [Google Scholar] [CrossRef]
  2. Shah, S.A.; Fioranelli, F. RF sensing technologies for assisted daily living in healthcare: A comprehensive review. IEEE Aerosp. Electron. Syst. Mag. 2019, 34, 26–44. [Google Scholar] [CrossRef] [Green Version]
  3. Major, B.; Fontijne, D.; Ansari, A.; Teja Sukhavasi, R.; Gowaikar, R.; Hamilton, M.; Lee, S.; Grzechnik, S.; Subramanian, S. Vehicle Detection with Automotive Radar Using Deep Learning on Range-Azimuth-Doppler Tensors. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
  4. El-Hassan, F.T. Experimenting with sensors of a low-cost prototype of an autonomous vehicle. IEEE Sens. J. 2020, 20, 13131–13138. [Google Scholar] [CrossRef]
  5. Subramanian, V.; Burks, T.F.; Arroyo, A. Development of machine vision and laser radar based autonomous vehicle guidance systems for citrus grove navigation. Comput. Electron. Agric. 2006, 53, 130–143. [Google Scholar] [CrossRef]
  6. Rehman, M.; Shah, R.A.; Khan, M.B.; Ali, N.A.A.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Shaha, S.A.; Yang, X.; Alomainy, A.; et al. Contactless Small-Scale Movement Monitoring System Using Software Defined Radio for Early Diagnosis of COVID-19. IEEE Sens. J. 2021, 15, 17180–17188. [Google Scholar] [CrossRef]
  7. Yang, X.; Shah, S.A.; Ren, A.; Zhao, N.; Fan, D.; Hu, F.; Rehman, M.U.; von Deneen, K.M.; Tian, J. Wandering pattern sensing at S-band. IEEE J. Biomed. Health Inform. 2017, 22, 1863–1870. [Google Scholar] [CrossRef]
  8. Fioranelli, F.; Le Kernec, J.; Shah, S.A. Radar for health care: Recognizing human activities and monitoring vital signs. IEEE Potentials 2019, 38, 16–23. [Google Scholar] [CrossRef] [Green Version]
  9. Wan, Q.; Li, Y.; Li, C.; Pal, R. Gesture Recognition for Smart Home Applications Using Portable Radar Sensors. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 6414–6417. [Google Scholar]
  10. Zhang, Z.; Tian, Z.; Zhou, M. Latern: Dynamic continuous hand gesture recognition using FMCW radar sensor. IEEE Sens. J. 2018, 18, 3278–3289. [Google Scholar] [CrossRef]
  11. Taylor, W.; Shah, S.A.; Dashtipour, K.; Zahid, A.; Abbasi, Q.H.; Imran, M.A. An intelligent non-invasive real-time human activity recognition system for next-generation healthcare. Sensors 2020, 20, 2653. [Google Scholar] [CrossRef]
  12. Nkwari, P.; Sinha, S.; Ferreira, H. Through-the-wall radar imaging: A review. IETE Tech. Rev. 2018, 35, 631–639. [Google Scholar] [CrossRef]
  13. Lang, P.; Fu, X.; Martorella, M.; Dong, J.; Qin, R.; Meng, X.; Xie, M. A comprehensive survey of machine learning applied to radar signal processing. arXiv 2020, arXiv:2009.13702. [Google Scholar]
  14. Fioranelli, F.; Shah, S.A.; Li, H.; Shrestha, A.; Yang, S.; Le Kernec, J. Radar sensing for healthcare. Electron. Lett. 2019, 55, 1022–1024. [Google Scholar] [CrossRef] [Green Version]
  15. Tahera, K.; Ramzan, N.; Ahmed, S.; Ur-Rehman, M. Advances in sensor technologies in the era of smart factory and industry 4.0. Sensors 2020, 20, 6783. [Google Scholar]
  16. He, W.; Goodkind, D.; Kowal, P.R. An Aging World: 2015; International Population Reports; U.S. Department of Health and Human Services National Institutes of Health: Washington, DC, USA, 2016.
  17. Kinsella, K.G.; Phillips, D.R. Global Aging: The Challenge of Success. Available online: http://ereserve.library.utah.edu/Annual/SOC/3650/Nathenson/soc3650globalaging.pdf (accessed on 13 June 2021).
  18. Kannus, P.; Parkkari, J.; Niemi, S.; Palvanen, M. Fall-induced deaths among elderly people. Am. J. Public Health 2005, 95, 422–424. [Google Scholar] [CrossRef] [PubMed]
  19. Ashleibta, A.M.; Taha, A.; Khan, M.A.; Taylor, W.; Tahir, A.; Zoha, A.; Abbasi, Q.H.; Imran, M.A. 5g-enabled contactless multi-user presence and activity detection for independent assisted living. Sci. Rep. 2021, 11, 1–15. [Google Scholar]
  20. Gibson, R.M.; Amira, A.; Ramzan, N.; Casaseca-de-la-Higuera, P.; Pervez, Z. Matching pursuit-based compressive sensing in a wearable biomedical accelerometer fall diagnosis device. Biomed. Signal Process. Control 2017, 33, 96–108. [Google Scholar] [CrossRef] [Green Version]
  21. Dong, B.; Ren, A.; Shah, S.A.; Hu, F.; Zhao, N.; Yang, X.; Haider, D.; Zhang, Z.; Zhao, W.; Abbasi, Q.H. Monitoring of atopic dermatitis using leaky coaxial cable. Healthc. Technol. Lett. 2017, 4, 244–248. [Google Scholar] [CrossRef]
  22. Yang, X.; Shah, S.A.; Ren, A.; Fan, D.; Zhao, N.; Zheng, S.; Zhao, W.; Wang, W.; Soh, P.J.; Abbasi, Q.H. S-band sensing-based motion assessment framework for cerebellar dysfunction patients. IEEE Sens. J. 2018, 19, 8460–8467. [Google Scholar] [CrossRef] [Green Version]
  23. Haider, D.; Ren, A.; Fan, D.; Zhao, N.; Yang, X.; Shah, S.A.; Hu, F.; Abbasi, Q.H. An efficient monitoring of eclamptic seizures in wireless sensors networks. Comput. Electr. Eng. 2019, 75, 16–30. [Google Scholar] [CrossRef]
  24. Tanoli, S.A.K.; Rehman, M.; Khan, M.B.; Jadoon, I.; Ali Khan, F.; Nawaz, F.; Shah, S.A.; Yang, X.; Nasir, A.A. An experimental channel capacity analysis of cooperative networks using Universal Software Radio Peripheral (USRP). Sustainability 2018, 10, 1983. [Google Scholar] [CrossRef] [Green Version]
  25. Yang, X.; Fan, D.; Ren, A.; Zhao, N.; Shah, S.A.; Alomainy, A.; Ur-Rehman, M.; Abbasi, Q.H. Diagnosis of the Hypopnea syndrome in the early stage. Neural Comput. Appl. 2020, 32, 855–866. [Google Scholar] [CrossRef] [Green Version]
  26. Bohr, A.; Memarzadeh, K. The rise of artificial intelligence in healthcare Applications. In Artificial Intelligence in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 25–60. [Google Scholar]
  27. Liaqat, S.; Dashtipour, K.; Shah, S.A.; Rizwan, A.; Alotaibi, A.A.; Althobaiti, T.; Arshad, K.; Assaleh, K.; Ramzan, N. Novel Ensemble Algorithm for Multiple Activity Recognition in Elderly People Exploiting Ubiquitous Sensing Devices. IEEE Sens. J. 2021, 16, 18214–18221. [Google Scholar] [CrossRef]
  28. Rehman, M.; Shah, R.A.; Khan, M.B.; AbuAli, N.A.; Shah, S.A.; Yang, X.; Alomainy, A.; Imran, M.A.; Abbasi, Q.H. RF Sensing Based Breathing Patterns Detection Leveraging USRP Devices. Sensors 2021, 21, 3855. [Google Scholar] [CrossRef]
  29. Bhattacharya, A.; Vaughan, R. Deep learning radar design for breathing and fall detection. IEEE Sens. J. 2020, 20, 5072–5085. [Google Scholar] [CrossRef]
  30. Tang, C.; Vishwakarma, S.; Li, W.; Adve, R.; Julier, S.; Chetty, K. Augmenting Experimental Data with Simulations to Improve Activity Classification in Healthcare Monitoring. In Proceedings of the 2021 IEEE Radar Conference (RadarConf21), Virtual, 10–14 May 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
  31. Karayaneva, Y.; Sharifzadeh, S.; Li, W.; Jing, Y.; Tan, B. Unsupervised Doppler Radar Based Activity Recognition for e-Healthcare. IEEE Access 2021, 9, 62984–63001. [Google Scholar] [CrossRef]
  32. Stove, A.G. Linear FMCW Radar Techniques. IEE Proc. F Radar Signal Process. 1992, 139, 343–350. [Google Scholar] [CrossRef]
  33. Chen, V.C. Advances in Applications of Radar Micro-Doppler Signatures. In Proceedings of the IEEE Conference on Antenna Measurements & Applications (CAMA), Antibes Juan-les-Pins, France, 16–19 November 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 1–4. [Google Scholar]
  34. Shah, S.A.; Fan, D.; Ren, A.; Zhao, N.; Yang, X.; Tanoli, S.A.K. Seizure episodes detection via smart medical sensing system. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 4363–4375. [Google Scholar] [CrossRef] [Green Version]
  35. Chin, Z.H.; Ng, H.; Yap, T.T.V.; Tong, H.L.; Ho, C.C.; Goh, V.T. Daily activities classification on human motion primitives detection dataset. In Computational Science and Technology; Springer: Berlin/Heidelberg, Germany, 2019; pp. 117–125. [Google Scholar]
  36. Ding, C.; Zou, Y.; Sun, L.; Hong, H.; Zhu, X.; Li, C. Fall Detection with Multi-Domain Features by a Portable FMCW Radar. In Proceedings of the IEEE MTT-S International Wireless Symposium (IWS), Guangzhou, China, 19–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–3. [Google Scholar]
  37. Shah, S.A.; Fioranelli, F. Human Activity Recognition: Preliminary Results for Dataset Portability Using FMCW Radar. In Proceedings of the International Radar Conference (RADAR), Toulon, France, 23–27 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  38. Liu, X.; Jia, M.; Zhang, X.; Lu, W. A novel multichannel Internet of things based on dynamic spectrum sharing in 5G communication. IEEE Internet Things J. 2018, 6, 5962–5970. [Google Scholar] [CrossRef]
  39. Liu, X.; Zhang, X. NOMA-based resource allocation for cluster-based cognitive industrial internet of things. IEEE Trans. Ind. Inform. 2019, 16, 5379–5388. [Google Scholar] [CrossRef]
  40. Jalal, A.; Quaid, M.A.K.; Kim, K. A wrist worn acceleration based human motion analysis and classification for ambient smart home system. J. Electr. Eng. Technol. 2019, 14, 1733–1739. [Google Scholar] [CrossRef]
  41. Zhang, P.; Su, Z.; Dong, Z.; Pahlavan, K. Complex Motion Detection Based on Channel State Information and LSTM-RNN. In Proceedings of the 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 756–760. [Google Scholar]
  42. Anjomshoa, F.; Aloqaily, M.; Kantarci, B.; Erol-Kantarci, M.; Schuckers, S. Social behaviometrics for personalized devices in the internet of things era. IEEE Access 2017, 5, 12199–12213. [Google Scholar] [CrossRef]
  43. Oueida, S.; Kotb, Y.; Aloqaily, M.; Jararweh, Y.; Baker, T. An edge computing based smart healthcare framework for resource management. Sensors 2018, 18, 4307. [Google Scholar] [CrossRef] [Green Version]
  44. Al-Khafajiy, M.; Otoum, S.; Baker, T.; Asim, M.; Maamar, Z.; Aloqaily, M.; Taylor, M.; Randles, M. Intelligent control and security of fog resources in healthcare systems via a cognitive fog model. ACM Trans. Internet Technol. 2021, 21, 1–23. [Google Scholar] [CrossRef]
  45. Nipu, M.N.A.; Talukder, S.; Islam, M.S.; Chakrabarty, A. Human Identification Using Wifi Signal. In Proceedings of the Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Sydney, NSW, Australia, 14–18 March 2016; IEEE: Piscataway, NJ, USA, 2018; pp. 300–304. [Google Scholar]
  46. Zhang, H.; Fu, Z.; Shu, K.I. Recognizing ping-pong motions using inertial data based on machine learning classification algorithms. IEEE Access 2019, 7, 167055–167064. [Google Scholar] [CrossRef]
  47. Gurbuz, S.Z.; Amin, M.G. Radar-based human-motion recognition with deep learning: Promising applications for indoor monitoring. IEEE Signal Process. Mag. 2019, 36, 16–28. [Google Scholar] [CrossRef]
  48. Le Kernec, J.; Fioranelli, F.; Ding, C.; Zhao, H.; Sun, L.; Hong, H.; Lorandel, J.; Romain, O. Radar signal processing for sensing in assisted living: The challenges associated with real-time implementation of emerging algorithms. IEEE Signal Process. Mag. 2019, 36, 29–41. [Google Scholar] [CrossRef] [Green Version]
  49. Jankiraman, M. FMCW Radar Design; Artech House: London, UK, 2018. [Google Scholar]
  50. Kang, S.W.; Jang, M.H.; Lee, S. Identification of human motion using radar sensor in an indoor environment. Sensors 2021, 21, 2305. [Google Scholar] [CrossRef] [PubMed]
  51. Mahafza, B.R. Radar Systems Analysis and Design Using MATLAB; Chapman and Hall/CRC: Boca Raton, FL, USA, 2005. [Google Scholar]
  52. Saeed, U.; Jan, S.U.; Lee, Y.D.; Koo, I. Fault diagnosis based on extremely randomized trees in wireless sensor networks. Reliab. Eng. Syst. Saf. 2021, 205, 107284. [Google Scholar] [CrossRef]
  53. Jan, S.U.; Lee, Y.D.; Shin, J.; Koo, I. Sensor fault classification based on support vector machine and statistical time-domain features. IEEE Access 2017, 5, 8682–8690. [Google Scholar] [CrossRef]
  54. Saeed, U.; Lee, Y.D.; Jan, S.U.; Koo, I. CAFD: Context-aware fault diagnostic scheme towards sensor faults utilizing machine learning. Sensors 2021, 21, 617. [Google Scholar] [CrossRef]
  55. Ahmad, Z.; Rai, A.; Maliuk, A.S.; Kim, J.M. Discriminant feature extraction for centrifugal pump fault diagnosis. IEEE Access 2020, 8, 165512–165528. [Google Scholar] [CrossRef]
  56. Saeed, U.; Shah, S.Y.; Zahid, A.; Anjum, N.; Ahmad, J.; Imran, M.A.; Abbasi, Q.H.; Shaha, S.A. Wireless Channel Modelling for Identifying Six Types of Respiratory Patterns with SDR Sensing and Deep Multilayer Perceptron. IEEE Sens. J. 2021. [Google Scholar] [CrossRef]
  57. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  58. Fioranelli, F.; Shah, S.A.; Li, H.; Shrestha, A.; Yang, S.; Le Kernec, J. Radar Signatures of Human Activities. 2019. Available online: http://researchdata.gla.ac.uk/848/ (accessed on 13 June 2021).
Figure 1. Framework of the proposed contactless system model for fall detection.
Figure 2. FMCW RADAR: (a) structure; (b) signal produced by the waveform generator [50].
Figure 3. Block diagram of the FMCW RADAR scheme for distinct activity recognition.
Figure 4. Diverse spaces used to acquire data on various human activities by employing participants of distinct ages [58].
Figure 5. Samples of the acquired micro-Doppler signatures of six different activities.
Figure 6. Experiment 1 results: (a) confusion matrix, (b) ResNet model accuracy, and (c) loss versus number of epochs.
Figure 7. Experiment 2 results: (a) confusion matrix, (b) ResNet model accuracy, and (c) loss versus number of epochs.
Table 1. Experiments 1 and 2 classification report of six distinct activities trained by residual neural network.

Activity  | Experiment 1                  | Experiment 2
          | Precision | Recall | F1-Score | Precision | Recall | F1-Score
Bending   | 94%       | 85%    | 89%      | 65%       | 82%    | 73%
Drinking  | 87%       | 100%   | 93%      | 79%       | 75%    | 77%
Falling   | 100%      | 100%   | 100%     | 100%      | 90%    | 95%
Sitting   | 95%       | 100%   | 98%      | 90%       | 90%    | 90%
Standing  | 100%      | 90%    | 95%      | 86%       | 75%    | 80%
Walking   | 100%      | 100%   | 100%     | 100%      | 100%   | 100%
Overall Accuracy: Experiment 1 = 96%, Experiment 2 = 85%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

