Review

Development and Progress in Sensors and Technologies for Human Emotion Recognition

1 School of Computer Science, Faculty of Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
2 School of Engineering, Faculty of Science and Engineering, Macquarie University, Sydney, NSW 2109, Australia
3 School of Computer and Information Sciences, University of Hyderabad, Hyderabad, Telangana 500046, India
* Author to whom correspondence should be addressed.
Sensors 2021, 21(16), 5554; https://doi.org/10.3390/s21165554
Submission received: 19 June 2021 / Revised: 8 August 2021 / Accepted: 13 August 2021 / Published: 18 August 2021
(This article belongs to the Special Issue Sensor Technology for Improving Human Movements and Postures)

Abstract

With the advancement of human-computer interaction, robotics, and especially humanoid robots, there is an increasing trend toward human-to-human communication over online platforms (e.g., Zoom). This has become more significant in recent years due to the COVID-19 pandemic. The increased use of online platforms for communication signifies the need to build efficient and more interactive human emotion recognition systems. In a human emotion recognition system, the physiological signals of human beings are collected, analyzed, and processed with the help of dedicated learning techniques and algorithms. With the proliferation of emerging technologies, e.g., the Internet of Things (IoT), the future Internet, and artificial intelligence, there is a high demand for building scalable, robust, efficient, and trustworthy human emotion recognition systems. In this paper, we present the development and progress in sensors and technologies to detect human emotions. We review the state-of-the-art sensors used for human emotion recognition and different types of activity monitoring. We present the design challenges and provide practical references for such human emotion recognition systems in the real world. Finally, we discuss the current trends in applications and explore future research directions to address issues such as scalability, security, trust, privacy, transparency, and decentralization.

1. Introduction

Emotion is significant in our daily life, and it plays a vital role in how we think, react, and behave [1]. In other words, it is a central part of decision making, problem solving, communicating, and even negotiating in different situations [2]. Emotion can be regarded as a changing mental state that is often characterized by physiological changes, which manifest both as externally visible physical expressions and as internally observed feelings. Monitoring these changes is therefore pivotal to detecting and addressing a concern at an early stage, in particular for those who have developed mental disabilities. Emotion recognition is the process of identifying human emotions. Figure 1 shows a basic process of human emotion recognition, covering the steps from an input signal to a recognized emotion. Input signals (e.g., an image or image sequences) are captured; they are then processed using feature extraction and feature classification approaches; finally, the emotions are identified. Emotions, e.g., happiness, sadness, anger, and stress, may vary across places, circumstances, cultures, and contexts, as well as with human personality and interests [3,4]. Emotion recognition is associated with the ‘affective computing’ [5] research area, which studies and develops systems that sense the emotional state of a user (using sensors) and process the data with computer systems to recognize emotions. This is fundamental to the development of multi-sensor applications in which devices capture the physical parameters of humans and translate these signals into computational representations that can express different human emotions in real time. For instance, the lights in a room can change depending on a person’s mood, or teaching methods can be improved by detecting the facial expressions of students. Emotion recognition has also become a new trend for e-marketing: companies have effectively used this technology to attract more customers based on their choices for the promotion of products [6,7].
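To make the generic pipeline of Figure 1 concrete, the following minimal sketch (illustrative only; the feature extractor, stand-in images, and labels are our hypothetical placeholders, not a method from the cited works) shows the capture, feature extraction, and classification stages:

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(frame: np.ndarray) -> np.ndarray:
    # Placeholder feature extractor; a real system would compute facial
    # landmarks, histograms, or learned embeddings here.
    return frame.reshape(-1)[:128].astype(float)

# Stand-in for captured input signals (e.g., 48x48 face crops) with labels.
frames = np.random.rand(100, 48, 48)
labels = np.random.choice(["happy", "sad", "angry"], size=100)

# Feature classification stage: any off-the-shelf classifier works here.
clf = SVC(kernel="rbf").fit([extract_features(f) for f in frames], labels)
predicted_emotion = clf.predict([extract_features(frames[0])])[0]
print(predicted_emotion)
```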

1.1. Motivation

With the development of sensor networks, Internet of Things (IoT) technology, and wearable smart devices, it is now possible to detect different human emotions quickly and efficiently [8]. This has become a significant area of research for large-scale systems, in particular those involving human-machine interaction. It is argued that if machines could understand humans’ affective emotional state, communication would become much easier and smoother [9]. Various methods are employed for the automatic recognition of emotions. Commonly used sources include textual information, speech, body gestures, facial expressions, and physiological signals. This can be achieved by fabricating sensors, based on the needs and requirements, that contact a human body directly (i.e., invasive) or indirectly (i.e., non-invasive) when collecting the physiological parameters [10].
Among others, emotion recognition plays a crucial part in human health and related medical procedures to detect, analyze, and determine the medical condition of a person [11,12]. It is gaining popularity for the early detection of physiological and emotional changes in patients. It is often noted that during decision making, users are influenced by their affective states. For example, in a happy situation, perception remains biased toward selecting happy events, and it changes accordingly for negative situations, where negative emotions may cause potential health problems [13]. Various data (i.e., physiological parameters) are collected via sensors and transmitted to a computer system for analysis. A commonly used communication technology is ZigBee, which transfers data from the sensors to the computer systems. An application programming interface (API) is used for real-time data monitoring and management. Proper detection of human emotion can therefore also improve wellness and fitness treatment through appropriate identification and disease diagnosis [14,15].
Several research efforts aim to design and develop low-cost, non-invasive physiological sensors that can capture and evaluate basic human emotions in real time [16,17,18]. Recent market studies show a huge demand for human emotion recognition systems. It is predicted that the global emotion detection and recognition market will reach USD 56.0 billion by 2024, compared to USD 21.6 billion in 2019, corresponding to a compound annual growth rate (CAGR) of 21% over the next few years [19]. The sensors used in emotion-sensing technology will consequently generate a large amount of data, including a user’s personally identifiable information (PII), sensitive health information, and contextual information (e.g., location, date, and time), in many areas including law enforcement, entertainment, and surveillance and monitoring.

1.2. Contributions

Emotions are considered one of the key components of human social interaction. With the growing area of human-computer interaction (also known as human-machine interaction), it is important for computers to effectively understand this component in order to be perceived by users as truly effective in communication [20]. As of today, there is no complete, structured, and coherent architecture for human emotion recognition. The available architectures are system-dependent and rely on the specific features and requirements of that system. This presents significant challenges in developing a robust, efficient, and scalable human emotion recognition system with flexible applications and services.
A few surveys discuss human emotion recognition issues in general computing systems and are dedicated to serving certain conditions. For instance, [21] presents a survey of facial emotion recognition based on real-world user experiences in mixed reality; its core focus is emotion recognition in Augmented Reality (AR). Another proposal [22] discusses the possibility of automated emotion representation, recognition, and prediction based on Artificial Intelligence (AI) technology. That survey mostly focuses on the impact of emotion analysis and investigates the challenges of multimodal emotion detection for computers, robots, and integrated environments to establish an automated emotion recognition method. Proposal [2] presents a survey on human emotion recognition for the IoT and affective computing systems. In [23], a systematic study is presented on emotion recognition from facial expressions in an e-learning context.
Unlike these proposals, in this paper we review the state-of-the-art development of sensors and technologies for human emotion recognition from the wider perspective of technology, design, methods, and applications. In other words, to meet the objectives of this paper, we review various types of emotion recognition techniques, their design criteria, choice of methods, and available applications. We also aim to summarize our findings on the current trends in applying sensors to human emotion recognition and to indicate the future possibilities of employing intelligent sensors for this purpose [24].
Currently, there are different, semantically incompatible areas of research that present different views on human emotion and the corresponding recognition systems. The novelty of this paper lies in a systematic and comprehensive approach to discussing the various human emotion recognition systems, technologies, applications, and their development challenges in a single article. The major contributions of the paper can be summarized as follows:
  • We review the state-of-the-art development and progress in sensors and technologies for human emotion recognition. In particular, we provide a systematic study of various types of emotion recognition techniques, methods, and available applications.
  • We discuss the main challenges for human emotion recognition (including monitoring and analysis of behaviour patterns and measurement of actions) in developing a flexible and efficient human emotion recognition system at scale.
  • We provide a summary of the current trends in applying sensors to human emotion recognition and indicate the potential of employing such intelligent sensing systems for human emotion recognition in the future.

1.3. Methodology

To examine state-of-the-art human emotion recognition and its relevant applications, we include comparable papers published over a wide period of time that are relevant to the present review. In addition, we cite other publications that we find applicable and closely related to this review. A range of venues is considered to cover a diverse range of aspects, including journal papers, conference/workshop/symposium papers, book chapters, and papers from multidisciplinary repositories (e.g., technical reports, arXiv copies, etc.).
In the included papers, we mostly searched the abstracts for keywords such as human emotion, emotion recognition, wearable sensors, robotic sensors, motion analysis, and emotion category. We then assessed each paper by reading whether it describes an architecture, provides a survey, examines different human emotion techniques, and so on. Of the 300 papers we examined, we found 170 relevant to our research motivation. Each of these papers was then examined against the key objective of this paper (i.e., development and progress toward human emotion recognition). We used Thomson Reuters, the ACM Computing Classification System, and Google Scholar.

1.4. Organization and Roadmap

The rest of the paper is organized as follows. In Section 2, we present the fundamentals of human emotions and discuss the classification of basic human emotions. In Section 3, we review widely used sensors for human emotion recognition. In Section 4, we provide a detailed discussion of different types of activity monitoring and the methodologies employed in various human emotion recognition systems. In Section 5, we discuss design challenges and applications associated with some real-world systems, followed by current trends and future directions for human emotion recognition in Section 6. Finally, in Section 7, we conclude the paper.

2. Human Emotion Recognition

In daily life, human beings encounter various conditions and events that generate different stimuli that interfere with their emotions. Research shows that human emotions are strongly associated with cognitive judgment, which directly impacts social and cultural behavior and communication [25]. Various attempts have been made to classify emotions and categorize them based on different parameters, e.g., mood, feeling, and affect [26,27,28,29,30]. In [31], Ekman classifies human emotion based on facial expressions. Six fundamental emotions are recorded: happiness, sadness, fear, surprise, anger, and disgust. In [32], Ekman further extends his findings and adds eleven more emotions, including contempt, relief, satisfaction, and shame. Unlike Ekman, AlMejrad [13] classifies three types of human emotions based on brain wave signals, namely motivational (e.g., hunger, pain, mood), basic (e.g., happiness, sadness, fear), and self-conscious (e.g., shame, embarrassment, pride). Studies such as [33,34] show that emotion is an integral part of decision making in everyday life. Similarly, research presented in [35] indicates that emotions broaden attention and build cognitive, physical, and social resources, which in turn helps to enhance the physical, social, and intellectual skills of human beings. In [36], Feidakis et al. present a classification of emotions based on a proposed e-learning model. In total, 66 emotions are classified and sorted into two groups, namely basic emotions and secondary emotions. The former consists of 10 emotions (e.g., anger and joy), and the latter consists of the remaining 56 emotions. Other studies classify emotions based on voice [37] and physiological signals [38].
It can be seen that emotions contribute significantly to generating actions and their efficient execution and control. Therefore, studying human emotions and their efficient recognition (how they react) and monitoring (how these reactions affect a user physically and mentally) becomes an important issue.

3. Sensors for Human Emotion Recognition

Recognizing human emotion is considered a fascinating task for data scientists. Studying human emotion requires deploying appropriate sensors to collect the right data. These sensors are generally used for automatic emotion collection, recognition, and intelligent decision making. The key central nervous system (CNS) emotional-affective processes are (i) the primary process, (ii) the secondary process, and (iii) the tertiary process [39]. In Figure 2, we show these three processes related to collecting human emotions and making decisions based on the collected data. In the first (primary) process, emotions are collected by sensors. In the second (secondary) process, appropriate learning technology (and memory) is applied to the collected data. Finally, in the last (tertiary) process, appropriate decisions are taken based on higher cognitive functions. The choice of appropriate sensors is therefore significant. In this section, we discuss different sensors widely used for human emotion recognition. Sensors are integral for sensing various emotions from human beings and transferring them into a database for further processing and results, which often happens autonomously [40]. With the rapid improvement of sensing technologies and different IoT-enabled wearable and flexible sensors, there have been improvements in sensing performance, measurement accuracy, weight, and the efficiency of the obtained results [41,42].
Emotion recognition from text is widely used, especially for human-machine interaction. In this case, textual information, e.g., books, newspapers, and the content of different websites, is taken into consideration as a rich source of material for human emotion detection. Cameras are used for performance monitoring by detecting the facial expressions of individuals [43,44]. Similarly, facial emotion recognition is an important part of determining human emotion using different visual communication systems [45]. Cameras are, in general, used to detect these emotions. In addition, robots are used to communicate with human beings using AI technology, which helps to capture both logical and emotional information. Commonly, this technique relies on various movements of the cheek, chin, eyes, wrinkles, and mouth [46,47,48].
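As a rough illustration of text-based emotion recognition (a toy example with made-up sentences and labels, not the approach of any cited work), a simple bag-of-words model can map short texts to emotion labels:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy annotated corpus; real systems train on large labelled text datasets.
texts = ["I am delighted with the result",
         "This delay makes me furious",
         "I feel so alone tonight",
         "What a wonderful surprise!"]
labels = ["happiness", "anger", "sadness", "surprise"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["I cannot believe how great this is"]))
```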
Another widely used modality is speech [49]. It contains information not only about what the person is saying but also about the speaker, including their emotions, real-time interactions, and various meanings [50]. It has great potential for recognizing human emotion in a human-machine interaction setting, both for communication and interaction. The practical applicability is vast, for instance, in call centers, to analyze customers’ needs and feedback, specifically the kind of emotions an individual transmits [51]. The next common physiological parameter considered for human emotion recognition is body movement and gesture [52,53,54]. Emotions can be characterized by whole-body posture and movement quality. Different kinds of optical sensors, e.g., ambient light sensors, fiber-optic curvature sensors, and Kinect sensors, can be used for this purpose [55,56]. This technique plays an important role in detecting the movements of the elderly or patients in home care (i.e., monitoring health status remotely). Early detection of movements and unusual activities can help to determine the seriousness of a patient’s condition, e.g., a fall, or being unattended in a place for a long time [57]. It is reported to be efficient in monitoring the movement and activity of patients suffering from chronic movement disorders, for example, Parkinson’s disease [58]. In other words, it supports continuous physiological monitoring that reduces human intervention while improving a patient’s wellbeing.
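A hedged sketch of the speech modality follows: MFCC summary features from labelled clips fed to a standard classifier. The waveforms here are synthetic stand-ins, and librosa/scikit-learn are used only as common choices, not the tools of the cited studies.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y: np.ndarray, sr: int = 16000) -> np.ndarray:
    # Summarize 13 MFCCs over the clip by their mean and standard deviation.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Stand-in waveforms; in practice these come from labelled recordings,
# e.g. y, sr = librosa.load("clip.wav", sr=16000).
clips = [np.random.randn(16000 * 2).astype(np.float32) for _ in range(4)]
labels = ["anger", "happiness", "anger", "happiness"]

X = np.vstack([mfcc_features(y) for y in clips])
clf = SVC().fit(X, labels)
```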
Human emotion recognition using biosensors is well adopted for both human-to-human and human-to-machine interactions [59,60]. Fundamentally, it has the advantage of directly monitoring physiological parameters controlled by the autonomic nervous system, which is affected by emotions. These sensors collect signals from different body parts, e.g., the heart, skin, and brain [61,62]. There are a number of advantages to using biosensors as a means of recognizing human emotion. These sensors are becoming smaller, so they can easily be fitted to any wearable device, and their light weight supports integration with IoT technologies. Moreover, their production cost is decreasing, which attracts a larger market of common users [63,64].
Different physiological signals (also known as biosignals) are monitored and measured using these biosensors. For instance, electromyography (EMG) refers to muscle activity, measured as the muscle’s response to stimulation of the nerve supplying that muscle (i.e., the firing of motor neurons) by recording the electrical activity [65]. Sensors also collect signals of electrodermal activity, also known as the galvanic skin response (GSR), which refers to the measurement of skin conductivity; conductivity increases when the skin is sweaty, and the measurement tracks changes in the resistance of the skin to a small electrical current [66]. Skin temperature, measured on the surface of the skin, is another indicator: sensors attached to a person can detect abnormalities in skin temperature, and human emotion (e.g., calm or excited) can then be inferred from the observed reactions [67]. Sensors are also used to determine the blood volume pulse (BVP), which indicates the volume of blood currently running through the vessels. A photoplethysmograph (PPG), composed of a light source and a photosensor and typically attached to the person’s skin, is used to determine the BVP level while accounting for motion and pressure artefacts [68]. Sensors are also used to recognize human emotion by measuring respiration, i.e., how deeply and quickly a person is breathing; these sensors are typically placed with a rubber band around the chest area. Breathing patterns commonly change in response to changes in emotion, so various emotions, e.g., anger, excitement, anxiety, and happiness, can be determined using such sensors [69].
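The kind of summary features such biosensors yield can be sketched as follows (the sampling rate, thresholds, and synthetic trace are our assumptions): the tonic skin conductance level plus a count of phasic responses from an EDA/GSR signal.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 4                                        # Hz, a common EDA sampling rate
eda = 2.0 + 0.2 * np.random.rand(fs * 60)     # stand-in for 60 s of data (uS)

scl = eda.mean()                              # tonic skin conductance level
scr_peaks, _ = find_peaks(eda, prominence=0.05)   # phasic responses
features = {"scl_mean": scl, "scl_std": eda.std(), "scr_count": len(scr_peaks)}
```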
The most commonly used biosensing technique for detecting human emotion is the electrocardiogram (ECG) [70,71]. It is widely accepted for distinguishing between human emotions. Sensors are placed over the chest area, where the heart is situated, and periodically measure the heart rate, which is then used for the detection of emotions from the collected data. Through data analysis, heart rate variability (HRV) is determined, which reflects the state of relaxation or mental stress [72]. Sensors are also used to record brain signals using the electroencephalography (EEG) technique [73,74]. This is a popular choice for determining different emotional states by analyzing the impulses of the human brain. The EEG signals are collected using an electroencephalogram device that remains attached to a person’s scalp using adhesive conducting gel or special headsets during the data collection process [75]. With a similar vision to EMG, eye movements are also used to determine human emotion [76]. In this case, the electrooculography (EOG) technique is used: to measure eye movement, pairs of sensors are placed above and below the eye of a person. The captured signals allow determining the user’s attention to a particular object and help to observe their subconscious behaviours [77]. With improvements in nanoscale devices, flexible sensors show immense potential for healthcare, pervasive care, and industrial applications for the efficient collection of data [78]. To this end, printed flexible sensors have increasingly been used in human emotion recognition due to their advantages, e.g., low fabrication cost, enhanced electrical and mechanical attributes, multifunctionality, and resolution [79,80,81]. Furthermore, wearable flexible electronics [82] and affinity flexible biosensors [83] have been of great interest for collecting human emotional data over the past years. With the enhancement of printed sensors, both types have shown great promise for physical sensing to retrieve more insightful information. In Figure 3, we illustrate a taxonomy of different types of sensors based on the techniques, interaction, and physiological parameters.
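For the HRV computation mentioned above, a minimal sketch (with a synthetic trace, an assumed sampling rate, and a naive peak detector rather than a dedicated QRS algorithm) detects R-peaks in the ECG and derives the common time-domain indices SDNN and RMSSD:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                                       # assumed ECG sampling rate (Hz)
ecg = np.random.randn(fs * 60)                 # stand-in for 60 s of ECG

# Simple R-peak detection; real pipelines use dedicated QRS detectors.
r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs))
rr_ms = np.diff(r_peaks) / fs * 1000.0         # R-R intervals in milliseconds

sdnn = rr_ms.std()                             # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # short-term variability
```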

4. Types of Activity Monitoring and Methodologies

There has been tremendous growth in the number of users of the different applications of human emotion recognition systems. The basic goal of such applications is to automatically collect human body parameters or electrical impulses and classify a user’s emotion efficiently based on the collected information [84,85].
Different approaches and methodologies have been used to recognize and evaluate human emotions. One of the most common is the use of natural language processing techniques, in which emotions and sentiments are extracted by analyzing the input text. Proposal [86] presents a semi-automatic acquisition technique that uses a sentence or text for human emotion collection with a constructed emotion thesaurus. It also uses emotion-sensing and emotion-computing techniques for automatic emotion detection, including functions such as syntax analysis and accidence analysis. Other design techniques that use a robot for emotion detection based on language information have been reported in [87]; the information is extracted from words in conversations. Different communication interfaces are also used to enhance text communication, which helps to understand human emotion in some specific cases [88]. Proposal [89] discusses an identification method for happiness and sadness using autonomic nervous system responses. For this purpose, two nervous signals, namely SKT (SKin Temperature) and PPG (PhotoPlethysmoGram), were analyzed to extract a two-dimensional emotional feature vector. The signals are collected by sensors attached to the skin: for the SKT signals, the TSD200D sensor (Biopac, Goleta, CA, USA) is placed on the index finger, and for the PPG signals, the TSD200A sensor (Biopac, Goleta, CA, USA) is placed on the thumb. A cognitive-emotional model for eldercare is reported in [90]. In this model, facial expressions are collected using the Gabor filter, the Local Binary Pattern algorithm (LBP), and the k-Nearest Neighbor algorithm (KNN). A robot then extracts and recognizes these features for different variants of human emotion. A cognitive reappraisal strategy and the Euclidean distance are used to obtain the transition probability from one emotional state to another. With a similar vision to [88], proposal [91] presents a new method of analyzing physiological signals with the help of the peaks (small peaks and high peaks) in the electrodermal activity (EDA) signal to capture human emotion. An Empatica E4 smartwatch is used for the experiment, which collects EDA, SKT, and HR values.
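A minimal sketch of the two-channel idea in [89] follows, with window lengths, sampling rates, and summary statistics chosen by us rather than taken from the paper: one value summarizing the SKT trend and one summarizing PPG amplitude form a two-dimensional feature vector.

```python
import numpy as np

fs_skt, fs_ppg = 10, 100                                  # assumed sampling rates (Hz)
skt = 33.0 + 0.1 * np.random.randn(fs_skt * 30)           # 30 s of finger temperature
ppg = np.random.randn(fs_ppg * 30)                        # 30 s of thumb PPG

feature_vector = np.array([
    skt[-fs_skt * 5:].mean() - skt[:fs_skt * 5].mean(),   # SKT drift over the window
    np.percentile(np.abs(ppg), 90),                       # PPG amplitude proxy
])
# Any classifier can then map feature_vector to happiness or sadness.
```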
We noted earlier that EEG is one of the popular choices for emotion recognition. In Figure 4, we depict the process of EEG-based human emotion recognition discussed in [92]. Several approaches have used EEG for emotion recognition [93,94]. For instance, proposal [95] uses EEG together with peripheral physiological signals, following three steps. First, both the peripheral physiological features and the EEG features are extracted. Second, using canonical correlation analysis (CCA), a new physiological feature space is constructed. Finally, different emotional labels are created to map the peripheral physiological features using a support vector machine (SVM). Note that both the peripheral physiological signals and the EEG signals play an important part in this case. To avoid noise, a lower cutoff frequency of 0.3 Hz and a higher cutoff frequency of 45 Hz are used for the EEG signals. In proposal [96], SVM and Gaussian process (GP) models are used for music emotion estimation; the motivation of this study is to compare the performance of SVM and GP models on music genre and emotion recognition tasks. Another proposal [97] presents a real-time emotion recognition system based on different human emotions collected from EEG signals. Through the analysis of brain waves, the system can recognize discrete emotions that are close in the valence-arousal coordinate space. The system includes six fundamental modules, namely emotion elicitation, EEG data acquisition, data preprocessing, feature extraction, emotion classification, and a human-machine interface. A standard database consisting of 16 video clips is used for identifying an individual’s emotional states.
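The 0.3-45 Hz band-pass preprocessing step can be sketched as follows (the filter order, sampling rate, synthetic data, and alpha-band power feature are our assumptions, not the exact settings of [95]):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256                                      # assumed EEG sampling rate (Hz)
eeg = np.random.randn(32, fs * 10)            # 32 channels, 10 s of stand-in data

# Zero-phase 0.3-45 Hz band-pass filter, as used to suppress noise.
sos = butter(4, [0.3, 45.0], btype="bandpass", fs=fs, output="sos")
eeg_filtered = sosfiltfilt(sos, eeg, axis=1)

# Example feature: alpha-band (8-13 Hz) power per channel.
sos_alpha = butter(4, [8.0, 13.0], btype="bandpass", fs=fs, output="sos")
alpha_power = (sosfiltfilt(sos_alpha, eeg_filtered, axis=1) ** 2).mean(axis=1)
```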
Earlier, we noted that there is a strong correlation between the human emotional state and physiological responses [98,99]. Therefore, collecting appropriate human emotion data and managing them efficiently is important for understanding and classifying human emotions. A commonly used mechanism for human emotion detection is machine learning (ML) technology. For example, proposal [100] uses six different types of machine learning algorithms to identify two negative emotions, namely sadness and disgust. Sensors are used to collect the physiological signals (e.g., EDA, SKT, ECG, and PPG), and the results are analyzed with the preferred algorithm to understand the driver’s emotion and driving pattern. Note that negative emotions are considered because they are primarily responsible for a gradual decline in a person’s normal thinking ability. Proposal [101] discusses the use of the cross-corpus evaluation technique to examine the performance of data analysis obtained from different sensors and provide a realistic view of attainable performance. The cross-corpus evaluation technique is used in ML disciplines and has the benefit of automatically detecting similarity among multiple databases of specific emotion labels. In this proposal, the authors used six different databases to examine the similarities.
A machine learning algorithm to categorize EEG dynamics based on the self-reported emotional states of a person listening to certain music is discussed in [102]. The approach classifies four music-induced emotional states (joy, anger, sadness, and pleasure). The motivation is to study and classify EEG feature extraction techniques associated with EEG dynamics and music-induced emotional states. Unlike [100,102], which directly map the emotions of each music piece to a desired state, proposal [103] presents a ranking-based emotion recognition approach that can be used for various applications in music organization and retrieval. Here, a collection of music is ranked based on emotions, and the emotion values of each piece are then determined relative to the other pieces by their relevance.
Image processing techniques are also used for detecting human emotions. This has gained much popularity in virtual reality (VR) and AR research and the computer vision industry. In [104], an image processing technique is used to understand the requirements of a facial expression recognition system (using video cameras). First, human emotions are captured based on facial expressions, and second, music matched to these emotions is used to enhance the mood of the users, with a list of songs selected based on the current emotion. Like [104], a similar study of human emotion recognition based on customized music recommendations is reported in [105]. The recognition and classification of naturally induced musical emotions based on physiological changes during music listening has been reported in [106]. An emotion-specific multilevel dichotomous classification (EMDC) is employed and its performance compared with direct multiclass classification. Proposal [107] uses a speech emotion recognition (SER) system that captures human emotion using voice signals as input. Five emotions are recognized: anger, anxiety, boredom, happiness, and sadness. The system automatically detects the emotion, and then an appropriate music selection is made from a pool of listed songs stored in a database. For a large audio database, the use of an anchor models system is proposed [108], in which an emotion class is classified by measuring its similarity to other emotion classes. A multi-label music emotion recognition system based on the hierarchical Dirichlet process mixture model (HPDMM) is reported in [109]. In this work, different components of the HPDMM are shared among the models of each emotion, and linear discriminant analysis is used as a discriminant factor that captures different emotions in real-world scenarios. Proposal [110] presents a framework for recognizing human emotion in videos by transferring knowledge from heterogeneous sources (e.g., image and text). This work tries to bridge the research gap between recognizing human emotion from text sources and the video domain.
Music is used to determine human emotion based on observed EEG signal changes [111]. The fundamental idea is to examine the brain’s processing of music in evoking different emotions; it is often argued that music acts as a direct expression of emotion in the brain waves. In proposal [112], EEG data are collected before and after listening to music. Two kinds of music are used: preferred and relaxing. The changes in emotion are then measured using arousal and valence values. Like music, video clips are also used for human emotion recognition based on discrete emotion recognition techniques from multi-modal physiological signals, e.g., EEG, GSR, RSP, and ECG [113]. A higher-order crossings (HOC) analysis for emotion detection based on EEG feature extraction has been reported in [114]; HOC is employed for feature extraction and performs robust classification of the different human emotional states. Another similar HOC-assisted emotion recognition system using EEG signals is reported in [115]. In [116], a technique to evaluate the emotional impact (captured by EEG signals) for each emotion is reported. An EEG-based technique is used to collect signals of different emotional expressions in the brain, and the frontal brain asymmetry concept is employed to define an emotion elicitation evaluation division. A multidimensional directed information (MDI) analysis is applied to extract various emotional measures and form an index (governed by the frontal brain asymmetry theory); this index helps to evaluate the asymmetry between the EEG signals captured in the two opposite brain hemispheres. The combination of electrocardiography (ECG) and photoplethysmography (PPG) is used to capture real-time human emotions [117]. The method is commonly known as pulse transit time (PTT). The idea of PTT is to calculate the time difference between the stimulation of the heart (detected by the ECG signals) and the arrival of the corresponding blood pulse wave at a certain area, e.g., the wrist (detected by the PPG signals). To make emotion recognition more immersive and realistic, VR scenes (e.g., using VR glasses) are combined with traditional EEG-based applications [118]. A combination of EEG, EMG, and EOG signals for emotion recognition is reported in [119]: EEG signals help to recognize inner emotion, while EMG and EOG signals are used to remove artifacts. In Figure 5, we show a hybrid brain-computer interface combining EEG, EOG, and EMG signals [120].
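The PTT idea can be illustrated as follows (synthetic signals and a simple peak-based pulse-arrival detector are our assumptions; real systems use dedicated R-peak and pulse-foot detection):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 500                                       # shared sampling rate (assumed)
ecg = np.random.randn(fs * 30)                 # stand-in chest ECG
ppg = np.random.randn(fs * 30)                 # stand-in wrist PPG

r_peaks, _ = find_peaks(ecg, distance=int(0.4 * fs))
pulse_peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))

# PTT: delay from each R-peak to the next detected pulse arrival at the wrist.
ptt = []
for r in r_peaks:
    later = pulse_peaks[pulse_peaks > r]
    if later.size:
        ptt.append((later[0] - r) / fs)
mean_ptt_s = float(np.mean(ptt)) if ptt else None
```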
Note, the EEG signals of emotions vary from person to person, and there is no unique pattern for the signals. Therefore, EEG-based emotion recognition models are, in general, subject dependent. Several proposals study the need for subject independent emotion recognition based on EEG signals. It shows significance where the EEG of emotions of the subjects is not available to compose an emotion recognition model. For instance, in [121], a subject independent emotion recognition model has been reported based on variational mode decomposition (VMD) and deep neural network (DNN). VMD is used for feature extraction, and DNN is employed as the classifier for classifying emotions captured by the EEG data. Similarly, proposal [122] presents an EEG-based emotion recognition approach addressing the challenges of subject-dependencies in emotion recognition. A multi-task deep neural network model is applied to classify subject independent emotional labels.
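Subject-independent evaluation is usually carried out with a leave-one-subject-out protocol; the sketch below (synthetic features and a generic SVM, whereas [121,122] use deep networks) shows the idea:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

X = np.random.randn(120, 16)                       # stand-in EEG feature vectors
y = np.random.choice(["positive", "negative"], 120)
subjects = np.repeat(np.arange(12), 10)            # 12 subjects, 10 trials each

# Each fold trains on 11 subjects and tests on the held-out one.
scores = cross_val_score(SVC(), X, y, groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())
```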
We noted that different facial expressions are usually associated with various emotional states. In [123], an emotion detection method is discussed that can automatically capture a person’s smile for the analysis of human-to-human communication; computer vision techniques are employed to capture the physiological signals. The study presented in [124] considers the cultural aspects that have prominent significance in human emotion recognition. It examines the influence of language and culture on a subject’s familiarity and their perception of overall emotion recognition. An ensemble classifier is developed to deal with multiple languages to map and predict emotions; the diverse languages are employed for training and modeled independently.
The convolutional neural network (CNN), a class of deep neural networks, is a popular model for human emotion recognition [125]. For instance, proposal [126] presents a CNN-based architecture for emotion detection, developed for user engagement estimation in entertainment applications. In [127], a CNN is used to estimate emotions from human face images partially covered by a head-mounted display (HMD). Work presented in [128] discusses a framework that can capture emotional expressions and predict the mood of a person as perceived by others; in other words, the framework is able to automatically predict a person’s mood from a sequence of recognized emotions. A care-home setting is considered, where the emotional expressions are captured using a human affective intelligence model and the corresponding mood is determined by experienced caretakers. CNNs are also used for emotion detection in large-scale systems like the IoT; for instance, proposal [129] presents an architecture that can predict human facial expressions using a deep facial expression recognition algorithm supported by a CNN. An approach to detect human emotion using both image and text is reported in [130]. The proposed model examines the recognition of emotions of television drama characters, which in turn helps to understand the story. To classify the images, a deep learning model (a CNN) is employed to automatically identify the characters, objects, and activities; the process includes both the facial images of the television characters and the textual information that describes the situation, and seven emotional classes are considered. In [131], a CNN is used to classify gray-scale images using a multi-class classifier; a single integrated module detects the human face and recognizes its emotion, again using a list of seven emotions.
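A minimal CNN of the kind used in these works might look as follows; the layer sizes, 48x48 gray-scale input, and seven-class output are our assumptions rather than any cited architecture:

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Linear(64 * 12 * 12, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN()
logits = model(torch.randn(8, 1, 48, 48))   # batch of 8 gray-scale face crops
predicted = logits.argmax(dim=1)            # indices into the 7 emotion labels
```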
A physiological signal-based emotion recognition algorithm is reported in [132]. The idea is to perform an efficient mapping from discrete human emotions to fixed patterns of physiological signals. A support vector machine (SVM) is used to classify these emotions automatically after pre-processing the physiological signals and extracting the corresponding features. Fuzzy logic-based methodologies are also used for human emotion recognition. For instance, a fuzzy relational approach to human emotion recognition is proposed in [133]. The proposal uses external stimuli and studies three important facial regions, the mouth, eyes, and eyebrows, for facial expressions. The expressions are then analyzed and segmented into individual frames of regions of interest; in other words, the fuzzy logic-based scheme is used to control the transition of emotion dynamics to the desired state. Several proposals address emotion recognition on mobile platforms. For instance, the study presented in [134] discusses an emotion recognition system for mobile applications in which the smartphone captures videos of a user with its internal camera. Selective frames are extracted from the video to generate subband images; then, based on the subband images, local binary pattern (LBP) histograms are calculated; finally, emotions are classified using a Gaussian mixture model (GMM)-based classifier.
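The LBP-histogram step of the mobile pipeline can be sketched as below (the class name, GMM size, and random stand-in images are assumptions; [134] fits its classifier over such per-frame histograms):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.mixture import GaussianMixture

def lbp_histogram(gray_face: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    # Uniform LBP yields values in [0, points + 1]; histogram them per image.
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

happy_faces = [np.random.randint(0, 256, (48, 48), dtype=np.uint8) for _ in range(20)]
gmm_happy = GaussianMixture(n_components=2).fit([lbp_histogram(f) for f in happy_faces])
test_face = np.random.randint(0, 256, (48, 48), dtype=np.uint8)
log_likelihood = gmm_happy.score_samples([lbp_histogram(test_face)])  # higher = more "happy"-like
```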
In Table 1, we provide a summary of the major methods for activity monitoring (focused on human emotion recognition) and the corresponding approaches used (with reference articles) discussed in this section. Overall, it can be seen that the use of different methodologies for human emotion recognition is receiving much attention these days. Advancements in newly developed technologies and applications further enhance emotion recognition for both users and manufacturers.

5. Design Challenges and Applications

Sensing technology brings to life a new design approach that can embody more complex tasks than merely creating a visual design for emotion recognition. It can combine monitoring and analysis of behaviour patterns, measurement of actions, and the observation of facial expressions, voice intonation, and body language. Smart devices are learning to assess the meaning of the emotions they ‘perceive’ and to respond sensitively. An accurate emotion recognition process depends on the uncertainty present in emotion recognition methods, in particular when combining various disjoint models [135]. However, the design challenges may vary based on the system’s requirements and the designer’s choices. Challenges may come from representation learning of facial expressions, from the choice of sensors that can react accordingly, or even from a person’s behavioural difficulty in expressing a particular emotion [136]. Among others, one core challenge for human emotion recognition systems is the design of smart, intelligent, and collaborative systems that can interact with the different service components of a system to provide more accurate results in real time. Apart from that, there are electrical and non-electrical constraints that can affect the overall emotion recognition process. In Table 2, we summarize the various emotion recognition techniques based on such electrical and non-electrical constraints and provide an outline of their comparison.
The ability of everyday objects to respond to users’ emotional states may be used to produce many personalized user experiences. It may be applied in instructional and diagnostic software systems, driverless cars, personal AI, pervasive computing, emotion-aware video games, affective toys, and other major consumer electronic devices. For instance, a refrigerator with a built-in emotion sensor might interpret a person’s mood and recommend appropriate food. Emotion-aware smart home devices might offer entertainment (music, videos, TV shows, or imagery) that matches the user’s current state of mind. Video games may use emotion-based technology to adjust game levels and difficulty according to the player’s emotional state.
To provide an optimized human emotion recognition system that overcomes the challenges of device portability and other resource limitations (e.g., limited battery capacity, storage, and processing speed), a technical framework considering all of these aspects is significant. For our purposes, for instance, we use the face-api.js project [137] to assess its feasibility for capturing human emotion. It is a promising initiative for testing various assumptions against model predictions and evidence toward the efficient detection of human emotion: it implements CNN-based models for face detection and for the recognition of faces and face landmarks, and we used it to demonstrate the application scenarios. With the development of such interactive systems (e.g., the face-api.js software), interest in facial expression recognition using resource-constrained computing devices is growing gradually, and with it, new algorithms and approaches are being developed. The recent popularization of ML made an apparent breakthrough in the research field. The research is on the right path, progressing together with related fields such as psychology, sociology, and physiology.
In [138], we developed a practical smart sensor-based human emotion recognition system. Sensors continuously monitor heart rate, skin conductance, and skin temperature. In this system, the signals (amplified and filtered) are processed by a microcontroller (a Silabs C8051), and ZigBee technology is used for transmission. Furthermore, we developed an algorithm for the automatic recognition of emotions that combines various clustering techniques. Our proposed system has the potential to extract basic emotions, i.e., happiness, anger, stress, and a neutral state, from the physiological signals. Data capture with our model is easy, and it can even be integrated with a computer mouse for efficient classification of features.
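A minimal sketch of the clustering idea follows (k-means on standardized windows of heart rate, skin conductance, and skin temperature; the synthetic data and cluster count are illustrative, whereas the system in [138] combines several clustering techniques):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [mean heart rate (bpm), skin conductance (uS), skin temperature (C)]
windows = np.random.rand(200, 3) * [40, 5, 4] + [60, 1, 31]   # stand-in windows

X = StandardScaler().fit_transform(windows)
clusters = KMeans(n_clusters=4, n_init=10).fit_predict(X)
# The four clusters would then be mapped to happiness, anger, stress, and neutral.
```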

6. Current Trends and Future Directions

With advancements in human-computer interaction technology, digital learning platforms, e-commerce, IoT and smart technologies, and other wearable technologies (including the development of low-cost, energy-aware, and portable sensors), the proliferation of emotion recognition systems in the marketplace is becoming significant in our everyday life [139,140,141,142]. Emotion-sensing technology is no longer in its experimental stage; it has become a reality. It is even used to understand the mental health of a person through the various mood-tracking apps available in the marketplace [143]. This shows its acceptance and demand among users. As discussed in Section 5, the potential use of AI for face detection and the recognition of faces and face landmarks helps in age estimation and gender recognition. AI is one of the emerging data-driven and disruptive technologies for human behaviour recognition [144]. AI technology uses ML, deep learning, and federated learning processes to execute efficient learning patterns for machines. The convergence of these newly emerging technologies in the IoT is in demand both in industry and academia. AI is also a significant technology with the potential to understand human emotion efficiently in the long run and to support autonomous learning and emotion detection [145]. More and more AI-related technologies and application platforms are enabling efficient management from sensing to analyzing human emotions [146]. Government, healthcare, retail, transportation, supply chain businesses, and many other sectors are now using such emotion recognition systems. For instance, automotive vehicles can use computer vision technology to monitor the driver’s emotional state [147]: the sensors interact with drivers to analyze emotional states, e.g., fatigue or drowsiness, and generate alerts for the driver. Similarly, retail stores can use computer vision emotion AI technology that captures visitors’ moods and reactions to advertise suitable products and promotions. In the United States, total retail sales were recorded at USD 5.35 trillion in 2018, and growth is expected to reach USD 5.99 trillion by 2023. This growth in e-commerce will consequently affect the marketplace for emotion recognition technologies and their market values [148].
Emotion recognition systems are becoming more sophisticated with the development of advanced wearable and processing technologies. However, attention must be paid to the efficient collection of physiological signals from their sources, and suitable emotion detection models must be chosen. For this, signal processing techniques, feature extraction methods, and relevant classifiers should be employed [149,150]. As noted above, AI, ML, and VR are considered part of the next generation of technologies for developing human emotion recognition systems. Future challenges in building efficient emotion recognition, however, span many areas, for instance, from data acquisition to the supervision and control of access to these data at a secure level [151,152]. The resource capability of most wearable devices (typically IoT-enabled portable sensors) is limited in terms of memory, battery, and processing power. The resource-constrained nature of the sensors must be taken into consideration when processing and analyzing heavy-weight algorithms. Security is another significant issue in the robust development of emotion recognition systems [153,154]. Given the nature and characteristics of these resource-constrained sensors, traditional security mechanisms cannot be employed directly in such devices; there is a need to rethink lightweight security models for them [155]. That said, authorized users must be given access to the resources, and at the same time, unauthorized access must be denied to avoid information leakage [156].
Human emotion recognition systems may deal with large volumes of data, and the recognition process involves personal information, including health, location, and other highly confidential information. This is even more significant when considering large-scale IoT systems and their association with different human emotion recognition systems for securing access control and the delegation of access rights at scale [157,158,159]. Therefore, the protection of a user’s privacy is another challenge [154,160]. Furthermore, digital identity and identity management are two other important issues that must be addressed given the dynamic nature and scale of the number of devices, applications, and associated services in a large-scale emotion recognition system [154,161,162]. The challenge is also to address the resource-constrained nature of IoT devices, e.g., battery capacity, processing speed, and memory capacity, to capture, store, and analyze human emotions effectively [163,164,165,166]. Other notable challenges are centralized control and trust issues when using such technologies in large-scale systems like the IoT. To overcome these challenges, decentralized AI technology supported by blockchain is an alternative [167]. Blockchain is a tamper-evident, shared, and distributed digital ledger of transactions for cryptocurrency systems that does not depend upon a trusted third party for data processing. In other words, instead of having a central ledger with the data of the whole system, in a blockchain every block contains all the necessary data [168]. This supports the concept of a distributed ledger rather than a centralized one. As a revolutionary technology, the use of blockchain is beneficial in the IoT as it overcomes the limitations of a centralized system for storing information. Various proposals discuss the future use of blockchain in human emotion recognition to improve the scalability, privacy, security, availability, and interoperability of data [169,170]. In this study, we have shown different techniques and their application to human emotion recognition. In the future, we plan to conduct a more empirical study based on techniques related to machine and deep learning technologies. Another avenue of work is, more specifically, the field of smart healthcare systems, where the detection of human emotion and appropriate diagnosis are critical. Note that our review is limited to the existing methods and techniques for capturing human emotions, mostly from the human point of view (i.e., various human body parameters collected through sensors). We also plan to study in more detail the automatic collection, processing, and evaluation of human emotions with minimal human intervention.

7. Conclusions

In this paper, we have presented a review of the development and progress in human emotion recognition systems and technologies. We have provided a detailed, comprehensive, and systematic discussion of the various available mechanisms in the context of human emotion recognition. We noted that biosensors are widely used for capturing human emotions in increasingly sophisticated ways. We observed that there is significant potential for Artificial Intelligence (AI) and Machine Learning (ML) technologies to contribute to next-generation human emotion technologies that can operate without human intervention. However, there are significant challenges in integrating the various systems and technologies in a decentralized way to build a robust and scalable embedded human emotion recognition system. In addition, security, privacy, trust, fine-grained access control, and scalability are major concerns in the development of an efficient human emotion recognition and monitoring system. New research is still required and is currently ongoing in this area, and it is expected that new algorithms and systems will emerge in the future.

Author Contributions

S.P. and S.M. planned the paper, structured the article and contributed to the core research ideas. N.S. provided valuable feedback and helped in article preparation. All authors have reviewed the manuscript and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Egger, M.; Ley, M.; Hanke, S. Emotion recognition from physiological signal analysis: A review. Electron. Notes Theor. Comput. Sci. 2019, 343, 35–55.
  2. Dzedzickis, A.; Kaklauskas, A.; Bucinskas, V. Human Emotion Recognition: Review of Sensors and Methods. Sensors 2020, 20, 592.
  3. Deng, J.; Ren, F. A survey of textual emotion recognition and its challenges. IEEE Trans. Affect. Comput. 2021.
  4. Mumenthaler, C.; Sander, D.; Manstead, A. Emotion recognition in simulated social interactions. IEEE Trans. Affect. Comput. 2018, 11, 308–312.
  5. Tao, J.; Tan, T. Affective computing: A review. In International Conference on Affective Computing and Intelligent Interaction; Springer: Berlin/Heidelberg, Germany, 2005; pp. 981–995.
  6. Akgün, A.E.; Koçoğlu, İ.; İmamoğlu, S.Z. An emerging consumer experience: Emotional branding. Procedia-Soc. Behav. Sci. 2013, 99, 503–508.
  7. Álvarez-Pato, V.M.; Sánchez, C.N.; Domínguez-Soberanes, J.; Méndoza-Pérez, D.E.; Velázquez, R. A Multisensor Data Fusion Approach for Predicting Consumer Acceptance of Food Products. Foods 2020, 9, 774.
  8. Atzori, L.; Iera, A.; Morabito, G. The Internet of Things: A survey. Comput. Netw. 2010, 54, 2787–2805.
  9. Rani, P.; Sarkar, N.; Smith, C.A.; Adams, J.A. Affective communication for implicit human-machine interaction. In Proceedings of the SMC’03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man and Cybernetics, Conference Theme-System Security and Assurance (Cat. No. 03CH37483), Washington, DC, USA, 8 October 2003; Volume 5, pp. 4896–4903.
  10. Murugappan, M.; Ramachandran, N.; Sazali, Y. Classification of human emotion from EEG using discrete wavelet transform. J. Biomed. Sci. Eng. 2010, 3, 390.
  11. Suryadevara, N.K.; Mukhopadhyay, S.C. Determining wellness through an ambient assisted living environment. IEEE Intell. Syst. 2014, 29, 30–37.
  12. Suryadevara, N.; Chen, C.P.; Mukhopadhyay, S.; Rayudu, R. Ambient assisted living framework for elderly wellness determination through wireless sensor scalar data. In Proceedings of the 2013 Seventh International Conference on Sensing Technology (ICST), Wellington, New Zealand, 3–5 December 2013; pp. 632–639.
  13. AlMejrad, A.S. Human emotions detection using brain wave signals: A challenging. Eur. J. Sci. Res. 2010, 44, 640–659.
  14. Ghayvat, H.; Awais, M.; Pandya, S.; Ren, H.; Akbarzadeh, S.; Chandra Mukhopadhyay, S.; Chen, C.; Gope, P.; Chouhan, A.; Chen, W. Smart aging system: Uncovering the hidden wellness parameter for well-being monitoring and anomaly detection. Sensors 2019, 19, 766.
  15. Pandya, S.; Ghayvat, H.; Kotecha, K.; Awais, M.; Akbarzadeh, S.; Gope, P.; Mukhopadhyay, S.C.; Chen, W. Smart home anti-theft system: A novel approach for near real-time monitoring and smart home security for wellness protocol. Appl. Syst. Innov. 2018, 1, 42.
  16. Varghese, A.A.; Cherian, J.P.; Kizhakkethottam, J.J. Overview on emotion recognition system. In Proceedings of the 2015 International Conference on Soft-Computing and Networks Security (ICSNS), Coimbatore, India, 25–27 February 2015; pp. 1–5.
  17. Kundu, T.; Saravanan, C. Advancements and recent trends in emotion recognition using facial image analysis and machine learning models. In Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 1–6.
  18. Hassan, M.M.; Alam, M.G.R.; Uddin, M.Z.; Huda, S.; Almogren, A.; Fortino, G. Human emotion recognition using deep belief network architecture. Inf. Fusion 2019, 51, 10–18.
  19. Gaggioli, A. Online Emotion Recognition Services Are a Hot Trend. Cyberpsychol. Behav. Soc. Netw. 2019, 22, 358–359.
  20. Haag, A.; Goronzy, S.; Schaich, P.; Williams, J. Emotion recognition using bio-sensors: First steps towards an automatic system. In Tutorial and Research Workshop on Affective Dialogue Systems; Springer: Berlin/Heidelberg, Germany, 2004; pp. 36–48.
  21. Mehta, D.; Siddiqui, M.F.H.; Javaid, A.Y. Facial emotion recognition: A survey and real-world user experiences in mixed reality. Sensors 2018, 18, 416.
  22. Marechal, C.; Mikolajewski, D.; Tyburek, K.; Prokopowicz, P.; Bougueroua, L.; Ancourt, C.; Wegrzyn-Wolska, K. Survey on AI-Based Multimodal Methods for Emotion Detection. In High-Performance Modelling and Simulation for Big Data Applications; Kołodziej, J., González-Vélez, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2019; Volume 11400, pp. 307–324.
  23. Landowska, A.; Brodny, G.; Wrobel, M.R. Limitations of Emotion Recognition from Facial Expressions in e-Learning Context; CSEDU: Gdansk, Poland, 2017; pp. 383–389.
  24. Coito, T.; Firme, B.; Martins, M.S.; Vieira, S.M.; Figueiredo, J.; Sousa, J. Intelligent Sensors for Real-Time Decision-Making. Automation 2021, 2, 62–82.
  25. Hess, U.; Hareli, S. The influence of context on emotion recognition in humans. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015.
  26. Korkmaz, T.; Erol, H. Classification Of Human Facial Expressions For Emotion Recognition Using A Distributed Computer System. In Proceedings of the 2020 5th International Conference on Computer Science and Engineering (UBMK), Diyarbakir, Turkey, 9–11 September 2020; pp. 1–6.
  27. Picard, R.W.; Vyzas, E.; Healey, J. Toward machine emotional intelligence: Analysis of affective physiological state. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1175–1191.
  28. Christie, I.C.; Friedman, B.H. Autonomic specificity of discrete emotion and dimensions of affective space: A multivariate approach. Int. J. Psychophysiol. 2004, 51, 143–153.
  29. Kim, B.H.; Jo, S. Deep physiological affect network for the recognition of human emotions. IEEE Trans. Affect. Comput. 2018, 11, 230–243.
  30. Hofmann, J.; Platt, T.; Ruch, W. Laughter and smiling in 16 positive emotions. IEEE Trans. Affect. Comput. 2017, 8, 495–507. [Google Scholar] [CrossRef]
  31. Ekman, P.; Sorenson, E.R.; Friesen, W.V. Pan-cultural elements in facial displays of emotion. Science 1969, 164, 86–88. [Google Scholar] [CrossRef] [Green Version]
  32. Ekman, P. Basic Emotions. In Handbook of Cognition and Emotion; Dalgleish, T., Power, M., Eds.; John Wiley & Sons: Chichester, UK, 1999; pp. 45–60. [Google Scholar]
  33. De Sousa, R. The Rationality of Emotion; MIT Press: Cambridge, MA, USA, 1990. [Google Scholar]
  34. Damasio, A.R. A second chance for emotion. In Cognitive Neuroscience of Emotion; Lane, R.D., Nadel, L., Eds.; Oxford University Press: Oxford, UK, 2000; pp. 12–23. [Google Scholar]
  35. Fredrickson, B.L. What good are positive emotions? Rev. Gen. Psychol. 1998, 2, 300–319. [Google Scholar] [CrossRef] [PubMed]
  36. Feidakis, M.; Daradoumis, T.; Caballé, S. Endowing e-learning systems with emotion awareness. In Proceedings of the 2011 Third International Conference on Intelligent Networking and Collaborative Systems, Fukuoka, Japan, 30 November–2 December 2011; pp. 68–75. [Google Scholar]
  37. Kanjo, E.; Younis, E.M.; Sherkat, N. Towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach. Inf. Fusion 2018, 40, 18–31. [Google Scholar] [CrossRef]
  38. Martini, N.; Menicucci, D.; Sebastiani, L.; Bedini, R.; Pingitore, A.; Vanello, N.; Milanesi, M.; Landini, L.; Gemignani, A. The dynamics of EEG gamma responses to unpleasant visual stimuli: From local activity to functional connectivity. NeuroImage 2012, 60, 922–932. [Google Scholar] [CrossRef] [PubMed]
  39. Tyng, C.M.; Amin, H.U.; Saad, M.N.; Malik, A.S. The influences of emotion on learning and memory. Front. Psychol. 2017, 8, 1454. [Google Scholar] [CrossRef] [PubMed]
  40. Pantic, M.; Rothkrantz, L.J.M. Automatic analysis of facial expressions: The state of the art. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1424–1445. [Google Scholar] [CrossRef] [Green Version]
  41. Elfenbein, H.A.; Ambady, N. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychol. Bull. 2002, 128, 203. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  42. Pawar, S.; Kithani, V.; Ahuja, S.; Sahu, S. Smart Home Security Using IoT and Face Recognition. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
  43. Nguyen, B.T.; Trinh, M.H.; Phan, T.V.; Nguyen, H.D. An efficient real-time emotion detection using camera and facial landmarks. In Proceedings of the 2017 Seventh International Conference on Information Science and Technology (ICIST), Da Nang, Vietnam, 16–19 April 2017; pp. 251–255. [Google Scholar] [CrossRef]
  44. Michel, P.; El Kaliouby, R. Real time facial expression recognition in video using support vector machines. In Proceedings of the 5th International Conference on Multimodal Interfaces, Vancouver, BC, Canada, 5–7 November 2003; pp. 258–264. [Google Scholar]
  45. Daros, A.R.; Zakzanis, K.K.; Ruocco, A. Facial emotion recognition in borderline personality disorder. Psychol. Med. 2013, 43, 1953–1963. [Google Scholar] [CrossRef]
  46. Zhang, Z.; Luo, P.; Loy, C.C.; Tang, X. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 94–108. [Google Scholar]
  47. Jaiswal, A.; Raju, A.K.; Deb, S. Facial Emotion Detection Using Deep Learning. In Proceedings of the 2020 International Conference for Emerging Technology (INCET), Belgaum, India, 5–7 June 2020; pp. 1–5. [Google Scholar]
  48. Zheng, J.; Peng, L. A Deep Learning Compensated Back Projection for Image Reconstruction of Electrical Capacitance Tomography. IEEE Sens. J. 2020, 20, 4879–4890. [Google Scholar] [CrossRef]
  49. Kudiri, K.M.; Said, A.M.; Nayan, M.Y. Human emotion detection through speech and facial expressions. In Proceedings of the 2016 3rd International Conference on Computer and Information Sciences (ICCOINS), Kuala Lumpur, Malaysia, 15–17 August 2016; pp. 351–356. [Google Scholar] [CrossRef]
  50. Prasomphan, S. Detecting human emotion via speech recognition by using speech spectrogram. In Proceedings of the 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), Paris, France, 19–21 October 2015; pp. 1–10. [Google Scholar] [CrossRef]
  51. Khanna, P.; Sasikumar, M. Recognizing emotions from human speech. In Thinkquest 2010; Springer: Berlin/Heidelberg, Germany, 2011; pp. 219–223. [Google Scholar]
  52. Chen, P.; Kuang, Y.; Li, J. Human motion capture algorithm based on inertial sensors. J. Sens. 2016, 2016, 4343797. [Google Scholar] [CrossRef]
  53. Stathopoulou, I.O.; Tsihrintzis, G.A. Emotion recognition from body movements and gestures. In Intelligent Interactive Multimedia Systems and Services; Springer: Berlin/Heidelberg, Germany, 2011; pp. 295–303. [Google Scholar]
  54. Ahmed, F.; Bari, A.H.; Gavrilova, M.L. Emotion Recognition From Body Movement. IEEE Access 2019, 8, 11761–11781. [Google Scholar] [CrossRef]
  55. Mao, Q.R.; Pan, X.Y.; Zhan, Y.Z.; Shen, X.J. Using Kinect for real-time emotion recognition via facial expressions. Front. Inf. Technol. Electron. Eng. 2015, 16, 272–282. [Google Scholar] [CrossRef]
  56. Di, H.; Li, Y.; Liu, K.; An, L.; Dong, J. Hand gesture monitoring using fiber-optic curvature sensors. Appl. Opt. 2019, 58, 7935–7942. [Google Scholar] [CrossRef] [PubMed]
  57. Jalloul, N. Wearable sensors for the monitoring of movement disorders. Biomed. J. 2018, 41, 249–253. [Google Scholar] [CrossRef] [PubMed]
  58. Pulliam, C.L.; Heldman, D.A.; Brokaw, E.B.; Mera, T.O.; Mari, Z.K.; Burack, M.A. Continuous assessment of levodopa response in Parkinson’s disease using wearable motion sensors. IEEE Trans. Biomed. Eng. 2017, 65, 159–164. [Google Scholar] [CrossRef] [PubMed]
  59. Hui, T.K.; Sherratt, R.S. Coverage of emotion recognition for common wearable biosensors. Biosensors 2018, 8, 30. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Wu, W.; Zhang, H.; Pirbhulal, S.; Mukhopadhyay, S.C.; Zhang, Y.T. Assessment of biofeedback training for emotion management through wearable textile physiological monitoring system. IEEE Sens. J. 2015, 15, 7087–7095. [Google Scholar] [CrossRef]
  61. Pantelopoulos, A.; Bourbakis, N. A survey on wearable biosensor systems for health monitoring. In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4887–4890. [Google Scholar]
  62. Du, G.; Long, S.; Yuan, H. Non-Contact Emotion Recognition Combining Heart Rate and Facial Expression for Interactive Gaming Environments. IEEE Access 2020, 8, 11896–11906. [Google Scholar] [CrossRef]
  63. Turk, M.; Robertson, G. Perceptual user interfaces (introduction). Commun. ACM 2000, 43, 32–34. [Google Scholar] [CrossRef]
  64. Gotovtsev, P. How IoT Can Integrate Biotechnological Approaches for City Applications—Review of Recent Advancements, Issues, and Perspectives. Appl. Sci. 2020, 10, 3990. [Google Scholar] [CrossRef]
  65. Jerritta, S.; Murugappan, M.; Nagarajan, R.; Wan, K. Physiological signals based human emotion recognition: A review. In Proceedings of the 2011 IEEE 7th International Colloquium on Signal Processing and its Applications, Penang, Malaysia, 4–6 March 2011; pp. 410–415. [Google Scholar]
  66. Shukla, J.; Barreda-Angeles, M.; Oliver, J.; Nandi, G.; Puig, D. Feature extraction and selection for emotion recognition from electrodermal activity. IEEE Trans. Affect. Comput. 2019. [Google Scholar]
  67. Vos, P.; De Cock, P.; Munde, V.; Petry, K.; Van Den Noortgate, W.; Maes, B. The tell-tale: What do heart rate, skin temperature and skin conductance reveal about emotions of people with severe and profound intellectual disabilities? Res. Dev. Disabil. 2012, 33, 1117–1127. [Google Scholar] [CrossRef]
  68. Kushki, A.; Fairley, J.; Merja, S.; King, G.; Chau, T. Comparison of blood volume pulse and skin conductance responses to mental and affective stimuli at different anatomical sites. Physiol. Meas. 2011, 32, 1529. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  69. Hameed, R.A.; Sabir, M.K.; Fadhel, M.A.; Al-Shamma, O.; Alzubaidi, L. Human emotion classification based on respiration signal. In Proceedings of the International Conference on Information and Communication Technology, Jeju Island, Korea, 16–18 October 2019; pp. 239–245. [Google Scholar]
  70. Zhuang, N.; Zeng, Y.; Tong, L.; Zhang, C.; Zhang, H.; Yan, B. Emotion recognition from EEG signals using multidimensional information in EMD domain. BioMed Res. Int. 2017, 2017, 8317357. [Google Scholar] [CrossRef]
  71. Mardini, W.; Ali, G.A.; Magdady, E.; Al-momani, S. Detecting human emotions using electroencephalography (EEG) using dynamic programming approach. In Proceedings of the 2018 6th International Symposium on Digital Forensic and Security (ISDFS), Antalya, Turkey, 22–25 March 2018; pp. 1–5. [Google Scholar] [CrossRef]
  72. Choi, K.H.; Kim, J.; Kwon, O.S.; Kim, M.J.; Ryu, Y.H.; Park, J.E. Is heart rate variability (HRV) an adequate tool for evaluating human emotions?–A focus on the use of the International Affective Picture System (IAPS). Psychiatry Res. 2017, 251, 192–196. [Google Scholar] [CrossRef] [PubMed]
  73. Jenke, R.; Peer, A.; Buss, M. Feature extraction and selection for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2014, 5, 327–339. [Google Scholar] [CrossRef]
  74. Alarcao, S.M.; Fonseca, M.J. Emotions recognition using EEG signals: A survey. IEEE Trans. Affect. Comput. 2017, 10, 374–393. [Google Scholar] [CrossRef]
  75. Suhaimi, N.S.; Mountstephens, J.; Teo, J. EEG-Based Emotion Recognition: A State-of-the-Art Review of Current Trends and Opportunities. Comput. Intell. Neurosci. 2020, 2020, 8875426. [Google Scholar] [CrossRef]
  76. Lim, J.Z.; Mountstephens, J.; Teo, J. Emotion Recognition Using Eye-Tracking: Taxonomy, Review and Current Challenges. Sensors 2020, 20, 2384. [Google Scholar] [CrossRef] [PubMed]
  77. Lu, Y.; Zheng, W.L.; Li, B.; Lu, B.L. Combining eye movements and eeg to enhance emotion recognition. IJCAI 2015, 15, 1170–1176. [Google Scholar]
  78. Chen, Z.; Zhao, D.; Ma, R.; Zhang, X.; Rao, J.; Yin, Y.; Wang, X.; Yi, F. Flexible temperature sensors based on carbon nanomaterials. J. Mater. Chem. B 2021, 9, 1941–1964. [Google Scholar] [CrossRef]
  79. Zhang, X.; Chen, F.; Han, L.; Zhang, G.; Hu, Y.; Jiang, W.; Zhu, P.; Sun, R.; Wong, C.P. Flexible, Highly Sensitive, and Ultrafast Responsive Pressure Sensor with Stochastic Microstructures for Human Health Monitoring. Adv. Eng. Mater. 2021, 23, 2000902. [Google Scholar] [CrossRef]
  80. Cheng, M.; Zhu, G.; Zhang, F.; Tang, W.L.; Jianping, S.; Yang, J.Q.; Zhu, L.Y. A review of flexible force sensors for human health monitoring. J. Adv. Res. 2020. [Google Scholar] [CrossRef]
  81. Yamamoto, Y.; Harada, S.; Yamamoto, D.; Honda, W.; Arie, T.; Akita, S.; Takei, K. Printed multifunctional flexible device with an integrated motion sensor for health care monitoring. Sci. Adv. 2016, 2, e1601473. [Google Scholar] [CrossRef] [Green Version]
  82. Melzer, M.; Mönch, J.I.; Makarov, D.; Zabila, Y.; Cañón Bermúdez, G.S.; Karnaushenko, D.; Baunack, S.; Bahr, F.; Yan, C.; Kaltenbrunner, M. Wearable magnetic field sensors for flexible electronics. Adv. Mater. 2015, 27, 1274–1280. [Google Scholar] [CrossRef]
  83. Takaloo, S.; Zand, M.M. Wearable electrochemical flexible biosensors: With the focus on affinity biosensors. Sens. Bio-Sens. Res. 2021, 32, 100403. [Google Scholar] [CrossRef]
  84. Ranasinghe, S.; Al Machot, F.; Mayr, H.C. A review on applications of activity recognition systems with regard to performance and evaluation. Int. J. Distrib. Sens. Netw. 2016, 12, 1550147716665520. [Google Scholar] [CrossRef] [Green Version]
  85. Poria, S.; Majumder, N.; Mihalcea, R.; Hovy, E. Emotion recognition in conversation: Research challenges, datasets, and recent advances. IEEE Access 2019, 7, 100943–100953. [Google Scholar] [CrossRef]
  86. Zhang, Y.; Li, Z.; Ren, F.; Kuroiwa, S. Semi-automatic emotion recognition from textual input based on the constructed emotion thesaurus. In Proceedings of the 2005 International Conference on Natural Language Processing and Knowledge Engineering, Wuhan, China, 30 October–1 November 2005; pp. 571–576. [Google Scholar]
  87. Matsumoto, K.; Minato, J.; Ren, F.; Kuroiwa, S. Estimating human emotions using wording and sentence patterns. In Proceedings of the 2005 IEEE International Conference on Information Acquisition, Hong Kong, China, 27 June–3 July 2005; p. 6. [Google Scholar] [CrossRef]
  88. Shaheen, S.; El-Hajj, W.; Hajj, H.; Elbassuoni, S. Emotion recognition from text based on automatically generated rules. In Proceedings of the 2014 IEEE International Conference on Data Mining Workshop, Shenzhen, China, 14 December 2014. [Google Scholar]
  89. Park, M.W.; Kim, C.J.; Hwang, M.; Lee, E.C. Individual emotion classification between happiness and sadness by analyzing photoplethysmography and skin temperature. In Proceedings of the 2013 Fourth World Congress on Software Engineering, Hong Kong, China, 3–4 December 2013; pp. 190–194. [Google Scholar]
  90. Jing, H.; Lun, X.; Dan, L.; Zhijie, H.; Zhiliang, W. Cognitive emotion model for eldercare robot in smart home. China Commun. 2015, 12, 32–41. [Google Scholar] [CrossRef]
  91. Pollreisz, D.; TaheriNejad, N. A simple algorithm for emotion recognition, using physiological signals of a smart watch. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 2353–2356. [Google Scholar]
  92. Park, J.; Park, J.; Shin, D.; Choi, Y. A BCI Based Alerting System for Attention Recovery of UAV Operators. Sensors 2021, 21, 2447. [Google Scholar] [CrossRef] [PubMed]
  93. Chao, H.; Liu, Y. Emotion Recognition From Multi-Channel EEG Signals by Exploiting the Deep Belief-Conditional Random Field Framework. IEEE Access 2020, 8, 33002–33012. [Google Scholar] [CrossRef]
  94. Li, D.; Wang, Z.; Wang, C.; Liu, S.; Chi, W.; Dong, E.; Song, X.; Gao, Q.; Song, Y. The fusion of electroencephalography and facial expression for continuous emotion recognition. IEEE Access 2019, 7, 155724–155736. [Google Scholar] [CrossRef]
  95. Chen, S.; Gao, Z.; Wang, S. Emotion recognition from peripheral physiological signals enhanced by EEG. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 2827–2831. [Google Scholar]
  96. Markov, K.; Matsui, T. Music genre and emotion recognition using Gaussian processes. IEEE Access 2014, 2, 688–697. [Google Scholar] [CrossRef]
  97. Liu, Y.J.; Yu, M.; Zhao, G.; Song, J.; Ge, Y.; Shi, Y. Real-time movie-induced discrete emotion recognition from EEG signals. IEEE Trans. Affect. Comput. 2017, 9, 550–562. [Google Scholar] [CrossRef]
  98. Suryadevara, N.; Gaddam, A.; Mukhopadhyay, S.; Rayudu, R. Wellness determination of inhabitant based on daily activity behaviour in real-time monitoring using sensor networks. In Proceedings of the 2011 Fifth International Conference on Sensing Technology, Palmerston North, New Zealand, 28 November–1 December 2011; pp. 474–481. [Google Scholar]
  99. Suryadevara, N.; Mukhopadhyay, S.; Rayudu, R. Applying SARIMA time series to forecast sleeping activity for wellness model of elderly monitoring in smart home. In Proceedings of the 2012 Sixth International Conference on Sensing Technology (ICST), Kolkata, India, 18–21 December 2012; pp. 157–162. [Google Scholar]
  100. Park, B.J.; Yoon, C.; Jang, E.H.; Kim, D.H. Physiological signals and recognition of negative emotions. In Proceedings of the 2017 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Korea, 18–20 October 2017; pp. 1074–1076. [Google Scholar]
  101. Schuller, B.; Vlasenko, B.; Eyben, F.; Wöllmer, M.; Stuhlsatz, A.; Wendemuth, A.; Rigoll, G. Cross-corpus acoustic emotion recognition: Variances and strategies. IEEE Trans. Affect. Comput. 2010, 1, 119–131. [Google Scholar] [CrossRef]
  102. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806. [Google Scholar]
  103. Yang, Y.H.; Chen, H.H. Ranking-based emotion recognition for music organization and retrieval. IEEE Trans. Audio Speech Lang. Process. 2010, 19, 762–774. [Google Scholar] [CrossRef]
  104. Iyer, A.V.; Pasad, V.; Sankhe, S.R.; Prajapati, K. Emotion based mood enhancing music recommendation. In Proceedings of the 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 19–20 May 2017; pp. 1573–1577. [Google Scholar]
  105. Ramanathan, R.; Kumaran, R.; Rohan, R.R.; Gupta, R.; Prabhu, V. An Intelligent Music Player Based on Emotion Recognition. In Proceedings of the 2017 2nd International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS), Bengaluru, India, 21–23 December 2017; pp. 1–5. [Google Scholar]
  106. Kim, J.; André, E. Emotion recognition based on physiological changes in music listening. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 2067–2083. [Google Scholar] [CrossRef] [PubMed]
  107. Lukose, S.; Upadhya, S.S. Music player based on emotion recognition of voice signals. In Proceedings of the 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT), Kerala, India, 6–7 July 2017; pp. 1751–1754. [Google Scholar]
  108. Attabi, Y.; Dumouchel, P. Anchor models for emotion recognition from speech. IEEE Trans. Affect. Comput. 2013, 4, 280–290. [Google Scholar] [CrossRef]
  109. Wang, J.C.; Lee, Y.S.; Chin, Y.H.; Chen, Y.R.; Hsieh, W.C. Hierarchical Dirichlet process mixture model for music emotion recognition. IEEE Trans. Affect. Comput. 2015, 6, 261–271. [Google Scholar] [CrossRef]
  110. Xu, B.; Fu, Y.; Jiang, Y.G.; Li, B.; Sigal, L. Heterogeneous knowledge transfer in video emotion recognition, attribution and summarization. IEEE Trans. Affect. Comput. 2016, 9, 255–270. [Google Scholar] [CrossRef] [Green Version]
  111. Rengers, J. Investigating Association between Musical Features and Emotion through EEG Signal Analysis. Bachelor’s Thesis, University of Twente, Enschede, The Netherlands, 2020. [Google Scholar]
  112. Nawaz, R.; Nisar, H.; Yap, V.V. Recognition of Useful Music for Emotion Enhancement Based on Dimensional Model. In Proceedings of the 2018 2nd International Conference on BioSignal Analysis, Processing and Systems (ICBAPS), Kuching, Malaysia, 24–26 July 2018; pp. 176–180. [Google Scholar]
  113. Song, T.; Zheng, W.; Lu, C.; Zong, Y.; Zhang, X.; Cui, Z. MPED: A multi-modal physiological emotion database for discrete emotion recognition. IEEE Access 2019, 7, 12177–12191. [Google Scholar] [CrossRef]
  114. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 2009, 14, 186–197. [Google Scholar] [CrossRef]
  115. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from brain signals using hybrid adaptive filtering and higher order crossings analysis. IEEE Trans. Affect. Comput. 2010, 1, 81–97. [Google Scholar] [CrossRef]
  116. Petrantonakis, P.C.; Hadjileontiadis, L.J. A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 737–746. [Google Scholar] [CrossRef] [PubMed]
  117. Beckmann, N.; Viga, R.; Dogangün, A.; Grabmaier, A. Measurement and Analysis of Local Pulse Transit Time for Emotion Recognition. IEEE Sens. J. 2019, 19, 7683–7692. [Google Scholar] [CrossRef]
  118. Xu, T.; Yin, R.; Shu, L.; Xu, X. Emotion recognition using frontal eeg in vr affective scenes. In Proceedings of the 2019 IEEE MTT-S International Microwave Biomedical Conference (IMBioC), Nanjing, China, 6–8 May 2019; Volume 1, pp. 1–4. [Google Scholar]
  119. Shin, J.; Maeng, J.; Kim, D.H. Inner Emotion Recognition using Multi Bio-Signals. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), JeJu, Korea, 24–26 June 2018; pp. 206–212. [Google Scholar]
  120. Chamola, V.; Vineet, A.; Nayyar, A.; Hossain, E. Brain-Computer Interface-Based Humanoid Control: A Review. Sensors 2020, 20, 3620. [Google Scholar] [CrossRef] [PubMed]
  121. Pandey, P.; Seeja, K. Subject independent emotion recognition from EEG using VMD and deep learning. J. King Saud Univ. Comput. Inf. Sci. 2019. [Google Scholar] [CrossRef]
  122. Hwang, S.; Ki, M.; Hong, K.; Byun, H. Subject-Independent EEG-based Emotion Recognition using Adversarial Learning. In Proceedings of the 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea, 26–28 February 2020; pp. 1–4. [Google Scholar]
  123. Perusquía-Hernández, M.; Hirokawa, M.; Suzuki, K. A wearable device for fast and subtle spontaneous smile recognition. IEEE Trans. Affect. Comput. 2017, 8, 522–533. [Google Scholar] [CrossRef]
  124. Albornoz, E.M.; Milone, D.H. Emotion recognition in never-seen languages using a novel ensemble method with emotion profiles. IEEE Trans. Affect. Comput. 2015, 8, 43–53. [Google Scholar] [CrossRef]
  125. Do, L.N.; Yang, H.J.; Nguyen, H.D.; Kim, S.H.; Lee, G.S.; Na, I.S. Deep neural network-based fusion model for emotion recognition using visual data. J. Supercomput. 2021, 10, 1–18. [Google Scholar]
  126. Sokolov, D.; Patkin, M. Real-time emotion recognition on mobile devices. In Proceedings of the 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), Xi’an, China, 15–19 May 2018; p. 787. [Google Scholar]
  127. Yong, H.; Lee, J.; Choi, J. Emotion Recognition in Gamers Wearing Head-mounted Display. In Proceedings of the 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 1251–1252. [Google Scholar]
  128. Katsimerou, C.; Heynderickx, I.; Redi, J.A. Predicting mood from punctual emotion annotations on videos. IEEE Trans. Affect. Comput. 2015, 6, 179–192. [Google Scholar] [CrossRef]
  129. Hua, W.; Dai, F.; Huang, L.; Xiong, J.; Gui, G. HERO: Human emotions recognition for realizing intelligent Internet of Things. IEEE Access 2019, 7, 24321–24332. [Google Scholar] [CrossRef]
  130. Lee, J.H.; Kim, H.J.; Cheong, Y.G. A Multi-modal Approach for Emotion Recognition of TV Drama Characters Using Image and Text. In Proceedings of the 2020 IEEE International Conference on Big Data and Smart Computing (BigComp), Busan, Korea, 19–22 February 2020; pp. 420–424. [Google Scholar]
  131. Pathar, R.; Adivarekar, A.; Mishra, A.; Deshmukh, A. Human Emotion Recognition using Convolutional Neural Network in Real Time. In Proceedings of the 2019 1st International Conference on Innovations in Information and Communication Technology (ICIICT), Chennai, India, 25–26 April 2019; pp. 1–7. [Google Scholar]
  132. Joesph, C.; Rajeswari, A.; Premalatha, B.; Balapriya, C. Implementation of physiological signal based emotion recognition algorithm. In Proceedings of the 2020 IEEE 36th International Conference on Data Engineering (ICDE), Dallas, TX, USA, 20–24 April 2020; pp. 2075–2079. [Google Scholar]
  133. Chakraborty, A.; Konar, A.; Chakraborty, U.K.; Chatterjee, A. Emotion recognition from facial expressions and its control using fuzzy logic. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2009, 39, 726–743. [Google Scholar] [CrossRef]
  134. Hossain, M.S.; Muhammad, G. An emotion recognition system for mobile applications. IEEE Access 2017, 5, 2281–2287. [Google Scholar] [CrossRef]
  135. Kołakowska, A.; Landowska, A.; Szwoch, M.; Szwoch, W.; Wrobel, M.R. Emotion recognition and its applications. In Human-Computer Systems Interaction: Backgrounds and Applications 3; Springer: Berlin/Heidelberg, Germany, 2014; pp. 51–62. [Google Scholar]
  136. Sadka, O.; Antle, A. Interactive Technologies for Emotion-regulation Training: Opportunities and Challenges. In Proceedings of the Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–12. [Google Scholar]
  137. JavaScript API for Face Detection. Available online: https://github.com/justadudewhohacks/face-api.js/ (accessed on 4 January 2021).
  138. Quazi, M.; Mukhopadhyay, S.; Suryadevara, N.; Huang, Y.M. Towards the smart sensors based human emotion recognition. In Proceedings of the 2012 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Graz, Austria, 13–16 May 2012; pp. 2365–2370. [Google Scholar]
  139. Schuller, B.W. Speech emotion recognition: Two decades in a nutshell, benchmarks, and ongoing trends. Commun. ACM 2018, 61, 90–99. [Google Scholar] [CrossRef]
  140. Lyu, C.; Li, P.; Wang, D.; Yang, S.; Lai, Y.; Sui, C. High-Speed Optical 3D Measurement Sensor for Industrial Application. IEEE Sens. J. 2021, 21, 11253–11261. [Google Scholar] [CrossRef]
  141. Leelaarporn, P.; Wachiraphan, P.; Kaewlee, T.; Udsa, T.; Chaisaen, R.; Choksatchawathi, T.; Laosirirat, R.; Lakhan, P.; Natnithikarat, P.; Thanontip, K.; et al. Sensor-Driven Achieving of Smart Living: A Review. IEEE Sens. J. 2021, 21, 10369–10391. [Google Scholar] [CrossRef]
  142. Pal, S.; Hitchens, M.; Varadharajan, V. Access control for Internet of Things—enabled assistive technologies: An architecture, challenges and requirements. In Assistive Technology for the Elderly; Elsevier: Amsterdam, The Netherlands, 2020; pp. 1–43. [Google Scholar]
  143. Andalibi, N.; Buss, J. The Human in Emotion Recognition on Social Media: Attitudes, Outcomes, Risks. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; pp. 1–16. [Google Scholar]
  144. Huang, M.H.; Rust, R.T. A strategic framework for artificial intelligence in marketing. J. Acad. Mark. Sci. 2021, 49, 30–50. [Google Scholar] [CrossRef]
  145. Saxena, A.; Khanna, A.; Gupta, D. Emotion recognition and detection methods: A comprehensive survey. J. Artif. Intell. Syst. 2020, 2, 53–79. [Google Scholar] [CrossRef]
  146. Wright, J. Suspect AI: Vibraimage, Emotion Recognition Technology, and Algorithmic Opacity. arXiv 2020, arXiv:2009.00502. [Google Scholar]
  147. Izquierdo-Reyes, J.; Ramirez-Mendoza, R.A.; Bustamante-Bello, M.R.; Pons-Rovira, J.L.; Gonzalez-Vargas, J.E. Emotion recognition for semi-autonomous vehicles framework. Int. J. Interact. Des. Manuf. 2018, 12, 1447–1454. [Google Scholar] [CrossRef]
  148. Mordorintelligence. The Emotion Detection and Recognition Market. Available online: https://www.mordorintelligence.com/industry-reports/emotion-detection-and-recognition-edr-market (accessed on 15 May 2021).
  149. Zhang, Y.; Qian, Y.; Wu, D.; Hossain, M.S.; Ghoneim, A.; Chen, M. Emotion-aware multimedia systems security. IEEE Trans. Multimed. 2018, 21, 617–624. [Google Scholar] [CrossRef]
  150. Zhang, K.; Ling, W. Joint Motion Information Extraction and Human Behavior Recognition in Video Based on Deep Learning. IEEE Sens. J. 2020, 20, 11919–11926. [Google Scholar] [CrossRef]
  151. Fang, W.C.; Wang, K.Y.; Fahier, N.; Ho, Y.L.; Huang, Y.D. Development and validation of an EEG-based real-time emotion recognition system using edge AI computing platform with convolutional neural network system-on-chip design. IEEE J. Emerg. Sel. Top. Circuits Syst. 2019, 9, 645–657. [Google Scholar] [CrossRef]
  152. Hagendorff, T.; Wezel, K. 15 challenges for AI: Or what AI (currently) can’t do. AI Soc. 2019, 1–11. [Google Scholar] [CrossRef]
  153. Pal, S.; Hitchens, M.; Varadharajan, V.; Rabehaja, T. Policy-based access control for constrained healthcare resources. In Proceedings of the 2018 IEEE 19th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), Chania, Greece, 12–15 June 2018; pp. 588–599. [Google Scholar]
  154. Pal, S.; Hitchens, M.; Varadharajan, V. On the design of security mechanisms for the Internet of Things. In Proceedings of the 2017 Eleventh International Conference on Sensing Technology (ICST), Sydney, NSW, Australia, 4–6 December 2017; pp. 1–6. [Google Scholar]
  155. Sawaneh, I.A.; Sankoh, I.; Koroma, D.K. A survey on security issues and wearable sensors in wireless body area network for healthcare system. In Proceedings of the 2017 14th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 15–17 December 2017; pp. 304–308. [Google Scholar]
  156. Chaudhry, S.A.; Yahya, K.; Al-Turjman, F.; Yang, M.H. A secure and reliable device access control scheme for IoT based sensor cloud systems. IEEE Access 2020, 8, 139244–139254. [Google Scholar] [CrossRef]
  157. Pal, S.; Hitchens, M.; Varadharajan, V. Towards a secure access control architecture for the Internet of Things. In Proceedings of the 2017 IEEE 42nd Conference on Local Computer Networks (LCN), Singapore, 9–12 October 2017; pp. 219–222. [Google Scholar]
  158. Pal, S.; Rabehaja, T.; Hill, A.; Hitchens, M.; Varadharajan, V. On the integration of blockchain to the internet of things for enabling access right delegation. IEEE Internet Things J. 2019, 7, 2630–2639. [Google Scholar] [CrossRef]
  159. Pal, S.; Hitchens, M.; Rabehaja, T.; Mukhopadhyay, S. Security requirements for the internet of things: A systematic approach. Sensors 2020, 20, 5897. [Google Scholar] [CrossRef] [PubMed]
  160. Kapoor, V.; Singh, R.; Reddy, R.; Churi, P. Privacy Issues in Wearable Technology: An Intrinsic Review. In Proceedings of the International Conference on Innovative Computing & Communications (ICICC), Delhi, India, 6 April 2020. [Google Scholar]
  161. Poonia, A.S.; Banerjee, C.; Banerjee, A.; Sharma, S. Security Issues in Internet of Things (IoT)-Enabled Systems: Problem and Prospects. In Soft Computing: Theories and Applications; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1419–1423. [Google Scholar]
  162. Pal, S.; Hitchens, M.; Varadharajan, V. Modeling identity for the internet of things: Survey, classification and trends. In Proceedings of the 2018 12th International Conference on Sensing Technology (ICST), Limerick, Ireland, 4–6 December 2018; pp. 45–51. [Google Scholar]
  163. Rabehaja, T.; Pal, S.; Hitchens, M. Design and implementation of a secure and flexible access-right delegation for resource constrained environments. Future Gener. Comput. Syst. 2019, 99, 593–608. [Google Scholar] [CrossRef]
  164. Pal, S. Wind energy—An innovative solution to global warming? In Proceedings of the 2009 1st International Conference on the Developments in Renewable Energy Technology (ICDRET), Dhaka, Bangladesh, 17–19 December 2009; pp. 1–3. [Google Scholar]
  165. Pal, S. Evaluating the impact of network loads and message size on mobile opportunistic networks in challenged environments. J. Netw. Comput. Appl. 2017, 81, 47–58. [Google Scholar] [CrossRef]
  166. Pal, S. Extending Mobile Cloud Platforms Using Opportunistic Networks: Survey, Classification and Open Issues. J. UCS 2015, 21, 1594–1634. [Google Scholar]
  167. Salah, K.; Rehman, M.H.U.; Nizamuddin, N.; Al-Fuqaha, A. Blockchain for AI: Review and open research challenges. IEEE Access 2019, 7, 10127–10149. [Google Scholar] [CrossRef]
  168. Fernández-Caramés, T.M.; Fraga-Lamas, P. A Review on the Use of Blockchain for the Internet of Things. IEEE Access 2018, 6, 32979–33001. [Google Scholar] [CrossRef]
  169. Xu, K. Establishment of Music Emotion Model Based on Blockchain Network Environment. Wirel. Commun. Mob. Comput. 2020, 2020, 8870886. [Google Scholar] [CrossRef]
  170. Panda, S.S.; Jena, D. Decentralizing AI Using Blockchain Technology for Secure Decision Making. In Advances in Machine Learning and Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2021; pp. 687–694. [Google Scholar]
Figure 1. A simple process involved in the human emotion recognition system.
Figure 2. Studying human emotions to generate appropriate decisions based on control and cognitive regulations (top-down and bottom-up).
Figure 3. A taxonomy of different types of sensors based on the techniques, interaction, and physiological parameters.
Figure 4. EEG-based brain-computer interface (BCI) process to collect human emotion [92].
Figure 5. A hybrid brain-computer interface combining EEG, EOG, and EMG signals [120].
Table 1. Summary of major methods for activity monitoring (focused on human emotion recognition) and the corresponding approaches used (reference articles).
Method | Activity Monitoring | Approach (References)
SKT | Skin temperature | [89]
PPG | Heart rate monitoring | [89]
EEG | Electrophysiological signals (from the brain) | [93,94,95]
EMG | Nerve stimulation of the muscle | [120]
ECG | Electrical signal from the heart | [117]
EOG | Signals from the outer retina | [120]
GSR | Signals from sweat gland activity | [113]
Table 2. Comparison of electrical and non-electrical constraints in human emotion recognition techniques.
Measurements Depending on Electrical Constraints | Measurements Depending on Non-Electrical Constraints
Direct contact with sensors: EEG, EMG, ECG, EOG; modulated sensors: GSR | Contact: PPG, RR, SKT; non-contact: PPG
Less intrusive | More intrusive
More usability | Less usability
Less user interfacing required | More user interfacing required
Integrate new components | Moderate integration of components
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
