Article

Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model

1 Department of Computer Science, Air University, Islamabad 44000, Pakistan
2 Department of Human-Computer Interaction, Hanyang University, Ansan 15588, Korea
* Author to whom correspondence should be addressed.
Entropy 2020, 22(5), 579; https://doi.org/10.3390/e22050579
Submission received: 18 April 2020 / Revised: 8 May 2020 / Accepted: 19 May 2020 / Published: 20 May 2020

Abstract

Advancements in wearable sensor technologies have had a prominent effect on the daily life activities of humans. These wearable sensors are gaining traction in healthcare for the elderly, where they help ensure independent living and improve comfort. In this paper, we present a human activity recognition model that acquires signal data from motion node sensors, i.e., gyroscopes and accelerometers. First, the inertial data are processed via multiple filters, such as the Savitzky–Golay, median and Hampel filters, to examine lower/upper cutoff frequency behaviors. Second, the model extracts a multifused set of statistical, wavelet and binary features to maximize the occurrence of optimal feature values. Then, adaptive moment estimation (Adam) and AdaDelta are introduced in a feature optimization phase to adapt learning rate patterns. These optimized patterns are further processed by the maximum entropy Markov model (MEMM), which uses empirical expectation and maximum entropy to measure signal variances and improve accuracy. Our model was experimentally evaluated on the University of Southern California Human Activity Dataset (USC-HAD) as a benchmark dataset and on the Intelligent Media sporting behavior (IMSB) dataset, a new self-annotated sports dataset. For evaluation, we used the “leave-one-out” cross validation scheme; the results outperformed existing well-known statistical state-of-the-art methods, achieving improved recognition accuracies of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB and Mhealth datasets, respectively. The proposed system should be applicable to man–machine interface domains, such as health exercises, robot learning, interactive games and pattern-based surveillance.

1. Introduction

Nowadays, rapid development in the area of wearable sensors has revolutionized the monitoring of human life logs in indoor/outdoor environments. These advancements have enabled the development of sophisticated sensors that can be attached safely to the human body to monitor behavioral patterns, and they have produced personalized environments that improve human living standards. However, sensors for life log monitoring still face challenges in recognizing activities with minimal contextual information. Similarly, several limitations, such as inconsistent human motions, resting, unconsciousness and irregular breathing, lead to inaccurate detection of human activities and cause difficulties in recognizing activities with complex postures.
Currently, wearable sensors have a wide range of real-world applications in human activity recognition (HAR), including security surveillance systems, healthcare monitoring, sports assistance, interactive 3D games and smart homes [1]. In security, HAR systems are used for uncertain event detection so that safety measures can be taken against violent activities (i.e., fighting, falling and strikes) in the surrounding area. In healthcare monitoring, it is possible to analyze a patient’s heart rate, body motion, brain activity and other critical health data, which makes it possible to assess usual or unusual patient behavior. Likewise, in the rehabilitation process, patients can easily monitor their health, fitness and medication routines. In sports assistance, wearable sensors provide velocity tracking and sweat rate measurement [2] during physical training so that exercises can be conducted more effectively. In interactive 3D games, body part movements are controlled by wearable sensors and physical exergames are playable in indoor environments, while in smart homes, long-range distance care is provided, such as children’s day-care and elderly activity monitoring.
Considering wearable sensor-based technologies for HAR, a wide range of sensors have been integrated to acquire distinct motion cues. Among them, a few wearable sensors, such as accelerometers, gyroscopes and magnetometers, make it feasible to detect multiple aspects of human life and to measure position changes, angular rotation and body movements in 3-dimensional space [3,4,5]. However, despite the substantial amount of information provided by wearable sensors, HAR still faces unresolved challenges [6,7] that prevent perfect recognition results.
In this paper, we propose a novel multifused wearable HAR system that handles the complexity of human life log routines by measuring changes in body position and orientation. For the filtered inertial (i.e., accelerometer and gyroscope) data, we adopted three main feature approaches: statistical features (i.e., median, variance, etc.), frequency features and wavelet transform features. However, because a large feature dimension increases the computational complexity of HAR, we use the adaptive moment estimation (Adam) and AdaDelta optimization algorithms to properly discriminate among the various activity classes. Finally, the maximum entropy Markov model is embodied in the model to measure the empirical expectation and maximum entropy of various human activities and thereby obtain significant accuracy. For performance evaluation, the proposed classifier was compared against conventional classifiers. We applied our model to the IM-Sporting Behaviors (IMSB) dataset, which is based on the signal patterns of different sport activities. We also applied the proposed model to two public inertial sensor-based datasets, the University of Southern California human activity dataset (USC-HAD) and the Mhealth dataset. The main contributions of this paper are as follows:
  • We propose multifeature extraction methods covering both time domain and frequency domain features of varied signal patterns.
  • For complex human activity patterns in sports and daily living, we designed an Adam optimization-based maximum entropy Markov model that provides contextual information as well as behavior classification.
  • In addition, a comprehensive evaluation was performed on two public benchmark datasets as well as one self-annotated dataset, achieving significant results compared with other state-of-the-art methods.
The remaining parts of the paper are organized as follows: In Section 2, we review related work in two main categories of human activity analysis. Section 3 describes the proposed HAR model with its multifused feature methods. Section 4 presents the combined classifiers, i.e., Adam/AdaDelta optimization and the maximum entropy Markov model. Section 5 presents the experimental setup and the results of the comparison with existing well-known statistical state-of-the-art methods. Finally, Section 6 presents the conclusion and future directions.

2. Related Work

A significant amount of research is underway on the development of HAR via multiple categories of sensors, such as video sensors and body-worn inertial sensors. Here, we divide the related work into two parts: video sensor-based HAR analysis and wearable sensor-based activity analysis.

2.1. Video Sensor-Based HAR Analysis

In vision-based sensing, video cameras are fixed at surveillance locations to perform automated inspection of human activities. In recent years, these sensors have also become widely used in the healthcare and fitness industries, where researchers use them to improve user authentication practices [8,9]. Htike et al. [10] proposed a posture-based HAR system using one static camera. For video surveillance, their model is divided into two stages, training and evaluation, which were implemented with four different classifiers, namely, a feed-forward neural network, K-means, a multilayer perceptron and fuzzy C-means. In [11], Jalal et al. designed a depth video-based, translation- and scaling-invariant HAR model combined with a hidden Markov model (HMM). In their work, the invariant features are computed by different transforms, such as the R transform and the Radon transform. Furthermore, for classification, they used an HMM and principal component analysis (PCA) to recognize multiple human activities.
Babiker et al. [12] proposed a series of preprocessing operations, including background subtraction, binarization and morphological techniques. For activity classification, they used a robust neural network model based on a multilayer feed-forward perceptron network. In [13], Zhou et al. developed an adaptive learning method to determine the physical location and motion speed of a human from a single camera view in indoor environments, without estimating the camera calibration parameters (i.e., intrinsic and extrinsic). Additionally, to recognize human activities, hierarchical decision tree and size reduction methods were used.

2.2. Wearable Sensor-Based Activity Analysis

In recent decades, a significant amount of HAR research has relied mainly on visual information [14]. However, due to several limitations of vision sensors in HAR, such as long-range human movements and illumination changes, wearable-sensor technology has been gaining attention as a new solution among researchers in the community [15]. In addition, the demand for understanding and monitoring human life log activities via wearable sensors has grown steadily. In [16], Roy et al. proposed a hybrid method for recognizing daily living activities in smart environments using body-worn sensors and ambient sensors. This work focused on spatiotemporal constraints, which improved the accuracy and reduced the computational overhead of the HAR system. Nweke et al. [17] presented an analysis of human activity detection and monitoring via multisensor fusion of an accelerometer and a gyroscope. They attached multiple sensors to different body locations (i.e., wrist, chest, ankle and hip) and obtained good results via random forest (RF) and voting schemes.
In [18], Zebin et al. used inertial sensors at five different locations on the lower body, selected to classify human activities more precisely. The authors systematically proposed a feature learning method that automates feature learning from raw input using convolutional neural networks (CNN). Zhu et al. [19] developed a multisensor fusion network using two inertial sensors, one attached to the foot and the other to the waist of a human subject. They combined the multisensor fusion method with an HMM and neural networks to reduce the computational complexity and obtain better accuracy.
In spite of previous HAR research, there are still challenges in dynamic movement, multisensor computations and precise signal data acquisition. Therefore, we suggest a novel methodology for HAR in this paper.

3. Designed Framework for Wearable HAR

Initially, the proposed HAR model acquires input data from inertial measurement unit (IMU) sensors, i.e., the 3-axis signal values of accelerometers, gyroscopes and magnetometers. These signals pass through a sequence of standardized processing stages to prepare them for reliable classification. Firstly, the raw signals are divided into frames (i.e., 50 ms) via fixed-sliding window analysis, and multiple filters, such as the median, Savitzky–Golay and Hampel filters, refine them by eliminating small sawtooth waves from the data. Secondly, in the signal normalization phase, we apply further denoising (i.e., short-term fluctuation removal, maximum noise removal, etc.) to smooth signal outliers. Thirdly, we propose multifused features from different domains, including the statistical, frequency and time domains, which are quantized via codebook generation for proper symbol selection. Finally, in the wearable HAR classification module, a novel combined classifier of Adam optimization and the maximum entropy Markov model is applied to the optimal feature sets in order to recognize different activities. The schematic diagram of our complete model is shown in Figure 1.

3.1. Data Acquisition and Denoising

As IMU sensors are highly sensitive to even minor amounts of random noise, any intentional/unintentional change may cause irrelevancy among signal values and may further alter signal shapes, which badly affects the feature extraction phase. For this reason, three different filtration techniques, i.e., the Savitzky–Golay, median and Hampel filters, are applied to the datasets to eliminate the noise associated with the inertial signals. In the Savitzky–Golay filter (Figure 2a), a set of digital data points is fitted to smooth the raw data and to increase its precision without distorting the signal trend. Similarly, the Hampel filter (Figure 2b) detects and removes random outliers from the raw signals, whereas the median filter (Figure 2c) acts as a nonlinear approach that eliminates impulsive noise and restores the processed signals to nearly normal motions. Based on this comparison, we adopted the median filter, which removed impulsive noise better than the other filters on all three datasets.
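To make the preprocessing step concrete, the following is a minimal sketch of the three filters using SciPy; the window lengths, polynomial order and Hampel threshold are illustrative assumptions, since the paper does not report them. SciPy provides the Savitzky–Golay and median filters directly, while the Hampel filter is written out by hand.

```python
import numpy as np
from scipy.signal import savgol_filter, medfilt

def hampel(x, k=5, n_sigma=3.0):
    """Replace samples deviating more than n_sigma robust standard
    deviations from the local median in a window of 2k+1 samples."""
    y = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - k), min(len(x), i + k + 1)
        med = np.median(x[lo:hi])
        # 1.4826 scales the median absolute deviation to estimate sigma
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if np.abs(x[i] - med) > n_sigma * mad:
            y[i] = med
    return y

acc_x = np.random.randn(500)                  # stand-in for one accelerometer axis
sg = savgol_filter(acc_x, window_length=11, polyorder=3)
med = medfilt(acc_x, kernel_size=5)           # the filter adopted in this work
hmp = hampel(acc_x)
```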

3.2. Windowing Selection

In order to maximize recognition accuracy, an appropriate window size has to be chosen to obtain more contextual information [20]. For window selection, we studied different windowing strategies that have been adopted by many researchers. In inertial sensor work, most researchers window their signals into segments of 4–5 s to analyze the daily activities of humans. The cyclic motion patterns in the USC-HAD, Intelligent Media sporting behavior (IMSB) and Mhealth datasets make it easier to understand and apply the fixed-sliding windowing mechanism [21].
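A minimal segmentation sketch is shown below; the 4 s window with 50% overlap is an illustrative choice within the 4–5 s range discussed above, not a parameter reported by the paper.

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Split a (n_samples, n_channels) signal into fixed-length frames."""
    starts = range(0, len(signal) - win_len + 1, step)
    return np.stack([signal[s:s + win_len] for s in starts])

sr = 100                                   # 100 Hz sampling rate
signal = np.random.randn(1000, 6)          # 10 s of 6-axis inertial data
frames = sliding_windows(signal, win_len=4 * sr, step=2 * sr)
print(frames.shape)                        # (4, 400, 6): 4 s windows, 50% overlap
```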

3.3. Feature Extraction Methods

After signal selection, we describe our proposed multifused feature model, in which features from three major domains (i.e., statistical, frequency and wavelet) are combined for validation. Statistical features are measured by clear-cut segregation of signal values and as such are computationally less intensive, whereas frequency domain features mainly focus on the periodic structure of the signal, such as spectral entropy and the Hilbert transform. In addition, wavelet features are commonly used to find absolute patterns of the signal, e.g., via the Walsh–Hadamard transform. Algorithm 1 defines the multifused features model.
Algorithm 1: Inertial Signal Features Computation
Input: Acc = Accelerometer (x, y, z), Gyr = Gyroscope (x, y, z) and SR = Sampling Rate (100 Hz)
Output: Multifused feature vectors (u1, u2, u3, ..., un)
featureVectors ← []
sampleSignal ← GetSampleSignal()
overlap ← GetOverlappingTime()
Procedure HAR(Acc, Gyr, SR)
    MultiFusedVector ← []
    DenoiseData ← MedianFilter(Acc, Gyr)
    SampledData ← SampleData(DenoiseData, SR)
    while exit condition not satisfied do
        [min, max, mean, variance] ← ExtractStatisticalFeatures(sampled data)
        [LBPFeatures] ← ExtractLocalBinaryPatternFeatures(sample signal)
        [WHT, CZT, HT] ← ExtractWaveletFeatures(sampled signal)
        MultiFusedVector ← [min, max, mean, variance, LBP, WHT, CZT, HT]
    return MultiFusedVector

3.3.1. Statistical Features

The statistical features S(V_stat) reflect the average, middle, squared-deviation and max/min values of the sampled signal in each frame. These features are a major factor in examining the overall changes observed in response to each of the n activities, as

$$S(V_{\text{stat}}) = \frac{\sum_{i=1}^{n} c_i}{n},\quad \frac{\sum_{i=1}^{n} (Z_i - \bar{Z})^2}{n-1},\quad \min(\text{signal})(M_i),\quad \max(\text{signal})(X_i)$$

where n is the sampled data size, $c_i$ are the coefficients in the feature vector, $Z_i$ is the i-th sample value and $\bar{Z}$ is the mean of all sampled data. Figure 3 shows a 1D plot with a combination of different statistical features of the walking forward activity log using the USC-HAD dataset.
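As a sketch, the per-frame statistics of the equation above map directly onto NumPy reductions (the frame shape is a placeholder):

```python
import numpy as np

def statistical_features(frame):
    """frame: (win_len, n_channels) array; returns one fused feature row."""
    return np.concatenate([
        frame.mean(axis=0),                # average
        frame.var(axis=0, ddof=1),         # squared deviation with n-1 denominator
        frame.min(axis=0),                 # M_i
        frame.max(axis=0),                 # X_i
    ])

frame = np.random.randn(400, 6)            # one 4 s window of 6-axis data
print(statistical_features(frame).shape)   # (24,)
```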

3.3.2. Chirp Z-Transform

Among the frequency domain features, the chirp z-transform (CZT) is used as a high-speed convolution algorithm that evaluates the z-transform of a signal along a spiral contour [22]. The functions are viewed as polynomials with poles and zeros as roots, where poles correspond to concentrations of peak energy in the spectrum and zeros are modeled on troughs of the frequency spectrum (Figure 4). This helps to estimate the transfer function of a system with an accurate number of zeros, while the poles yield finer bandwidth resolution and an efficient reduction of the transfer function to polynomial ratios. It is defined as

$$X_k = \sum_{n=0}^{N-1} x(n) z_k^{-n}$$

$$z_k = A \cdot W^{-k}, \quad k = 0, 1, \ldots, B-1$$

where x(n) is the original signal and $z_k$ is an arbitrary complex number for each of the k points; A is the starting point in the complex plane and W is the complex ratio between successive points.
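For illustration, SciPy (version 1.8 or later) exposes the chirp z-transform as scipy.signal.czt; the sketch below zooms into a low-frequency band of one windowed axis. The band limits and bin count are assumptions.

```python
import numpy as np
from scipy.signal import czt

fs = 100.0                         # sampling rate (Hz)
x = np.random.randn(400)           # one windowed accelerometer axis
f1, f2, m = 0.5, 5.0, 128          # evaluate 128 bins on the 0.5-5 Hz arc

a = np.exp(2j * np.pi * f1 / fs)                # starting point A on the unit circle
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))  # complex ratio W between points
spectrum = np.abs(czt(x, m=m, w=w, a=a))        # zoomed spectral feature vector
```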

3.3.3. Hilbert Transform

For the Hilbert transform, we identify the minimum level of frequency retained by calculating the Fourier transform of the given signal a(t), discarding the negative frequencies and doubling the magnitude of the positive values [23]. The output is a complex-valued signal whose real and imaginary parts form a Hilbert transform pair, as shown in Figure 5. This pair acts as a specific linear operator on the Hilbert space of real-valued signals:

$$H(a)(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{x(a)}{t - a}\, da$$

where x(a) is the real-valued signal/real data sequence with a given amplitude spectrum and autocorrelation function. The input data are zero-padded or truncated as needed.
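In practice, the transform pair can be obtained from SciPy's analytic-signal routine, as in the brief sketch below (the test signal is a stand-in for a walking-forward axis):

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 4.0, 400)
a = np.sin(2 * np.pi * 1.0 * t)        # stand-in for one inertial axis
analytic = hilbert(a)                  # a(t) + j * H(a)(t)
envelope = np.abs(analytic)            # instantaneous amplitude feature
phase = np.unwrap(np.angle(analytic))  # instantaneous phase
```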

3.3.4. 1D Local Binary Pattern (1D LBP)

1D LBP is used as a nonparametric statistical operator that counts, for each sample of the inertial sensor signals, the number of neighboring values exceeding a threshold. For each data sample, a binary code is produced that captures precise variations in the processed inertial signals [24]. In this algorithm, the middle sample is selected as the threshold value and the other values are compared against it: if a value is smaller than the threshold, its bit is set to 0, and vice versa (Figure 6). The formation of the 1D LBP code from the inertial signal is defined as

$$1DLBP(x, y, z) = \sum_{i=0}^{n} S_{\text{iner}}(t)\, 2^i$$

where

$$S_{\text{iner}} = \begin{cases} 1, & t \geq \text{threshold} \\ 0, & t < \text{threshold} \end{cases}$$
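A minimal sketch of the 1D LBP code follows, assuming an 8-neighbour window (four samples on each side of the centre, a common but here assumed choice):

```python
import numpy as np

def lbp_1d(signal, radius=4):
    """Return one 8-bit LBP code per interior sample of a 1D signal."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neighbours = np.concatenate([signal[i - radius:i],
                                     signal[i + 1:i + radius + 1]])
        bits = (neighbours >= signal[i]).astype(int)   # S_iner thresholding
        codes.append(int(np.dot(bits, 2 ** np.arange(2 * radius))))
    return np.array(codes)

print(lbp_1d(np.random.randn(100)).shape)  # (92,)
```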

3.3.5. Walsh–Hadamard Transform

The Walsh–Hadamard transform (WHT) is used as an orthogonal transformation that splits our inertial signal into a set of component signals [25]. It then finds a dense property (i.e., the energy of these signals) that deals only with real numbers, helps to minimize computational costs [26] and produces a more robust set of features. Figure 7 represents the sensor fusion of an accelerometer and gyroscope with motion patterns of the walking forward activity via the WHT. The discrete Walsh–Hadamard transform (DWHT) of a vector is represented by

$$X_w(k) = \sum_{n=0}^{N-1} x(n) \prod_{i=0}^{M-1} (-1)^{n_i k_{M-1-i}}, \quad k = 0, 1, \ldots, N-1$$

where N is the number of samples in the vector data, $M = \log_2 N$, and $n_i$ and $k_i$ denote the i-th bits of n and k, respectively.
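The DWHT can be computed with the standard fast Walsh–Hadamard butterfly in O(N log N); a sketch follows (frame length must be a power of two, so frames are assumed to be padded or truncated accordingly):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform via in-place butterflies."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

frame = np.random.randn(256)          # 2^8 samples of one axis
coeffs = fwht(frame)
energy = float(np.sum(coeffs ** 2))   # the dense "energy" feature
```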

3.3.6. First Order Derivatives

For the first order derivative, we calculate the rate of change of the inertial sensors’ coordinates (i.e., X, Y, Z) and find the direction of the signal, which measures the slope of the tangent to the signal and captures its instantaneous rate of change (see Figure 8). This indicates how rapidly adjacent points change their positions over time. The derivatives are computed as forward differences:

$$X'_i = \frac{X_{i+1} - X_i}{\Delta t}, \quad Y'_i = \frac{Y_{i+1} - Y_i}{\Delta t}, \quad Z'_i = \frac{Z_{i+1} - Z_i}{\Delta t}, \quad i = 1, 2, \ldots, n-1$$
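As a sketch, the forward differences reduce to a single NumPy call per frame:

```python
import numpy as np

dt = 1.0 / 100.0                       # 100 Hz sampling interval
xyz = np.random.randn(400, 3)          # one frame of 3-axis coordinates
deriv = np.diff(xyz, axis=0) / dt      # (399, 3) slope/direction features
```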

4. Combined Classifiers

To enhance the accuracy of the multifused features model, we used a combined classifier strategy in which optimization techniques serve as preclassifiers. For wearable HAR classification, we applied the maximum entropy Markov model along with the Adam and AdaDelta optimization techniques.

4.1. Adam Optimization Algorithm

Optimization algorithms are progressive routines that calculate an adaptive learning rate and find near-optimal solutions to otherwise intractable problems. Adaptive moment estimation (Adam) [27] is among the essential optimization strategies; it computes individual adaptive learning rates based on the first and second moments of the gradients. In our case, we trained the model with a learning rate of 0.00005. Adam combines the best properties of two other algorithms, RMSProp and AdaGrad: RMSProp averages the recent magnitudes of the inertial signal gradients, while AdaGrad deals with sparse gradients with uncentered variance. The moment estimates are formulated as

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t$$

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2$$
where m t and v t are estimates of the first moment of the mean and the second moment of the uncentered variance, respectively. Figure 9 shows a 3D plot of Adam optimization values of walking forward and running forward activities.
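A sketch of one Adam update step built from the two moment estimates above, following Kingma and Ba [27]; the learning rate matches the 0.00005 used here, while beta_1, beta_2 and epsilon are the usual published defaults, not values reported by this paper:

```python
import numpy as np

def adam_step(theta, g, m, v, t, lr=5e-5, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g ** 2       # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)            # bias correction for step t >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```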

4.2. AdaDelta Optimization Algorithm

For the AdaDelta optimization algorithm [28], we adapt the learning rates based on a moving window of gradient descent updates. It continues learning updates over time and tunes its parameters to obtain the maximum possible learning values. The sum of gradients is recursively defined as a running average $E[g^2]_t$ at the current time step, which depends on the previous average and the current gradient:

$$E[g^2]_t = r E[g^2]_{t-1} + (1 - r) g_t^2$$
where r is the fraction similar to the momentum term. Figure 10 shows the AdaDelta results of different activities using the USC-HAD dataset.
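For comparison, here is a sketch of one AdaDelta step using the running average above, in Zeiler's standard formulation; the decay r and epsilon are typical values, not taken from the paper:

```python
import numpy as np

def adadelta_step(theta, g, Eg2, Edx2, r=0.95, eps=1e-6):
    Eg2 = r * Eg2 + (1 - r) * g ** 2                    # E[g^2]_t as above
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * g  # parameter update
    Edx2 = r * Edx2 + (1 - r) * dx ** 2                 # running average of updates
    return theta + dx, Eg2, Edx2
```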

4.3. The Maximum Entropy Markov Model

After feature vector optimization, we tested our proposed model against challenging datasets that contain activities of multiple classes. For this multiclass problem, we aimed to measure the empirical expectation and maximum entropy of different activities using the maximum entropy Markov model (MEMM). The motivation for this model is to overcome a limitation of the conventional hidden Markov model (HMM) [29] framework: the separate observation and transition functions are replaced by a single function $P(s \mid s', o)$, so the next state depends on both the previous state and the present observation.
In contrast to the HMM, the MEMM treats observations as being associated with state transitions rather than with states. The forward recursion is defined as

$$\alpha_{t+1}(s) = \sum_{s'} \alpha_t(s') P_{s'}(s \mid o_{t+1})$$

where $\alpha_t(s')$ is the probability of being in state $s'$ at time t given the observations up to t. The model then estimates a probability distribution of the data subject to constraints procured from the training data. Each constraint indicates some characteristic of the training data that should also hold in the learned distribution. In addition, from the tokens of the training data, the model determines the best set of observation features and can solve multinomial, prediction-based classification problems. This classification model generalizes to find coefficients that match the breakdown of the dependent variable as
$$P(S \mid O) = \prod_{r=1}^{n} P(O_r \mid q_r) \prod_{r=1}^{n} P(S_r \mid S_{r-1})$$
where S is the state sequence and O is the sequence of observations, i.e., O1, O2, …, On. In order to maximize the conditional probability P, a set of observations is tagged with labels S1, S2, …, Sn, as
$$P(S_1, \ldots, S_n \mid O_1, \ldots, O_n) = \prod_{r=1}^{n} P(S_r \mid S_{r-1}, O_r)$$

$$\frac{1}{m_{s'}} \sum_{r=1}^{m_{s'}} f_a(o_{t_r}, s_{t_r}) = \frac{1}{m_{s'}} \sum_{r=1}^{m_{s'}} \sum_{s \in S} P_{s'}(s \mid o_{t_r}) f_a(o_{t_r}, s)$$

where $t_1, t_2, t_3, \ldots, t_r$ are the time stamps of the observations entering the transition function $P_{s'}$, which takes the exponential form

$$P(s \mid s', o) = \frac{1}{Z(o, s')} \exp\left(\sum_r w_r f_r(o, s)\right)$$

where $w_r$ is a learned weight associated with the feature $f_r(o, s)$, which acts as a categorical (real-valued) feature function, and $Z(o, s')$ is the normalizing term that ensures the distribution sums to one. Figure 11 describes the overall flow of the MEMM applied to six different activities of the IMSB dataset.
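To make the recursion concrete, the following NumPy sketch implements a per-state maximum entropy (softmax) transition distribution and the forward pass; the state count, feature dimension and random weights are toy assumptions, not the trained model:

```python
import numpy as np

n_states, n_feats = 6, 8                           # e.g., six IMSB activities
W = np.random.randn(n_states, n_states, n_feats)   # weights w_r per (s', s)

def transition_probs(s_prev, obs_feats):
    """P_{s'}(s | o) = exp(w . f) / Z(o, s'), normalized over next states."""
    scores = W[s_prev] @ obs_feats                 # one score per candidate s
    e = np.exp(scores - scores.max())              # stabilized softmax
    return e / e.sum()                             # Z(o, s') normalization

def forward(obs_seq, alpha):
    """alpha_{t+1}(s) = sum_{s'} alpha_t(s') * P_{s'}(s | o_{t+1})."""
    for feats in obs_seq:
        alpha = sum(alpha[s_prev] * transition_probs(s_prev, feats)
                    for s_prev in range(n_states))
        alpha /= alpha.sum()                       # rescale for stability
    return alpha

obs_seq = [np.random.randn(n_feats) for _ in range(10)]
posterior = forward(obs_seq, np.full(n_states, 1.0 / n_states))
print(posterior.argmax())                          # most probable final state
```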

5. Experimental Results and Analysis

5.1. Experimental Setting

To evaluate the training/testing performance of the proposed model, we used the “leave-one-out” cross validation method on three benchmark datasets: USC-HAD (ACM, Pittsburgh, PA, USA), IMSB (Islamabad, Pakistan) and Mhealth (IWAAL, Belfast, Northern Ireland). These datasets include multiple activities recorded in different environments, i.e., public areas, sports fields and indoor/outdoor locations. Inertial sensors (i.e., accelerometers, gyroscopes and magnetometers) are used to capture simple and complex activity patterns that cover nearly all aspects of human motion performed by multiple subjects.
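For illustration, subject-wise leave-one-out splitting can be arranged with scikit-learn's LeaveOneGroupOut, as sketched below; the array shapes, labels and subject count are placeholders rather than the actual dataset dimensions:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

X = np.random.randn(140, 24)               # placeholder feature vectors
y = np.random.randint(0, 12, 140)          # placeholder activity labels
subjects = np.repeat(np.arange(14), 10)    # e.g., 14 USC-HAD subjects

for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    X_tr, y_tr = X[train_idx], y[train_idx]   # train on 13 subjects
    X_te, y_te = X[test_idx], y[test_idx]     # test on the held-out subject
```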
The USC-HAD dataset [30] was collected with a motion node device, a wearable 6-degrees-of-freedom (DoF) sensing network for 3D motion tracking. It includes multiple sensors, such as a gyroscope and an accelerometer, which give real-time orientations. The sensors were located at the front right hip because it is one of the top five locations identified in [31]. A group of 14 subjects performed 12 different activities, namely, jumping up, running forward, walking forward, elevator down, elevator up, sitting, standing, sleeping, walking left, walking right, walking downstairs and walking upstairs. The devices used in this experiment have a sampling rate of 100 Hz.
The second dataset is our self-annotated IM-sporting behavior (IMSB) dataset [32], which comprises data from three body-worn accelerometers. These sensors were located at the knee, below the neck and at the wrist to capture different aspects of human motion. A group of 20 subjects performed six different sporting behaviors, namely, football, skipping, basketball, badminton, cycling and table tennis. The volunteers included both professionals and amateurs, aged 20–30 years with weights of 60–100 kg. The experiments were carried out on indoor/outdoor courts to record the athletes’ motions in different situations.
We also tested a third benchmark dataset, Mhealth. The Mhealth dataset [33] includes both static and dynamic activities of ten subjects, recorded with several sensors: one 2-lead electrocardiogram (ECG) sensor, two 3-axis gyroscopes, three 3-axis magnetometers and three 3-axis accelerometers. These sensors were located at the left ankle, the right wrist and the chest. The dataset includes twelve different outdoor activities, namely, standing still, lying down, sitting and relaxing, walking, jogging, running, cycling, climbing stairs, crouching, waist bends forward, jump front and back, and frontal elevation of arms.

5.2. Hardware Platform

In the HAR system experiments, the sensor platform comprised three MPU6050 sensors (InvenSense, San Jose, CA, USA). These sensors were interfaced with Arduino boards using jumper wires for electrical communication. Three NRF24L01 modules (Nordic Semiconductor, Trondheim, Norway) were also connected to the MPU6050 sensors; all three were responsible for transmitting data to a fourth module. The setup of this fourth module, the receiver, was completed by connecting an Arduino (Smart Projects, Italy), an NRF24L01 (SparkFun Electronics, Boulder, CO, USA) and a memory card. During data collection, the three transmitter modules were mounted at the wrist, the knee and below the neck, as shown in Figure 12. The fourth module was connected to the computer and received the data from the three sensors mounted on the human body. Nine-volt batteries were used so that the setup could acquire data wirelessly without interruption. The open-source Arduino software (IDE) was used to run the operation in a real-time environment.
The MPU6050 sensor module is a complete 6-axis motion tracking device. It combines a 3-axis gyroscope, a 3-axis accelerometer and a digital motion processor (DMP) in a small package based on microelectromechanical systems (MEMS) technology. The advantage of the MPU6050 is its built-in DMP, which performs the motion processing computations. We received signal data for the yaw, roll and pitch angles; thus, the host’s effort in computing the motion data is minimized.
With the current setup and a 9-volt battery, the system can operate for up to 30 h, i.e., less than two days. It is therefore recommended to recharge or replace the battery regularly so that the sensors can serve the HAR system for longer periods.

5.3. Experimental Result and Evaluation

In this section, experiments were repeated twice to evaluate the performance of the proposed wearable HAR model on the three benchmark datasets. Table 1 depicts the confusion matrix of human activity recognition for 12 different activities of the USC-HAD dataset, with a mean accuracy of 91.25% using the 6-observation problem. Table 2 presents the recognition results on the IMSB dataset, with a mean accuracy of 93.66% using 3 observations. Table 3 shows the confusion matrix of 12 different physical outdoor activities of the Mhealth dataset, with a mean accuracy of 90.91%.
From Figure 13 and Figure 14, it can be observed that a few sports activity pairs, i.e., badminton/table tennis and football/basketball, involve highly similar motion patterns, e.g., forehand smashing, split-step footwork, defending, rushing and jumping. Nevertheless, our proposed multifused wearable HAR model captured the distinguishing factors of badminton and table tennis by recognizing specific wrist movements (see Figure 13). Similarly, Figure 14 shows the segregated patterns of feet movements in football and basketball in the IMSB dataset.
In Table 4, Table 5 and Table 6, we present a comparative study of the proposed model against two other well-known statistical classifiers, i.e., random forest and an artificial neural network (ANN), using the precision, recall and F-measure parameters. The overall results show that the proposed method achieved significantly better performance than the other classifiers. Table 7 shows the comparison results on the USC-HAD, IMSB and Mhealth datasets.
Finally, our proposed model faces the following challenges in practical implementation:
  • In practical implementation, we faced issues with motion patterns of the same activity being performed differently by different subjects.
  • Wearable sensors are highly sensitive in terms of orientation and positions of the subject’s body, and therefore, data readings can be quite different if the sensor is placed slightly above or below the exact locations (i.e., the wrist, knee and below the neck).

6. Conclusions

In this paper, we have presented a robust framework that can precisely recognize human activities on challenging datasets in a multisensor environment by processing the augmented signals via multifused features. These features include statistical properties, 1D-LBP, CZT, WHT and first order derivative features to extract the optimal data. In addition, Adam and AdaDelta are used to optimize, train and recognize different types of daily life log and sporting activities. Our proposed system outperforms the others in terms of accuracy, showing improved results of 91.25%, 93.66% and 90.91% on the USC-HAD, IMSB and Mhealth datasets, respectively.
In the future, we will adopt new feature extraction strategies from other domains to classify much more complex activities in different scenarios, such as smart homes, offices and public malls, via other advanced wearable sensors. In addition, we plan to deploy our setup for elderly people at homes and in hospitals.

Author Contributions

Conceptualization: S.B.u.d.T.; methodology: A.J.; software: S.B.u.d.T.; validation: S.B.u.d.T.; formal analysis: K.K.; resources: A.J. and K.K.; writing—review and editing: A.J. and K.K.; funding acquisition: A.J. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2018R1D1A1A02085645) and by a grant (19CTAP-C152247-01) from the Technology Advancement Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ranasinghe, S.; Machot, F.A.; Mayr, H.C. A review on applications of activity recognition systems with regard to performance and evaluation. Int. J. Distrib. Sens. Netw. 2016, 12, 1–22. [Google Scholar] [CrossRef] [Green Version]
  2. Mukhopadhyay, S.C. Wearable sensors for human activity monitoring: A review. IEEE Sens. J. 2015, 15, 1321–1330. [Google Scholar] [CrossRef]
  3. Ahmed, N.; Rafiq, J.I.; Islam, M.R. Enhanced Human Activity Recognition Based on Smartphone Sensor Data Using Hybrid Feature Selection Model. Sensors 2020, 20, 317. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Susan, S.; Agrawal, P.; Mittal, M.; Bansal, S. New shape descriptor in the context of edge continuity. CAAI Trans. Intell. Technol. 2019, 4, 101–109. [Google Scholar] [CrossRef]
  5. Jalal, A.; Kim, Y.-H.; Kim, Y.-J.; Kamal, S.; Kim, D. Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognit. 2017, 61, 295–308. [Google Scholar] [CrossRef]
  6. Janidarmian, M.; Fekr, A.R.; Radecka, K.; Zilic, Z. A comprehensive analysis on wearable acceleration sensors in human activity recognition. Sensors 2017, 17, 529. [Google Scholar] [CrossRef]
  7. Anindya, N.; Mukhopadhyay, S.C. Wearable Electronics Sensors: Current Status and Future Opportunities. In Wearable Electronics Sensors, 1st ed.; Mukhopadhyay, S.C., Ed.; Springer International Publishing: Cham, Switzerland, 2015; pp. 1–35. [Google Scholar]
  8. Suryadevara, N.K.; Quazi, T.; Mukhopadhyay, S.C. Smart sensing system for human emotion and behaviour recognition. In Proceedings of the Indo-Japanese Conference on Perception and Machine Intelligence, Kolkata, India, 12–13 January 2012; pp. 11–22. [Google Scholar]
  9. Shokri, M.; Tavakoli, K. A review on the artificial neural network approach to analysis and prediction of seismic damage in infrastructure. Int. J. Hydromechatronics 2019, 4, 178–196. [Google Scholar] [CrossRef]
  10. Htike, K.K.; Khalifa, O.O.; Ramli, H.A.M.; Abushariah, M.A.M. Human activity recognition for video surveillance using sequences of postures. In Proceedings of the 2014 IEEE International Conference on e-Technologies and Networks for Development (ICeND2014), Beirut, Lebanon, 29 April–1 May 2014; pp. 79–82. [Google Scholar]
  11. Jalal, A.; Uddin, M.Z.; Kim, T.S. Depth video-based human activity recognition system using translation and scaling invariant features for life logging at smart home. IEEE Trans. Consum. Electron. 2012, 58, 863–871. [Google Scholar] [CrossRef]
  12. Babiker, M.B.; Khalifa, O.O.; Htike, K.K.; Zaharadeen, M. Automated daily human activity recognition for video surveillance using neural network. In Proceedings of the 2017 IEEE International Conference on Smart Instrumentation, Measurement and Application (ICSIMA), Putrajaya, Malaysia, 28–30 November 2017; pp. 1–5. [Google Scholar]
  13. Zhou, Z.; Chen, X.; Chung, Y.; He, Z.; Han, T.X.; Keller, J.M. Activity Analysis, Summarization, and Visualization for Indoor Human Activity Monitoring. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1489–1498. [Google Scholar] [CrossRef] [Green Version]
  14. Prati, A.; Shan, C.; Wang, K.I. Sensors, vision and networks: From video surveillance to activity recognition and health monitoring. J. Ambient. Intell. Smart Environ. 2019, 11, 5–22. [Google Scholar]
  15. Lee, J.; Kim, D.; Ryoo, H.; Shin, B. Sustainable Wearables: Wearable Technology for Enhancing the Quality of Human Life. Sustainability 2016, 8, 466. [Google Scholar] [CrossRef] [Green Version]
  16. Roy, N.; Misra, A.; Cook, D. Ambient and smartphone sensor assisted ADL recognition in multi-inhabitant smart environments. J. Ambient. Intell. Humaniz. Comput. 2016, 7, 1–19. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Nweke, H.F.; Teh, Y.W.; Mujtaba, G.; Alo, U.R.; Al-garadi, M.A. Multi-sensor fusion based on multiple classifier systems for human activity identification. Human-Centric Comput. Inf. Sci. 2019, 9, 1–44. [Google Scholar] [CrossRef] [Green Version]
  18. Zebin, T.; Scully, P.J.; Ozanyan, K.B. Human activity recognition with inertial sensors using a deep learning approach. In Proceedings of the 2016 IEEE Conference on Sensors, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3. [Google Scholar]
  19. Zhu, C.; Sheng, W. Multi-sensor fusion for human daily activity recognition in robot-assisted living. In Proceedings of the 2009 ACM/IEEE International Conference on Human robot interaction (HRI), La Jolla, CA, USA, 11–13 March 2012; pp. 303–304. [Google Scholar]
  20. Cao, L.; Wang, Y.; Zhang, B.; Jin, Q.; Vasilakos, A.V. GCHAR: An efficient Group-based Context—Aware human activity recognition on smartphone. J. Parallel Distrib. Comput. 2018, 118, 67–80. [Google Scholar] [CrossRef]
  21. Jalal, A.; Quaid, M.A.K.; Kim, K. A Wrist Worn Acceleration Based Human Motion Analysis and Classification for Ambient Smart Home System. J. Electr. Eng. Technol. 2019, 14, 1733–1739. [Google Scholar] [CrossRef]
  22. Tingting, Y.; Junqian, W.; Lintai, W.; Yong, X. Three-stage network for age estimation. CAAI Trans. Intell. Technol. 2019, 4, 122–126. [Google Scholar] [CrossRef]
  23. Zhu, C.; Miao, D. Influence of kernel clustering on an RBFN. CAAI Trans. Intell. Technol. 2019, 4, 255–260. [Google Scholar] [CrossRef]
  24. Abdul, Z.K.; Al-Talabani, A.B.; Abdulrahman, A.O. A New Feature Extraction Technique Based on 1D Local Binary Pattern for Gear Fault Detection. Shock Vib. 2016, 2016, 1–6. [Google Scholar] [CrossRef] [Green Version]
  25. Priya, L.G.G.; Domnic, S. Walsh–Hadamard Transform Kernel-Based Feature Vector for Shot Boundary Detection. IEEE Trans. Image Process. 2014, 23, 5187–5197. [Google Scholar]
  26. Wiens, T. Engine speed reduction for hydraulic machinery using predictive algorithms. Int. J. Hydromechatronics 2019, 1, 16–31. [Google Scholar] [CrossRef]
  27. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  28. Osterland, S.; Weber, J. Analytical analysis of single-stage pressure relief valves. Int. J. Hydromechatronics 2019, 2, 32–53. [Google Scholar] [CrossRef]
  29. Gaglio, S.; Re, G.L.; Morana, M. Human Activity Recognition Process Using 3-D Posture Data. IEEE Trans. Hum. Mach. Syst. 2014, 45, 586–597. [Google Scholar] [CrossRef]
  30. Zhang, M.; Sawchuk, A.A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp) Workshop on Situation, Activity and Goal Awareness (SAGAware), Pittsburgh, PA, USA, 5–8 September 2012; pp. 1036–1043. [Google Scholar]
  31. Ichikawa, F.; Chipchase, J.; Grignani, R. Where’s the phone? A study of mobile phone location in public spaces. In Proceedings of the 2005 IEE International Conference on Mobile Technology, Guangzhou, China, 15–17 November 2005; pp. 142–148. [Google Scholar]
  32. Intelligent Media Center (IMC). Available online: http://portals.au.edu.pk/imc/Pages/Datasets.aspx (accessed on 19 February 2020).
  33. Banos, O.; Villalonga, C.; Garcia, R.; Saez, A.; Damas, M.; Holgado, J.A.; Lee, S.; Pomares, H.; Rojas, I. Design, implementation and validation of a novel open framework for agile development of mobile health applications. Biomed. Eng. OnLine 2015, 14, 1–20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Vaka, P.R. A Pervasive Middleware for Activity Recognition with Smartphones. Master’s Thesis, University of Missouri, Columbia, MO, USA, 2015. [Google Scholar]
  35. Zhang, M.; Sawchuk, A.A. A feature selection-based framework for human activity recognition usingwearable multimodal sensors. In Proceedings of the 6th International Conference on Body Area Networks, Beijing, China, 7–8 November 2011; pp. 92–98. [Google Scholar]
  36. Nguyen, H.D.; Tran, K.P.; Zeng, X.; Koehl, L.; Tartare, G. Wearable Sensor Data Based Human Activity Recognition using Machine Learning: A new approach. arXiv 2019, arXiv:1905.03809. [Google Scholar]
  37. Guo, H.; Chen, L.; Peng, L.; Chen, G. Wearable Sensor Based Multimodal Human Activity Recognition Exploiting the Diversity of Classifier Ensemble. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp’16), Heidelberg, Germany, 12–16 September 2016; ACM: New York, NY, USA, 2016; pp. 1112–1123. [Google Scholar]
  38. Quaid, M.A.K. Multi-Cue Fusion and Reweighted Genetic Algorithm for Physical Healthcare Routines on Wearable Accelerometer Sensors. Ph.D. Thesis, Air University, Islamabad, Pakistan, April 2019. [Google Scholar]
  39. Ghaleb, F.A.; Kamat, M.B.; Salleh, M.; Rohani, M.F.; Razak, S.A. Two-stage motion artefact reduction algorithm for electrocardiogram using weighted adaptive noise cancelling and recursive Hampel filter. PLoS ONE 2018, 13, e0207176. [Google Scholar] [CrossRef]
  40. SivaKumar, A. Geometry Aware Compressive Analysis of Human Activities: Application in a Smart Phone Platform. Ph.D. Thesis, Arizona State University, Tempe, AZ, USA, May 2014. [Google Scholar]
Figure 1. Flow architecture of proposed human activity recognition model.
Figure 2. Signal Preprocessing. Inertial sensors with unfiltered (unprocessed) and filtered (processed) signals of correct walking activity via (a) Savitzky–Golay, (b) hampel and (c) median filters on the University of Southern California Human Activity Dataset (USC-HAD) dataset.
Figure 3. 1D vector plot of statistical features of the walking forward activity log using the USC-HAD dataset.
Figure 4. Accelerometer signal representation via chirp z-transform of the walking forward signal.
Figure 5. Hilbert transform features depicted for x components of the walking forward signal.
Figure 6. Local binary pattern (LBP) applied using signal data. (a) Segment of inertial signal sample, (b) sample values of associate signal, (c) middle value Pc as threshold for associate values Po, P1, P2, …, P7 and (d) produced LBP code converted into decimal representation.
Figure 7. 1-D Walsh–Hadamard transform (WHT) as (a) a WHT signal feature and (b) magnitude of WHT coefficients of the walking forward activity.
Figure 8. First order derivation representation of walking forward and jumping up activities using the USC-HAD dataset.
Figure 9. Adaptive moment estimation (Adam) optimization algorithms with adaptive learning of (a) walking forward and (b) running forward activities using the USC-HAD dataset.
Figure 10. AdaDelta optimization algorithm with adaptive learning of (a) elevator down and (b) elevator up activities using the USC-HAD dataset.
Figure 11. Maximum entropy Markov model algorithm applied to six different activities of the Intelligent Media sporting behavior (IMSB) dataset.
Figure 12. Sensors mounted on human body.
Figure 13. Signals representing wrist motion in badminton (left column) and table tennis (right column).
Figure 14. Signals representing feet movement in football (left column) and basketball (right column).
Table 1. Confusion matrix of human activity recognition (HAR) accuracies of individual activities using the USC-HAD dataset.
Activities | A1   | A2   | A3   | A4   | A5   | A6   | A7   | A8   | A9   | A10  | A11  | A12
A1         | 0.92 | 0.05 | 0    | 0    | 0.02 | 0    | 0    | 0    | 0.01 | 0    | 0    | 0
A2         | 0.05 | 0.89 | 0    | 0    | 0    | 0    | 0.04 | 0    | 0    | 0    | 0.02 | 0
A3         | 0    | 0    | 0.96 | 0    | 0    | 0    | 0.02 | 0.02 | 0    | 0    | 0    | 0
A4         | 0.04 | 0.01 | 0    | 0.92 | 0    | 0    | 0.03 | 0    | 0    | 0    | 0    | 0
A5         | 0    | 0.03 | 0    | 0    | 0.94 | 0    | 0    | 0    | 0    | 0.02 | 0    | 0.01
A6         | 0    | 0    | 0.02 | 0    | 0.02 | 0.91 | 0    | 0.02 | 0    | 0    | 0.03 | 0
A7         | 0    | 0    | 0    | 0.03 | 0    | 0.01 | 0.95 | 0    | 0    | 0.01 | 0    | 0
A8         | 0.03 | 0    | 0.02 | 0    | 0    | 0.03 | 0    | 0.88 | 0    | 0.02 | 0    | 0.02
A9         | 0    | 0.02 | 0    | 0.04 | 0    | 0    | 0.02 | 0.03 | 0.86 | 0.03 | 0    | 0
A10        | 0    | 0    | 0    | 0.03 | 0    | 0.02 | 0    | 0    | 0    | 0.93 | 0.02 | 0
A11        | 0    | 0    | 0.03 | 0    | 0    | 0.05 | 0    | 0.02 | 0.03 | 0    | 0.87 | 0
A12        | 0    | 0    | 0.03 | 0    | 0.01 | 0    | 0.01 | 0    | 0    | 0.03 | 0    | 0.92
Mean Accuracy = 91.25%
A1 = jumping up; A2 = running forward; A3 = walking forward; A4 = elevator down; A5 = elevator up; A6 = sitting; A7 = standing; A8 = sleeping; A9 = walking left; A10 = walking right; A11 = walking downstairs and A12 = walking upstairs. Diagonal values are outcomes where the model correctly predicts the positive class.
Table 2. Confusion matrix of HAR accuracies of individual activities using the IMSB dataset.
Activity | S1   | S2   | S3   | S4   | S5   | S6
S1       | 0.93 | 0    | 0.07 | 0    | 0    | 0
S2       | 0    | 0.96 | 0.04 | 0    | 0    | 0
S3       | 0.06 | 0    | 0.94 | 0    | 0    | 0
S4       | 0.02 | 0    | 0    | 0.89 | 0    | 0.09
S5       | 0    | 0.02 | 0.02 | 0    | 0.96 | 0
S6       | 0    | 0    | 0.01 | 0.05 | 0    | 0.94
Mean Accuracy = 93.66%
S1 = football; S2 = skipping; S3 = basketball; S4 = badminton; S5 = cycling and S6 = table tennis. Diagonal values are outcomes where the model correctly predicts the positive class.
Table 3. Confusion matrix of HAR accuracies of individual activities using the Mhealth dataset.
Activities | H1   | H2   | H3   | H4   | H5   | H6   | H7   | H8   | H9   | H10  | H11  | H12
H1         | 0.94 | 0    | 0    | 0    | 0.02 | 0.03 | 0    | 0    | 0.01 | 0.01 | 0    | 0.01
H2         | 0.04 | 0.90 | 0.02 | 0.01 | 0    | 0    | 0.01 | 0    | 0    | 0    | 0.01 | 0.01
H3         | 0.01 | 0.04 | 0.93 | 0    | 0    | 0    | 0    | 0.01 | 0    | 0    | 0    | 0.01
H4         | 0    | 0    | 0.02 | 0.96 | 0    | 0.01 | 0    | 0    | 0    | 0    | 0.01 | 0
H5         | 0.01 | 0.04 | 0    | 0    | 0.92 | 0.02 | 0    | 0    | 0    | 0    | 0.02 | 0
H6         | 0.01 | 0    | 0    | 0.02 | 0.03 | 0.94 | 0    | 0.01 | 0    | 0    | 0    | 0
H7         | 0.02 | 0    | 0    | 0.02 | 0.02 | 0.01 | 0.89 | 0.02 | 0    | 0    | 0.02 | 0
H8         | 0    | 0.02 | 0.01 | 0    | 0    | 0    | 0.03 | 0.91 | 0    | 0.02 | 0    | 0
H9         | 0    | 0.02 | 0    | 0.03 | 0    | 0    | 0.03 | 0.02 | 0.87 | 0.02 | 0.01 | 0
H10        | 0.01 | 0.02 | 0    | 0.01 | 0    | 0.03 | 0    | 0.01 | 0    | 0.90 | 0.01 | 0.01
H11        | 0    | 0.02 | 0    | 0    | 0    | 0.03 | 0.01 | 0    | 0    | 0.02 | 0.91 | 0.01
H12        | 0.01 | 0.03 | 0.02 | 0.02 | 0    | 0    | 0.03 | 0    | 0    | 0.04 | 0    | 0.84
Mean Accuracy = 90.91%
H1 = standing still; H2 = lying down; H3 = sitting and relaxing; H4 = walking; H5 = jogging; H6 = running; H7 = cycling; H8 = climbing stairs; H9 = crouching; H10 = waist bends forward; H11 = jump front and back; H12 = frontal elevation of arms. Diagonal values are outcomes where the model correctly predicts the positive class.
Table 4. Classification results of the three classifiers using the USC-HAD dataset.
Methods    | Maximum Entropy Markov Model      | Random Forest                     | ANN
Activities | Precision | Recall | F-Measure    | Precision | Recall | F-Measure    | Precision | Recall | F-Measure
A1         | 0.821     | 0.730  | 0.773        | 0.808     | 0.726  | 0.764        | 0.806     | 0.724  | 0.763
A2         | 0.816     | 0.723  | 0.767        | 0.798     | 0.719  | 0.756        | 0.794     | 0.717  | 0.753
A3         | 0.827     | 0.738  | 0.780        | 0.858     | 0.765  | 0.808        | 0.853     | 0.732  | 0.788
A4         | 0.821     | 0.730  | 0.773        | 0.712     | 0.641  | 0.674        | 0.704     | 0.724  | 0.714
A5         | 0.824     | 0.734  | 0.776        | 0.828     | 0.748  | 0.785        | 0.821     | 0.728  | 0.772
A6         | 0.819     | 0.728  | 0.771        | 0.812     | 0.738  | 0.773        | 0.805     | 0.722  | 0.761
A7         | 0.826     | 0.736  | 0.778        | 0.725     | 0.668  | 0.695        | 0.715     | 0.730  | 0.722
A8         | 0.814     | 0.721  | 0.765        | 0.718     | 0.658  | 0.686        | 0.711     | 0.715  | 0.713
A9         | 0.811     | 0.716  | 0.761        | 0.762     | 0.693  | 0.725        | 0.756     | 0.710  | 0.732
A10        | 0.823     | 0.732  | 0.775        | 0.871     | 0.717  | 0.786        | 0.866     | 0.726  | 0.790
A11        | 0.813     | 0.719  | 0.763        | 0.796     | 0.709  | 0.749        | 0.783     | 0.713  | 0.746
A12        | 0.821     | 0.730  | 0.773        | 0.807     | 0.724  | 0.763        | 0.804     | 0.723  | 0.761
Table 5. Classification results of the three classifiers using the IMSB dataset.
Methods    | Maximum Entropy Markov Model      | Random Forest                     | ANN
Activities | Precision | Recall | F-Measure    | Precision | Recall | F-Measure    | Precision | Recall | F-Measure
S1         | 0.877     | 0.892  | 0.884        | 0.868     | 0.836  | 0.851        | 0.864     | 0.833  | 0.848
S2         | 0.867     | 0.884  | 0.876        | 0.848     | 0.832  | 0.839        | 0.841     | 0.821  | 0.831
S3         | 0.870     | 0.886  | 0.878        | 0.865     | 0.827  | 0.845        | 0.861     | 0.824  | 0.842
S4         | 0.858     | 0.876  | 0.867        | 0.847     | 0.801  | 0.822        | 0.837     | 0.809  | 0.823
S5         | 0.866     | 0.883  | 0.875        | 0.864     | 0.823  | 0.842        | 0.855     | 0.819  | 0.837
S6         | 0.877     | 0.892  | 0.884        | 0.869     | 0.833  | 0.850        | 0.858     | 0.833  | 0.845
Table 6. Classification results of the three classifiers using the Mhealth dataset.
Methods    | Maximum Entropy Markov Model      | Random Forest                     | ANN
Activities | Precision | Recall | F-Measure    | Precision | Recall | F-Measure    | Precision | Recall | F-Measure
H1         | 0.728     | 0.691  | 0.709        | 0.712     | 0.676  | 0.693        | 0.706     | 0.671  | 0.688
H2         | 0.720     | 0.681  | 0.700        | 0.703     | 0.666  | 0.684        | 0.697     | 0.661  | 0.679
H3         | 0.726     | 0.688  | 0.707        | 0.709     | 0.673  | 0.691        | 0.704     | 0.669  | 0.686
H4         | 0.732     | 0.695  | 0.713        | 0.716     | 0.680  | 0.698        | 0.711     | 0.676  | 0.693
H5         | 0.724     | 0.686  | 0.704        | 0.707     | 0.671  | 0.689        | 0.702     | 0.666  | 0.684
H6         | 0.728     | 0.691  | 0.709        | 0.712     | 0.676  | 0.693        | 0.706     | 0.671  | 0.688
H7         | 0.717     | 0.679  | 0.698        | 0.701     | 0.664  | 0.681        | 0.695     | 0.659  | 0.676
H8         | 0.722     | 0.684  | 0.702        | 0.705     | 0.669  | 0.686        | 0.700     | 0.664  | 0.681
H9         | 0.713     | 0.674  | 0.693        | 0.696     | 0.659  | 0.677        | 0.690     | 0.654  | 0.671
H10        | 0.720     | 0.681  | 0.700        | 0.703     | 0.667  | 0.684        | 0.697     | 0.661  | 0.679
H11        | 0.722     | 0.684  | 0.702        | 0.705     | 0.669  | 0.686        | 0.700     | 0.664  | 0.681
H12        | 0.705     | 0.666  | 0.685        | 0.688     | 0.651  | 0.669        | 0.682     | 0.646  | 0.664
Table 7. Comparison of the proposed method’s accuracy with state-of-the-art methods using the USC-HAD, IMSB and Mhealth datasets.
Methods                                  | Recognition Accuracy using USC-HAD (%) | Recognition Accuracy using IMSB (%) | Recognition Accuracy using Mhealth (%)
Classification using Random Forest [34]  | 90.7  | 85.43 | -
Classification using Single Layer [35]   | 89.3  | -     | -
Ensemble algorithms [36,37]              | 86.9  | -     | 90.01
Classification using LSVM [38]           | -     | 80    | -
Hampel Estimated Module [39]             | -     | -     | 85.18
Symbolic approximation [40]              | 84.3  | -     | -
Proposed HAR + AdaDelta                  | 90.79 | 90.13 | 88.67
Proposed HAR + Adam                      | 91.25 | 93.66 | 90.91
Bold letters in the original table indicate the proposed methods’ recognition accuracy.

Share and Cite

MDPI and ACS Style

Tahir, S.B.u.d.; Jalal, A.; Kim, K. Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model. Entropy 2020, 22, 579. https://doi.org/10.3390/e22050579

AMA Style

Tahir SBud, Jalal A, Kim K. Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model. Entropy. 2020; 22(5):579. https://doi.org/10.3390/e22050579

Chicago/Turabian Style

Tahir, Sheikh Badar ud din, Ahmad Jalal, and Kibum Kim. 2020. "Wearable Inertial Sensors for Daily Activity Analysis Based on Adam Optimization and the Maximum Entropy Markov Model" Entropy 22, no. 5: 579. https://doi.org/10.3390/e22050579

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
