Article

A Real-Time Recognition System of Driving Propensity Based on AutoNavi Navigation Data

1
College of Electromechanical Engineering, Qingdao University of Science & Technology, Qingdao 266000, China
2
Collaborative Innovation Center for Intelligent Green Manufacturing Technology and Equipment of Shandong, Qingdao 266000, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 4883; https://doi.org/10.3390/s22134883
Submission received: 7 June 2022 / Revised: 23 June 2022 / Accepted: 26 June 2022 / Published: 28 June 2022
(This article belongs to the Section Navigation and Positioning)

Abstract

Driving propensity is the driver’s attitude towards the actual traffic situation and the corresponding decision-making or behavior during the driving process. Studying it is of great significance for improving the accuracy of safety early warning and reducing traffic accidents. In this paper, a real-time identification system of driving propensity based on AutoNavi navigation data is proposed. The main work includes: (1) A dynamic data acquisition method for AutoNavi navigation is proposed to obtain the time, speed and acceleration of the driver during the navigation process. (2) The dynamic data collection method of AutoNavi navigation is analyzed and verified with the dynamic data obtained in a real vehicle experiment. Principal component analysis is used to process the experimental data and extract the driving propensity characteristic variables. (3) The fruit fly optimization algorithm combined with GRNN (generalized regression neural network) and the feature variable set are used to build an FOA-GRNN-based model. The results show that the overall accuracy of the model reaches 94.17%. (4) A driving propensity identification system is constructed and verified through real vehicle test experiments. This paper provides a novel and convenient method for building personalized intelligent driver assistance systems in practical applications.

1. Introduction

Human factors account for more than 90% of traffic accidents, and more than 70% of traffic accidents are caused by drivers [1]. Detecting and controlling driver behavior is an effective way to improve driving safety. Driving propensity is the driver’s attitude towards the actual traffic situation and the preference of the corresponding psychological decision or behavior during the driving process, which can better reflect the relationship between driver factors and traffic accidents [2]. Intention recognition is the core part of the automotive active safety early warning system [3], yet driving propensity, an important component of intention recognition, is easily overlooked, so the effectiveness and accuracy of early warning systems still need to be improved. Refining the research on the driving propensity of different drivers, studying identification methods in depth and introducing driving propensity identification into active safety driver assistance systems are therefore of great significance for improving the accuracy of safety early warning and reducing the occurrence of traffic accidents.
With the rapid development and wide application of electronic maps, map service providers represented by AutoNavi Maps in China offer users free map positioning and driving navigation services. However, there are relatively few driving propensity studies based on navigation dynamic data, and the massive real-time driving data have not been fully utilized. It is therefore necessary to conduct in-depth research on driving propensity identification based on AutoNavi navigation dynamic data. Based on the Android development platform and the AutoNavi Map SDK and API, a driving propensity identification system application is developed in this paper, which realizes the real-time collection, processing and storage of the driver’s dynamic data and of the propensity identification features, so that driving propensity can be accurately identified from AutoNavi navigation data.
Wang Xiaoyuan et al. [4] proposed and defined the concept of driving propensity through systematic research on driver’s psychology and behavior. The influencing factors and performance characteristics of driving propensity were comprehensively analyzed from the dimensions of a human–vehicle environment. Song Yiqing [5] found that driver characteristics, vehicle characteristics, road conditions and weather conditions all had varying degrees of influence on driving propensity. Wang Mengsha [6] revealed the driving propensity transfer mechanism adapting to the time-varying law of vehicle grouping relationship by analyzing the vehicle grouping relationship in a multi-lane complex environment.
Wang Xiaoyuan et al. [7,8,9] adopted dynamic data acquisition systems such as vehicle-mounted LIDAR, multi-function speedometer, global positioning system and high-precision sensors. Time-varying data of a human–vehicle environment could be captured by designing real vehicle experiments, psychophysiological test experiments and driving simulation experiments. An exploratory study was carried out on the online representation and real-time identification of driver propensity using machine learning algorithms.
Driving propensity can be divided into dynamic driving propensity and static driving propensity. The dynamic driving propensity refers to the transient and changing driving propensity of the driver due to the influence of other factors such as changing traffic situation and road environment during the driving process. Static driving propensity refers to the relatively stable and profound driving habits formed by the driver in the past driving experience [4].
The static driving propensity is closely related to driving style, which can reveal the static driving propensity to a certain extent [10]. Martinez et al. [11] believed that the driver’s driving style had an important impact on the vehicle’s energy management and driving safety. Research on driving style can be divided into three aspects: the way driving style data are collected, the selection of driving style characteristic parameters and the driving style recognition algorithm. Data used to identify driving propensity can be obtained in many different ways, such as driving simulators [12], the Controller Area Network (CAN) [13,14], millimeter wave radar [15], vehicle cameras [16,17], the Global Positioning System (GPS) [18], on-board diagnostics (OBD) [15,19] and questionnaires [20,21]. The feature data used for driving style recognition can be divided into three categories: driver physical signals [22,23], such as steering angle, accelerator opening, gestures and other related signals; driver physiological signals, such as ECG [24], EEG [25] and EMG [26]; and vehicle motion parameters, such as vehicle speed [27], acceleration [28,29,30] and yaw angle [31]. Early research on driving style recognition algorithms was mostly based on rules and fuzzy logic methods [32]. However, the setting of the thresholds is highly subjective and cannot be adapted to dynamic data. In recent years, many researchers have studied driving style recognition based on machine learning algorithms, such as clustering algorithms [33,34,35], Bayesian estimation algorithms [36], decision tree algorithms [37], support vector machines [38,39] and random forests [40,41]. Wang et al. [13] investigated a new framework for driving style analysis using primitive driving patterns and Bayesian nonparametric methods, with driving data collected by the Mobileye vision system and the vehicle CAN bus. Zhu et al. [14] used an inertial navigation system, a ranging system and the vehicle CAN system to build a driving data acquisition platform that can collect multi-dimensional data such as vehicle distance, vehicle relative angle, accelerator opening, master cylinder pressure, acceleration, steering angle and vehicle latitude and longitude, and studied typical driving styles for personalized adaptive cruise control. Long et al. [21] investigated the effectiveness of using a driving style scale to identify driving styles; the reliability and validity of the Chinese version of the Multidimensional Driving Style Inventory (MDSI) were verified by exploratory factor analysis and confirmatory factor analysis. Mohammadnazar et al. [33] utilized basic safety messages generated by connected vehicles to quantify instantaneous driving behavior. Unsupervised machine learning algorithms were used to classify drivers’ driving styles in different spatial environments and road types. The clustering results showed that there were differences in drivers’ driving styles and that the thresholds for aggressive and calm driving differed with the environment and road conditions; the percentage of people with aggressive driving styles was also higher on commercial streets than on highways and residential streets. Mantouka et al. [35] applied a two-stage K-means clustering method to detect drivers’ dangerous driving styles, with driving behavior data obtained from speed and acceleration measurements of a smartphone.
Aggressive and non-aggressive driving behaviors were detected by the initial clustering, and normal and dangerous driving styles were detected by the secondary clustering. Suzdaleva et al. [36] studied the problem of online detection of driving style based on recursive Bayesian estimation of a mixture of regular and class components. Assuming that driving style varies during driving, seven driving styles related to fuel economy were identified through an online estimation algorithm, which was also used to model and predict fuel consumption, speed, accelerator pedal position and gear selection. Wang et al. [39] used a semi-supervised support vector machine to classify driving styles with only a small amount of annotated driving data, in order to avoid manually annotating a large amount of driving data. Experiments showed that, compared with SVM, the classification accuracy of the semi-supervised SVM was improved by about 10%, which not only improved the classification performance in general but also significantly reduced the need for prior data annotation.
At present, there is a lack of driving propensity research based on navigation dynamic data, and the massive real-time driving data are not fully utilized. In the field of driving behavior research, there are relatively few applications of AutoNavi navigation data. Zhao et al. [42] developed a multinomial logit model (MNL) to explore the impact of factors including day of the week, time of day, congestion level, traffic control devices and road conditions on road safety risk levels in the interchange area of an urban expressway, based on a large amount of aggregate driving behavior data obtained from AutoNavi software. The results showed that the factors that significantly influence risky roads include day of the week, number of lanes, congestion level (slow moving), traffic disturbance (a merge or diverge within 500 m), type of advance guide sign system (three-level advance guide sign system) and complexity of diagrammatic guide signs (low or medium complexity). Bian et al. [43] investigated navigation prompt timing (NPT), navigation prompt messages (NPM) and their combination in an audio navigation system with respect to driving behavior on an urban expressway with five exits. The results showed that the driver’s psychological state and operation of the vehicle on the urban expressway were affected by the prompt timing and messages of the audio navigation system. Guo et al. [44] developed a traffic crash risk prediction model based on risky driving behavior and traffic flow, with the data captured using the in-vehicle AutoNavigator software. The model accurately predicted 84.48% of the crashes while keeping the false alarm rate as low as 9.75%, which indicated that the traffic crash risk prediction model had high accuracy. By analyzing the relationship between traffic flow, risky driving behavior and crashes through partial dependence plots (PDPs), the impact of traffic flow and risky driving behavior variables on traffic crashes in the prediction model was determined.
Existing research on driving propensity and driving style is mostly based on equipment such as driving simulators, millimeter-wave radar, global positioning systems and various sensors. Driving simulator studies make it easy and safe to set up different traffic scenarios; however, a simulator cannot reproduce real road conditions, and it is difficult to simulate the complexity and diversity of the real-world traffic environment. Research based on millimeter-wave radar, global positioning systems and various sensors requires expensive equipment, relatively complicated installation and cumbersome data processing, resulting in high identification cost and poor practicability. Aiming at the above problems, a dynamic data acquisition algorithm for AutoNavi navigation and a driving propensity identification method based on AutoNavi dynamic data and the fruit fly optimization algorithm combined with GRNN (generalized regression neural network) are proposed in this paper. Finally, a driving propensity identification system is established based on the Android development platform and the AutoNavi map open platform. Real-time collection, processing and storage of the driver’s dynamic data during driving can be realized with this system. The dynamic data acquisition method for AutoNavi is realized by constructing a dynamic data acquisition application and designing data acquisition algorithms; nine driving propensity characteristic parameters derived from time, speed and acceleration during navigation can be obtained with this method. The fruit fly optimization algorithm is used to iteratively optimize the smoothing factor in GRNN to improve prediction accuracy. The driving propensity identification system application is built based on the Android development platform and the AutoNavi Map SDK and API, and is integrated into a personal intelligent terminal to realize the real-time collection, processing and storage of the driver’s driving data and the accurate identification of driving propensity during navigation. AutoNavi navigation data are applied to the identification of driving propensity for the first time to achieve accurate prediction of a driver’s driving behavior and preferences, which is very important for preventing traffic accidents. This paper provides a novel idea for establishing a personalized vehicle active safety early warning system.

2. Materials and Methods

2.1. Participants

A total of 50 drivers participated in the real vehicle experiment; the ratio of male to female drivers was 8:2, ages ranged from 25 to 55 years and driving experience ranged from 1 to 22 years. The basic information of the drivers is shown in Figure 1.

2.2. Apparatus

The experimental materials were a temperament type questionnaire, the Cattell 16 Personality Factor questionnaire and a driver psychological test questionnaire. These questionnaires consist of questions representing the driver’s psychophysiological characteristics and driving behavior characteristics and have good content reliability and validity [4]. A GL8 experimental vehicle was used as the real vehicle platform, as shown in Figure 2. In this paper, a single smartphone is sufficient to complete all data collection and realize real-time driving propensity identification.

2.3. Procedure

The real vehicle experiments were carried out in sunny weather and dry road conditions. The experimental road was a general urban road along Songling Road, Laoshan District, Qingdao City, Shandong Province, China, as shown in Figure 2. The starting point was A, the end point was C and the waypoint was B; the whole journey was 9.45 km. The roads in section A–B passed mostly through commercial and residential areas with heavy traffic flow, while section B–C was mainly surrounded by factories with relatively light traffic flow. The experiments were scheduled as follows: Tuesday (weekday), morning peak 7:30–8:00 and afternoon peak 15:30–16:00; Wednesday (weekday), morning 9:30–10:00 and evening peak 18:30–19:00; Saturday (non-working day), morning 9:00–9:30 and evening peak 18:00–18:30. The experiments lasted for four weeks.
Before the driving experiment, the basic information of each participant was recorded, including age, gender and driving experience. The participants filled out the driving psychological test questionnaire, the temperament type questionnaire and the Cattell 16 Personality Factor questionnaire [4], as shown in Figure 2. The driving propensity type of each participant could be preliminarily determined from the questionnaire results. The experimental vehicle was equipped with a debugged smartphone for dynamic data collection from AutoNavi navigation. The participants drove the experimental route in sequence, and an experimental assistant ensured the normal operation of the equipment during the experiment. At the end of each driver’s run, the dynamic driving data collection APP ended the navigation and automatically saved the real-time data to the phone database. Because the experimental time periods differed, the experimental vehicle could obtain two sets of experimental data by traveling back and forth on the experimental route once.

2.4. AutoNavi Navigation Dynamic Data Acquisition Algorithm

The AutoNavi navigation dynamic data acquisition application program consists of a positioning module, navigation module, data acquisition module, data processing module and data storage module.
Map loading and basic map interaction functions can be realized by calling the AutoNavi map API in the positioning module. The secondary development is performed based on the AutoNavi positioning SDK and the location service and map application module of the Android development platform. The real-time positioning function is achieved by applying for mobile phone positioning permission. The vehicle positioning data are called back through the positioning API interface, and the data acquisition module obtains the positioning data from this callback. The navigation module implements basic navigation functions through callback path planning, navigation creation and real-time navigation. The vehicle driving data during the navigation process can be called back through the navigation API interface, and the data acquisition module obtains the driver’s navigation driving data from this callback. The data acquisition module is connected to the positioning module and the navigation module; through the positioning and navigation data collection interfaces, the callbacks provide the driver’s real-time driving data, including vehicle latitude and longitude coordinates, vehicle speed, driving time and driving mileage. The data processing module receives and processes the real-time driving data transmitted by the data acquisition module. Characteristic data that better characterize the driving propensity can be obtained with the driving data deduction algorithm, including travel time $T_e$, average speed $V_{ave}$, maximum speed $V_{max}$, rapid acceleration times $N_{acc}$, rapid deceleration times $N_{dec}$, normal acceleration time $T_{acc}$, normal deceleration time $T_{dec}$, average acceleration $A_{ave}$ and maximum acceleration $A_{max}$. The data storage module receives and stores the dynamic data transmitted by the data acquisition module and the data processing module in real time.
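For concreteness, the nine characteristic parameters handed from the data processing module to the data storage module can be thought of as one record per navigation trip. The sketch below (in Python rather than the Android implementation used in the actual APP) is illustrative only; the field names are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class TripFeatures:
    """Per-trip driving propensity features derived from AutoNavi navigation data."""
    T_e: float    # effective travel time (s), accumulated while speed > 0
    V_ave: float  # average effective speed
    V_max: float  # maximum effective speed
    N_acc: int    # number of rapid acceleration events
    N_dec: int    # number of rapid deceleration events
    T_acc: float  # total normal acceleration time (s)
    T_dec: float  # total normal deceleration time (s)
    A_ave: float  # average acceleration during normal acceleration time (m/s^2)
    A_max: float  # maximum acceleration during normal acceleration time (m/s^2)
```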
A multi-dimensional driving propensity characteristic parameter acquisition algorithm is proposed in this paper. The specific acquisition algorithm of each characteristic parameter is as follows:
(1)
Travel time $T_e$ acquisition algorithm
The travel time obtained in this paper is the effective travel time when the vehicle travels between the two endpoints of a navigation trip with non-zero driving speed. The driver’s driving propensity can be better characterized with this parameter. The travel time $T_e$ acquisition algorithm is implemented based on real-time monitoring of vehicle speed changes during navigation. After navigation is turned on, the driving time is accumulated for every second in which the speed is not zero; the system collection frequency is 1 Hz. The total travel time is obtained when the navigation ends. For convenience of describing the algorithm, the intermediate variables are listed in Table 1.
The flow of the travel time $T_e$ acquisition algorithm is shown in Figure 3.
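As a minimal sketch of the rule described above (accumulate one second of travel time for every 1 Hz sample with non-zero speed), assuming the per-second speed samples are already available as a list, the logic of Figure 3 reduces to:

```python
def effective_travel_time(speeds, sample_period_s=1.0):
    """Effective travel time T_e: total time with non-zero speed, sampled at 1 Hz."""
    return sum(sample_period_s for v in speeds if v > 0)
```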
(2)
Average speed $V_{ave}$ and maximum speed $V_{max}$ acquisition algorithm
The average speed $V_{ave}$ and maximum speed $V_{max}$ obtained in this paper are the average effective speed and the maximum effective speed of the vehicle passing between the two endpoints of a section, i.e., within the effective travel time $T_e$. The acquisition algorithm is implemented based on real-time monitoring of speed changes. After navigation is turned on, the speed is checked every second (the system collection frequency is 1 Hz). While the effective travel time is being accumulated, the speed corresponding to each second within $T_e$ is summed to obtain the effective speed sum $V_{sum}$; dividing it by the travel time yields the average speed $V_{ave}$, as shown in Equation (1). The maximum speed $V_{max}$ is obtained by comparing the speed value $V_n$ of each second with the current maximum: the larger value is kept as $V_{max}$, and the comparison continues until the end of the trip, after which the final maximum speed $V_{max}$ is saved, as shown in Equation (2). For convenience of describing the algorithm, the intermediate variables are listed in Table 2.

$$V_{ave} = \frac{\sum_{n=1}^{T_e} V_n}{T_e} = \frac{V_{sum}}{T_e}, \quad n = 1, 2, \ldots, T_e \tag{1}$$

$$V_{max} = \begin{cases} V_n, & V_n > V_{max} \\ V_{max}, & V_n \le V_{max} \end{cases}, \quad n = 1, 2, \ldots, T_e \tag{2}$$

A flow chart of the average speed $V_{ave}$ and maximum speed $V_{max}$ acquisition algorithm is shown in Figure 4.
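A hedged sketch of Equations (1) and (2), assuming a list of 1 Hz speed samples; the running comparison in Equation (2) collapses to a simple maximum over the moving samples.

```python
def effective_speed_stats(speeds):
    """Average and maximum effective speed over non-zero-speed samples (Equations (1)-(2))."""
    moving = [v for v in speeds if v > 0]   # samples inside the effective travel time T_e
    if not moving:
        return 0.0, 0.0
    v_ave = sum(moving) / len(moving)       # V_ave = V_sum / T_e (1 Hz, so len(moving) == T_e)
    v_max = max(moving)                     # running comparison of Equation (2)
    return v_ave, v_max
```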
(3)
Rapid acceleration times $N_{acc}$, rapid deceleration times $N_{dec}$, normal acceleration time $T_{acc}$ and normal deceleration time $T_{dec}$ acquisition algorithm
The rapid acceleration times $N_{acc}$, rapid deceleration times $N_{dec}$, normal acceleration time $T_{acc}$ and normal deceleration time $T_{dec}$ obtained in this paper are the driving behavior events generated while the vehicle passes between the two endpoints of a navigation section, i.e., the number of rapid acceleration behaviors, the number of rapid deceleration behaviors, the time spent in normal acceleration behavior and the time spent in normal deceleration behavior within the effective travel time $T_e$. The acquisition algorithm is based on real-time monitoring of acceleration changes. The system collection frequency is 1 Hz, so the acceleration can be calculated from the speed change per second using Equation (3).

$$A_n = \frac{V_n - V_{n-1}}{1}, \quad n = 1, 2, \ldots, T \tag{3}$$

When the acceleration $A_n$ exceeds or falls below the corresponding threshold, sTime is recorded as the start time of the rapid acceleration (deceleration) or normal acceleration (deceleration) behavior event, and the system starts continuous monitoring. When the acceleration $A_n$ no longer meets the threshold condition, the end time of the corresponding driving behavior event is recorded, and it is checked whether the duration meets the valid duration threshold of the event. According to the relevant regulations of the traffic safety rules and the actual road test data, the rapid acceleration threshold was selected as 2.22 m/s², the rapid deceleration threshold as −2.22 m/s², the normal acceleration threshold as 0.45 m/s², the normal deceleration threshold as −0.45 m/s² and the effective duration threshold of a driving behavior event as 5 s. For convenience of describing the algorithm, the intermediate variables are listed in Table 3.
The flow chart of the rapid acceleration times $N_{acc}$, rapid deceleration times $N_{dec}$, normal acceleration time $T_{acc}$ and normal deceleration time $T_{dec}$ acquisition algorithm is shown in Figure 5.
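The event-detection logic of Figure 5 can be sketched as a single pass over the per-second accelerations. The thresholds below are the values quoted in the text (±2.22 m/s², ±0.45 m/s², 5 s minimum duration); the assumption that speeds arrive in km/h and are converted to m/s, and the simplified run-length grouping, are illustrative choices rather than the APP's exact implementation.

```python
RAPID_ACC, RAPID_DEC = 2.22, -2.22   # rapid acceleration/deceleration thresholds (m/s^2)
NORM_ACC, NORM_DEC = 0.45, -0.45     # normal acceleration/deceleration thresholds (m/s^2)
MIN_EVENT_S = 5                      # minimum valid duration of a driving behavior event (s)

def classify(a):
    if a >= RAPID_ACC:  return "rapid_acc"
    if a <= RAPID_DEC:  return "rapid_dec"
    if a >= NORM_ACC:   return "norm_acc"
    if a <= NORM_DEC:   return "norm_dec"
    return "cruise"

def driving_events(speeds_kmh):
    """Count rapid acceleration/deceleration events (N_acc, N_dec) and accumulate
    normal acceleration/deceleration time (T_acc, T_dec) from 1 Hz speeds in km/h."""
    acc = [(speeds_kmh[n] - speeds_kmh[n - 1]) / 3.6   # Equation (3), converted to m/s^2
           for n in range(1, len(speeds_kmh))]
    stats = {"N_acc": 0, "N_dec": 0, "T_acc": 0, "T_dec": 0}
    i = 0
    while i < len(acc):
        label, start = classify(acc[i]), i
        while i < len(acc) and classify(acc[i]) == label:
            i += 1
        duration = i - start                            # seconds, since sampling is 1 Hz
        if duration >= MIN_EVENT_S:
            if label == "rapid_acc":   stats["N_acc"] += 1
            elif label == "rapid_dec": stats["N_dec"] += 1
            elif label == "norm_acc":  stats["T_acc"] += duration
            elif label == "norm_dec":  stats["T_dec"] += duration
    return stats
```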
(4)
Average acceleration $A_{ave}$ and maximum acceleration $A_{max}$ acquisition algorithm
The average acceleration $A_{ave}$ and maximum acceleration $A_{max}$ obtained in this paper are the average and maximum acceleration generated by the vehicle during the normal acceleration time $T_{acc}$. The acquisition algorithm is based on real-time monitoring of acceleration changes, with a system collection frequency of 1 Hz. The accumulated acceleration sum $A_{sum}$ is obtained by summing the per-second acceleration $A_n$ during the acceleration time $T_{acc}$; the average acceleration $A_{ave}$ is the ratio of $A_{sum}$ to $T_{acc}$, as calculated with Equation (4). At the same time, the acceleration value $A_n$ of each second within $T_{acc}$ is compared with the current maximum and the larger value is assigned to $A_{max}$: if $A_n > A_{max}$, then $A_{max} = A_n$; otherwise $A_{max}$ keeps its original value. The comparison continues until the end of the trip, and the maximum acceleration $A_{max}$ is saved, as shown in Equation (5). For convenience of describing the algorithm, the intermediate variables are listed in Table 4.

$$A_{ave} = \frac{\sum_{n=1}^{T_{acc}} A_n}{T_{acc}} = \frac{A_{sum}}{T_{acc}}, \quad n = 1, 2, \ldots, T_{acc} \tag{4}$$

$$A_{max} = \begin{cases} A_n, & A_n > A_{max} \\ A_{max}, & A_n \le A_{max} \end{cases}, \quad n = 1, 2, \ldots, T_{acc} \tag{5}$$

The flow chart of the average acceleration $A_{ave}$ and maximum acceleration $A_{max}$ acquisition algorithm is shown in Figure 6.

3. Results and Discussion

3.1. Data

The experimental data collected in this paper contain 12 driving characteristic variables, as shown in Table 5.
Some data are shown in Table 6.
After the experiment, the questionnaire test results of each driver in the experimental sample were counted, and the preliminary prediction of each driver’s driving propensity was recorded. The driver’s behavioral responses to different traffic situations and road environments, including operational responses and facial expressions, could be reviewed through video playback. The driving propensity was then comprehensively determined from these behavioral responses together with the preliminary prediction. The preliminary judgement results of driving propensity are shown in Table 7.

3.2. Driving Propensity Feature Extraction Method

The data in this paper are collected through real vehicle experiments in the real road environment. This results in a lot of data noise, which causes many fluctuations in the collected raw data and affects the model training. Thus, a sliding mean filter is selected to process raw data, such as velocity and acceleration. The mean filter expression is as follows:
$$\bar{X}_n = \frac{1}{M}\sum_{i=0}^{M-1} X_{n-i} \tag{6}$$

Here, $M$ is the sliding filter window size and $X_{n-i}$ is the $(n-i)$th original data point. The window size $M$ has a great impact on the filtering effect. According to the acquisition frequency of the original data (10 Hz), $M$ is set to 5 for smoothing the original data.
The magnitudes of the driving propensity characteristic parameters differ considerably, which can distort the result, so the data need to be normalized according to Equation (7).

$$a = \frac{d - d_{\min}}{d_{\max} - d_{\min}} \tag{7}$$

Here, $d$ is the original data and $a$ is the normalized value of the input data $d$.
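A minimal sketch of the two preprocessing steps, Equations (6) and (7), assuming the raw series are held in NumPy arrays; it is not the exact code used in this study.

```python
import numpy as np

def sliding_mean(x, M=5):
    """Sliding mean filter of Equation (6): each output is the mean of the current
    sample and the M-1 preceding samples (shorter window at the start of the series)."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for n in range(len(x)):
        lo = max(0, n - M + 1)
        out[n] = x[lo:n + 1].mean()
    return out

def min_max_normalize(d):
    """Min-max normalization of Equation (7), mapping a feature column to [0, 1]."""
    d = np.asarray(d, dtype=float)
    return (d - d.min()) / (d.max() - d.min())
```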
The driving propensity feature parameters collected in this paper contain multiple dimensions. Although higher-dimensional feature data can represent the driving propensity more completely, high-dimensional features are correlated with each other, which causes data redundancy. The principal component analysis (PCA) algorithm is therefore used to reduce the dimension of the driving propensity characteristic parameter set. The main factors in the feature parameter set are extracted to obtain a set of principal component feature vectors that can represent each driving propensity type, and the principal components are linearly uncorrelated with each other. The total variance explained by each principal component is shown in Table 8.
It can be seen from Table 8 that the total variance explained by the first five principal components reaches 88.311%.
Generally, according to the requirement that the cumulative contribution rate is greater than 85%, the first five principal components can fully characterize the changing characteristics of driving propensity. The characteristic values corresponding to the interpretation of the total variance of each principal component are shown in Figure 7. The characteristic value of each principal component also shows that the information contained in the first five principal components can better characterize the changing characteristics of driving propensity. Combined with the cumulative contribution rate and component eigenvalues of the characteristic parameters of each driving propensity, the first five principal components are the characteristic variables for driving propensity identification.
The scores of the first five principal components are given in Table 9. These five principal components are linear combinations of 12 feature variables. The scores of the principal components are used as the input in the driving propensity identification model.
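The dimensionality-reduction step can be reproduced with any standard PCA routine; the sketch below uses scikit-learn as a stand-in (the paper does not specify a package) and simply keeps the first five component scores as the model input, mirroring Table 9.

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_propensity_features(X, n_components=5):
    """Project the 12 normalized driving-propensity variables onto the first five
    principal components used as model inputs.

    X: array of shape (n_samples, 12), already filtered and normalized.
    Returns (scores, cumulative_contribution) with scores of shape (n_samples, 5).
    """
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(X)
    cumulative_contribution = pca.explained_variance_ratio_.sum()  # ~0.88 reported in Table 8
    return scores, cumulative_contribution
```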

3.3. Driving Propensity Recognition Model Based on FOA-GRNN

A driving propensity identification method based on AutoNavi dynamic data and FOA-GRNN is proposed in this paper. The process is shown in Figure 8.
Driving propensity is a dynamic measurement of the behavioral preference characteristics of vehicle operators during driving. The types of driving propensity can be divided into aggressive, normal and conservative. The generalized regression neural network (GRNN) proposed by Donald F. Specht is a variant of the radial basis function neural network [45]. GRNN uses a probability density function to predict the output for the input data; it has strong learning ability and nonlinear mapping ability and can still achieve good classification results with a small number of samples. Therefore, GRNN is chosen to dynamically identify the driving propensity.
The basic structure of GRNN includes a four-layer network of input layer, pattern layer, summation layer and output layer [46]. The input of the model is $X = [x_1, x_2, \ldots, x_n]^T$. The output of the model is $Y = [y_1, y_2, \ldots, y_n]^T$, with $y \in \{0, 1, 2\}$.
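Because GRNN has no iterative training, its prediction can be written out directly: the pattern layer stores the training samples, the summation layer forms Gaussian-kernel-weighted sums and the output layer returns the weighted average of the training targets. The sketch below is a plain NumPy rendering of this standard formulation, not the authors' MATLAB implementation; the smoothing factor $\sigma$ is the parameter that FOA optimizes later.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Generalized regression neural network prediction with smoothing factor sigma."""
    X_train, y_train, X_query = map(np.asarray, (X_train, y_train, X_query))
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)                  # distances to pattern-layer nodes
        w = np.exp(-d2 / (2.0 * sigma ** 2))                     # pattern-layer activations
        preds.append(np.dot(w, y_train) / max(w.sum(), 1e-12))   # summation and output layers
    return np.array(preds)
```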
The fruit fly optimization algorithm (FOA) was proposed by Pan in 2011 [47]. FOA mimics the olfactory and visual foraging behavior of fruit flies to perform iterative optimization and has the advantages of a simple principle and fast convergence [48]. Its basic principle can be divided into two stages: the fruit fly first uses smell to search for food, and then uses vision to locate the food and fly towards that position. The prediction accuracy of GRNN is greatly affected by the smoothing factor, and the fruit fly optimization algorithm is an effective method to optimize it [49]. The specific idea is to adjust the smoothing factor to its optimum by using the mechanism of olfactory random foraging and visual search for the position with the highest odor concentration. The root mean square error (RMSE) between the predicted and actual values of the network’s driving propensity output is minimized through iterative optimization; when the taste concentration value (the RMSE) reaches its minimum, the smoothing factor of the generalized regression neural network attains its optimal value and is input into the GRNN model. The driving propensity identification model based on FOA-optimized GRNN is shown in Figure 9. The specific implementation steps are as follows.
Step 1: Input the driving propensity feature variable set and divide the training set and test set. Set GRNN parameters and input the training samples.
Step 2: Randomly initialize the fruit fly swarm position $(Init\_X_0, Init\_Y_0)$. Set the population size (Sizepop) and the maximum number of iterations (Maxgen).
Step 3: Give each fruit fly individual a random direction and distance for the smell-based food search, obtaining $(X_i, Y_i)$.

$$X_i = X_0 + Rand() \tag{8}$$

$$Y_i = Y_0 + Rand() \tag{9}$$

Step 4: Determine the distance $Dist(i)$ between the coordinates of each individual and the origin, and then calculate the taste concentration judgment value $S_i$, which will be used as the smoothing factor of GRNN.

$$Dist(i) = \sqrt{X_i^2 + Y_i^2} \tag{10}$$

$$\sigma = S_i = \frac{1}{Dist(i)} \tag{11}$$

Step 5: Substitute the taste concentration judgment value into the judgment condition (if $S_i < 0.001$, then $S_i = 1$; if $S_i > 0.001$, then $S_i$ is kept). Bring the smoothing factor $\sigma = S_i$ into the GRNN model to obtain the taste concentration $Smell(i)$ at the individual’s location. In this paper, the root mean square error (RMSE) of the driving propensity prediction obtained by the GRNN model is used as the taste concentration $Smell(i)$.

$$Smell(i) = \sqrt{\frac{1}{M}\sum_{i=1}^{M} \left(y_i - \hat{y}_i\right)^2} \tag{12}$$

Step 6: Find the location of the individual with the lowest taste concentration $Smell(i)$ in the population.

$$[bestSmell,\ bestIndex] = \min(Smell(i)) \tag{13}$$

Step 7: Determine whether the current taste concentration is better than the previous one. If yes, go to Step 8; if not, return to Step 3 to continue the iterative optimization.
Step 8: Keep the optimal taste concentration value and the corresponding individual coordinates; the fruit fly swarm then uses vision to fly to this location.

$$Smell_{best} = bestSmell, \quad X_{best} = X(bestIndex), \quad Y_{best} = Y(bestIndex) \tag{14}$$

Step 9: Determine whether the maximum number of iterations has been reached; if so, save and output the optimal taste concentration and the corresponding optimal smoothing factor $\sigma$, and establish the FOA-GRNN driving propensity identification model.
Step 10: Identify the driving propensity of the input driver prediction sample data.
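Steps 1-10 can be condensed into a short loop. The sketch below follows Equations (8)-(14) and the parameter values reported later in this section (population size 10, search interval [−10, 10], 200 iterations); `rmse_of_sigma` is a placeholder callable that should evaluate the GRNN (for example, the sketch given earlier) on a validation split and return the prediction RMSE, and is not part of the original implementation.

```python
import numpy as np

def foa_optimize_sigma(rmse_of_sigma, sizepop=10, maxgen=200, seed=0):
    """Fruit fly optimization of the GRNN smoothing factor.

    rmse_of_sigma: callable mapping a candidate smoothing factor to the prediction
    RMSE (the 'taste concentration' of Equation (12)). Returns (best_sigma, best_rmse).
    """
    rng = np.random.default_rng(seed)
    x0, y0 = rng.uniform(0, 1, size=2)               # Step 2: random initial swarm position
    best_rmse, best_sigma = np.inf, None
    for _ in range(maxgen):                           # Step 9: iterate up to Maxgen
        x = x0 + rng.uniform(-10, 10, sizepop)        # Step 3: smell-based random search, Eqs. (8)-(9)
        y = y0 + rng.uniform(-10, 10, sizepop)
        dist = np.sqrt(x ** 2 + y ** 2)               # Step 4: distance to origin, Eq. (10)
        s = 1.0 / dist                                # candidate smoothing factors, Eq. (11)
        s = np.where(s < 0.001, 1.0, s)               # Step 5: taste concentration judgment condition
        smell = np.array([rmse_of_sigma(si) for si in s])   # Eq. (12)
        idx = int(np.argmin(smell))                   # Step 6: best individual, Eq. (13)
        if smell[idx] < best_rmse:                    # Steps 7-8: keep the best and fly there
            best_rmse, best_sigma = smell[idx], s[idx]
            x0, y0 = x[idx], y[idx]                   # Eq. (14)
    return best_sigma, best_rmse
```

With the GRNN sketch given earlier, `rmse_of_sigma` could simply call `grnn_predict` on a held-out validation split and return the root mean square error against the propensity labels.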
In this paper, a driving propensity identification model is established based on FOA-GRNN. The experimental data of 30 drivers were selected from the experimental samples for the training and testing of the model. Among them, we selected 1200 sets of experimental data of 10 drivers of aggressive type, ordinary type and conservative type, respectively, and divided them into a training set and test set according to the ratio 8:2. In order to further verify the trained driving propensity identification model, the experimental data of another 20 drivers in the experimental sample were input into the model to verify the recognition accuracy of the model for each type of driving propensity.
In this study, MATLAB 2019a was used for the model training and simulation experiments. The initial position interval of the fruit fly swarm was set to [0,1], the swarm size was 10, the flight direction and distance interval for the smell-based search was [−10,10] and the maximum number of iterations was 200. In the divided training set, the first five principal components selected by principal component analysis were used as the input vector for driving propensity identification to train the model.
The convergence of the root mean square error (RMSE) of the driving propensity prediction over 200 iterations of optimization is shown in Figure 10. It can be seen that the effect of the early FOA iterations is more obvious and the taste concentration is updated faster. In the iterative optimization process, the RMSE begins to converge at the 105th generation; at this point the minimum error value, i.e., the minimum taste concentration, is 0.016, the optimal smoothing factor $\sigma$ is 0.062 and the fruit fly swarm is located at (65.4529, −125.8327).
The optimal smoothing factor $\sigma = 0.062$ is then substituted into the GRNN model and the test samples are input into the optimized driving propensity identification model. Limited by the length of the article, only the test results of the aggressive samples are shown here, as given in Table 10, where 0 represents the aggressive type, 1 represents the ordinary type and 2 represents the conservative type.
The overall accuracy of the driving propensity identification model based on FOA-GRNN reaches 94.17%, which is high enough to effectively identify the various driving propensity types. It can be seen from Table 11 that the GRNN optimized by FOA, with its iteratively optimized smoothing factor, has good predictive ability, and the established FOA-GRNN identification model identifies driving propensity well.
In order to further verify the identification accuracy of the established FOA-GRNN model for each driver’s driving propensity, the experimental data corresponding to the other 20 drivers in the real vehicle experimental sample were selected for model verification. The final verification results are shown in Table 11. The verification results show that the FOA-GRNN driving propensity identification model has an accuracy of about 95% for aggressive and conservative drivers and more than 92% for ordinary drivers, indicating high identification accuracy for each driver’s driving propensity. Since the characteristics of aggressive and conservative drivers are more distinct than those of ordinary drivers, the model identification accuracy is higher for aggressive and conservative drivers than for ordinary drivers.
As a comparison, the single generalized regression neural network (GRNN) and the BP neural network (back propagation neural network, BPNN) were used to process the same data samples to establish a driving propensity identification model and test the performance of two models. The identification accuracy is compared with the accuracy of the FOA-GRNN identification model.
The prediction accuracy of GRNN is greatly affected by the smoothing factor $\sigma$, so 10 groups of smoothing factors were randomly selected for testing and the accuracy of the identification model was obtained. The test results are shown in Table 12.
It can be seen from Table 12 that the highest accuracy of the GRNN driving propensity identification model is 89.2%. The identification effect is greatly affected by the smoothing factor and is not ideal; the smoothing factor of the GRNN model needs to be adjusted manually to find the best identification effect, which makes the optimization process cumbersome. Compared with the single GRNN identification model, the overall accuracy of the FOA-GRNN identification model proposed in this paper is improved by 5~10% and its stability is better.
A BPNN driving propensity identification model which uses a 5-7-3 neural network structure is constructed according to the number of nodes in the input layer and output layer. The quasi-Newton method (trainbfg) is selected as the training function, the sigmoid function is the hidden layer transfer function, and the softmax function is selected as the transfer function for the output layer. Many parameters of the BP neural network need to be adjusted. The learning rate and training accuracy have a great influence on the algorithm. The three groups of learning rate (lr) are set to 0.01, 0.05 and 0.1. The three groups of training accuracy (goal) are set to 0.1, 0.01 and 0.001, and the maximum number of training times is 500. The test results are shown in Table 13.
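For readers who want a quick baseline without MATLAB, an analogous setup can be assembled with scikit-learn; the snippet below is only a rough analogue of the 5-7-3 trainbfg network described above (lbfgs is likewise a quasi-Newton solver, and MLPClassifier applies a softmax output for three classes), not a reproduction of the original configuration.

```python
from sklearn.neural_network import MLPClassifier

# Rough analogue of the 5-7-3 BPNN baseline: 5 principal-component inputs,
# one hidden layer of 7 logistic units, 3 output classes, quasi-Newton training.
bpnn = MLPClassifier(hidden_layer_sizes=(7,), activation="logistic",
                     solver="lbfgs", max_iter=500)
# bpnn.fit(X_train_pca, y_train)        # X_train_pca: (n_samples, 5) PCA scores
# acc = bpnn.score(X_test_pca, y_test)  # overall identification accuracy
```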
It can be seen from Table 13 that the highest accuracy rate of the BPNN driving propensity identification model is 91.3%. By manually adjusting the parameters of the BPNN network, the model can achieve a good identification effect. However, many parameters of BPNN need to be set and the training process is cumbersome, which cannot guarantee the best recognition effect of the model. The comparison results show that the driving propensity identification accuracy of the GRNN model is the lowest. The identification accuracy of the BPNN model can achieve good results, but it has slow learning speed and too many parameters, and continuous tuning is required. The generalized regression neural network optimized by the fruit fly optimization algorithm proposed in this paper has the advantages of simple optimization process, high accuracy and good stability. It has higher identification accuracy and better stability in driving propensity identification.

3.4. Real Vehicle Experimental Test

Based on the Android development platform and the AutoNavi SDK, a driving propensity identification system APP suitable for Android smartphones is developed in this paper, as shown in Figure 11. The smartphone serves as the carrier of the driving propensity identification system. The Android smartphone used in this experiment is a Redmi K30i with the following hardware configuration: an eight-core Qualcomm Snapdragon 765G CPU clocked at 2.4 GHz, 6 GB of RAM and 128 GB of storage.
In order to test the validity and reliability of the driving propensity identification system APP, three types of experiments were designed in this paper. The trained FOA-GRNN driving propensity identification model, together with the single GRNN and BPNN models, was imported into the system’s driving propensity identification module. The packaged driving propensity identification system APP was then installed on the smartphone, which was fixed in the experimental vehicle. Twenty experimenters were selected to carry out driving experiments in sequence, and the driving propensity identification system APP was used to collect, process and identify the driver’s dynamic data during navigation. The specific experiments were as follows:
Experiment 1: The driving propensity identification system based on the FOA-GRNN model was used to conduct a real vehicle experimental test and the experimenters conducted 240 groups of experiments, including 80 groups of aggressive type, normal type and conservative type. The experimental results are shown in Table 14.
By calculating and analyzing the identification results in the above table, various evaluation indicators of the driving propensity identification system were obtained as shown in Table 15.
It can be seen from Table 15 that the system model has high identification precision and recall. The characteristics of aggressive and conservative drivers are more distinct than those of normal drivers; therefore, in terms of precision, recall and the comprehensive F1 score, both aggressive and conservative drivers are identified better than normal drivers.
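The evaluation indicators in Tables 15, 17 and 19 are the standard per-class precision, recall and F1 score. A minimal way to compute them from the system's identification results is sketched below; the label lists are tiny illustrative placeholders, not the experimental data.

```python
from sklearn.metrics import classification_report

# 0 = aggressive, 1 = normal, 2 = conservative (the coding used in Table 10).
y_true = [0, 0, 1, 1, 2, 2, 0, 1, 2, 2]   # labels from questionnaires and expert judgement
y_pred = [0, 0, 1, 2, 2, 2, 0, 1, 2, 1]   # labels output by the identification system APP
print(classification_report(y_true, y_pred,
                            target_names=["aggressive", "normal", "conservative"],
                            digits=3))
```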
Experiment 2: The driving propensity identification system based on the GRNN models was used to conduct the real vehicle experimental test and the experimenters conducted 240 groups of experiments, including 80 groups of aggressive type, ordinary type and conservative type. The experimental results are shown in Table 16.
By calculating and analyzing the identification results in the above table, various evaluation indicators of the driving propensity identification system were obtained as shown in Table 17.
Experiment 3: The driving propensity recognition system based on BPNN was used to conduct the real vehicle experimental test and the experimenters conducted 240 groups of experiments, with 80 groups for the aggressive type, the ordinary type and the conservative type, respectively. The experimental results are shown in Table 18.
By calculating and analyzing the identification results in the above table, various evaluation indicators of the driving propensity identification system were obtained as shown in Table 19.
The performance indicators of the driving propensity identification systems based on the FOA-GRNN, GRNN and BPNN models are shown in Figure 12, Figure 13 and Figure 14. Compared with the system based on the GRNN model, the accuracy of the system based on the FOA-GRNN model is at least 5% higher, and it is also improved compared with the BPNN-based system. In terms of precision, recall and F1 score, the FOA-GRNN-based system performs better than the other two systems for aggressive, normal and conservative drivers, and its identification effect and stability are higher. Although the identification accuracy of the system model in the real vehicle test is lower than in the MATLAB training, it still achieves good identification accuracy; the generalization ability of the system model is strong, which verifies the effectiveness and practicability of the system.

4. Conclusions

Driving propensity is the driver’s attitude towards the real traffic situation and the preference of the corresponding decision-making or behavior value in the process of driving, which can better reflect the relationship between the driver’s factors and traffic accidents.
In existing driving propensity research, the driving data collection equipment is expensive, the installation is complicated and the experimental data processing is cumbersome, resulting in high costs and poor practicability for driving propensity identification; in addition, relatively few studies make full use of the vast amount of real-time driving data. A driving propensity identification method based on AutoNavi navigation dynamic data and FOA-GRNN is proposed in this paper, and a driving propensity identification system is established based on the Android development platform and the AutoNavi map open platform. Starting from the goal of realizing a personalized automotive active safety assistance system, in-depth research on driving propensity identification methods was conducted. The specific research results are as follows:
(1)
The dynamic data collection method for AutoNavi Navigation. A dynamic data acquisition method for AutoNavi Navigation is proposed in this paper, and a dynamic data collection application based on the Android development platform and the AutoNavi map API and SDK is developed. Data such as time, speed and acceleration are collected through the AutoNavi API, and an algorithm that derives nine driving propensity characteristic parameters is designed, realizing the real-time collection, processing and storage of driver characteristic data during navigation and driving. This makes it possible to accurately identify driving propensity based on AutoNavi dynamic data.
(2)
An experimental framework suitable for driving propensity research. Starting from the driver’s own factors, the driving propensity is preliminarily judged through authoritative test questionnaires, and the reliability and validity of the test results are analyzed in this paper. Combined with observation during the driving experiment and video playback after the experiment, the driving propensity of each driver is comprehensively determined. The feasibility of the AutoNavi dynamic data acquisition program and the effectiveness of the acquisition algorithm are verified with the real vehicle experimental data.
(3)
Feature parameter extraction of driving propensity. Considering the computational timeliness of the driving propensity model, the multidimensional data collected from the real vehicle experiments are processed for dimensionality reduction. The principal component analysis algorithm is selected to reduce the dimension of the driving data and filter out redundant features. The feature data that contribute the most to the various driving propensity types are extracted, and the feature parameters that better represent the driving propensity are finally obtained.
(4)
Driving propensity identification model. Combined with the fruit fly optimization algorithm and generalized regression neural network, a driving propensity identification model based on FOA-GRNN is proposed. The model is trained, tested and verified by using the driving propensity feature variable set. The results show that the FOA-GRNN model proposed in this paper can realize the accurate identification of driving propensity and achieve better results for the identification of various driving propensity types. Compared with the GRNN and BPNN models, it is proved that the FOA-GRNN model has better stability and higher identification accuracy.
(5)
Driving propensity identification system. By analyzing the functional requirements of the system, the overall framework of the system is determined and a modular system construction method is designed. Based on the Android development platform and AutoNavi map API and SDK, the driving propensity identification system is constructed. It is verified with the real vehicle experimental test that the functional modules of the system can operate stably, the overall performance of the model is better and the driving propensity can be accurately identified. The construction of this system can provide a new idea for the establishment of a human-centered safety-assisted driving system, which has certain practical significance.

Author Contributions

Conceptualization, X.W. and L.C.; methodology, X.W. and L.C.; software, L.C. and J.H.; validation, X.W. and L.C.; formal analysis, L.C., H.S. and G.W.; investigation, L.C., H.S. and J.H.; resources, X.W. and L.C.; data curation, L.C., H.L. and J.H.; writing—original draft preparation, L.C.; writing—review and editing, X.W., L.C. and J.H.; visualization, G.W., F.Z. and Q.W.; supervision, X.W.; project administration, X.W.; funding acquisition, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Shandong Province, grant number ZR2020MF082; the Collaborative Innovation Center for Intelligent Green Manufacturing Technology and Equipment of Shandong Province, grant number IGSD-2020-012; the Qingdao Top Talent Program of Entrepreneurship and Innovation, grant number 19-3-2-11-zhc; and the National Key Research and Development Program, grant number 2018YFB1601500.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee at the College of Electromechanical Engineering, Qingdao University of Science & Technology.

Informed Consent Statement

The Ethics Committee at the College of Electromechanical Engineering, Qingdao University of Science & Technology supports the practice of protection of human participants in this research. All participants were informed of the research process and provided written informed consent in accordance with the Declaration of Helsinki. The two items involving humans were the driving experiment and the questionnaire survey. Before the experiments, all participants were explicitly told the experimental process and informed that their data would be recorded. Participation was solicited, yet strictly voluntary.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chan, T.K.; Chin, C.S.; Chen, H.; Zhong, X.H. A comprehensive review of driver behavior analysis utilizing smartphones. IEEE Trans. Intell. Transp. 2019, 21, 4444–4475. [Google Scholar] [CrossRef]
  2. Wang, X.Y.; Liu, Y.Q.; Zhang, J.L. Dynamic identification method of people and vehicles based on travel time. J. Autom. Saf. Energ. 2017, 8, 38–45. [Google Scholar]
  3. Wang, X.Y.; Liu, Y.Q.; Wang, F.; Wang, J.Q.; Liu, L.P.; Wang, J.H. Feature extraction and dynamic identification of drivers’ emotions. Transp. Res. F Traffic Psychol. Behav. 2019, 62, 175–191. [Google Scholar] [CrossRef]
  4. Wang, X.Y.; Zhang, J.L.; Ban, X.G. Identification of Vehicle Driving Propensity Based on Dynamic Human-Vehicle-Environment Collaborative Deduction; Science Press: Beijing, China, 2013. [Google Scholar]
  5. Song, Y.Q. Comprehensive Evaluation Method of Driving Propensity; Shandong University of Technology: Zibo, China, 2012. [Google Scholar]
  6. Wang, M.S. The Transfer Mechanism of Vehicle Driving Propensity in Multi-Lane Dynamic Complex Environment; Shandong University of Technology: Zibo, China, 2014. [Google Scholar]
  7. Zhang, J.L.; Wang, X.Y.; Ban, X.G.; Cao, K. Prediction method of driver’s propensity adapted to driver’s dynamic feature extraction of affection. Adv. Mech. Eng. 2013, 5, 658103. [Google Scholar] [CrossRef]
  8. Wang, X.Y.; Liu, Y.Q.; Wang, J.Q.; Zhang, J.L. Study on influencing factors selection of driver’s propensity. Transp. Res. D 2019, 66, 35–48. [Google Scholar] [CrossRef]
  9. Wang, X.Y.; Liu, Y.Q.; Guo, Y.Q.; Xia, Y.Y.; Wu, C.Z. Transformation mechanism of vehicle cluster situations under dynamic evolution of driver’s propensity. Transp. Res. F Traffic Psychol. Behav. 2019, 65, 665–684. [Google Scholar] [CrossRef]
  10. Sun, Y.F. A Drunk Driving Identification Method Considering the Driver’s Propensity; Shandong University of Technology: Zibo, China, 2018. [Google Scholar]
  11. Martinez, C.M.; Heucke, M.; Wang, F.Y.; Gao, B.; Cao, D.P. Driving style recognition for intelligent vehicle control and advanced driver assistance: A survey. IEEE Trans. Intell. Transp. 2017, 46, 14–23. [Google Scholar] [CrossRef] [Green Version]
  12. Yan, F.; Liu, M.; Ding, C.; Wang, Y.; Yan, L. Driving style recognition based on electroencephalography data from a simulated driving experiment. Front. Psychol. 2019, 10, 1254. [Google Scholar] [CrossRef] [Green Version]
  13. Wang, W.; Xi, J.; Zhao, D. Driving style analysis using primitive driving patterns with Bayesian Nonparametric approaches. IEEE Trans. Intell. Transp. 2018, 20, 2986–2998. [Google Scholar] [CrossRef] [Green Version]
  14. Zhu, B.; Jiang, Y.; Zhao, J.; He, R.; Bian, N.; Deng, W. Typical-driving-style-oriented personalized adaptive cruise control design based on human driving data. Transp. Res. C 2019, 100, 274–288. [Google Scholar] [CrossRef]
  15. Feng, Y.; Pickering, S.; Chappell, E.; Iravani, P.; Brace, C. A support vector clustering based approach for driving style classification. Int. J. Mach. Learn. Comput. 2019, 9, 344–350. [Google Scholar] [CrossRef] [Green Version]
  16. Liu, T.; Yang, Y.; Huang, G.B.; Yeo, Y.K.; Lin, Z. Driver distraction detection using semi-supervised machine learning. IEEE Trans. Intell. Transp. Syst. 2015, 17, 1108–1120. [Google Scholar] [CrossRef]
  17. Chandrasiri, N.P.; Nawa, K.; Ishii, A. Driving skill classification in curve driving scenes using machine learning. J. Mod. Transp. 2016, 24, 196–206. [Google Scholar] [CrossRef] [Green Version]
  18. Constantinescu, Z.; Marinoiu, C.; Vladoiu, M. Driving style analysis using data mining techniques. Int. J. Comput. Commun. 2010, 5, 654–663. [Google Scholar] [CrossRef]
  19. Ma, H.; Xie, H.; Huang, D.; Xiong, S. Effects of driving style on the fuel consumption of city buses under different road conditions and vehicle masses. Transp. Res. D 2015, 41, 205–216. [Google Scholar] [CrossRef]
  20. Useche, S.A.; Cendales, B.; Alonso, F.; Pastor, J.C.; Montoro, L. Validation of the Multidimensional Driving Style Inventory (MDSI) in professional drivers: How does it work in transportation workers? Transp. Res. F Traffic Psychol. Behav. 2019, 67, 155–163. [Google Scholar] [CrossRef]
  21. Long, S.; Ruosong, C. Reliability and validity of the multidimensional driving style inventory in Chinese drivers. Traffic Inj. Prev. 2019, 20, 152–157. [Google Scholar] [CrossRef]
22. Kasper, D.; Weidl, G.; Dang, T.; Breuel, G.; Tamke, A.; Wedel, A.; Rosenstiel, W. Object-oriented Bayesian networks for detection of lane change maneuvers. IEEE Intell. Transp. Syst. Mag. 2012, 4, 19–31. [Google Scholar] [CrossRef]
  23. Han, W.; Wang, W.; Li, X.; Xi, J. Statistical-based approach for driving style recognition using Bayesian probability with kernel density estimation. IET Intell. Transp. Syst. 2019, 13, 22–30. [Google Scholar] [CrossRef] [Green Version]
  24. Haag, A.; Goronzy, S.; Schaich, P.; Williams, J. Emotion recognition using bio-sensors: First steps towards an automatic system. In Tutorial and Research Workshop on Affective Dialogue Systems; Springer: Berlin/Heidelberg, Germany, 2004; pp. 36–48. [Google Scholar]
  25. Wang, H.; Zhang, C.; Shi, T.; Wang, F.; Ma, S. Real-time EEG-based detection of fatigue driving danger for accident prediction. Int. J. Neural Syst. 2015, 25, 1550002. [Google Scholar] [CrossRef]
26. Healey, J.A.; Picard, R.W. Detecting stress during real-world driving tasks using physiological sensors. IEEE Trans. Intell. Transp. 2005, 6, 156–166. [Google Scholar] [CrossRef] [Green Version]
  27. Deng, C.; Wu, C.; Lyu, N.; Huang, Z. Driving style recognition method using braking characteristics based on hidden Markov model. PLoS ONE 2017, 12, e0182419. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Xue, Q.; Wang, K.; Lu, J.J.; Liu, Y. Rapid driving style recognition in car-following using machine learning and vehicle trajectory data. J. Adv. Transp. 2019, 2019. [Google Scholar] [CrossRef] [Green Version]
29. Eboli, L.; Mazzulla, G.; Pungillo, G. How drivers’ characteristics can affect driving style. Transp. Res. Procedia 2017, 27, 945–952. [Google Scholar] [CrossRef]
  30. Ren, G.; Zhang, Y.; Liu, H.; Zhang, K.; Hu, Y. A new lane-changing model with consideration of driving style. Int. J. Intell. Transp. 2019, 17, 181–189. [Google Scholar] [CrossRef]
  31. Miyajima, C.; Nishiwaki, Y.; Ozawa, K.; Wakita, T.; Itou, K.; Takeda, K.; Itakura, F. Driver modeling based on driving behavior and its evaluation in driver identification. Proc. IEEE 2007, 95, 427–437. [Google Scholar] [CrossRef]
  32. Alpar, O.; Stojic, R. Intelligent collision warning using license plate segmentation. J. Intell. Transp. Syst. 2016, 20, 487–499. [Google Scholar] [CrossRef]
33. Mohammadnazar, A.; Arvin, R.; Khattak, A.J. Classifying travelers’ driving style using basic safety messages generated by connected vehicles: Application of unsupervised machine learning. Transp. Res. C 2021, 122, 102917. [Google Scholar] [CrossRef]
  34. Qi, G.; Du, Y.; Wu, J.; Xu, M. Leveraging longitudinal driving behaviour data with data mining techniques for driving style analysis. IET Intell. Transp. Syst. 2015, 9, 792–801. [Google Scholar] [CrossRef]
  35. Mantouka, E.G.; Barmpounakis, E.N.; Vlahogianni, E.I. Identifying driving safety profiles from smartphone data using unsupervised learning. Saf. Sci. 2019, 119, 84–90. [Google Scholar] [CrossRef]
  36. Suzdaleva, E.; Nagy, I. An online estimation of driving style using data-dependent pointer model. Transp. Res. C 2018, 86, 23–36. [Google Scholar] [CrossRef]
  37. Bejani, M.M.; Ghatee, M. A context aware system for driving style evaluation by an ensemble learning on smartphone sensors data. Transp. Res. C 2018, 89, 303–320. [Google Scholar] [CrossRef]
  38. Wu, M.; Zhang, S.; Dong, Y. A novel model-based driving behavior recognition system using motion sensors. Sensors 2016, 16, 1746. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Wang, W.; Xi, J.; Chong, A.; Li, L. Driving style classification using a semisupervised support vector machine. IEEE Trans. Hum. Mach. Syst. 2017, 47, 650–660. [Google Scholar] [CrossRef]
40. Li, G.; Li, S.E.; Cheng, B.; Green, P. Estimation of driving style in naturalistic highway traffic using maneuver transition probabilities. Transp. Res. C 2017, 74, 113–125. [Google Scholar] [CrossRef]
  41. Silva, I.; Eugenio Naranjo, J. A systematic methodology to evaluate prediction models for driving style classification. Sensors 2020, 20, 1692. [Google Scholar] [CrossRef] [Green Version]
  42. Zhao, X.H.; Ding, Y.; Yao, Y.; Zhang, Y.L.; Bi, C.F.; Su, Y.L. A multinomial logit model: Safety risk analysis of interchange area based on aggregate driving behavior data. J. Saf. Res. 2022, 80, 27–38. [Google Scholar] [CrossRef]
  43. Bian, Y.; Zhang, X.L.; Wu, Y.P.; Zhao, X.H.; Liu, H.; Su, Y.L. Influence of prompt timing and messages of an audio navigation system on driver behavior on an urban expressway with five exits. Accid. Anal. Prev. 2021, 157, 106155. [Google Scholar] [CrossRef]
  44. Guo, M.; Zhao, X.H.; Yao, Y.; Yan, P.W.; Su, Y.L.; Bi, C.F.; Wu, D.Y. A study of freeway crash risk prediction and interpretation based on risky driving behavior and traffic flow data. Accid. Anal. Prev. 2021, 160, 106328. [Google Scholar] [CrossRef]
  45. Qi, J.; Jiang, G.; Li, G.; Sun, Y.; Tao, B. Surface EMG hand gesture recognition system based on PCA and GRNN. Neural. Comput. Appl. 2020, 32, 6343–6351. [Google Scholar] [CrossRef]
  46. Zhou, J.R. Pattern Recognition and Artificial Intelligence: Based on MATLAB; Tsinghua Press: Beijing, China, 2018. [Google Scholar]
47. Pan, W.C. Drosophila Optimization Algorithm: The Latest Evolutionary Computing Technology; Canghai Bookstore: Taipei, China, 2011. [Google Scholar]
48. Chen, P.W.; Lin, W.Y.; Huang, T.H.; Pan, W.T. Using fruit fly optimization algorithm optimized grey model neural network to perform satisfaction analysis for e-business service. Appl. Math. Inf. Sci. 2013, 7, 459–465. [Google Scholar] [CrossRef]
  49. Li, H.Z.; Guo, S.; Li, C.J.; Sun, J.Q. A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm. Knowl.-Based Syst. 2013, 37, 378–387. [Google Scholar] [CrossRef]
Figure 1. Basic information of the drivers.
Figure 2. Experimental vehicle.
Figure 3. Flow chart of the travel time $T_e$ acquisition algorithm.
Figure 4. Flow chart of the average speed $V_{ave}$ and maximum speed $V_{max}$ acquisition algorithm.
Figure 5. Flow chart of the rapid acceleration times $N_{acc}$, rapid deceleration times $N_{dec}$, normal acceleration time $T_{acc}$ and normal deceleration time $T_{dec}$ acquisition algorithm.
Figure 6. Flow chart of the average acceleration $A_{ave}$ and maximum acceleration $A_{max}$ acquisition algorithm.
Figure 7. Characteristic value (eigenvalue) of each principal component.
Figure 8. Flow chart of the driving propensity identification method.
Figure 9. Flow chart of the FOA-optimized GRNN model.
Figure 10. Root mean squared error convergence.
Figure 11. Driving propensity recognition APP and experimental smartphone.
Figure 12. Precision comparison of the FOA-GRNN, BPNN and GRNN identification models.
Figure 13. Recall comparison of the FOA-GRNN, BPNN and GRNN identification models.
Figure 14. F1 score comparison of the FOA-GRNN, BPNN and GRNN identification models.
Table 1. Intermediate variables of the travel time acquisition algorithm.

| Name | Symbol | Data Type | Unit | Variable Description |
| --- | --- | --- | --- | --- |
| Journey time | $T_e$ | int | s | Sum of valid travel time within the navigation segment |
| Driving speed | $V_n$ | double | m/s | Effective speed of the vehicle at the current time point |
| Driving time | $T$ | int | s | Effective time of the current vehicle travel |
Table 2. Intermediate variables of the average and maximum speed acquisition algorithm.

| Name | Symbol | Data Type | Unit | Variable Description |
| --- | --- | --- | --- | --- |
| Driving speed | $V_n$ | double | m/s | Effective speed of the vehicle at the current time point |
| Sum of speed | $V_{sum}$ | double | m/s | Total effective driving speed in the navigation section |
| Average speed | $V_{ave}$ | double | m/s | Ratio of the total effective driving speed to the travel time in the navigation segment |
| Maximum speed | $V_{max}$ | double | m/s | Maximum effective speed in the navigation section |
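The intermediate variables in Tables 1 and 2 are accumulated from the per-second speed samples returned during navigation. The Python sketch below illustrates one way this accumulation could be implemented; treating a sample as effective whenever a non-negative speed is reported, as well as the function and variable names, are assumptions made here for illustration rather than the exact logic of the APP.

```python
from typing import List

def speed_statistics(speeds_mps: List[float]) -> dict:
    """Accumulate journey time T_e, average speed V_ave and maximum speed
    V_max from per-second speed samples (m/s) of one navigation segment.
    A sample is treated as effective when it is non-negative; this
    validity rule is an assumption used only for illustration."""
    t_e = 0          # journey time T_e, s
    v_sum = 0.0      # sum of effective speeds V_sum, m/s
    v_max = 0.0      # maximum effective speed V_max, m/s
    for v_n in speeds_mps:
        if v_n < 0:          # invalid fix, skip
            continue
        t_e += 1             # one effective second of travel
        v_sum += v_n
        v_max = max(v_max, v_n)
    v_ave = v_sum / t_e if t_e > 0 else 0.0
    return {"T_e": t_e, "V_ave": v_ave, "V_max": v_max}
```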
Table 3. Intermediate variables of the algorithm for acquiring the numbers of rapid acceleration and rapid deceleration events and the durations of normal acceleration and normal deceleration.

| Name | Symbol | Data Type | Unit | Variable Description |
| --- | --- | --- | --- | --- |
| Rapid acceleration times | $N_{acc}$ | int | n | Number of sudden acceleration events in the navigation section |
| Rapid deceleration times | $N_{dec}$ | int | n | Number of sudden deceleration events in the navigation section |
| Normal acceleration time | $T_{acc}$ | int | s | Total duration of normal acceleration behavior in the navigation segment |
| Normal deceleration time | $T_{dec}$ | int | s | Total duration of normal deceleration behavior in the navigation segment |
| Acceleration | $A_n$ | float | m/s² | Effective acceleration of the vehicle at the current time point |
| Start time | sTime | double | | Start moment of the driving behavior event |
| End time | eTime | double | | End moment of the driving behavior event |
| Duration | time | double | | Duration of the driving behavior event |
Table 4. Intermediate variables of the average acceleration and maximum acceleration acquisition algorithm.

| Name | Symbol | Data Type | Unit | Variable Description |
| --- | --- | --- | --- | --- |
| Average acceleration | $A_{ave}$ | float | m/s² | Average vehicle acceleration during normal acceleration time |
| Maximum acceleration | $A_{max}$ | float | m/s² | Maximum acceleration of the vehicle during normal acceleration time |
| Sum of acceleration | $A_{sum}$ | float | m/s² | Accumulated sum of acceleration per second during normal acceleration time |
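Tables 3 and 4 describe event counting and averaging over the per-second acceleration $A_n$. A minimal sketch of such counting is given below; the thresholds separating rapid from normal acceleration and deceleration (3 m/s² and 0.5 m/s² here), like the function name, are illustrative assumptions rather than the values used in the experiments.

```python
def acceleration_statistics(acc_mps2, rapid_thresh=3.0, normal_thresh=0.5):
    """Count rapid acceleration/deceleration events and accumulate the
    duration of normal acceleration/deceleration from per-second
    acceleration samples A_n (m/s^2). Thresholds are illustrative."""
    n_acc = n_dec = 0          # rapid acceleration / deceleration event counts
    t_acc = t_dec = 0          # normal acceleration / deceleration durations, s
    a_sum = 0.0                # accumulated acceleration during normal acceleration
    a_max = 0.0                # maximum acceleration during normal acceleration
    in_rapid_acc = in_rapid_dec = False
    for a_n in acc_mps2:
        if a_n >= rapid_thresh:
            if not in_rapid_acc:          # count each rapid event once
                n_acc += 1
            in_rapid_acc, in_rapid_dec = True, False
        elif a_n <= -rapid_thresh:
            if not in_rapid_dec:
                n_dec += 1
            in_rapid_dec, in_rapid_acc = True, False
        else:
            in_rapid_acc = in_rapid_dec = False
            if a_n >= normal_thresh:      # one second of normal acceleration
                t_acc += 1
                a_sum += a_n
                a_max = max(a_max, a_n)
            elif a_n <= -normal_thresh:   # one second of normal deceleration
                t_dec += 1
    a_ave = a_sum / t_acc if t_acc > 0 else 0.0
    return {"N_acc": n_acc, "N_dec": n_dec, "T_acc": t_acc,
            "T_dec": t_dec, "A_ave": a_ave, "A_max": a_max}
```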
Table 5. Driving characteristic variables and representation symbols.

| Name | Symbol | Name | Symbol |
| --- | --- | --- | --- |
| Age (year) | $a$ | Rapid acceleration times | $N_{acc}$ |
| Driving age (year) | $DA$ | Rapid deceleration times | $N_{dec}$ |
| Gender | $G$ | Acceleration time (s) | $T_{acc}$ |
| Journey time (s) | $T$ | Deceleration time (s) | $T_{dec}$ |
| Average speed (m/s) | $V_{ave}$ | Average acceleration (m/s²) | $A_{ave}$ |
| Maximum speed (m/s) | $V_{max}$ | Maximum acceleration (m/s²) | $A_{max}$ |
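The twelve variables of Table 5 form the input vector of the subsequent analysis. A minimal container such as the following (field names are assumed) makes the ordering of that vector explicit.

```python
from dataclasses import dataclass, astuple

@dataclass
class DrivingFeatures:
    """The 12 driving characteristic variables of Table 5 (illustrative names)."""
    age: int           # a, years
    driving_age: int   # DA, years
    gender: int        # G, 0 female / 1 male
    journey_time: int  # T, s
    v_ave: float       # average speed, m/s
    v_max: float       # maximum speed, m/s
    n_acc: int         # rapid acceleration times
    n_dec: int         # rapid deceleration times
    t_acc: int         # normal acceleration time, s
    t_dec: int         # normal deceleration time, s
    a_ave: float       # average acceleration, m/s^2
    a_max: float       # maximum acceleration, m/s^2

    def as_vector(self):
        return list(astuple(self))
```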
Table 6. Partial experimental data.

| Number | a | DA | G ¹ | T | $V_{ave}$ | $V_{max}$ | $N_{acc}$ | $N_{dec}$ | $T_{acc}$ | $T_{dec}$ | $A_{ave}$ | $A_{max}$ |
013712080611.0920.566484750.4351.943
3712080011.20205284730.4792.012
3712080211.1719.726587750.4781.735
023410175912.0521.117595760.7152.091
3410175412.121.396589720.5682.423
3410174711.75207491740.6722.271
03266169113.0221.1113995770.7632.792
266169812.9423.33111090750.8922.878
266170212.8921.948892780.7812.973
234012170712.8122.591097780.8922.562
4012169812.9423.611695820.8313.261
4012170212.8922.410792780.7853.178
243610181810.8219.725586830.4721.738
3610180811.0518.896393870.4181.351
3610182310.7518.894183880.3781.943
49285081510.8620.285287860.4281.561
285082310.7319.173089840.4961.672
285081310.8918.895283870.3791.398
504216170712.8122.510793780.7523.287
4216171112.7522.511699740.8093.012
4216170512.8523.058997800.8012.798
¹ 0 for female and 1 for male.
Table 7. Preliminary judgment result of driving propensity.

| Type of Driving Propensity | Driver Numbers |
| --- | --- |
| Aggressive | 03, 08, 12, 13, 20, 23, 27, 30, 33, 35, 41, 44, 46, 47, 50 |
| Normal | 02, 06, 07, 09, 11, 15, 16, 17, 22, 25, 28, 29, 34, 37, 38, 40, 43, 48 |
| Conservative | 01, 04, 05, 10, 14, 18, 19, 21, 24, 26, 31, 32, 36, 39, 42, 45, 49 |
Table 8. Total variance explained by each principal component.

| Component | Total (Initial Eigenvalues) | Percentage of Variance (Initial) | Cumulative Percentage (Initial) | Total (Extracted Loading Sum of Squares) | Percentage of Variance (Extracted) | Cumulative Percentage (Extracted) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 6.640 | 55.333% | 55.333% | 6.640 | 55.333% | 55.333% |
| 2 | 1.964 | 16.367% | 71.700% | 1.964 | 16.367% | 71.700% |
| 3 | 0.922 | 7.683% | 79.382% | 0.922 | 7.683% | 79.382% |
| 4 | 0.594 | 4.947% | 84.329% | 0.594 | 4.947% | 84.329% |
| 5 | 0.478 | 3.983% | 88.311% | 0.478 | 3.983% | 88.311% |
| 6 | 0.439 | 3.656% | 91.967% | | | |
| 7 | 0.309 | 2.574% | 94.541% | | | |
| 8 | 0.267 | 2.225% | 96.766% | | | |
| 9 | 0.181 | 1.506% | 98.272% | | | |
| 10 | 0.130 | 1.080% | 99.352% | | | |
| 11 | 0.073 | 0.610% | 99.962% | | | |
| 12 | 0.005 | 0.038% | 100.000% | | | |
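The eigenvalues and variance percentages of Table 8, and the component scores of Table 9, correspond to a standard PCA of the standardized feature matrix. The sketch below shows how such figures could be obtained with scikit-learn; the placeholder `features` array stands in for the experimental data, and retaining the first five components follows the extracted loadings shown in Table 8.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Placeholder for the (samples x 12) matrix of Table 5 variables; random
# data is used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
features = rng.normal(size=(2000, 12))

X = StandardScaler().fit_transform(features)        # standardize the 12 variables
pca = PCA().fit(X)

pct_variance = 100 * pca.explained_variance_ratio_  # "Percentage of Variance" column
cumulative = np.cumsum(pct_variance)                 # "Cumulative Percentage" column
print(pca.explained_variance_.round(3))              # eigenvalues ("Total" column)
print(cumulative.round(3))

# Table 8 extracts the first five components (88.311% cumulative variance);
# their scores form the reduced representation listed in Table 9.
scores = pca.transform(X)[:, :5]
print(scores.shape)   # (2000, 5)
```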
Table 9. Score of each principal component.

| Test Sample | First Principal Component | Second Principal Component | Third Principal Component | Fourth Principal Component | Fifth Principal Component |
| --- | --- | --- | --- | --- | --- |
| 1 | 1.5103 | −1.3385 | 0.9640 | −3.3418 | −0.7096 |
| 2 | 1.1362 | −1.3688 | 0.8263 | 1.6209 | 0.0544 |
| 3 | 1.6318 | −1.3398 | 1.0407 | 0.1206 | 0.4822 |
| 4 | 1.4625 | −1.3542 | 0.9370 | 0.5988 | 0.8418 |
| 5 | 1.6846 | −1.2961 | 0.8146 | 1.0791 | 0.2296 |
| … | … | … | … | … | … |
| 1001 | −1.1198 | 1.1781 | 0.2660 | −1.0783 | −0.3280 |
| 1002 | −1.3763 | −0.0009 | 0.6690 | −0.0936 | 0.1397 |
| 1003 | −0.8410 | 0.0877 | 0.5144 | 0.8409 | −0.0655 |
| 1004 | −1.2461 | 0.0367 | 0.6016 | −0.3086 | −0.3139 |
| 1005 | −1.0938 | 0.161 | 0.6866 | −0.1223 | 1.3106 |
| … | … | … | … | … | … |
| 1996 | −1.0287 | −0.1602 | 0.8467 | −0.7797 | 0.7990 |
| 1997 | −0.9700 | −0.1123 | −0.6619 | −0.8127 | −1.0078 |
| 1998 | −0.9648 | −0.1336 | 0.6619 | −0.8127 | −1.0078 |
| 1999 | −1.3439 | −0.1849 | 0.8605 | −0.5054 | 0.1205 |
| 2000 | −1.5796 | −0.2254 | 0.8340 | −0.9133 | −0.3929 |
Table 10. Input results for aggressive test samples.

| Number | Output | Number | Output | Number | Output | Number | Output | Number | Output |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.0132 | 17 | 0.0072 | 33 | 0.0000 | 49 | 0.1339 | 65 | 1.0283 |
| 2 | 0.0085 | 18 | 0.0007 | 34 | 0.2195 | 50 | 0.0092 | 66 | 0.0000 |
| 3 | 0.0000 | 19 | 0.0000 | 35 | 0.1078 | 51 | 0.0000 | 67 | 0.0000 |
| 4 | 0.0000 | 20 | 0.0078 | 36 | 0.0000 | 52 | 0.1982 | 68 | 0.0000 |
| 5 | 0.0268 | 21 | 0.0000 | 37 | 0.0000 | 53 | 0.0000 | 69 | 0.1392 |
| 6 | 0.0091 | 22 | 0.0000 | 38 | 1.9938 | 54 | 0.0000 | 70 | 0.0000 |
| 7 | 0.9932 | 23 | 0.0000 | 39 | 0.2193 | 55 | 0.2012 | 71 | 0.0062 |
| 8 | 0.0000 | 24 | 0.0012 | 40 | 0.0016 | 56 | 0.0062 | 72 | 0.0301 |
| 9 | 0.0073 | 25 | 0.0035 | 41 | 0.0000 | 57 | 0.0032 | 73 | 0.0000 |
| 10 | 0.0000 | 26 | 0.0000 | 42 | 0.0021 | 58 | 0.0000 | 74 | 0.0081 |
| 11 | 0.1026 | 27 | 0.1067 | 43 | 0.0039 | 59 | 0.0000 | 75 | 0.0000 |
| 12 | 0.0000 | 28 | 0.0143 | 44 | 0.0093 | 60 | 0.0089 | 76 | 0.1061 |
| 13 | 0.0023 | 29 | 0.0000 | 45 | 0.2792 | 61 | 0.0002 | 77 | 0.0102 |
| 14 | 0.0000 | 30 | 0.0000 | 46 | 0.0000 | 62 | 0.1026 | 78 | 0.0000 |
| 15 | 0.2004 | 31 | 0.0000 | 47 | 0.0000 | 63 | 0.0401 | 79 | 0.0000 |
| 16 | 0.0017 | 32 | 0.0072 | 48 | 0.0000 | 64 | 0.0088 | 80 | 0.0000 |
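The outputs in Table 10 are the responses of the trained network for the aggressive test samples. The core of a GRNN is a Gaussian-kernel weighted average of the training targets, which the following minimal sketch reproduces; the spread `sigma` and the numeric coding of the propensity classes are left as inputs because they are not restated in this table.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    """Minimal generalized regression neural network (Nadaraya-Watson form):
    each prediction is a Gaussian-kernel weighted average of the training
    targets. sigma is the smoothing (spread) parameter."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.asarray(X_test, dtype=float):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # pattern-layer activations
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)
```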
Table 11. The final verification results of 20 drivers in the real vehicle experiment.

| Number | Accuracy | Driving Propensity | Number | Accuracy | Driving Propensity |
| --- | --- | --- | --- | --- | --- |
| 12 | 95.1% | Aggressive | 25 | 93.3% | Normal |
| 34 | 92.9% | Normal | 44 | 95.1% | Aggressive |
| 09 | 93.3% | Normal | 32 | 96.2% | Conservative |
| 42 | 94.5% | Conservative | 26 | 92.9% | Conservative |
| 17 | 92.2% | Normal | 27 | 94.5% | Aggressive |
| 03 | 96.2% | Aggressive | 40 | 92.9% | Normal |
| 21 | 95.1% | Conservative | 38 | 92.2% | Normal |
| 28 | 92.9% | Normal | 47 | 96.2% | Aggressive |
| 36 | 94.5% | Conservative | 04 | 95.1% | Conservative |
| 19 | 95.1% | Conservative | 15 | 92.9% | Normal |
Table 12. GRNN driving propensity identification results.

| Number | σ | Accuracy (%) | Number | σ | Accuracy (%) |
| --- | --- | --- | --- | --- | --- |
| 1 | 50 | 83.3 | 6 | 0.8 | 87.1 |
| 2 | 15 | 84.6 | 7 | 0.5 | 86.3 |
| 3 | 10 | 86.7 | 8 | 0.1 | 89.2 |
| 4 | 5 | 85.4 | 9 | 0.1 | 89.2 |
| 5 | 1 | 87.5 | 10 | 0.01 | 87.9 |
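Table 12 varies the GRNN spread σ by hand; in the FOA-GRNN model the fruit fly optimization algorithm performs this search automatically by minimizing a validation error (Figure 10). The sketch below follows the usual FOA scheme, in which each fly's candidate σ is the reciprocal of its distance to the origin; the swarm size, iteration count and random search ranges are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def foa_optimize_sigma(fitness, iters=50, swarm=20, seed=0):
    """Fruit fly optimization of the GRNN spread. Each fly's candidate sigma
    is the reciprocal of its distance to the origin, and the swarm re-centres
    on the best fly every generation (standard FOA scheme)."""
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(0, 1, size=2)      # initial swarm location
    best_sigma, best_rmse = None, np.inf
    for _ in range(iters):
        x = x_axis + rng.uniform(-1, 1, swarm)      # random flight of each fly
        y = y_axis + rng.uniform(-1, 1, swarm)
        dist = np.sqrt(x ** 2 + y ** 2)
        sigma = 1.0 / (dist + 1e-12)                # smell concentration judgment value
        rmse = np.array([fitness(s) for s in sigma])
        i = int(np.argmin(rmse))
        if rmse[i] < best_rmse:                     # keep the best smell concentration
            best_rmse, best_sigma = rmse[i], sigma[i]
            x_axis, y_axis = x[i], y[i]             # swarm flies towards the best fly
    return best_sigma, best_rmse

# fitness(sigma) could, for example, return the RMSE of grnn_predict(...) with
# that sigma on a held-out validation split.
```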
Table 13. BPNN driving propensity identification results.

| Number | lr | goal | Accuracy (%) |
| --- | --- | --- | --- |
| 1 | 0.01 | 0.1 | 88.3 |
| 2 | 0.01 | 0.01 | 90.8 |
| 3 | 0.01 | 0.001 | 87.5 |
| 4 | 0.05 | 0.1 | 88.7 |
| 5 | 0.05 | 0.01 | 89.6 |
| 6 | 0.05 | 0.001 | 86.7 |
| 7 | 0.1 | 0.1 | 88.3 |
| 8 | 0.1 | 0.01 | 91.3 |
| 9 | 0.1 | 0.001 | 89.6 |
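In Table 13 the BPNN is tuned over the learning rate lr and the training goal (the target training error). A hedged sketch of one such run using scikit-learn's MLPClassifier is shown below, where training simply stops once the loss falls below the goal; the hidden layer size, solver and epoch cap are assumptions, since the original network configuration is not repeated in this table.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_bpnn(X, y, lr, goal, max_epochs=2000, hidden=(10,), seed=0):
    """Train a back-propagation network until the training loss drops below
    `goal` (the training target of Table 13) or an epoch cap is reached.
    Hidden layer size and epoch cap are illustrative choices."""
    net = MLPClassifier(hidden_layer_sizes=hidden, solver="sgd",
                        learning_rate_init=lr, random_state=seed)
    classes = np.unique(y)
    for _ in range(max_epochs):
        net.partial_fit(X, y, classes=classes)   # one pass of back-propagation
        if net.loss_ < goal:                     # training goal reached
            break
    return net

# Grid of Table 13: lr in {0.01, 0.05, 0.1} and goal in {0.1, 0.01, 0.001};
# accuracy would then be evaluated on the held-out test samples.
```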
Table 14. FOA-GRNN driving propensity identification results.

| Identification Results | Aggressive (Pre-Judgment Result) | Normal (Pre-Judgment Result) | Conservative (Pre-Judgment Result) |
| --- | --- | --- | --- |
| Aggressive (real result) | 76 | 3 | 1 |
| Normal (real result) | 4 | 72 | 4 |
| Conservative (real result) | 1 | 5 | 74 |
Table 15. Various evaluation indicators of the FOA-GRNN driving propensity identification model.

| Evaluation Indicators | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
| --- | --- | --- | --- | --- |
| Aggressive | 92.5 | 93.83 | 95 | 94.56 |
| Normal | 92.5 | 90 | 90 | 90 |
| Conservative | 92.5 | 93.67 | 92.5 | 93.08 |
Table 16. GRNN driving propensity identification results.

| Identification Results | Aggressive (Pre-Judgment Result) | Normal (Pre-Judgment Result) | Conservative (Pre-Judgment Result) |
| --- | --- | --- | --- |
| Aggressive (real result) | 72 | 6 | 2 |
| Normal (real result) | 6 | 66 | 8 |
| Conservative (real result) | 3 | 7 | 70 |
Table 17. Various evaluation indicators of the GRNN driving propensity identification model.

| Evaluation Indicators | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
| --- | --- | --- | --- | --- |
| Aggressive | 85.83 | 88.89 | 90 | 89.44 |
| Normal | 85.83 | 83.54 | 82.5 | 83.02 |
| Conservative | 85.83 | 87.5 | 87.5 | 87.5 |
Table 18. BPNN driving propensity identification results.

| Identification Results | Aggressive (Pre-Judgment Result) | Normal (Pre-Judgment Result) | Conservative (Pre-Judgment Result) |
| --- | --- | --- | --- |
| Aggressive (real result) | 74 | 5 | 1 |
| Normal (real result) | 5 | 69 | 6 |
| Conservative (real result) | 2 | 6 | 72 |
Table 19. Various evaluation indicators of the BPNN driving propensity identification model.

| Evaluation Indicators | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) |
| --- | --- | --- | --- | --- |
| Aggressive | 89.58 | 91.36 | 92.5 | 91.93 |
| Normal | 89.58 | 86.25 | 86.25 | 86.25 |
| Conservative | 89.58 | 91.14 | 90 | 90.57 |
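The indicators in Tables 15, 17 and 19 follow the usual definitions computed from the corresponding confusion matrices, with precision taken over the pre-judgment columns and recall over the identification rows. As a quick check, the sketch below recomputes the BPNN figures of Table 19 from the counts of Table 18.

```python
import numpy as np

# Confusion matrix of Table 18 (rows: identification result,
# columns: pre-judgment result); order: aggressive, normal, conservative.
cm = np.array([[74, 5, 1],
               [5, 69, 6],
               [2, 6, 72]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)        # per-class precision over the columns
recall = tp / cm.sum(axis=1)           # per-class recall over the rows
f1 = 2 * precision * recall / (precision + recall)
accuracy = tp.sum() / cm.sum()

print((100 * accuracy).round(2))       # 89.58, as in Table 19
print((100 * precision).round(2))      # [91.36, 86.25, 91.14]
print((100 * recall).round(2))         # [92.5, 86.25, 90.0]
print((100 * f1).round(2))             # [91.93, 86.25, 90.57]
```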
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
