Article

Conflict Management for Target Recognition Based on PPT Entropy and Entropy Distance

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
* Authors to whom correspondence should be addressed.
Energies 2021, 14(4), 1143; https://doi.org/10.3390/en14041143
Submission received: 23 December 2020 / Revised: 8 February 2021 / Accepted: 17 February 2021 / Published: 21 February 2021

Abstract

Conflicting evidence affects the final target recognition result, so managing conflicting evidence efficiently can help to improve the belief degree of the true target. Existing approaches based on belief entropy use the belief entropy itself to measure evidence conflict; however, characterizing evidence conflict through belief entropy alone is not convincing. To solve this problem, we comprehensively consider the influence of both the belief entropy itself and the mutual belief entropy on conflict measurement, and propose a novel approach based on an improved belief entropy and an entropy distance. The improved belief entropy, built on the pignistic probability transformation function and therefore named pignistic probability transformation (PPT) entropy, measures the conflict between pieces of evidence from the perspective of self-belief entropy. Compared with state-of-the-art belief entropies, it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to estimate the degree of evidence conflict more reasonably. The entropy distance is a new distance measure used to quantify the conflict between pieces of evidence from the perspective of mutual belief entropy. The two measures are mutually complementary. The results of numerical examples and target recognition applications demonstrate that the proposed approach converges faster and achieves a higher belief degree of the true target than existing methods.

1. Introduction

Information is affected by various subjective factors and by the objective environment, and is therefore uncertain to some degree. How to measure this uncertainty has become an open issue. Several theoretical tools have been proposed, including probability theory [1], fuzzy set theory [2,3], D-numbers [4,5], Z-numbers [6,7], rough set theory [8,9], Dempster-Shafer (D-S) evidence theory [10,11], fractal theory [12,13], etc. D-S evidence theory is one of the most effective tools among them: it not only allocates belief to the power set of propositions [14], but also accepts incomplete models without prior probabilities [15]. For this reason, D-S evidence theory has been widely applied in risk analysis [16,17], uncertainty measurement [18], fault diagnosis [19], decision making [20], and so on.
However, a counterintuitive result is generated when there is a high degree of conflict between pieces of evidence. To address this issue, researchers have proposed a variety of improved methods, which can be divided into those that modify Dempster's combination rule [21,22,23] and those that modify the original evidence. For the first class, Smets et al. argue that the conflict should be assigned to the empty set [21]. Lefevre et al. propose a modified method that proportionally distributes the conflicting information to the focal element sets [22]. The flaw of modifying Dempster's combination rule, however, is that desirable properties such as commutativity and associativity are destroyed. Therefore, many researchers are inclined to modify the original evidence instead. For the second class, the initial evidence is modified by corresponding weights, and rational combination results can then be achieved with the classical Dempster's rule; the most commonly used method is therefore weighted evidence combination. Jiang et al.'s method obtains the weight of evidence by jointly using the distance of evidence and Deng entropy [24]. Tao et al. propose a modified average method to combine conflicting evidence based on belief entropy and the induced ordered weighted averaging operator [25]. Li et al. define a new discount coefficient to modify the original evidence based on belief entropy and negation [26]. In contrast, some existing methods [27,28,29] are based on belief entropy and use the belief entropy itself to measure evidence conflict. Obviously, characterizing evidence conflict through belief entropy alone is not convincing: in particular, when the belief entropies of conflicting and normal evidence are close, the conflicting evidence is assigned a weight almost equal to that of normal evidence, and the conflict is not handled effectively.
To overcome these shortcomings, we propose a novel method based on an improved entropy and an entropy distance, taking into account the impacts of both self-belief entropy and mutual belief entropy on conflict management. The improved belief entropy is used to quantify the uncertainty of the evidence, so as to measure the degree of conflict from a global perspective. PPT entropy introduces the pignistic probability transformation function into Yan et al.'s entropy [27]. Compared with state-of-the-art belief entropies [27,30,31,32,33,34], it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to better estimate the degree of evidence conflict. Given this good performance, this paper chooses PPT entropy to measure the uncertainty of a body of evidence (BOE). Entropy distance describes the entropy difference between pieces of evidence, so as to measure the degree of conflict from a local perspective. These two measures are mutually complementary.
The proposed method deals with conflicting evidence in a more comprehensive way, so that the conflict measurement is more accurate and reasonable. Whether the belief entropies of conflicting and normal evidence are equal, close, or far apart, it can still identify the conflicting evidence and assign it a smaller weight. It not only overcomes the drawbacks of existing methods but also has better convergence performance. Two numerical examples in the experiments illustrate that the novel method is feasible and superior in dealing with conflicting evidence: the belief degree of the correct hypothesis increases by 3.8% and 1.6%, respectively, compared with existing methods. Furthermore, the belief degree of the true target increases to 98.8% in the target recognition application. In this research, the support degree of a BOE is a fraction with PPT entropy as the numerator and the sum of entropy distances as the denominator: the greater the conflict between pieces of evidence, the greater the entropy distance and the smaller the weight of the conflicting evidence.
In this paper, two contributions can be summarized as follows:
  • We propose PPT entropy based on the pignistic probability transformation function. It fully considers the influence of the intersection between propositions on uncertainty and makes the uncertainty measurement of evidence more accurate and applicable over a wider range.
  • A novel method for conflict management is presented based on PPT entropy and entropy distance. It measures the conflict between pieces of evidence from the perspectives of both the belief entropy itself and the mutual belief entropy. It not only has better convergence performance but also obtains a higher belief value for the correct hypothesis and the true target.
To facilitate our discussion, Section 2 introduces some basic concepts. In Section 3, we propose PPT entropy and entropy distance; the properties and requirements of behaviour of PPT entropy are discussed, and some examples are provided to illustrate the validity of the proposed belief entropy. On this basis, a novel method of conflict management is presented. In Section 4, an application in target recognition is shown to verify the effectiveness of the proposed method. Section 5 concludes the paper.

2. Preliminaries

In this section, some preliminaries are briefly introduced, including D-S evidence theory, several typical belief entropies and pignistic probability transform, for the purpose of understanding the descriptions in the rest of this paper.

2.1. D-S Evidence Theory

Suppose $\Theta$ is a set of mutually exclusive and exhaustive elements $F_i\ (i = 1, 2, 3, \ldots, N)$; it can be defined as [32]
$$\Theta = \{F_1, F_2, F_3, \ldots, F_N\}$$
where $\Theta$ is called the frame of discernment (FOD), and $F_i$ is named a single-element proposition or subset. We define $2^{\Theta}$ as the power set, which contains $2^N$ elements and can be described as
$$2^{\Theta} = \{\varnothing, \{F_1\}, \{F_2\}, \ldots, \{F_N\}, \{F_1, F_2\}, \{F_1, F_3\}, \ldots, \Theta\}$$
where $\varnothing$ is the empty set in Equation (2). The basic probability assignment (BPA) function $m$ is defined as a mapping of the power set $2^{\Theta}$ to $[0,1]$,
$$m: 2^{\Theta} \to [0, 1]$$
which satisfies
$$m(\varnothing) = 0, \qquad 0 \le m(A) \le 1, \qquad \sum_{A \subseteq \Theta} m(A) = 1$$
where the mass function $m(A)$ represents the degree of support for $A$, and $A$ is called a focal element, proposition, or subset. This paper studies classical D-S evidence theory, so the mass function $m(\varnothing)$ equals 0.
The belief function $Bel(A)$ can be defined as [31]
$$Bel(A) = \sum_{B \subseteq A} m(B)$$
where $Bel(A)$ represents the total belief in the proposition $A$. In D-S evidence theory, two BPAs can be combined with Dempster's rule of combination, defined as follows [35]:
$$m(A) = (m_1 \oplus m_2)(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$$
in which
$$K = \sum_{B \cap C = \varnothing} m_1(B)\, m_2(C)$$
where $\oplus$ represents Dempster's combination rule. $K$ is called the conflict coefficient and lies in $[0, 1]$: the larger $K$ is, the greater the conflict between the two pieces of evidence.
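To make the combination rule concrete, here is a minimal Python sketch, assuming each BPA is represented as a dict mapping frozenset focal elements to masses (the names `conflict_k` and `combine` are our own, not from the paper):

```python
from itertools import product

def conflict_k(m1, m2):
    """Conflict coefficient K: total mass falling on empty intersections."""
    return sum(m1[b] * m2[c] for b, c in product(m1, m2) if not (b & c))

def combine(m1, m2):
    """Dempster's rule: m(A) = sum_{B∩C=A} m1(B)*m2(C) / (1 - K)."""
    k = conflict_k(m1, m2)
    if k == 1.0:
        raise ValueError("K = 1: totally conflicting evidence, rule undefined")
    m = {}
    for b, c in product(m1, m2):
        a = b & c
        if a:  # closed world: mass on the empty set is renormalized away
            m[a] = m.get(a, 0.0) + m1[b] * m2[c] / (1.0 - k)
    return m

# Example usage with two toy BPAs over {A, B}:
mA = {frozenset({'A'}): 0.6, frozenset({'A', 'B'}): 0.4}
mB = {frozenset({'A'}): 0.5, frozenset({'B'}): 0.5}
print(combine(mA, mB))  # {A}: ~0.714, {B}: ~0.286 (here K = 0.3)
```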

2.2. Entropy

Several typical belief entropies are briefly introduced.

2.2.1. Shannon Entropy

Shannon entropy is a measure of the uncertainty of an information volume and is denoted by [27]:
$$H = -\sum_{i=1}^{N} p_i \log_2 p_i$$
where $N$ is the number of basic states in the system and $p_i$ is the probability of state $i$, satisfying $\sum_{i=1}^{N} p_i = 1$. The larger the Shannon entropy $H$ is, the higher the degree of uncertainty.

2.2.2. Deng Entropy

Shannon entropy made a great contribution to the measurement of uncertainty, but it has limitations when dealing with a BPA, because it measures uncertainty based on probabilities. To solve this problem, Deng proposed Deng entropy in the framework of D-S evidence theory. It is a generalization of Shannon entropy and is defined as [30]:
$$E_d(m) = \sum_{A \subseteq \Theta} m(A) \log_2\!\left(2^{|A|} - 1\right) - \sum_{A \subseteq \Theta} m(A) \log_2 m(A)$$
where $|A|$ is the cardinality of the proposition $A$, that is, the total number of single-element subsets contained in $A$.

2.2.3. Zhou et al.’s Entropy

Zhou et al.'s belief entropy considers the influence of the scale of the BOE on uncertainty and is denoted by [31]:
$$E_{Md}(m) = \sum_{A \subseteq \Theta} m(A) \log_2\!\left(\frac{2^{|A|} - 1}{e^{(|A|-1)/|S|}}\right) - \sum_{A \subseteq \Theta} m(A) \log_2 m(A)$$
where $|S|$ is the cardinality of $\Theta$, that is, the total number of single-element subsets contained in the FOD.

2.2.4. Yan et al.’s Entropy

Yan et al.'s belief entropy uses the belief function to extend the uncertainty measure and is an improvement on Zhou et al.'s belief entropy [27]:
$$H_n(m) = \sum_{A \subseteq \Theta} m(A) \log_2\!\left(\frac{2^{|A|} - 1}{e^{(|A|-1)/|S|}}\right) - \sum_{A \subseteq \Theta} m(A) \log_2\!\left(\frac{m(A) + Bel(A)}{2}\right)$$
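For later comparisons, these three entropies can be evaluated directly from a BPA. The sketch below reuses the dict-of-frozensets representation from the Section 2.1 sketch; `n` stands for $|S|$, the number of singletons in the FOD, and the function names are illustrative:

```python
import math

def bel(m, a):
    """Belief function Bel(A): total mass of all subsets of A."""
    return sum(v for b, v in m.items() if b <= a)

def deng_entropy(m):
    return -sum(v * math.log2(v / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)

def zhou_entropy(m, n):
    return -sum(v * math.log2(v * math.exp((len(a) - 1) / n) / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)

def yan_entropy(m, n):
    return -sum(v * math.log2((v + bel(m, a)) / 2
                              * math.exp((len(a) - 1) / n) / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)
```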

2.3. Pignistic Probability Transform

Pignistic probability transform is defined as follows [36]:
$$BetP_m(A) = \sum_{B \subseteq \Theta} \frac{|A \cap B|}{|B|} \cdot \frac{m(B)}{1 - m(\varnothing)}$$
where $|A \cap B|$ is the cardinality of the intersection of $A$ and $B$. The mass function $m(\varnothing)$ equals 0 in a closed world (i.e., in classical D-S evidence theory), so it can be simplified to the following form:
$$BetP_m(A) = \sum_{B \subseteq \Theta} \frac{|A \cap B|}{|B|}\, m(B)$$
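In code, the closed-world pignistic transform is a one-line sum over the focal elements (same dict-of-frozensets convention as in the earlier sketches; the name `betp` is ours):

```python
def betp(m, a):
    """Closed-world pignistic probability BetP_m(A) of a set A."""
    return sum(len(a & b) / len(b) * v for b, v in m.items())
```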

3. Proposed Method for Conflict Management

In evidence theory, a counterintuitive result is generated when combining highly conflicting evidence. To address this problem, it is necessary to assign weights to the evidence reasonably. Some existing approaches use self-belief entropy alone to determine the weights of evidence; however, this fails to identify the conflicting evidence when the belief entropies of conflicting and normal evidence are equal or close. In this paper, by comprehensively considering the influence of self-belief entropy and mutual belief entropy on the conflict, we propose a novel method based on PPT entropy and entropy distance. In the following subsections, we first propose PPT entropy and then entropy distance; on this basis, a novel method of conflict management is presented.

3.1. PPT Entropy

In D-S evidence theory, the pignistic probability transform is described as allocating the belief of a proposition equally to its elements [37]; the essence is the influence of the intersection between propositions on a proposition's belief degree. In this work, we introduce the pignistic probability transformation function to extend the method of uncertainty measurement. The proposed belief entropy is denoted as follows:
$$H_p(m) = -\sum_{A \subseteq \Theta} m(A) \log_2\!\left(\frac{m(A) + BetP_m(A)}{2} \cdot \frac{e^{(|A|-1)/|S|}}{2^{|A|} - 1}\right)$$
where $BetP_m(A)$ is the pignistic probability transformation function of Equation (11). The proposed belief entropy is named PPT entropy. From Equation (12) we can infer that PPT entropy degenerates into Zhou et al.'s belief entropy when multi-element propositions do not intersect, and into Shannon entropy when there are only single-element propositions.
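A direct transcription of Equation (12), reusing the `betp` helper sketched in Section 2.3 (representation and names are our own):

```python
import math

def ppt_entropy(m, n):
    """PPT entropy H_p of a BPA m over a FOD with n singletons."""
    return -sum(v * math.log2((v + betp(m, a)) / 2
                              * math.exp((len(a) - 1) / n) / (2 ** len(a) - 1))
                for a, v in m.items() if v > 0)
```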

3.2. The Properties of PPT Entropy

3.2.1. Probability Consistency

When $m$ is a Bayesian BPA, PPT entropy must degenerate into Shannon entropy.
Proof. 
When $m$ is Bayesian, every focal element satisfies $|A| = 1$, so $BetP_m(A) = m(A)$, $2^{|A|} - 1 = 1$, and $e^{(|A|-1)/|S|} = 1$.
Then, we can further get:
$$H_p(m) = -\sum_{A \subseteq \Theta} m(A) \log_2\!\left(\frac{m(A) + m(A)}{2}\right) = -\sum_{A \subseteq \Theta} m(A) \log_2 m(A)$$
Hence, the probability consistency holds.  □

3.2.2. Set Consistency

PPT entropy satisfies the property of set consistency if, whenever there exists a set $A$ such that $m(A) = 1$, it satisfies the equation:
$$H_p(m) = \log_2(|A|)$$
Proof. 
Suppose that the FOD is $\Theta = \{\theta_1, \theta_2, \theta_3\}$ and the BPA is $m(A) = 1$ with $A = \Theta$ (i.e., $|A| = |\Theta| = 3$).
We can get
$$H_p(m) = -1 \times \log_2\!\left(\frac{1 + \frac{3}{3} \times 1}{2} \cdot \frac{e^{(3-1)/3}}{2^3 - 1}\right) = \log_2 3.6, \qquad \log_2(|A|) = \log_2 3$$
obviously
$$H_p(m) \neq \log_2(|A|)$$
Hence, the set consistency does not hold. □

3.2.3. Range

Mathematically, the value range of PPT entropy is [ 0 , + ) .
First, PPT entropy is always non-negative.
Proof. 
We know
$$0 \le m(A) \le 1 \;\Rightarrow\; 0 \le \frac{m(A) + BetP_m(A)}{2} \le 1, \qquad 1 \le e^{(|A|-1)/|S|} < e < 3$$
If $|A| = 1$, then $2^{|A|} - 1 = 1$ and $e^{(|A|-1)/|S|} = 1$, so every term reduces to
$$-m(A) \log_2\!\left(\frac{m(A) + BetP_m(A)}{2}\right) \ge 0$$
If $|A| \ge 2$, then
$$2^{|A|} - 1 \ge 3 > e^{(|A|-1)/|S|} \;\Rightarrow\; \frac{e^{(|A|-1)/|S|}}{2^{|A|} - 1} < 1$$
so the argument of the logarithm is again at most 1. Thus:
$$H_p(m) \ge 0$$
 □
Second, the maximum value of PPT entropy is infinite.
The proposition $A$ and the FOD $\Theta$ consist of at least one single-element subset and have no upper limit, so $|A|$ and $|S|$ range over $[1, +\infty)$. Moreover, $m(A)$ and $BetP_m(A)$ range over $[0, 1]$. Obviously, the maximum value of PPT entropy is unbounded according to Equation (12).
Finally, we can conclude that the value range of PPT entropy is [ 0 , + ) .

3.2.4. Additivity

PPT entropy satisfies additivity if it satisfies the equation:
$$H_p\!\left(m^{\Theta_1 \times \Theta_2}\right) = H_p\!\left(m^{\Theta_1}\right) + H_p\!\left(m^{\Theta_2}\right)$$
where $\Theta_1 \times \Theta_2$ is the product space of the sets $\Theta_1$ and $\Theta_2$, and $m$ is a BPA on $\Theta_1 \times \Theta_2$. It also needs to satisfy $m(A \times B) = m(A)\, m(B)$ with $A \subseteq \Theta_1$, $B \subseteq \Theta_2$.
Proof. 
Assume $X \times Y$ is the product space of two FODs $X = \{x_1, x_2, x_3\}$ and $Y = \{y_1, y_2\}$. The marginal BPAs on $X \times Y$ are the following ones:
$$m_1: \; m_1(\{x_1, x_2\}) = 0.5, \quad m_1(\{x_3\}) = 0.2, \quad m_1(X) = 0.3$$
$$m_2: \; m_2(Y) = 1$$
$$m: \; m(\{z_{11}, z_{12}, z_{21}, z_{22}\}) = 0.5, \quad m(\{z_{31}, z_{32}\}) = 0.2, \quad m(X \times Y) = 0.3$$
where $z_{ij} = (x_i, y_j)$; we can get
$$H_p(m_1) + H_p(m_2) = 2.061 + 0.863 = 2.924, \qquad H_p(m) = 4.249$$
obviously
$$H_p(m) > H_p(m_1) + H_p(m_2)$$
Therefore, the additivity does not hold. □

3.2.5. Subadditivity

If the PPT entropy satisfies the following condition:
$$H_p\!\left(m^{\Theta_1 \times \Theta_2}\right) \le H_p\!\left(m^{\Theta_1}\right) + H_p\!\left(m^{\Theta_2}\right)$$
where $m$ is a BPA on the space $\Theta_1 \times \Theta_2$ and $m^{\Theta_1}$, $m^{\Theta_2}$ are marginal BPAs on $\Theta_1$ and $\Theta_2$, then PPT entropy is said to satisfy subadditivity.
According to the additivity counterexample above, we have $H_p(m) > H_p(m_1) + H_p(m_2)$.
Therefore, the subadditivity does not hold.

3.2.6. Monotonicity

Given two FODs $\Theta_1$ and $\Theta_2$ with $\Theta_1 \subseteq \Theta_2$, if $H_p(\Theta_1) \le H_p(\Theta_2)$ holds, then PPT entropy is said to satisfy monotonicity.
Proof. 
When $\Theta_1 \subseteq \Theta_2$,
$$|S_1| \le |S_2| \;\Rightarrow\; e^{(|A|-1)/|S_1|} \ge e^{(|A|-1)/|S_2|}$$
we can get
$$H_p(\Theta_1) \le H_p(\Theta_2)$$
Therefore, the monotonicity holds. □
To analyze the proposed entropy, Table 1 shows the properties of different entropies. Although PPT entropy satisfies probability consistency and monotonicity, which can be considered two important properties for an uncertainty measure, it does not satisfy set consistency, additivity, or subadditivity, which brings some challenges to extending the uncertainty measure to more general theories. In addition, its scope of application will be limited to a certain extent.

3.3. Requirements of Behaviour for PPT Entropy

The requirements of behaviour (RB) for uncertainty measures suggested by Abellan and Masegosa [40] could be expressed in the following way:
RB1: The calculation of uncertainty measure should be simple.
RB2: The uncertainty measure should reflect the uncertainty of conflict and non-specificity co-existing in the D-S evidence theory.
RB3: The uncertainty measure should be sensitive to change of the BPA.
RB4: The extension of the uncertainty measure in the D-S evidence theory on more general theories must be possible.
In the next section, we will discuss the above requirements of behaviour for PPT entropy.

3.3.1. Low Computing Complexity

The PPT entropy has a relatively simple calculation: it is only necessary to obtain the pignistic probability transformation function $BetP_m(A)$ from the given BPA. When the mass assignments are only transferred to single-element propositions, PPT entropy degenerates into Shannon entropy and the calculation is simpler still.

3.3.2. Concealment of Conflict and Non-Specificity

A simple transformation of Equation (12) is as follows.
$$H_p(m) = \sum_{A \subseteq \Theta} m(A) \log_2\!\left(2\,(2^{|A|} - 1)\, e^{(1 - |A|)/|S|}\right) - \sum_{A \subseteq \Theta} m(A) \log_2\!\left(m(A) + \sum_{B \subseteq \Theta} \frac{|A \cap B|}{|B|}\, m(B)\right)$$
where the last term, $-\sum_{A \subseteq \Theta} m(A) \log_2\!\big(m(A) + \sum_{B \subseteq \Theta} \frac{|A \cap B|}{|B|}\, m(B)\big)$, cannot be further transformed into a form similar to Shannon entropy, so it does not measure the uncertainty of conflict. Similarly, the first term, $\sum_{A \subseteq \Theta} m(A) \log_2\!\big(2\,(2^{|A|} - 1)\, e^{(1 - |A|)/|S|}\big)$, cannot be further transformed into a form similar to $I$ [40] and cannot measure the uncertainty of non-specificity. In other words, PPT entropy cannot be converted into a linear combination of $S^*$ and $I$. Here, the function $S^*$ is used as a conflict measure and the function $I$ as a non-specificity measure; $I$ has the following expression:
$$I(m) = \sum_{A \subseteq \Theta} m(A) \ln |A|$$
Therefore, PPT entropy has no clear separation between conflict and non-specificity.

3.3.3. Sensitivity to Changes in Evidence

The PPT entropy is sensitive to changes in the BPA. A detailed analysis is presented in Example 5: the value of $H_p$ changes as the BPA changes, as shown in Figure 1.

3.3.4. Extension on More General Theories

Currently, the belief entropies based on Deng entropy in the D-S evidence theory framework are limited to the closed world, where the FOD is assumed to be complete. Tang et al. propose a nonzero mass function of the empty set (i.e., $m(\varnothing) \neq 0$), which extends belief entropy theory to the open world [34]. The calculation of $H_p$ contains $m(\varnothing)$, as shown in Equation (10), so the extension of PPT entropy to a more general belief entropy theory is possible when $m(\varnothing) \neq 0$. However, two key problems arise in such an extension. On the one hand, PPT entropy only satisfies the property of probability consistency among Klir's five requirements, which is a huge challenge for extending belief entropy theory. On the other hand, PPT entropy becomes meaningless when $m(\varnothing) = 1$.

3.4. Examples

In this section, two counter-examples and eight numerical examples (Examples 5–9 are from Yang and Han's paper [41]) are used to illustrate the validity of PPT entropy and to compare it with other uncertainty measures, including Deng entropy, Zhou et al.'s belief entropy, and Yan et al.'s belief entropy.

3.4.1. Counter-Example 1

Example 1.
Suppose that the FOD is $\Theta = \{A, B, C, D\}$ and there are only intersection relationships between the propositions in the BOEs:
$$S_1: \; m_1(\{A, B\}) = 0.3, \quad m_1(\{B, C\}) = 0.3, \quad m_1(\{B, D\}) = 0.4$$
$$S_2: \; m_2(\{A, B\}) = 0.3, \quad m_2(\{C, D\}) = 0.3, \quad m_2(\{B, D\}) = 0.4$$
$$S_3: \; m_3(\{A, B\}) = 0.3, \quad m_3(\{A, C\}) = 0.3, \quad m_3(\{B, D\}) = 0.4$$
The belief entropy is calculated with Deng entropy as follows:
$$E_d(m_1) = E_d(m_2) = E_d(m_3) = -2 \times 0.3 \log_2\!\frac{0.3}{2^2 - 1} - 0.4 \log_2\!\frac{0.4}{2^2 - 1} = 3.156$$
The belief entropy is calculated with Zhou et al.’s belief entropy as follows:
$$E_{Md}(m_1) = E_{Md}(m_2) = E_{Md}(m_3) = -2 \times 0.3 \log_2\!\left(\frac{0.3\, e^{(2-1)/4}}{2^2 - 1}\right) - 0.4 \log_2\!\left(\frac{0.4\, e^{(2-1)/4}}{2^2 - 1}\right) = 2.795$$
The belief entropy is calculated with Yan et al.’s belief entropy as follows:
$$H_n(m_1) = H_n(m_2) = H_n(m_3) = -2 \times 0.3 \log_2\!\left(\frac{0.3 + 0.3}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) - 0.4 \log_2\!\left(\frac{0.4 + 0.4}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) = 2.795$$
The belief entropy is calculated with our proposed belief entropy as follows:
$$H_p(m_1) = -0.6 \log_2\!\left(\frac{0.3 + (0.3 + \tfrac{1}{2} \times 0.3 + \tfrac{1}{2} \times 0.4)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) - 0.4 \log_2\!\left(\frac{0.4 + (0.4 + \tfrac{1}{2} \times 0.3 + \tfrac{1}{2} \times 0.3)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) = 2.213$$
$$H_p(m_2) = -0.6 \log_2\!\left(\frac{0.3 + (0.3 + \tfrac{1}{2} \times 0.4)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) - 0.4 \log_2\!\left(\frac{0.4 + (0.4 + \tfrac{1}{2} \times 0.3 + \tfrac{1}{2} \times 0.3)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) = 2.362$$
$$H_p(m_3) = -0.3 \log_2\!\left(\frac{0.3 + (0.3 + \tfrac{1}{2} \times 0.3 + \tfrac{1}{2} \times 0.4)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) - 0.3 \log_2\!\left(\frac{0.3 + (0.3 + \tfrac{1}{2} \times 0.3)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) - 0.4 \log_2\!\left(\frac{0.4 + (0.4 + \tfrac{1}{2} \times 0.3)}{2} \cdot \frac{e^{(2-1)/4}}{2^2 - 1}\right) = 2.400$$
In Example 1, for evidence $S_1$, the intersection of the three multi-element subsets is the single-element subset $\{B\}$. For $S_2$ and $S_3$, only two multi-element subsets at a time intersect in a single-element subset: for $S_2$ the pairwise intersections are the singletons $\{B\}$ and $\{D\}$, while for $S_3$ they are $\{A\}$ and $\{B\}$. What's more, the mass $m(\{B, D\})$ is higher than $m(\{A, B\})$ in the BOEs. Based on this analysis, it is obvious that the uncertainty of the three BOEs is not the same. However, Deng entropy, Zhou et al.'s and Yan et al.'s belief entropies yield the same value for all three pieces of evidence, which is contrary to common sense. Our proposed belief entropy obtains a much more reasonable and satisfactory result: the uncertainty of $S_1$, $S_2$ and $S_3$ increases in turn, which is in line with human intuition, as shown in Figure 2.
Factors influencing the degree of uncertainty of evidence include the cardinality of the subsets, the scale of the BOE, and the intersections between subsets. The three earlier uncertainty measures fail to distinguish the BOEs of Example 1: Deng entropy only considers the cardinality of the subsets, while Zhou's entropy considers only the cardinality of the subsets and the scale of the BOE. Although Yan's entropy considers the intrinsic connections between subsets in a BOE, it only exploits the inclusion relationships between subsets and ignores their intersections. When there are only intersection relationships among the subsets and no inclusion relationships, Yan's entropy fails to distinguish Example 1 and degenerates into Zhou's entropy; this is why Yan's and Zhou's belief entropies yield the same value. The comparison results of the four uncertainty measures are given in Table 2.
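The three BOEs of Example 1 can be checked against the `ppt_entropy` sketch from Section 3.1 (results agree with the values above up to rounding in the last digit):

```python
S = lambda *xs: frozenset(xs)
m1 = {S('A', 'B'): 0.3, S('B', 'C'): 0.3, S('B', 'D'): 0.4}
m2 = {S('A', 'B'): 0.3, S('C', 'D'): 0.3, S('B', 'D'): 0.4}
m3 = {S('A', 'B'): 0.3, S('A', 'C'): 0.3, S('B', 'D'): 0.4}
for m in (m1, m2, m3):
    print(round(ppt_entropy(m, 4), 2))  # approx. 2.21, 2.36, 2.40
```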

3.4.2. Counter-Example 2

Example 2.
Suppose the FOD is $\Theta = \{A, B, C, D\}$. There are both inclusion and intersection relationships between the propositions in the BOEs:
$$S_1: \; m_1(\{A, C\}) = 0.3, \quad m_1(\{B, C\}) = 0.3, \quad m_1(\{A, C, D\}) = 0.4$$
$$S_2: \; m_2(\{A, D\}) = 0.3, \quad m_2(\{B, C\}) = 0.3, \quad m_2(\{A, C, D\}) = 0.4$$
$$S_3: \; m_3(\{A, C\}) = 0.3, \quad m_3(\{B, C\}) = 0.3, \quad m_3(\{A, B, C, D\}) = 0.4$$
$$S_4: \; m_4(\{A, D\}) = 0.3, \quad m_4(\{B, C\}) = 0.3, \quad m_4(\{A, B, C, D\}) = 0.4$$
The belief entropy is calculated with Deng entropy as follows:
$$E_d(m_1) = 3.646, \quad E_d(m_2) = 3.646, \quad E_d(m_3) = 4.085, \quad E_d(m_4) = 4.085$$
The belief entropy is calculated with Zhou et al.’s belief entropy as follows:
$$E_{Md}(m_1) = 3.140, \quad E_{Md}(m_2) = 3.140, \quad E_{Md}(m_3) = 3.436, \quad E_{Md}(m_4) = 3.436$$
The belief entropy is calculated with Yan et al.’s belief entropy as follows:
$$H_n(m_1) = 2.957, \quad H_n(m_2) = 2.957, \quad H_n(m_3) = 3.113, \quad H_n(m_4) = 3.113$$
The belief entropy is calculated with our proposed belief entropy as follows:
$$H_p(m_1) = 2.487, \quad H_p(m_2) = 2.636, \quad H_p(m_3) = 2.715, \quad H_p(m_4) = 2.864$$
In Example 2, compared with evidence $S_2$, evidence $S_1$ has more multi-element subsets whose intersection is the single-element subset $\{C\}$; the same holds for $S_3$ versus $S_4$. Obviously, the uncertainty of $S_1$ differs from that of $S_2$, and the uncertainty of $S_3$ differs from that of $S_4$.
The comparison results of different uncertainty measures are given in Table 3.
According to the results, we can conclude that the proposed belief entropy overcomes the limitations of the previous three methods and can validly distinguish the uncertainty of the four BOEs, whereas the other three methods fail. In addition, for the same BOE from Example 2, the degree of uncertainty calculated by the four methods decreases monotonically, because each later method makes more use of the potential internal information in the BOE.

3.4.3. Numerical Example 1

Example 3.
Suppose that the FOD is Θ = { A } and the BPA is m ( A ) = 1 . The values of Shannon entropy and the four entropies can be obtained as follows. The example is cited from [27].
$$H(m) = -1 \times \log_2 1 = 0$$
$$E_d(m) = -1 \times \log_2\!\frac{1}{2^1 - 1} = 0$$
$$E_{Md}(m) = -1 \times \log_2\!\left(\frac{1 \cdot e^{(1-1)/1}}{2^1 - 1}\right) = 0$$
$$H_p(m) = H_n(m) = -1 \times \log_2\!\left(\frac{1 + 1}{2} \cdot \frac{e^{(1-1)/1}}{2^1 - 1}\right) = 0$$
As we can see from the results, when the FOD has only one single-element proposition, the value of uncertainty is 0. The four entropies and Shannon entropy get the same result.

3.4.4. Numerical Example 2

Example 4.
Suppose that the FOD is Θ = { A , B , C } and the BPA is m ( A ) = m ( B ) = m ( C ) = 1 / 3 . The values of Shannon entropy and the four entropies can be obtained as follows. The example is cited from [27].
$$H(m) = -\frac{1}{3}\log_2\frac{1}{3} - \frac{1}{3}\log_2\frac{1}{3} - \frac{1}{3}\log_2\frac{1}{3} = 1.58$$
$$H_p(m) = H_n(m) = E_{Md}(m) = E_d(m) = 1.58$$
The results of Examples 3 and 4 show that the four entropies and Shannon entropy coincide when the BPA consists only of single-element propositions. At the same time, this illustrates that PPT entropy retains the characteristics of Shannon entropy.

3.4.5. Numerical Example 3

Example 5.
Suppose that the FOD is $\Theta = \{\theta_1, \theta_2\}$. The BPA is $m(\{\theta_1\}) = a$, $m(\{\theta_2\}) = b$, $m(\Theta) = 1 - a - b$, where $a, b \in [0, 0.5]$. Here, we calculate the $E_d$, $E_{Md}$, $H_n$ and $H_p$ values as $a$ and $b$ change. The results are illustrated in Figure 1.
It is shown that the values of $E_{Md}$, $H_n$ and $H_p$ are equal and smaller than that of $E_d$ when $a = b = 0$ (i.e., $m(\Theta) = 1$). This makes sense, because $E_{Md}$, $H_n$ and $H_p$ all consider the influence of the scale of the FOD on the uncertainty measurement. In addition, the values of the four uncertainty measures change with $a$ and $b$ and decrease in turn. This is a rational result, for the following reason. Deng entropy only considers the influence of the cardinality of a proposition on the uncertainty measure. Zhou et al.'s belief entropy considers the cardinality of a proposition and the scale of the BOE. Yan et al.'s belief entropy considers the cardinality of a proposition, the scale of the BOE and the inclusion relationships among propositions. On the basis of Yan et al.'s belief entropy, our proposed belief entropy also considers the influence of the intersection relationships between propositions. Compared with the other three uncertainty measures, PPT entropy can dig up more information about the interaction of the internal elements in a BOE, so its uncertainty value is the smallest.

3.4.6. Numerical Example 4

Example 6.
Suppose that the FOD is $\Theta = \{\theta_1, \theta_2, \theta_3\}$ and the BPA is $m(\Theta) = 1$. We change the BPA step by step: in each step, $m(\Theta)$ decreases by $\Delta = 0.05$ and each mass function $m(\{\theta_i\})$, $i = 1, 2, 3$, increases by $\Delta/3$, until finally $m(\{\theta_i\}) = 1/3$. The belief entropy is calculated with the four uncertainty measures. The changes in the uncertainty values are shown in Figure 3.
For all four uncertainty measures, the uncertainty value of a vacuous BPA is larger than that of a Bayesian one. This is intuitive, because a vacuous BPA represents information that is completely unknown to the information system, while a Bayesian one provides more certain information. It shows that PPT entropy inherits the advantages of Deng entropy, Zhou et al.'s belief entropy and Yan et al.'s belief entropy. Furthermore, when $m(\{\theta_i\}) = 1/3$, the uncertainty values of the four measures are equal; this is also intuitive, because all four belief entropies degenerate into Shannon entropy when there are only single-element propositions in the BOE, as shown in Example 4. As the BPA changes in each step, $H_p$ always attains the minimum value among the four uncertainty measures, because it is the only one that takes into account the effect of the intersections between the multi-element proposition and the single-element propositions on uncertainty.

3.4.7. Numerical Example 5

Example 7.
The FOD is $\Theta = \{\theta_1, \theta_2, \theta_3\}$. The BPA is $m(\Theta) = 1$, and we change it step by step: in each step, $m(\Theta)$ decreases by $\Delta = 0.05$ and $m(\{\theta_1\})$ increases by $\Delta = 0.05$. The process is repeated until $m(\{\theta_1\}) = 1$. The belief entropy is calculated with the four uncertainty measures. The changes in the uncertainty values are shown in Figure 4.
The values of $E_d$, $E_{Md}$, $H_n$ and $H_p$ all change with the BPA in each step. The uncertainty value of a vacuous BPA is larger than that of a categorical one. This is intuitive, because a vacuous BPA means completely unknown, while a categorical one means being fairly certain. When $m(\{\theta_1\}) = 1$, the uncertainty values of all the measures drop to zero; this is also intuitive, because all four belief entropies degenerate into Shannon entropy when there is only one single-element proposition, as shown in Example 3. This once again illustrates that PPT entropy retains the characteristics of Shannon entropy.

3.4.8. Numerical Example 6

Example 8.
Suppose that the FOD is $\Theta = \{\theta_1, \ldots, \theta_8\}$. The BPA is $m(\Theta) = 1$, and we change it step by step: in each step, $m(\Theta)$ decreases by $\Delta = 0.05$ and $m(B)$ increases by $\Delta = 0.05$, with the cardinality of proposition $B$ set to 2, 4 and 6, respectively. The process is repeated until $m(B) = 1$. The changes in the values of the four uncertainty measures under the different $|B|$ are shown in Figure 5.
As shown in Figure 5, when $|B|$ is smaller, the values of $E_d$, $E_{Md}$, $H_n$ and $H_p$ change faster at each step, because the same mass assignments are assigned to a proposition with smaller cardinality. When there is only one multi-element proposition in the BOE, i.e., $m(\Theta) = 1$ or $m(B) = 1$, the values of $E_{Md}$, $H_n$ and $H_p$ are equal, whereas $E_d$ differs from the other measures: when only one multi-element proposition exists there is no interaction between propositions, so the scale of the FOD plays a decisive role in the uncertainty measure, and only $E_d$ ignores it. What's more, the uncertainty value of $H_p$ is always the smallest, which illustrates that PPT entropy has less information loss than the other uncertainty measures and gives the most accurate result.

3.4.9. Numerical Example 7

Example 9.
Suppose that the FOD is $\Theta = \{\theta_1, \ldots, \theta_{10}\}$. The BPA is $m(\Theta) = 1 - a$ and $m(A) = a$. Given a value of $a$, we change the BPA step by step: $|A|$ changes from 1 to 10, increasing by 1 in each step. We set $a = 0.3, 0.5, 0.8$, respectively. The changes in the values of the four uncertainty measures under the different $a$ are shown in Figure 6.
As shown in Figure 6, the values of $E_d$, $E_{Md}$, $H_n$ and $H_p$ all increase with $|A|$ over the first 8 steps and either decrease or increase in the last step. The reasons are as follows. By monotonicity, the uncertainty of a BOE consisting of two multi-element propositions undoubtedly increases over the first 8 steps. In the last step, however, the BPA shifts from two multi-element propositions to a single one, so the change in uncertainty is inconsistent and may go either way. If $a$ is larger, the values increase even more, because relatively more mass is assigned to the proposition with larger cardinality. In addition, we can see that the values of $H_n$ and $H_p$ get closer and closer as $a$ increases, almost overlapping at some points, because the intersection between the multi-element propositions has less influence on the uncertainty as $m(\Theta)$ decreases.

3.4.10. Numerical Example 8

Example 10.
Suppose that the FOD is $\Theta = \{1, 2, \ldots, 14, 15\}$, and the BPA is as follows:
$$m(\{4, 5, 6\}) = 0.1, \quad m(\{7\}) = 0.1, \quad m(A) = 0.7, \quad m(\Theta) = 0.1$$
where $A$ is a proposition whose cardinality varies from 1 to 14. The example is cited from [27]. The belief entropy calculated with the four uncertainty measures is shown in Figure 7.
As we can see in Figure 7, as the cardinality of proposition $A$ increases, the values of $E_d$, $E_{Md}$, $H_n$ and $H_p$ increase monotonically, which further illustrates the monotonicity of PPT entropy. Furthermore, by taking the intersections between propositions into consideration, PPT entropy takes advantage of more valuable information in the BOE, and its uncertainty value is always smaller than those of the other three measures, which makes it more reasonable and effective as an uncertainty measure.

3.5. Entropy Distance

In this paper, we introduce the concept of entropy distance. It is a novel distance measurement method used to measure the difference between the belief entropies of pieces of evidence, thereby characterizing the degree of conflict between them from the perspective of mutual belief entropy. The more similar two BOEs are, the smaller the entropy distance between them.
Suppose that $A_1, A_2, \ldots, A_K$ are the focal elements of the BPAs $m_i$ and $m_j$; then the entropy distance between $m_i$ and $m_j$ is defined as follows:
$$d_H(m_i, m_j) = \max_{A_k \subseteq \Theta} \left| H_p^{m_i}(A_k) - H_p^{m_j}(A_k) \right|$$
where $H_p^{m_i}(A_k)$ is the belief entropy contribution of the focal element $A_k$, and $d_H(m_i, m_j)$ denotes the maximum difference in belief entropy over corresponding focal elements. As a novel distance measurement method, entropy distance has two advantages over relative entropy [42]: on the one hand, it satisfies symmetry; on the other hand, it can measure the difference between two BOEs more accurately and over a wider range. Next, an example is used to verify these advantages.
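A sketch of the definition: the per-focal-element term follows Equation (12) restricted to a single focal element, and the distance takes the largest absolute difference over the union of focal elements (guarding against zero masses, as in $S_5$ and $S_6$ of Example 11 below; helper names are our own):

```python
import math

def ppt_term(m, a, n):
    """Entropy contribution H_p^m(A) of a single focal element A."""
    v = m.get(a, 0.0)
    if v == 0.0:
        return 0.0
    return -v * math.log2((v + betp(m, a)) / 2
                          * math.exp((len(a) - 1) / n) / (2 ** len(a) - 1))

def entropy_distance(mi, mj, n):
    """d_H(m_i, m_j): max |H_p^{m_i}(A_k) - H_p^{m_j}(A_k)| over focal elements."""
    return max(abs(ppt_term(mi, a, n) - ppt_term(mj, a, n))
               for a in set(mi) | set(mj))
```

On the BOEs $S_1$ and $S_2$ of Example 11, `entropy_distance(m1, m2, 3)` returns 0.545, matching the hand calculation below.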
Example 11.
Suppose that the FOD is Θ = { A , B , C } .
$$S_1: \; m_1(\{A, B\}) = 0.3, \quad m_1(\{A, C\}) = 0.7$$
$$S_2: \; m_2(\{A, B\}) = 0.8, \quad m_2(\{A, C\}) = 0.2$$
$$S_3: \; m_3(\{A, B\}) = 0.3, \quad m_3(\{C\}) = 0.7$$
$$S_4: \; m_4(\{A, B\}) = 0.8, \quad m_4(\{C\}) = 0.2$$
$$S_5: \; m_5(\{A\}) = 0.1, \quad m_5(\{B\}) = 0, \quad m_5(\{C\}) = 0.9$$
$$S_6: \; m_6(\{A\}) = 0.1, \quad m_6(\{B\}) = 0.9, \quad m_6(\{C\}) = 0$$
The distance value between m 1 and m 2 is calculated with relative entropy as follows:
$$KL(m_1 \| m_2) = \sum_A m_1(A) \log_2\frac{m_1(A)}{m_2(A)} = 0.3 \log_2\frac{0.3}{0.8} + 0.7 \log_2\frac{0.7}{0.2} = 0.841$$
$$KL(m_2 \| m_1) = \sum_A m_2(A) \log_2\frac{m_2(A)}{m_1(A)} = 0.8 \log_2\frac{0.8}{0.3} + 0.2 \log_2\frac{0.2}{0.7} = 0.771$$
The distance value between m 3 and m 4 is calculated with relative entropy as follows:
$$KL(m_3 \| m_4) = 0.3 \log_2\frac{0.3}{0.8} + 0.7 \log_2\frac{0.7}{0.2} = 0.841$$
The distance value between m 5 and m 6 is calculated with relative entropy as follows:
$$KL(m_5 \| m_6) = 0.1 \log_2\frac{0.1}{0.1} + 0 \times \log_2\frac{0}{0.9} + 0.9 \log_2\frac{0.9}{0} = \mathrm{NaN}$$
The distance value between m 1 and m 2 is calculated with entropy distance as follows:
$$d_H(m_1, m_2) = \max_{A_k \subseteq \Theta} \left| H_p^{m_1}(A_k) - H_p^{m_2}(A_k) \right|$$
$$= \max\left\{\left|-0.3 \log_2\!\left(\frac{0.3 + 0.3 + \tfrac{1}{2} \times 0.7}{2} \cdot \frac{e^{(2-1)/3}}{2^2 - 1}\right) + 0.8 \log_2\!\left(\frac{0.8 + 0.8 + \tfrac{1}{2} \times 0.2}{2} \cdot \frac{e^{(2-1)/3}}{2^2 - 1}\right)\right|,\right.$$
$$\left.\left|-0.7 \log_2\!\left(\frac{0.7 + 0.7 + \tfrac{1}{2} \times 0.3}{2} \cdot \frac{e^{(2-1)/3}}{2^2 - 1}\right) + 0.2 \log_2\!\left(\frac{0.2 + 0.2 + \tfrac{1}{2} \times 0.8}{2} \cdot \frac{e^{(2-1)/3}}{2^2 - 1}\right)\right|\right\}$$
$$= \max\{|0.417|, |0.545|\} = 0.545 = d_H(m_2, m_1)$$
The distance value between m 3 and m 4 is calculated with entropy distance as follows:
$$d_H(m_3, m_4) = \max\{|1.088|, |0.104|\} = 1.088$$
The distance value between m 5 and m 6 is calculated with entropy distance as follows:
$$d_H(m_5, m_6) = \max\{|0|, |0.137|, |0.137|\} = 0.137$$
The distance between $m_1$ and $m_2$ obtained by the entropy distance is the same as that between $m_2$ and $m_1$, which shows that the entropy distance satisfies symmetry, whereas relative entropy does not. The difference between $m_1$ and $m_2$ is intuitively different from that between $m_3$ and $m_4$, yet the two distances obtained by relative entropy both equal 0.841. The two values obtained by the entropy distance are not equal, which illustrates that our proposed method can distinguish the two situations and measure the difference between two BOEs more accurately. Since a denominator is 0, the distance between $m_5$ and $m_6$ cannot be calculated with relative entropy. In contrast, it can be measured by the entropy distance, which verifies that the entropy distance measures the difference between two BOEs over a wider range. The comparison results of the two distance methods are given in Table 4.
In contrast to the existing methods based on belief entropy itself, which measure conflict from a global point of view, entropy distance measures the conflict between pieces of evidence from a local perspective. By introducing entropy distance into conflict management, conflicts are measured from both global and local points of view, and the conflict measurement becomes more accurate and comprehensive. Even when the belief entropies of conflicting and normal evidence are equal or close, conflicting evidence can still be effectively identified.

3.6. Proposed Method for Conflict Management Based on PPT Entropy and Entropy Distance

The flowchart of the proposed conflict management approach based on PPT entropy and entropy distance is given in Figure 8.
Suppose that $\Theta$ is the FOD and there are $n$ pieces of evidence.
  • Calculate the belief entropy of focal elements and BOEs with Equation (12).
  • Construct entropy distance matrix with Equation (14).
    $$D_{n \times n} = \begin{pmatrix} 0 & d_H(m_1, m_2) & \cdots & d_H(m_1, m_n) \\ d_H(m_2, m_1) & 0 & \cdots & d_H(m_2, m_n) \\ \vdots & \vdots & \ddots & \vdots \\ d_H(m_n, m_1) & d_H(m_n, m_2) & \cdots & 0 \end{pmatrix}$$
  • Calculate the support degree of BOEs with Equation (15).
    $$Sup(m_i) = \frac{H_p(m_i)}{\sum_{j=1}^{n} d_H(m_i, m_j)}$$
    $Sup(m_i)$ represents the support degree of BOE $m_i$. It is a fraction with the PPT entropy as the numerator and the sum of entropy distances as the denominator, so the entropy distance is inversely proportional to the weight of the evidence, which is consistent with the intuitive analysis. For the existing methods, by contrast, the support degree of a BOE is calculated by Equation (16) and equals the belief entropy itself.
    $$Sup(m_i) = H(m_i)$$
    where $H(m_i)$ denotes the respective belief entropy.
  • Measure the weight value of BOEs with Equation (17).
    $$w_i = \frac{Sup(m_i)}{\sum_{i=1}^{n} Sup(m_i)}$$
  • Get the modified BPA with Equation (18).
    $$m(A) = \sum_{i=1}^{n} w_i \times m_i(A)$$
  • Using Dempster's rule of combination, Equation (4), combine the modified BPA $n - 1$ times. A runnable sketch of the whole pipeline is given below.
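Putting the six steps together, a minimal end-to-end sketch, reusing `ppt_entropy`, `entropy_distance` and `combine` from the earlier sketches (`fod` is the frozenset of singletons; `fuse` is our illustrative name):

```python
from functools import reduce

def fuse(bpas, fod):
    """Conflict-management pipeline of Section 3.6 (Steps 1-6)."""
    n = len(fod)
    # Steps 1-3: support degree = PPT entropy / sum of entropy distances
    sup = []
    for i, mi in enumerate(bpas):
        dist = sum(entropy_distance(mi, mj, n)
                   for j, mj in enumerate(bpas) if j != i)
        sup.append(ppt_entropy(mi, n) / dist)  # see Section 4 if dist == 0
    # Step 4: normalize support degrees into weights
    w = [s / sum(sup) for s in sup]
    # Step 5: weighted-average (modified) BPA
    focals = set().union(*bpas)
    avg = {a: sum(wi * m.get(a, 0.0) for wi, m in zip(w, bpas)) for a in focals}
    # Step 6: combine the averaged BPA with itself n - 1 times
    return reduce(combine, [avg] * len(bpas))
```

Applied to the four BOEs of Example 12 below, this procedure should reproduce the reported weights (0.3, 0.1, 0.3, 0.3) and fusion result.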

3.7. Experiments

3.7.1. Numerical Example 9

Example 12.
Suppose that in the same frame of discernment $\Theta = \{A, B, C\}$, the system has obtained data from four different types of sensors, and the BPA of each sensor is shown in Table 5. Intuitively, the evidence $m_2$ is highly conflicting with the other evidence, and the hypothesis $A$ should obtain the highest belief.
According to the proposed method, the specific calculation process is shown below.
Step 1: Calculate the belief entropy of focal elements and BOEs.
The belief entropy of focal elements can be obtained as follows:
$$H_p^{m_1}(A) = 0.131, \quad H_p^{m_1}(B) = 0.332$$
$$H_p^{m_2}(B) = 0.131, \quad H_p^{m_2}(C) = 0.332$$
$$H_p^{m_3}(A) = 0.131, \quad H_p^{m_3}(B) = 0.332$$
$$H_p^{m_4}(A) = 0.131, \quad H_p^{m_4}(B) = 0.332$$
The belief entropy of BOEs can be obtained as follows:
$$H_p(m_1) = 0.469, \quad H_p(m_2) = 0.469, \quad H_p(m_3) = 0.469, \quad H_p(m_4) = 0.469$$
Step 2: Construct the entropy distance matrix D 4 × 4 .
$$D_{4 \times 4} = \begin{pmatrix} 0 & 0.332 & 0 & 0 \\ 0.332 & 0 & 0.332 & 0.332 \\ 0 & 0.332 & 0 & 0 \\ 0 & 0.332 & 0 & 0 \end{pmatrix}$$
Step 3: Calculate the support degree of BOEs.
$$Sup(m_1) = 1.413, \quad Sup(m_2) = 0.471, \quad Sup(m_3) = 1.413, \quad Sup(m_4) = 1.413$$
Step 4: Measure the weight of the four BOEs.
$$w_1 = 0.3, \quad w_2 = 0.1, \quad w_3 = 0.3, \quad w_4 = 0.3$$
Step 5: Get the modified BPA.
$$m(A) = 0.81, \quad m(B) = 0.18, \quad m(C) = 0.01$$
Step 6: Use the Dempster’s rule of combination to combine the modified BPA 3 times.
$$m(A) = (m \oplus m \oplus m \oplus m)(A) = 1$$
$$m(B) = (m \oplus m \oplus m \oplus m)(B) = 0$$
$$m(C) = (m \oplus m \oplus m \oplus m)(C) = 0$$
The fusion results of four BPAs with different methods are shown in the Table 6.
In this example, the belief entropy of all four pieces of evidence is 0.469 according to the calculated results; the belief entropies of the conflicting and the normal evidence are thus equal. As seen from the results, the existing methods can obtain the correct result; however, they allocate the same weight to each piece of evidence, so the conflicting evidence fails to be effectively handled in the subsequent fusion process. The reason is that the existing methods use only the belief entropy itself as the support degree when assigning the weight of evidence.
Compared with existing methods, our approach assigns a smaller weight to the conflicting evidence $m_2$, which reduces its impact on the subsequent evidence combination. It not only has better convergence performance, but also raises the belief degree of the hypothesis $A$ to 100%. This verifies the feasibility of the proposed method of conflict management.

3.7.2. Numerical Example 10

Example 13.
Suppose that in the same frame of discernment $\Theta = \{A, B, C\}$, the system has obtained data from five different types of sensors, and the BPA of each sensor is shown in Table 7. Intuitively, the evidence $m_3$ is highly conflicting with the other evidence, and the hypothesis $A$ should obtain the highest belief. The example is cited from [43].
The fusion results of five BPAs with different methods are shown in the Table 8.
As seen from the results, the convergence speed of the proposed method is faster: $m(A) = 0.704$ after the first three pieces of evidence, including the conflicting evidence $m_3$, are combined, which is higher than the values obtained by the existing approaches. What's more, the combination results promptly converge to the desired value as the number of evidences increases: $m(A) = 0.934$ after four pieces of evidence are combined, and $m(A) = 0.982$ after five are fused. The belief degree of the hypothesis $A$ increases by 1.6% compared with existing methods. The reason is that our proposed method comprehensively considers the impact of both self-belief entropy and mutual belief entropy on conflict management: it not only measures the degree of conflict between pieces of evidence effectively, but also further strengthens the influence of normal evidence while further weakening the influence of conflicting evidence. Additionally, the belief entropies of the five pieces of evidence are 1.190, 1.238, 1.109, 1.238 and 1.238, respectively; the belief entropy of the conflicting evidence is thus close to those of the normal evidence. We can therefore conclude from Tables 6 and 8 that conflict can be effectively handled whether the belief entropies of conflicting and normal evidence are equal or close, which fully illustrates the robustness and superiority of the proposed approach.

4. Application

In this section, the effectiveness of the proposed method is shown in a target recognition application. Not only are the results consistent with the existing methods, but the belief degree of the true target is also improved. The example is cited from [15,24,26,28,29].
Example 14.
In a multi-target recognition system, three targets are denoted as $A$, $B$ and $C$. Suppose that there are five sensors in total, each providing one piece of evidence. The BPA of each sensor is shown in Table 9. Intuitively, the evidence $m_2$ is highly conflicting with the other evidence, and the target $A$ should obtain the highest belief.
According to the proposed method, the specific calculation process is shown below.
Step 1: Calculate the belief entropy of focal elements and BOEs.
$$H_p^{m_1}(A) = 0.527, \quad H_p^{m_1}(B) = 0.518, \quad H_p^{m_1}(C) = 0.521, \quad H_p(m_1) = 1.566$$
$$H_p^{m_2}(A) = 0.000, \quad H_p^{m_2}(B) = 0.137, \quad H_p^{m_2}(C) = 0.332, \quad H_p(m_2) = 0.469$$
$$H_p^{m_3}(A) = 0.338, \quad H_p^{m_3}(B) = 0.269, \quad H_p^{m_3}(A, C) = 0.612, \quad H_p(m_3) = 1.219$$
$$H_p^{m_4}(A) = 0.357, \quad H_p^{m_4}(B) = 0.332, \quad H_p^{m_4}(A, C) = 0.624, \quad H_p(m_4) = 1.313$$
$$H_p^{m_5}(A) = 0.340, \quad H_p^{m_5}(B) = 0.332, \quad H_p^{m_5}(A, C) = 0.552, \quad H_p(m_5) = 1.224$$
Step 2: Establish the entropy distance matrix D 5 × 5 .
$$D_{5 \times 5} = \begin{pmatrix} 0 & 0.527 & 0.612 & 0.624 & 0.552 \\ 0.527 & 0 & 0.612 & 0.624 & 0.552 \\ 0.612 & 0.612 & 0 & 0.063 & 0.063 \\ 0.624 & 0.624 & 0.063 & 0 & 0.072 \\ 0.552 & 0.552 & 0.063 & 0.072 & 0 \end{pmatrix}$$
Step 3: Calculate the support degree of BOEs.
$$Sup(m_1) = 0.677, \quad Sup(m_2) = 0.202, \quad Sup(m_3) = 0.903, \quad Sup(m_4) = 0.949, \quad Sup(m_5) = 0.988$$
Step 4: Measure the weight of the five BOEs.
$$w_1 = 0.182, \quad w_2 = 0.054, \quad w_3 = 0.243, \quad w_4 = 0.255, \quad w_5 = 0.266$$
Step 5: Get the modified BPA.
$$m(A) = 0.515, \quad m(B) = 0.171, \quad m(C) = 0.060, \quad m(A, C) = 0.254$$
Step 6: Use the Dempster’s rule of combination to combine the modified BPA 4 times.
$$m(A) = 0.988, \quad m(B) = 0.001, \quad m(C) = 0.007, \quad m(A, C) = 0.004$$
The fusion results of five BPAs with different methods are shown in Table 10.
As can be seen from Table 10, the target recognized by our improved approach is consistent with the existing methods. On this basis, the belief degree of the true target is also improved from 98.2% to 98.8%; as Examples 12 and 13 show, the greater the conflict between pieces of evidence, the greater the performance improvement. Two reasons account for the effectiveness of the proposed method. On the one hand, we measure the conflict between pieces of evidence from both a global and a local point of view, so that conflicts can be measured accurately. On the other hand, the introduction of entropy distance makes the difference between pieces of evidence more precise, and conflicting evidence can be assigned a smaller weight than normal evidence. Furthermore, according to the calculation results, the belief entropies of the five pieces of evidence are 1.566, 0.469, 1.219, 1.313 and 1.224, respectively; here there is a large difference between the belief entropy of the conflicting evidence and those of the normal evidence, unlike in Examples 12 and 13. This fully demonstrates that our proposed approach can deal with conflicts effectively over a wider range.
It should be pointed out that when the sum of entropy distances $\sum_{j=1}^{n} d_H(m_i, m_j)$ in the denominator equals zero, the proposed method becomes meaningless. To solve this problem, we simply replace $\sum_{j=1}^{n} d_H(m_i, m_j)$ with $e^{\sum_{j=1}^{n} d_H(m_i, m_j)}$, and the proposed method is effective again. A sketch of this safeguard follows.
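In code, this safeguard is a one-line change to the support computation of Step 3 (a sketch reusing the earlier helpers):

```python
import math

def support_safe(mi, others, n):
    """Support degree with the zero-denominator safeguard: exp(s) is never zero."""
    s = sum(entropy_distance(mi, mj, n) for mj in others)
    return ppt_entropy(mi, n) / math.exp(s)
```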

5. Conclusions

In this paper, we comprehensively consider the influence of both the belief entropy itself and the mutual belief entropy on conflict management, and propose a novel approach based on PPT entropy and entropy distance. PPT entropy measures the conflict from the perspective of self-belief entropy; compared with state-of-the-art belief entropies, it measures the uncertainty of evidence more accurately and makes full use of the intersection information of evidence to estimate the degree of evidence conflict more reasonably. Entropy distance is a new distance measurement method used to measure the conflict between pieces of evidence from the perspective of mutual belief entropy. The combination of the two measures makes conflict measurement more accurate and comprehensive. The example results show that the proposed method assigns a smaller weight to conflicting evidence to minimize the influence of conflict on the subsequent evidence combination, whether the belief entropies of conflicting and normal evidence are equal, close, or far apart. It not only converges faster, but also increases the belief degree of the correct hypothesis by 3.8% and 1.6% compared with existing methods in the two experiments, respectively. Furthermore, our proposed method can efficiently manage conflict over a wider range, as shown in the target recognition application.
Since the proposed method has so far only been validated in the application of target recognition, follow-up studies will test its performance in univariate time series classification applications. First, we will choose the multilayer perceptron, the fully convolutional network and the residual network as three classifiers. Then, the univariate time series classification datasets from the UCR archive will be used to obtain classification results on the three classifiers. Finally, classification results that exhibit conflict will be processed by our proposed method.

Author Contributions

In this research activity, all the authors were involved in the data collection and preprocessing phase, developing the theoretical concept of the model, empirical research, results analysis and discussion, and manuscript preparation. All authors have agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC) Grant No. 61903373.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PPT   pignistic probability transformation
D-S   Dempster-Shafer
BOE   body of evidence
FOD   frame of discernment
RB   requirements of behaviour
NSFC   National Natural Science Foundation of China

References

  1. Jiang, W.; Huang, C.; Deng, X. A new probability transformation method based on a correlation coefficient of belief functions. Int. J. Int. Syst. 2019, 34, 1337–1347. [Google Scholar] [CrossRef]
  2. Liu, Z.; Xiao, F. An Evidential Aggregation Method of Intuitionistic Fuzzy Sets Based on Belief Entropy. IEEE Access 2019, 7, 68905–68916. [Google Scholar] [CrossRef]
  3. Alcantud, J.C.R.; Giarlotta, A. Necessary and possible hesitant fuzzy sets: A novel model for group561decision making. Inf. Fusion 2019, 46, 63–76. [Google Scholar] [CrossRef]
  4. Deng, X.; Jiang, W. Evaluating Green Supply Chain Management Practices Under Fuzzy Environment: A Novel Method Based on D Number Theory. Int. J. Fuzzy Syst. 2019, 21, 1389–1402. [Google Scholar] [CrossRef]
  5. Deng, X.; Jiang, W. A total uncertainty measure for D numbers based on belief intervals. Int. J. Int. Syst. 2019, 34, 3302–3316. [Google Scholar] [CrossRef] [Green Version]
  6. Zadeh, L.A. A Note on Z-numbers. Inf. Sci. 2011, 181, 2923–2932. [Google Scholar] [CrossRef]
  7. Liu, Q.; Tian, Y.; Kang, B. Derive knowledge of Z-number from the perspective of Dempster-Shafer evidence theory. Eng. Appl. Artif. Int. 2019, 85, 754–764. [Google Scholar] [CrossRef]
  8. Chen, D.; Zhang, X.; Wang, X.; Liu, Y. Uncertainty learning of rough set-based prediction under a holistic framework. Inf. Sci. 2018, 463, 129–151. [Google Scholar] [CrossRef]
  9. Parthalain, N.M.; Shen, Q.; Jensen, R. A Distance Measure Approach to Exploring the Rough Set Boundary Region for Attribute Reduction. IEEE Trans. Knowl. Date Eng. 2010, 22, 305–317. [Google Scholar] [CrossRef] [Green Version]
  10. Agarwal, H.; Renaud, J.E.; Preston, E.L.; Padmanabhan, D. Uncertainty quantification using evidence theory in multidisciplinary design optimization. Reliab. Eng. Syst. Saf. 2019, 85, 281–294. [Google Scholar] [CrossRef]
  11. Zadeh, L. A simple view of the Dempster–Shafer theory of evidence and its implication for the rule of combination. Int. AI Mag. 1986, 7, 85–90. [Google Scholar]
  12. Xiao, B.; Wang, S.; Wang, Y.; Jiang, G.; Zhang, Y.; Chen, H.; Liang, M.; Long, G.; Chen, X. Effective thermal conductivity of porous media with roughened surfaces by Fractal-Monte Carlo simulations. Fractals 2020, 28, 2050029. [Google Scholar] [CrossRef]
  13. Liang, M.; Fu, C.; Xiao, B.; Luo, L. A fractal study for the effective electrolyte diffusion through charged porous media. Int. J. Heat Mass Transf. 2019, 137, 365–371. [Google Scholar] [CrossRef]
  14. Luo, Z.; Deng, Y. A matrix method of basic belief assignment’s negation in Dempster-Shafer theory. IEEE Trans. Fuzzy Syst. 2019, 28, 2270–2276. [Google Scholar] [CrossRef]
  15. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32. [Google Scholar] [CrossRef]
  16. Seiti, H.; Hafezalkotob, A.; Najafi, S.E.; Khalaj, M. A risk-based fuzzy evidential framework for FMEA analysis under uncertainty: An interval-valued DS approach. J. Int. Fuzzy Syst. 2018, 35, 1419–1430. [Google Scholar] [CrossRef]
  17. Yu, J.; Hu, M.; Wang, P. Evaluation and reliability analysis of network security risk factors based on D-S evidence theory. Artificial Intelligent Techniques and its Applications. J. Int. Fuzzy Syst. 2016, 34, 861–869. [Google Scholar]
  18. Song, Y.; Wang, X.; Lei, L.; Yue, S. Uncertainty measure for interval-valued belief structures. J. Int. Meas. Conf. 2016, 80, 241–250. [Google Scholar] [CrossRef]
  19. Wan, L.; Li, H.; Chen, Y. Rolling Bearing Fault Prediction Method Based on QPSO-BP Neural Network and Dempster–Shafer Evidence Theory. Energies 2020, 5, 1094. [Google Scholar] [CrossRef]
  20. Han, D.; Dezert, J.; Duan, Z. Evaluation of Probability Transformations of Belief Functions for Decision Making. IEEE Trans. Syst. Man Cybern. 2016, 46, 93–108. [Google Scholar] [CrossRef]
  21. Smets, P. Combination of evidence in the transferable belief model. IEEE Trans. Pattern Anal. 1990, 12, 447–458. [Google Scholar] [CrossRef]
  22. Lefevre, E.; Elouedi, Z. How to preserve the conflict as an alarm in the combination of belief functions? Dec. Support Syst. 2013, 56, 326–333. [Google Scholar] [CrossRef]
  23. Leung, Y.; Ji, N.N.; Ma, J.H. An integrated information fusion approach based on the theory of evidence and group decision-making. Inf. Fusion 2013, 14, 410–422. [Google Scholar] [CrossRef]
  24. Jiang, W.; Zhuang, M.; Qin, X.; Tang, Y. Conflicting evidence combination based on uncertainty measure and distance of evidence. Springerplus 2016, 5, 1217. [Google Scholar] [CrossRef] [Green Version]
  25. Tao, R.; Xiao, F. Combine Conflicting Evidence Based on the Belief Entropy and IOWA Operator. IEEE Access 2019, 7, 120724–120733. [Google Scholar] [CrossRef]
  26. Li, S.; Xiao, F.; Abawajy, J. Conflict management of evidence theory based on belief entropy and negation. IEEE Access 2020, 8, 37766–37774. [Google Scholar] [CrossRef]
  27. Yan, H.; Deng, Y. An Improved Belief Entropy in Evidence Theory. IEEE Access 2020, 8, 57505–57516. [Google Scholar] [CrossRef]
  28. Tang, Y.; Zhou, D.; Xu, S.; He, Z. A Weighted Belief Entropy-Based Uncertainty Measure for Multi-Sensor Data Fusion. Sensors 2017, 17, 392. [Google Scholar]
  29. Gao, X.; Liu, F.; Pan, L.; Deng, Y.; Tsai, S. Uncertainty measure based on Tsallis entropy in evidence theory. Int. J. Int. Syst. 2019, 34, 3105–31207. [Google Scholar] [CrossRef]
  30. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553.
  31. Zhou, D.; Tang, Y.; Jiang, W. A modified belief entropy in Dempster-Shafer framework. PLoS ONE 2017, 12, e0176832.
  32. Cui, H.; Liu, Q.; Zhang, J.; Kang, B. An Improved Deng Entropy and Its Application in Pattern Recognition. IEEE Access 2019, 7, 18284–18292.
  33. Wang, D.; Gao, J.; Wei, D. A New Belief Entropy Based on Deng Entropy. Entropy 2019, 21, 390.
  34. Tang, Y.; Zhou, D.; Chan, F.T.S. An Extension to Deng’s Entropy in the Open World Assumption with an Application in Sensor Data Fusion. Sensors 2018, 18, 1902.
  35. Yager, R.R. On the Dempster–Shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137.
  36. Smets, P.; Hsia, Y.T.; Saffiotti, A.; Kennes, R.; Xu, H.; Umkehrer, E. The transferable belief model. Lect. Notes Comput. Sci. 1991, 548, 91–96.
  37. Cai, Q.; Gao, X.; Deng, Y. Pignistic Belief Transform: A New Method of Conflict Measurement. IEEE Access 2020, 8, 15265–15272.
  38. Pal, N.R.; Bezdek, J.C.; Hemasinha, R. Uncertainty measures for evidential reasoning. Int. J. Approx. Reason. 1992, 7, 165–183.
  39. Jousselme, A.L.; Liu, C.; Grenier, D.; Bossé, É. Measuring ambiguity in the evidence theory. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2006, 36, 890–903.
  40. Abellan, J.; Masegosa, A. Requirements for total uncertainty measures in Dempster–Shafer theory of evidence. Int. J. Gen. Syst. 2008, 37, 733–747.
  41. Yang, Y.; Han, D. A new distance-based total uncertainty measure in the theory of belief functions. Knowl.-Based Syst. 2016, 94, 114–123.
  42. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Stat. 1951, 22, 79–86.
  43. Deng, Z.; Wang, J. A Novel Evidence Conflict Measurement for Multi-Sensor Data Fusion Based on the Evidence Distance and Evidence Angle. Sensors 2020, 20, 339.
Figure 1. The change of uncertainty values in Example 5.
Figure 2. Uncertainty values of four entropies.
Figure 3. The change of uncertainty values in Example 6.
Figure 4. The change of uncertainty values in Example 7.
Figure 5. The change of uncertainty values in Example 8.
Figure 6. The change of uncertainty values in Example 9.
Figure 7. The change of uncertainty values in Example 10.
Figure 8. The flow chart of the proposed method.
Table 1. The properties of different entropies.

| Entropy | Probability Consistency | Set Consistency | Maximum Entropy | Additivity | Subadditivity | Monotonicity |
|---|---|---|---|---|---|---|
| Pal et al. [38] | Yes | Yes | No | Yes | No | Yes |
| Jousselme et al. [39] | Yes | Yes | No | Yes | No | Yes |
| Deng [30] | Yes | No | No | No | No | Yes |
| Zhou et al. [31] | Yes | No | No | No | No | Yes |
| Yan et al. [27] | Yes | No | No | No | No | Yes |
| Proposed entropy | Yes | No | No | No | No | Yes |
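The "Proposed entropy" row refers to the PPT entropy, which is built on the pignistic probability transformation. As a point of reference, the following is a minimal Python sketch of the standard pignistic transform BetP of the transferable belief model [36], assuming a closed world with no mass on the empty set; the PPT entropy itself is defined in the body of the paper and is not reproduced here. The BPA below is hypothetical and for illustration only.

```python
def pignistic(m):
    """Pignistic probability transform BetP: the mass of each focal
    element is shared equally among its singletons. Assumes a closed
    world, i.e. no mass on the empty set."""
    betp = {}
    for focal, mass in m.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / len(focal)
    return betp

# Hypothetical BPA on the frame {A, B, C}, for illustration only:
m = {frozenset({'A'}): 0.6,
     frozenset({'A', 'B'}): 0.3,
     frozenset({'A', 'B', 'C'}): 0.1}
print({k: round(v, 3) for k, v in pignistic(m).items()})
# {'A': 0.783, 'B': 0.183, 'C': 0.033}
```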
Table 2. Uncertainty measure of Example 1 with four entropies.

| Uncertainty | S1 | S2 | S3 |
|---|---|---|---|
| Deng entropy [30] | 3.156 | 3.156 | 3.156 |
| Zhou et al.’s belief entropy [31] | 2.795 | 2.795 | 2.795 |
| Yan et al.’s belief entropy [27] | 2.795 | 2.795 | 2.795 |
| Our proposed entropy | 2.213 | 2.362 | 2.400 |
Table 3. Uncertainty measure of Example 2 with four entropies.

| Uncertainty | S1 | S2 | S3 | S4 |
|---|---|---|---|---|
| Deng entropy [30] | 3.646 | 3.646 | 4.085 | 4.085 |
| Zhou et al.’s belief entropy [31] | 3.140 | 3.140 | 3.436 | 3.436 |
| Yan et al.’s belief entropy [27] | 2.957 | 2.957 | 3.113 | 3.113 |
| Our proposed entropy | 2.487 | 2.636 | 2.715 | 2.864 |
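Of the entropies compared in Tables 2 and 3, Deng entropy [30] has the simplest closed form, E_d(m) = -sum_A m(A) log2( m(A) / (2^|A| - 1) ), where the sum runs over the focal elements A. A minimal sketch follows; since the BPAs of Examples 1 and 2 are given earlier in the paper, the example below uses a hypothetical BPA instead, so its value is not one of the tabulated entries.

```python
from math import log2

def deng_entropy(m):
    """Deng entropy [30]: E_d(m) = -sum_A m(A) * log2(m(A) / (2**|A| - 1)),
    where |A| is the cardinality of the focal element A."""
    return -sum(mass * log2(mass / (2 ** len(focal) - 1))
                for focal, mass in m.items() if mass > 0)

# Hypothetical BPA for illustration (not the BPAs of Examples 1 and 2):
m = {frozenset({'A'}): 0.5, frozenset({'B', 'C'}): 0.5}
print(round(deng_entropy(m), 3))  # 1.792: the multi-element focal set contributes more
```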
Table 4. The comparison results of two distance methods.

| Method | (S1, S2) | (S2, S1) | (S3, S4) | (S5, S6) |
|---|---|---|---|---|
| Relative entropy [42] | 0.841 | 0.771 | 0.841 | NaN |
| Entropy distance | 0.545 | 0.545 | 1.088 | 0.137 |
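The entropy distance row reports the measure proposed in this paper; its definition is given in the body and not repeated here. The baseline is the relative entropy (Kullback–Leibler divergence) [42], and the table exposes its two known weaknesses: asymmetry (columns (S1, S2) vs. (S2, S1)) and an undefined value when one distribution assigns zero probability to an outcome the other supports (the NaN entry). A minimal sketch with hypothetical distributions (the S1–S6 compared in Table 4 are defined earlier in the paper):

```python
from math import log2

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) [42]. Asymmetric, and
    undefined when q assigns zero probability to an outcome that p
    supports -- the failure mode behind the NaN entry in Table 4."""
    total = 0.0
    for x, px in p.items():
        if px > 0:
            qx = q.get(x, 0.0)
            if qx == 0:
                return float('nan')  # q is zero where p is not
            total += px * log2(px / qx)
    return total

# Hypothetical distributions, for illustration only:
p = {'A': 0.9, 'B': 0.1}
q = {'A': 0.5, 'B': 0.5}
print(round(relative_entropy(p, q), 3))   # 0.531
print(round(relative_entropy(q, p), 3))   # 0.737 -> asymmetric, cf. (S1, S2) vs. (S2, S1)
print(relative_entropy({'A': 1.0}, {'B': 1.0}))  # nan, cf. the (S5, S6) entry
```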
Table 5. The BPA of multi-sensor data.

| | A | B | C |
|---|---|---|---|
| m1 | 0.9 | 0.1 | 0 |
| m2 | 0 | 0.9 | 0.1 |
| m3 | 0.9 | 0.1 | 0 |
| m4 | 0.9 | 0.1 | 0 |
Table 6. The results of combining conflicting evidence with different methods in Example 12.

| Method | {m1, m2} | {m1, m2, m3} | {m1, m2, m3, m4} |
|---|---|---|---|
| Existing methods [27,28,29] | w1 = w2 = 0.5 | w1 = w3 = 0.33, w2 = 0.34 | w1 = w2 = w3 = w4 = 0.25 |
| | m(a) = 0.445 | m(a) = 0.814 | m(a) = 0.962 |
| | m(b) = 0.550 | m(b) = 0.186 | m(b) = 0.038 |
| | m(c) = 0.005 | m(c) = 0.000 | m(c) = 0.000 |
| Proposed method | w1 = w2 = 0.5 | w1 = w3 = 0.4, w2 = 0.2 | w1 = w3 = w4 = 0.3, w2 = 0.1 |
| | m(a) = 0.445 | m(a) = 0.955 | m(a) = 1.000 |
| | m(b) = 0.550 | m(b) = 0.045 | m(b) = 0.000 |
| | m(c) = 0.005 | m(c) = 0.000 | m(c) = 0.000 |
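The results in Table 6 are consistent with the usual weighted-evidence pipeline: average the n BPAs with the computed weights, then fuse the average with itself n - 1 times using Dempster's rule. The following sketch assumes that pipeline; with the Table 5 BPAs and the equal weights listed in Table 6, it reproduces the first column of Table 6 up to rounding.

```python
def dempster(m1, m2):
    """Dempster's rule of combination: intersect focal elements,
    accumulate mass products, and renormalize by 1 - K, where K is
    the total conflict assigned to the empty set."""
    fused, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                fused[inter] = fused.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {f: v / (1.0 - conflict) for f, v in fused.items()}

def weighted_fuse(bpas, weights):
    """Average the BPAs with the given weights, then combine the
    average with itself len(bpas) - 1 times via Dempster's rule."""
    focals = {f for m in bpas for f in m}
    avg = {f: sum(w * m.get(f, 0.0) for w, m in zip(weights, bpas))
           for f in focals}
    avg = {f: v for f, v in avg.items() if v > 0}
    result = avg
    for _ in range(len(bpas) - 1):
        result = dempster(result, avg)
    return result

A, B, C = frozenset('A'), frozenset('B'), frozenset('C')
m1 = {A: 0.9, B: 0.1}   # m1 from Table 5 (zero masses omitted)
m2 = {B: 0.9, C: 0.1}   # m2 from Table 5
fused = weighted_fuse([m1, m2], [0.5, 0.5])
print({''.join(sorted(f)): round(v, 3) for f, v in fused.items()})
# {'A': 0.445, 'B': 0.549, 'C': 0.005} -- Table 6, first column, up to rounding
```

Extending the call to the three evidences of Table 5 with the proposed method's weights, weighted_fuse([m1, m2, m3], [0.4, 0.2, 0.4]), yields m(a) close to 0.955, matching the second column; the same scheme, with different weights, presumably underlies Tables 8 and 10.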
Table 7. The basic probability assignment (BPA) of multi-sensor data.

| | A | B | C | {A, C} | {B, C} |
|---|---|---|---|---|---|
| m1 | 0.65 | 0.05 | 0.25 | 0.05 | 0 |
| m2 | 0.55 | 0.10 | 0 | 0.35 | 0 |
| m3 | 0 | 0.60 | 0.10 | 0 | 0.30 |
| m4 | 0.55 | 0.10 | 0 | 0.35 | 0 |
| m5 | 0.55 | 0.10 | 0 | 0.35 | 0 |
Table 8. The results of combining conflicting evidence with different methods in Example 13.

| Method | {m1, m2} | {m1, m2, m3} | {m1, m2, m3, m4} | {m1, m2, m3, m4, m5} |
|---|---|---|---|---|
| Existing methods [27,28,29] | m(a) = 0.843 | m(a) = 0.576 | m(a) = 0.899 | m(a) = 0.966 |
| | m(b) = 0.008 | m(b) = 0.228 | m(b) = 0.026 | m(b) = 0.004 |
| | m(c) = 0.095 | m(c) = 0.181 | m(c) = 0.067 | m(c) = 0.026 |
| | m(a, c) = 0.054 | m(a, c) = 0.009 | m(a, c) = 0.008 | m(a, c) = 0.004 |
| | m(b, c) = 0.000 | m(b, c) = 0.006 | m(b, c) = 0.000 | m(b, c) = 0.000 |
| Proposed method | m(a) = 0.843 | m(a) = 0.704 | m(a) = 0.934 | m(a) = 0.982 |
| | m(b) = 0.008 | m(b) = 0.129 | m(b) = 0.010 | m(b) = 0.001 |
| | m(c) = 0.095 | m(c) = 0.153 | m(c) = 0.046 | m(c) = 0.012 |
| | m(a, c) = 0.054 | m(a, c) = 0.011 | m(a, c) = 0.010 | m(a, c) = 0.005 |
| | m(b, c) = 0.000 | m(b, c) = 0.003 | m(b, c) = 0.000 | m(b, c) = 0.000 |
Table 9. The BPA of multi-sensor data.

| | A | B | C | {A, C} |
|---|---|---|---|---|
| m1 | 0.41 | 0.29 | 0.30 | 0 |
| m2 | 0 | 0.90 | 0.10 | 0 |
| m3 | 0.58 | 0.07 | 0 | 0.35 |
| m4 | 0.55 | 0.10 | 0 | 0.35 |
| m5 | 0.60 | 0.10 | 0 | 0.30 |
Table 10. The results of combining conflicting evidence with different methods in Example 14.

| Method | {m1, m2} | {m1, m2, m3} | {m1, m2, m3, m4} | {m1, m2, m3, m4, m5} |
|---|---|---|---|---|
| Existing methods [27,28,29] | m(a) = 0.286 | m(a) = 0.762 | m(a) = 0.935 | m(a) = 0.982 |
| | m(b) = 0.529 | m(b) = 0.122 | m(b) = 0.017 | m(b) = 0.002 |
| | m(c) = 0.185 | m(c) = 0.105 | m(c) = 0.040 | m(c) = 0.013 |
| | m(a, c) = 0.000 | m(a, c) = 0.011 | m(a, c) = 0.008 | m(a, c) = 0.003 |
| Proposed method | m(a) = 0.286 | m(a) = 0.746 | m(a) = 0.950 | m(a) = 0.988 |
| | m(b) = 0.529 | m(b) = 0.136 | m(b) = 0.009 | m(b) = 0.001 |
| | m(c) = 0.185 | m(c) = 0.108 | m(c) = 0.031 | m(c) = 0.007 |
| | m(a, c) = 0.000 | m(a, c) = 0.010 | m(a, c) = 0.010 | m(a, c) = 0.004 |