Control Engineering Practice

Volume 46, January 2016, Pages 142-156

The optimal design of industrial alarm systems based on evidence theory

https://doi.org/10.1016/j.conengprac.2015.10.014

Highlights

  • Alarm evidence is used to measure uncertainties of the monitored process variable.

  • An evidence updating rule is used to combine alarm evidence for generating alarms.

  • An optimization method is given to obtain optimal parameters of the alarm system.

  • The effectiveness of the system is demonstrated by experiments and an industrial case.

Abstract

This paper presents a procedure for the optimal design of industrial alarm systems based on evidence theory to deal with epistemic and aleatory uncertainties of the monitored process variable. First, the upper and lower fuzzy thresholds are designed, and the sampled value of the process variable is transformed into a piece of alarm evidence that measures the degrees of uncertainty about whether the sampled value should trigger an alarm. Second, a linear updating rule of evidence is recursively applied to combine the updated alarm evidence at step t−1 with the incoming alarm evidence at step t to generate the updated alarm evidence at step t. In the process of evidence updating, the weights for the linear combination are obtained by dynamically minimizing the distance between the updated alarm evidence and the true mode (i.e., “alarm” or “no-alarm”). An alarm decision can then be made according to a pignistic probability transformed from the updated alarm evidence at each time step. Finally, numerical experiments and an industrial case show that the proposed procedure outperforms the classical design methods.

Introduction

In the modern process industry, alarm systems are very important for detecting abnormal status or failures of large plants by analyzing the monitored process variables. The generated alarms can alert operators to take timely actions (shutdown, degraded operation, etc.) to ensure that the plants do not suffer further damage. According to the guideline of the Engineering Equipment and Materials Users’ Association (EEMUA) (2007), an operator should not receive more than six alarms per hour during the normal operation of the plant. However, in practice, this is rarely satisfied, as the number of alarms each operator receives is usually far beyond this standard (tens, hundreds or even thousands of alarms per hour) (Izadi, Shah, Shook, & Chen, 2009). The majority of these alarms are false or nuisance alarms that only distract the operator from the operation, to the extent that the operator no longer trusts the alarms and may ignore even critical ones (Izadi, Shah, Shook, Kondaveeti, & Chen, 2009). Therefore, how to design better alarm systems and measure their performance has recently become an urgent issue to be addressed in both the industrial and academic communities (Coleman, Branch, & Grace (1999), Bergquist, Ahnlund, & Larsson (2003), Jousselme & Maupin (2012), Xu, Zhou, Ji, & Wen (2013), Ahnlund & Bergquist (2003), Hora (1996), Cheng, Izadi, & Chen (2011), Jousselme, Grenier, & Bossé (2001), Xu, Liu, Sun, & Wen (2014), Elouedi, Mellouli, & Smets (2004)).

For a univariate alarm system, the basic alarm generation mechanism is based solely on a single trip point. Although this technique is simple and practical, it cannot effectively suppress false or nuisance alarms. Therefore, some well-known industry standards and guidelines (e.g., EEMUA-191 and ANSI-ISA18.2) have introduced nuisance alarm reduction methods such as filtering, time delay, and dead band (Jousselme & Maupin (2012), Adnan, Izadi, & Chen (2011)). The filtering methods mainly include the moving average filter and the median filter, among others; they can filter out random noise and eliminate abnormal or bad values in the sampled signal (Cheng, Izadi, & Chen (2011), Jousselme, Grenier, & Bossé (2001)). The time delay method generates an alarm only when a few consecutive samples in a fixed time window all exceed the trip point; it can reduce false alarms significantly, but leads to a delay in triggering a true alarm (Xu, Liu, Sun, & Wen (2014), Elouedi, Mellouli, & Smets (2004)). The dead band method applies two different thresholds for alarm raising and clearing, respectively, which is effective in reducing chattering alarms (Coleman, Branch, & Grace (1999), Bergquist, Ahnlund, & Larsson (2003)). Bergquist et al. (2003) discussed the software design of an alarm cleanup toolbox based on the mechanisms of these methods in industrial process control.
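To make the differences between these mechanisms concrete, the Python sketch below gives minimal implementations of a basic trip-point check, a causal n-sample moving average filter, an n-sample delay timer, and a dead band with separate raise and clear thresholds. The function names and default parameters are illustrative only and are not taken from the cited standards.

```python
import numpy as np

def basic_alarm(x, trip):
    """Basic mechanism: alarm whenever a sample exceeds the trip point."""
    return np.asarray(x) > trip

def moving_average_alarm(x, trip, n=3):
    """Filtering: alarm when the causal n-sample moving average exceeds the trip point."""
    x = np.asarray(x, dtype=float)
    alarms = np.zeros(len(x), dtype=bool)
    for k in range(len(x)):
        alarms[k] = x[max(0, k - n + 1): k + 1].mean() > trip
    return alarms

def delay_timer_alarm(x, trip, n=3):
    """Time delay: alarm only when n consecutive samples all exceed the trip point."""
    alarms, count = np.zeros(len(x), dtype=bool), 0
    for k, xk in enumerate(x):
        count = count + 1 if xk > trip else 0
        alarms[k] = count >= n
    return alarms

def deadband_alarm(x, trip_on, trip_off):
    """Dead band: raise the alarm above trip_on, clear it only below trip_off (trip_off < trip_on)."""
    alarms, raised = np.zeros(len(x), dtype=bool), False
    for k, xk in enumerate(x):
        if not raised and xk > trip_on:
            raised = True
        elif raised and xk < trip_off:
            raised = False
        alarms[k] = raised
    return alarms
```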

In practice, it is essential to define indices for assessing the performance of alarm systems, and then use them to develop corresponding strategies for optimally designing the parameters of alarm systems, such as the trip point, the order of the filter, and the number of delay samples. The ANSI-ISA18.2 standard (American National Standards Institute (ANSI), 2009) proposed three such indices: the false alarm rate (FAR), the missed alarm rate (MAR) and the average alarm delay (AAD). FAR and MAR measure the accuracy of an alarm system in detecting the normal and abnormal conditions of a process variable, while AAD measures the alarm latency or promptness of an alarm system (Xu, Liu, Sun, & Wen (2014), Elouedi, Mellouli, & Smets (2004)). Under the assumption that the statistical distributions of a process variable are known, Izadi, Shah, Shook, Kondaveeti, et al. (2009) gave specific formulas for calculating the FARs and MARs of the basic mechanism, filtering, time delay and dead band methods. Furthermore, the receiver operating characteristic (ROC) curve was also proposed to visualize the trade-off between FAR and MAR as the trip point changes (Jousselme & Maupin (2012), Cheng, Izadi, & Chen (2011), Jousselme, Grenier, & Bossé (2001), Ahnlund & Bergquist (2003), Hora (1996), Xu, Liu, Sun, & Wen (2014)). Given the statistical distributions of a process variable and relatively simple filter structures, Cheng et al. (2011) designed general optimal filters in the forms of log-likelihood ratio, linear and quadratic filters, respectively, and some numerical optimization design procedures have been proposed by analyzing ROC curves. Under the assumption that the samples of a process variable at different time instants are independent and identically distributed, Xu and Wang (2010) calculated the AADs of the basic alarm generation mechanism and the time delay method using Markov chains. Xu et al. (2012) presented a systematic design procedure to choose the trip point and the number of delay samples of the alarm delay timer by considering these three indices and the tradeoffs among them.
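For a high alarm with trip point $x_{tp}$ and known densities of the process variable in the normal and abnormal modes, FAR and MAR admit the familiar probabilistic expressions below. This is a generic restatement of the two indices, not the specific formulas derived by Izadi, Shah, Shook, Kondaveeti, et al. (2009) or in Section 2.

```latex
\mathrm{FAR} = \Pr\{x > x_{tp} \mid \text{normal}\}
             = \int_{x_{tp}}^{\infty} p(x \mid \text{normal})\,\mathrm{d}x, \qquad
\mathrm{MAR} = \Pr\{x \le x_{tp} \mid \text{abnormal}\}
             = \int_{-\infty}^{x_{tp}} p(x \mid \text{abnormal})\,\mathrm{d}x.
```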

From the perspective of uncertain information processing, if the statistical distributions of a process variable are known, then the optimal design of its alarm system essentially amounts to dealing with the uncertainty of the process variable by using probability theory. Oberkampf, Helton, Joslyn, Wojtkiewicz, and Ferson (2004) defined this kind of uncertainty as aleatory uncertainty, describing the inherent variations in the physical system or the environment under consideration. It is also interpreted as unpredictable uncertainty, irreducible uncertainty, or stochastic uncertainty (Hora, 1996). The probability distribution is the representation most commonly used for describing the aleatory uncertainty of the process variable (Soundappan, Nikolaidis, Haftka, Grandhi, & Canfield, 2004). When substantial experimental or real data and expert knowledge are available for estimating the parameters of the distribution, it is beyond argument that a probability distribution is a suitable model for describing the aleatory uncertainty (Hora, 1996). Therefore, probabilistic methods are suitable for dealing with aleatory uncertainty. On the other hand, Oberkampf et al. (2004) also defined another kind of uncertainty, named epistemic uncertainty, caused by some level of ignorance (or incomplete information or knowledge) about a system or its operating environment. This type of uncertainty can be reduced with an increase in knowledge or information. Hence it is also termed reducible uncertainty, subjective uncertainty, or state-of-knowledge uncertainty (Soundappan et al., 2004).

In industrial alarm systems, the sources of the epistemic uncertainty of a process variable may include the following: (1) Because of adverse environments (such as high pressure and high temperature) and limitations of sensor performance (such as accuracy and sampling frequency), there may not be enough samples of the process variable for estimating its probability distributions under various operating conditions; (2) Due to limited understanding of a complex industrial plant, it can be difficult to rationally deploy sensors to monitor the initial conditions and all relevant factors that influence the outcome of the manufacturing process, or it may be uneconomical to make great efforts to understand these factors only for designing an alarm system; (3) The monitoring data may be contaminated by hardly predictable interferences (such as power line interference and electromagnetic interference). Thus, situations may arise in which it is impossible to accurately estimate the distribution of the process variable and describe it analytically. In these cases, the methods of probability theory may no longer be valid for designing its alarm system. If a process variable can be modeled using a probability distribution with imprecise parameters, it can be regarded as having a mixture of epistemic and aleatory uncertainties. For example, the mean and the standard deviation of the distribution may vary with time but within a known interval (Soundappan et al., 2004). If an expert has no knowledge about the distributions of the process variable and can only confirm that it takes values in a closed interval, then this process variable can be regarded as having only pure epistemic uncertainty (Hora, 1996). A more complicated case of pure epistemic uncertainty is that multiple experts provide different intervals, and these intervals may partially overlap with each other (Oberkampf et al., 2004).

As a result, there has recently been increasing interest in the problems of representing, modeling and analyzing epistemic uncertainty, as well as mixtures of epistemic and aleatory uncertainties. Over the past forty years, the generalized information theory (GIT) has been developing rapidly and offers promising approaches to these problems. Fuzzy theory, evidence theory and possibility theory are three important components of GIT. Though some of them can only deal with pure epistemic uncertainty, most of them can deal with both. Compared with traditional probability theory, many of these theories are able to represent epistemic uncertainty more accurately or to deal with imprecise probabilistic information; they thus serve as useful supplements to probability theory. Engineering applications of some of these theories can be found in recent publications (Xu & Wang (2010), Mercier, Quost, & Denœux (2008), Izadi, Shah, Shook, Kondaveeti, & Chen (2009)).

One of the modern theories for uncertainty representation and analysis is evidence theory, also known as Dempster–Shafer theory (DST) (Shafer (1976), Soundappan, Nikolaidis, Haftka, Grandhi, & Canfield (2004)). The advantage of using evidence theory lies in the fact that it can quantify the degree of uncertainty by means of the Basic Belief Assignment (BBA, popularly known as evidence), the Belief measure (Bel), and the Plausibility measure (Pl) when the amount of available information and knowledge prevents the precise estimation of the probabilities in a distribution. Furthermore, the DST provides evidence combination and updating rules to aggregate evidence coming from different sources of information (Smets & Kennes (1994), Izadi, Shah, Shook, & Chen (2009), Zadeh (1965)). These fusion processes can effectively account for the effects of uncertainty and thus support more reliable decision-making in engineering applications (Ma, Liu, Dubois, & Prade (2011), Oberkampf, Helton, Joslyn, Wojtkiewicz, & Ferson (2004), Yang & Xu (2013)). This paper presents a DST-based optimal design method for cases in which the process variable has a mixture of epistemic and aleatory uncertainties or pure epistemic uncertainty. It provides a method for generating alarm evidence from sampled data of the process variable, together with an online alarm evidence updating and optimization method to dynamically obtain the optimal parameters of the designed alarm system. Owing to the uncertainty description in the form of evidence and the dynamic evidence updating and optimization mechanism, the proposed procedure performs better than the moving average filter and the alarm delay timer, as demonstrated in a few comparative numerical experiments and an industrial case under the same test conditions. The rest of this paper is organized as follows. Section 2 is devoted to the definition and computation of FAR, MAR and AAD. Section 3 introduces the main concepts of evidence theory. The optimal design procedure of alarm systems is investigated in Section 4. Section 5 provides the experiments to illustrate the performance of the proposed approach. Finally, some concluding remarks are given in Section 6.
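For readers less familiar with DST, the short Python sketch below illustrates these notions on the two-element frame {alarm, no-alarm} relevant to an alarm system: a BBA may assign mass to each singleton and to the whole frame (expressing ignorance), Bel and Pl bound the probability of "alarm", and the pignistic transform mentioned in the abstract splits the mass on the frame evenly for decision-making. The numerical masses are illustrative only.

```python
# Frame of discernment: {"alarm", "no_alarm"}; "frame" denotes the whole set,
# whose mass expresses ignorance about which mode holds.
bba = {"alarm": 0.5, "no_alarm": 0.2, "frame": 0.3}   # masses sum to 1

def belief_alarm(m):
    """Bel({alarm}): mass committed exactly to 'alarm'."""
    return m["alarm"]

def plausibility_alarm(m):
    """Pl({alarm}): mass not committed against 'alarm' (singleton plus frame)."""
    return m["alarm"] + m["frame"]

def pignistic_alarm(m):
    """BetP(alarm): split the mass on the frame equally between the two modes."""
    return m["alarm"] + m["frame"] / 2.0

# Bel <= BetP <= Pl, e.g. 0.5 <= 0.65 <= 0.8 for the BBA above
print(belief_alarm(bba), plausibility_alarm(bba), pignistic_alarm(bba))
```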

Section snippets

Performance indices for alarm system design

This section studies the general expressions of FAR, MAR and AAD for the basic alarm mechanism, and then introduces the probabilistic formulas of FAR, MAR and AAD for the filtering and time delay methods, which are used in the comparative analysis later in the paper.

Foundations of Dempster–Shafer evidence theory

This section introduces some main concepts of the DST and the necessary notions that will be used in the proposed approach. A more detailed explanation and some background information can be found in Shafer (1976) and Smets and Kennes (1994).

The optimal design of alarm system based on evidence updating

This section presents a new optimal design method using the evidence updating mechanism introduced in Section 3. It involves the following steps: (1) acquisition of the incoming alarm evidence from the sampled value of the process variable using the proposed lower and upper fuzzy thresholds; (2) alarm evidence updating and online optimization of the designed parameters, including the discounting coefficient and the linear combination weights; (3) alarm decision according to the
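Since this snippet only names the steps, the Python sketch below is a heavily simplified illustration of such a pipeline rather than the paper's exact algorithm: ramp-type lower/upper fuzzy thresholds and a discounting coefficient map each sample to a BBA on {alarm, no-alarm, frame}, a linear rule with a fixed weight combines it with the previous global evidence, and the pignistic probability of "alarm" drives the decision. The membership functions, the fixed weight w, the discounting coefficient alpha, and the decision threshold 0.5 are assumptions made for illustration; in the paper the discounting coefficient and the combination weights are optimized online.

```python
def alarm_evidence(x, lo, hi, alpha=0.9):
    """Map a sample to a BBA: ramp membership between the lower (lo) and upper (hi)
    fuzzy thresholds, discounted by alpha so that some mass stays on the frame.
    Illustrative only; not the paper's exact fuzzy-threshold design."""
    mu = min(1.0, max(0.0, (x - lo) / (hi - lo)))      # degree of 'alarm'
    return {"alarm": alpha * mu, "no_alarm": alpha * (1.0 - mu), "frame": 1.0 - alpha}

def linear_update(m_prev, m_new, w=0.5):
    """Linear updating rule with a fixed weight w (the paper optimizes the weights
    online by minimizing a distance between the updated evidence and the true mode)."""
    return {a: w * m_prev[a] + (1.0 - w) * m_new[a] for a in m_prev}

def decide(m, threshold=0.5):
    """Raise an alarm when the pignistic probability of 'alarm' exceeds the threshold."""
    return m["alarm"] + m["frame"] / 2.0 > threshold

# Toy run on a few illustrative samples with lo = 0.5, hi = 1.0
m_global = {"alarm": 0.0, "no_alarm": 1.0, "frame": 0.0}
for xk in [0.2, 0.6, 0.9, 1.2]:
    m_global = linear_update(m_global, alarm_evidence(xk, lo=0.5, hi=1.0))
    print(round(m_global["alarm"], 3), decide(m_global))
```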

Experimental results

Since the proposed method calculates the linear combination weights using the current alarm evidence and the two previous pieces of global alarm evidence obtained at three consecutive steps, we compare it with the 3rd-order moving average filter and the 3-sample alarm delay timer in three typical experiments to show its performance superiority. Experiment 1 assumes that the process variable x only has aleatory uncertainty, namely, it is generated as a piecewise
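As a rough indication of how such a comparison can be set up (not the paper's actual experiment, whose process model and parameters are only partially visible here), the following Python sketch estimates FAR and MAR by Monte Carlo simulation for the basic mechanism and a 3-sample delay timer on Gaussian normal/abnormal data. All distribution parameters and the trip point are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_timer_alarm(x, trip, n=3):
    """Alarm only when n consecutive samples all exceed the trip point."""
    alarms, count = np.zeros(len(x), dtype=bool), 0
    for k, xk in enumerate(x):
        count = count + 1 if xk > trip else 0
        alarms[k] = count >= n
    return alarms

# Illustrative two-mode Gaussian data: normal ~ N(0, 1), abnormal ~ N(2, 1)
trip = 1.0
normal = rng.normal(0.0, 1.0, 100_000)
abnormal = rng.normal(2.0, 1.0, 100_000)

# Basic mechanism: alarm whenever a sample exceeds the trip point
far_basic = np.mean(normal > trip)        # fraction of false alarms
mar_basic = np.mean(abnormal <= trip)     # fraction of missed alarms

# 3-sample delay timer: fewer false alarms, more missed/delayed alarms
far_delay = np.mean(delay_timer_alarm(normal, trip))
mar_delay = np.mean(~delay_timer_alarm(abnormal, trip))

print(f"basic: FAR={far_basic:.3f}, MAR={mar_basic:.3f}")
print(f"delay: FAR={far_delay:.3f}, MAR={mar_delay:.3f}")
```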

Conclusion

This paper presents a procedure for the optimal design of alarm systems based on evidence updating to deal with epistemic and aleatory uncertainties in the process variable. First, the upper and lower fuzzy thresholds are designed in the form of membership functions, and the sampled value of the process variable can be transformed into a piece of alarm evidence at each time step. Second, a linear updating rule of evidence is recursively used to combine the global alarm evidence at t−1

Acknowledgments

This work was supported by the NSFC (Nos. 61374123, 61433001, and 61573076) and the Zhejiang Provincial Natural Science Foundation of China (No. LQ14F030010).

References (37)

  • P. Smets et al. (1994). The transferable belief model. Artificial Intelligence.
  • P. Soundappan et al. (2004). Comparison of evidence theory and Bayesian theory for uncertainty modeling. Reliability Engineering & System Safety.
  • D.L. Xu et al. (2007). Inference and learning methodology of belief-rule-based expert system for pipeline leak detection. Expert Systems with Applications.
  • J.B. Yang et al. (2006). The evidential reasoning approach for MADA under both probabilistic and fuzzy uncertainties. European Journal of Operational Research.
  • J.B. Yang et al. (2013). Evidential reasoning rule for evidence combination. Artificial Intelligence.
  • D. Yong et al. (2004). Combining belief functions based on distance of evidence. Decision Support Systems.
  • L.A. Zadeh (1965). Fuzzy sets. Information and Control.
  • Adnan, N. A., Izadi, I., & Chen, T. (2011). Computing detection delays in industrial alarm systems. In Proceedings of...