Article

Renewal Redundant Systems Under the Marshall–Olkin Failure Model. A Probability Analysis

1 Department of Mathematics, Kettering University, Flint, MI 48504, USA
2 Department of Applied Mathematics and Computer Modeling, Gubkin Russian State Oil and Gas University (Gubkin University), 119991 Moscow, Russia
3 Department of Applied Probability and Informatics, Peoples’ Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya St, 117198 Moscow, Russia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 459; https://doi.org/10.3390/math8030459
Submission received: 24 February 2020 / Revised: 13 March 2020 / Accepted: 13 March 2020 / Published: 24 March 2020

Abstract: In this paper, a two-component redundant renewable system operating under the Marshall–Olkin failure model is considered. The purpose of the study is to find analytical expressions for the time-dependent and the steady-state characteristics of the system. The characteristics of the system cycle process are analyzed with the use of the probability interpretation of Laplace–Stieltjes transforms (LSTs) and of probability generating functions (PGFs). In this way, long analytic derivations are avoided. As a result of the investigation, the main reliability characteristics of the system, the reliability function and the steady-state probabilities, have been found in analytical form. Our approach can be used in studies of various applications of systems with dependent failures between their elements.

1. Introduction

In 1967, Marshall and Olkin proposed a bivariate distribution, henceforth called MO, with dependent components, defined via three independent Poisson processes that represent three types of shocks. Two of these act individually on each component, and the third acts simultaneously on both components. This model possesses what is known as the bivariate lack of memory property, henceforth BLMP. Many books and articles pay attention to the BLMP and to related bivariate exponential distributions exhibiting a singularity along the main diagonal in R_+^2; see, for example, [1,2] among others. Many later articles complemented and extended the MO distribution, demonstrating its advantages on various data sets from engineering, medicine, insurance, finance, biology, risk analysis, etc. Li and Pellerey [3] generalized the BLMP by considering independent non-Poisson random shocks; the corresponding joint distributions encapsulate “aging.” In 2014 the model was extended to the multidimensional case by Lin and Li [4]. As a further step, in 2015 Pinto and Kolev [5] introduced the extended BLMP model, assuming dependence between the individual shocks but keeping the third one independent of the previous two. Their motivation is that the individual shocks might be dependent if the items share a common environment.
Almost all of those investigations focused on generalizations of the bivariate or multivariate distributions, the corresponding lack of memory properties, and studies of their additional properties. They use the MO model only up to the first failure (where those distributions work) and do not embed it in any reliability or maintenance process model that would follow. The original MO idea is used in our present article to model a renewable, heterogeneous, two-component redundant standby system in operation, wherein components fail according to the MO model and the system continues to operate through repairs. The system-level characteristics are derived for this model in terms of their Laplace–Stieltjes transforms (LSTs), by use of the probability meaning of the LSTs and avoiding cumbersome analytic details. Previously, in Kozyrev et al. [6], the Laplace transforms (LTs) of the state probabilities were found by a somewhat direct probability analysis. In this paper the stationary probabilities for such a system with an MO renewable failure model between its components are derived and investigated using common tools of Markov chains. For this reason, assessments of the time-dependent probabilities are not required. To our knowledge, the MO model in dynamic situations has not been discussed elsewhere.
This paper is organized as follows. In Section 2 the problem set-up is described and the notation used in what follows is introduced. The time-dependent system characteristics within a cycle are presented in Section 3. In Section 4 the passages between the states are studied by means of the probability meaning of the probability generating functions (PGFs); their mean values, variances and mutual covariances are found. In Section 5 these results are used to establish the mean sojourn times in each state during a life cycle. Finally, Section 6 is devoted to the determination of the stationary probabilities in this MO dynamic reliability system by use of their meaning in a finite, non-periodic Markov chain, as presented in the book of Feller [7].
We conclude with a wish list for further possible research on similar reliability maintenance models, which could be based on the already existing models of extended MO distribution.

2. The Problem Setting and Notations

Consider a heterogeneous, two-component, redundant, hot-standby renewable system, wherein components fail according to the original MO model. For the lifetimes T_1 and T_2, the MO model is specified by the representation
$$(T_1, T_2) = (\min(A_1, A_3),\ \min(A_2, A_3)), \qquad (1)$$
where the non-negative continuous random variables A_1 and A_2 are the times to occurrence of independent “individual risk strikes” affecting each of the two devices individually: the first risk strike affects only the first component, and the second one affects only the second component. The third type of risk strike represents the time to occurrence of the “common failure” A_3, which affects both components simultaneously, or just the working one, and leads to the failure of the entire system in any case. It is supposed that the risk strikes are governed by independent homogeneous Poisson processes; i.e., the A_i's in (1) are exponentially distributed with parameters α_i (i = 1, 2, 3).
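As a minimal illustration of representation (1), the following sketch (Python/NumPy) draws the three independent exponential shock times and forms (T_1, T_2); the rates and the sample size are illustrative choices, not values from the paper. It also checks two standard consequences of (1): the marginals are exponential, and the common shock puts positive mass on the diagonal, P(T_1 = T_2) = α_3/(α_1 + α_2 + α_3).

```python
import numpy as np

rng = np.random.default_rng(42)

def mo_lifetimes(alpha1, alpha2, alpha3, size, rng):
    """Draw (T1, T2) from the MO representation (1):
    T1 = min(A1, A3), T2 = min(A2, A3) with independent A_i ~ Exp(alpha_i)."""
    a1 = rng.exponential(1.0 / alpha1, size)
    a2 = rng.exponential(1.0 / alpha2, size)
    a3 = rng.exponential(1.0 / alpha3, size)
    return np.minimum(a1, a3), np.minimum(a2, a3)

# Illustrative rates, not taken from the paper.
alpha1, alpha2, alpha3 = 0.5, 0.7, 0.2
t1, t2 = mo_lifetimes(alpha1, alpha2, alpha3, 200_000, rng)

print(t1.mean(), 1.0 / (alpha1 + alpha3))                      # marginal mean of T1
print(t2.mean(), 1.0 / (alpha2 + alpha3))                      # marginal mean of T2
print((t1 == t2).mean(), alpha3 / (alpha1 + alpha2 + alpha3))  # singular diagonal mass
```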
In dealing with a renewable model, we need to consider the system’s renovation after its partial and/or complete failure. Here it is assumed that after a partial failure (when only one component, say i, fails) a repair of type i, with random duration B_i (i = 1, 2), begins. This means that the system continues to function with the one working component. After a complete system failure a repair of the whole system (both components) begins and lasts some random time, say B_3. It is assumed that the repair times B_k (k = 1, 2, 3) have cumulative distribution functions (CDFs) B_k(x) (k = 1, 2, 3), respectively. All repair times are assumed independent of the other random durations.
Situations like this can be found in practice. Imagine two power stations providing energy to certain regions. Each may fail individually for some internal reason, while common failures may be due to weather or other environmental conditions. Common failures must be repaired by common services, and these repairs must be started simultaneously for security reasons.
The system state space can be represented by E = {E_0, E_1, E_2, E_3}, where E_0 means that both components are working; E_1 shows that the first component is being repaired and the second one is working; E_2 indicates that the second component is being repaired and the first one is working; and E_3 says that both components are in down states, so the system has failed and is being repaired. To describe the system’s behavior we introduce a random process {J(t), t ≥ 0}, which takes values in the phase space E, such that
$$J(t) = j, \ \text{if at time } t \text{ the system is in state } E_j\ (j = 0, 1, 2, 3).$$
Throughout this paper the method of additional events, the so-called “catastrophes and coloring” method, will be used. It consists of introducing additional events which allow us to give probabilistic interpretations to LSTs and PGFs in order to find relations between system characteristics. Such an approach was well developed in the works of Danielyan and Dimitrov [8,9,10] and applied to the study of characteristics of various priority queues. Another approach to the investigation of priority queues has been proposed in [11].
Further, for brevity, we will use the following notation:
- α = α_1 + α_2 + α_3 is the total risk intensity of system failures.
- b_k = ∫_0^∞ x dB_k(x), (k = 1, 2, 3), is the mean repair time of the k-th component (k = 1, 2) and of the whole system (k = 3).
- β_k(s) = ∫_0^∞ e^{-sx} dB_k(x), (k = 1, 2, 3), are the LSTs of the repair times B_k of the k-th component (k = 1, 2) and of the whole system (k = 3).
- T = inf{t : J(t) = 3} is the system lifetime. It starts with both components working and ends with the failure of both components (either both are hit by risk 3, or, while one component is under repair, the other fails due to its individual risk or a risk-3 strike).
- W is the system life cycle: the period that starts when both components begin working (initially or after a whole-system repair) and ends with the completion of the repair of the whole system.

3. Life Cycle and System Life Time

Since every life cycle W consists of a system working period of duration T and ends with the following whole-system repair time B_3 (a repair of type 3), it holds that W = T + B_3, where T and B_3 are independent. Therefore, the following is true:
Lemma 1.
The LST ω(s) = E e^{-sW} of the life cycle duration is a solution of the equation
$$\begin{aligned}
\omega(s) ={}& \frac{\alpha_3}{\alpha+s}\,\beta_3(s)\\
&+ \frac{\alpha_1}{\alpha+s}\,\beta_1(s+\alpha_2+\alpha_3)\,\omega(s)\\
&+ \frac{\alpha_2}{\alpha+s}\,\beta_2(s+\alpha_1+\alpha_3)\,\omega(s)\\
&+ \frac{\alpha_1}{\alpha+s}\,\frac{\alpha_2+\alpha_3}{\alpha_2+\alpha_3+s}\,\bigl[1-\beta_1(s+\alpha_2+\alpha_3)\bigr]\,\beta_3(s)\\
&+ \frac{\alpha_2}{\alpha+s}\,\frac{\alpha_1+\alpha_3}{\alpha_1+\alpha_3+s}\,\bigl[1-\beta_2(s+\alpha_1+\alpha_3)\bigr]\,\beta_3(s).
\end{aligned}$$
Proof. 
In this proof we will use the probability meaning of the LST and the exponential distributions of the risks. The probability meaning of the LST was originally introduced by Kesten and Runnenburg [12]. It became widely known through the book of Klimov [13] and was extensively used in the monograph of Gnedenko et al. [14]. We explain this meaning next.
Introduce a complementary process S_t of “catastrophes”, a Poisson process with parameter s > 0, and let S be the time to its first occurrence. Then the LST
$$\omega(s) = \int_0^\infty e^{-sx}\,dW(x) = P(S > W)$$
is the probability that during a time of duration W there will not be any “catastrophes.”
If we have two competing risks with parameters s and α, then the probability that, within a period of duration W, a risk of parameter α happens first and no risk of parameter s happens in the meantime is
$$\frac{\alpha}{\alpha+s}\,\bigl[1-\omega(\alpha+s)\bigr] = \int_0^\infty \frac{\alpha}{\alpha+s}\,\bigl(1-e^{-(\alpha+s)x}\bigr)\,dW(x).$$
Now, ω(s) is the probability that no “catastrophes” happen during a cycle. The five lines in the statement reflect the chances that one of the following sequences of independent events occurs:
(a1)
First comes a risk of type 3, with no “catastrophes” occurring before it, and then no “catastrophes” occur during the repair time of duration B_3 that follows this break;
(a2)
The first risk that comes is of type 1 and no “catastrophes” happen before it (the probability of this is α_1/(s+α)); then no “catastrophes” and no risks of type 2 or 3 occur during the repair time B_1 (the probability of this is β_1(s+α_2+α_3)); and then no “catastrophes” happen during the following new cycle (the probability of which equals ω(s));
(a3)
Analogously to the sequence described in (a2), the first risk that comes is of type 2, no “catastrophes” happen before it, then no “catastrophes” and no risks of type 1 or 3 happen during B_2, and then no “catastrophes” happen during the following new cycle;
(a4)
The first risk that comes is of type 1 and no “catastrophes” happen before it (probability α_1/(s+α)); then no “catastrophes” occur but a risk of type 2 or 3 does occur during the repair time of duration B_1 (probability ((α_2+α_3)/(s+α_2+α_3))[1−β_1(s+α_2+α_3)]); and then no “catastrophes” happen during the following repair of type 3 (probability β_3(s));
(a5)
Analogously to the sequence described in (a4), the sequence starts with risk 2 occurring first, and then the sequence ends with repair of type 3 during which “no catastrophes” happen.
These are the five particular realizations (cases) of the event that no “catastrophes” happen during a time of duration W. By the total probability rule, its probability equals the sum of the probabilities of these particular cases.
The derived relations hold for s > 0, but they remain valid for any real and complex values of s by analytic continuation. ◻
Corollary 1.
The distribution of the system life cycle duration W is determined by its LST
$$\omega(s) = \frac{\Bigl\{\alpha_3 + \sum_{i,j=1,2,\ i\neq j} \alpha_i\,\frac{\alpha_j+\alpha_3}{\alpha_j+\alpha_3+s}\,\bigl[1-\beta_i(s+\alpha_j+\alpha_3)\bigr]\Bigr\}\,\beta_3(s)}{\alpha+s-\alpha_1\beta_1(s+\alpha_2+\alpha_3)-\alpha_2\beta_2(s+\alpha_1+\alpha_3)}.$$
Proof. 
By solving the equation obtained in the above Lemma, we get the statement in the Corollary. ◻
Corollary 2.
The life time of the system T is determined by its LST
$$\tau(s) = \frac{\alpha_3 + \alpha_1\,\frac{\alpha_2+\alpha_3}{\alpha_2+\alpha_3+s}\,\bigl[1-\beta_1(s+\alpha_2+\alpha_3)\bigr] + \alpha_2\,\frac{\alpha_1+\alpha_3}{\alpha_1+\alpha_3+s}\,\bigl[1-\beta_2(s+\alpha_1+\alpha_3)\bigr]}{\alpha+s-\alpha_1\beta_1(s+\alpha_2+\alpha_3)-\alpha_2\beta_2(s+\alpha_1+\alpha_3)}.$$
Proof. 
Use that
$$W = T + B_3$$
and that T and B_3 are independent. Therefore ω(s) = τ(s)β_3(s), and hence τ(s) = ω(s)/β_3(s). Substituting ω(s) from Corollary 1 gives the representation in the statement. ◻
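As a numerical illustration of the last two corollaries, the sketch below uses illustrative rates and, purely as an assumption of the example, exponential repair times, so that β_k(s) = μ_k/(μ_k + s). It checks that ω(s) from Corollary 1 is indeed a fixed point of the equation in Lemma 1 and that ω(s) = τ(s)β_3(s).

```python
# Illustrative shock rates and repair rates (assumptions of this sketch).
a1, a2, a3 = 0.5, 0.7, 0.2
a = a1 + a2 + a3
mu1, mu2, mu3 = 2.0, 3.0, 1.5
beta1 = lambda s: mu1 / (mu1 + s)      # LST of B1 for an exponential repair time
beta2 = lambda s: mu2 / (mu2 + s)      # LST of B2
beta3 = lambda s: mu3 / (mu3 + s)      # LST of B3

def tau(s):
    """LST of the system lifetime T (Corollary 2)."""
    num = (a3
           + a1 * (a2 + a3) / (a2 + a3 + s) * (1 - beta1(s + a2 + a3))
           + a2 * (a1 + a3) / (a1 + a3 + s) * (1 - beta2(s + a1 + a3)))
    den = a + s - a1 * beta1(s + a2 + a3) - a2 * beta2(s + a1 + a3)
    return num / den

def omega(s):
    """LST of the life cycle W = T + B3 (Corollary 1)."""
    return tau(s) * beta3(s)

def lemma1_rhs(s, w):
    """Right-hand side of the equation in Lemma 1 at a candidate value w for omega(s)."""
    return (a3 / (a + s) * beta3(s)
            + a1 / (a + s) * beta1(s + a2 + a3) * w
            + a2 / (a + s) * beta2(s + a1 + a3) * w
            + a1 / (a + s) * (a2 + a3) / (a2 + a3 + s) * (1 - beta1(s + a2 + a3)) * beta3(s)
            + a2 / (a + s) * (a1 + a3) / (a1 + a3 + s) * (1 - beta2(s + a1 + a3)) * beta3(s))

for s in (0.1, 0.5, 1.0, 5.0):
    w = omega(s)
    assert abs(w - lemma1_rhs(s, w)) < 1e-12   # Corollary 1 solves the equation of Lemma 1
print(omega(0.0))                               # equals 1, as the LST of a proper distribution must at s = 0
```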
Corollary 3.
The mean work time E [ T ] of the system during a cycle is determined by the expression
$$E[T] = \frac{1 + \frac{\alpha_1}{\alpha_2+\alpha_3}\,\bigl[1-\beta_1(\alpha_2+\alpha_3)\bigr] + \frac{\alpha_2}{\alpha_1+\alpha_3}\,\bigl[1-\beta_2(\alpha_1+\alpha_3)\bigr]}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)}.$$
Proof. 
Use that E[T] = −(d/ds)τ(s)|_{s=0}; the statement follows after some calculations. The calculations are significantly simplified if one differentiates with respect to s the identity τ(s)·denom(s) = num(s), where denom(s) and num(s) denote the denominator and the numerator in the expression for τ(s).
The mean work time E[T] of the system during a cycle is finite whenever the right-hand side of the last expression is finite. ◻
Comment: 
If the repair times B_1, B_2 are instantaneous, then β_i(s) = 1 (i = 1, 2) and the only possible break is of type 3; in this case E[T] = 1/α_3. If P{B_i > 0} > 0 (i = 1, 2), then 0 < β_i(s) < 1 (i = 1, 2) for s > 0, and the numerator and the denominator of E[T] are finite. Therefore E[T] is always finite; hence the lifetime of the system always has a finite expectation. Moreover, if b_3 < ∞, the cycle has a finite expected duration, and a stationary regime is guaranteed.
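A short numerical sketch of Corollary 3 (with illustrative rates; the exponential partial repairs are an assumption of the example, and any other repair-time LSTs could be substituted) also reproduces the degenerate case of the comment above:

```python
def mean_lifetime(a1, a2, a3, beta1, beta2):
    """E[T] from Corollary 3; beta1, beta2 are the LSTs of the partial repair times."""
    num = (1.0
           + a1 / (a2 + a3) * (1.0 - beta1(a2 + a3))
           + a2 / (a1 + a3) * (1.0 - beta2(a1 + a3)))
    den = (a1 + a2 + a3) - a1 * beta1(a2 + a3) - a2 * beta2(a1 + a3)
    return num / den

a1, a2, a3 = 0.5, 0.7, 0.2          # illustrative shock rates
mu1, mu2 = 2.0, 3.0                 # assumed exponential repairs: beta_i(s) = mu_i/(mu_i + s)
print(mean_lifetime(a1, a2, a3, lambda s: mu1 / (mu1 + s), lambda s: mu2 / (mu2 + s)))

# Instantaneous partial repairs (beta_i = 1): the common shock is then the only cause of a
# complete failure, so E[T] = 1/alpha_3, as noted in the comment.
one = lambda s: 1.0
assert abs(mean_lifetime(a1, a2, a3, one, one) - 1.0 / a3) < 1e-12
```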
Theorem 1.
If α_i > 0 (i = 1, 2, 3) and 0 < b_3 < ∞, then the process is stable, and the macro-state stationary probabilities
$$\lim_{t\to\infty} P\{J(t)\in E_0\cup E_1\cup E_2\} = \frac{E[T]}{E[T]+b_3}$$
and
$$\lim_{t\to\infty} P\{J(t)\in E_3\} = \frac{b_3}{E[T]+b_3}$$
do exist for any distributions of the repair times B_i (i = 1, 2).
Proof. 
In the long run, the system process is an alternating renewal process in which work periods of duration T and down periods of duration B_3 alternate. By the renewal theory for alternating periods with finite expectations, the statement holds. ◻
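Theorem 1 and Corollary 3 are easy to cross-check by Monte Carlo simulation of the model dynamics. The sketch below again assumes exponential partial repairs with rates μ_1, μ_2 and takes an illustrative value b_3 for the mean whole-system repair time; none of these numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
a1, a2, a3 = 0.5, 0.7, 0.2          # shock rates (illustrative)
mu1, mu2 = 2.0, 3.0                 # assumed exponential partial-repair rates
b3 = 0.8                            # assumed mean duration of the whole-system repair B3

def simulate_lifetime():
    """One realization of T: start in E0, stop at the first complete failure."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / (a1 + a2 + a3))      # sojourn in E0
        u = rng.random() * (a1 + a2 + a3)
        if u < a3:                                      # common shock: both components fail
            return t
        first = u < a3 + a1                             # True if component 1 failed first
        repair = rng.exponential(1.0 / (mu1 if first else mu2))
        rest = (a2 + a3) if first else (a1 + a3)        # risks threatening the working component
        hit = rng.exponential(1.0 / rest)
        if hit < repair:                                # second failure before the repair ends
            return t + hit
        t += repair                                     # repair completed, back to E0

T = np.array([simulate_lifetime() for _ in range(100_000)])
print("E[T] simulated      :", T.mean())                # compare with Corollary 3
print("uptime share (Thm 1):", T.mean() / (T.mean() + b3))
print("downtime share      :", b3 / (T.mean() + b3))
```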

4. Number of Passages between the States During a Cycle

In this section we study the numbers of changes between the states during a cycle of the system. The analysis uses the probability meaning of the probability generating functions (PGFs) combined with the LST, applied at the instants when changes of state occur, as referred to above in the monograph of Gnedenko et al. [14].
Introduce the random variables (symbol # means “counts in the set”):
$$N_i = \#\{\text{passages into } E_i \text{ during a cycle}\}.$$
Call a passage into E_i “green” with probability z_i ∈ [0, 1], independently of the colors of the other passages and of any other events. Then the function
$$\omega(z,s) = E\bigl[z_0^{N_0}z_1^{N_1}z_2^{N_2}z_3^{N_3}e^{-sW}\bigr] = \sum_{k_0,k_1,k_2,k_3\ge 0}\ \int_0^\infty z_0^{k_0}z_1^{k_1}z_2^{k_2}z_3^{k_3}\,e^{-sx}\,P\bigl\{N_i=k_i\ (i=0,1,2,3),\ W\in dx\bigr\}$$
can be interpreted, for z_i ∈ [0, 1] and s > 0, as the probability that “during a cycle no catastrophes will happen, and all passages inside will be green.”
Notice that
$$\omega(\mathbf{1},s) = \omega(s) \qquad\text{and}\qquad \omega(z,0) = \omega(z_0,z_1,z_2,z_3)$$
are, respectively, the LST of the cycle duration and the PGF of the numbers of passages in a cycle. The following is true:
Lemma 2.
The function ω(z, s) is a solution of the equation
$$\begin{aligned}
\omega(z,s) ={}& \frac{\alpha_3}{\alpha+s}\,z_3\,\beta_3(s)\\
&+ \frac{\alpha_1}{\alpha+s}\,z_1\,\beta_1(s+\alpha_2+\alpha_3)\,z_0\,\omega(z,s)\\
&+ \frac{\alpha_2}{\alpha+s}\,z_2\,\beta_2(s+\alpha_1+\alpha_3)\,z_0\,\omega(z,s)\\
&+ \frac{\alpha_1}{\alpha+s}\,z_1\,\frac{\alpha_2 z_2+\alpha_3}{\alpha_2+\alpha_3+s}\,\bigl[1-\beta_1(s+\alpha_2+\alpha_3)\bigr]\,z_3\,\beta_3(s)\\
&+ \frac{\alpha_2}{\alpha+s}\,z_2\,\frac{\alpha_1 z_1+\alpha_3}{\alpha_1+\alpha_3+s}\,\bigl[1-\beta_2(s+\alpha_1+\alpha_3)\bigr]\,z_3\,\beta_3(s).
\end{aligned} \qquad (4)$$
Proof. 
In this proof we use the probability meaning of the PGF ω(z, s) combined with the LST: we introduce a complementary process S(t) of “catastrophes”, a Poisson process with parameter s > 0, and the “green” coloring of all the passages, as defined above. Then ω(z, s) is the probability that during a time of duration W there will not be any “catastrophes” and all the passages between the states are “green.”
We have two independent competing risks with parameters s and α; then the probability that a risk of parameter α happens first, that no risk of parameter s happens in the meantime, and that the corresponding passage is “green” (with probability z) is
$$\frac{\alpha}{\alpha+s}\,z\,\bigl[1-\omega(\alpha+s)\bigr] = z\int_0^\infty \frac{\alpha}{\alpha+s}\,\bigl(1-e^{-(\alpha+s)x}\bigr)\,dW(x),$$
since only one passage (one count) may happen. Now, ω(z, s), the probability that during a period of duration W there are no “catastrophes” and all the passages between the states are “green”, is the probability of an event which has five particular cases. The lines in the statement reflect the chances that:
(a1)
First comes a risk of type 3, with no “catastrophes” before it, and this passage E_0 → E_3 is “green”; then no “catastrophes” happen during the following repair time B_3.
(a2)
The first risk that comes is of type 1 and no “catastrophes” happen before it; this passage is “green” (probability z_1). Then no “catastrophes” and no risks of type 2 or 3 happen during B_1, and in the following new cycle the passage back to E_0 is “green”, no “catastrophes” happen and all passages are “green” (probability z_0 ω(z, s)). This case explains line 2, presenting the probability of the second particular case.
(a3)
The same interpretation as in (a2) applies when the first risk that comes is of type 2; there is no need to repeat the details to explain line 3.
(a4)
Line 4 presents the particular case in which the first risk that comes is of type 1: no “catastrophes” happen before it and the passage E_0 → E_1 is “green”; then no “catastrophes” occur, but a risk of type 2 or 3 does occur during the repair of duration B_1, and if it is of type 2 the passage E_1 → E_2 is “green” (probability z_2). Then, during the following passage to a repair of type 3, no “catastrophes” happen, and this last passage is also “green” with probability z_3;
(a5)
The last line 5 reflects the probability of a particular case similar to that explained for line 4, when the first risk that comes is of type 2. We again skip the detailed explanation.
These are the five particular cases of the event whose total probability equals the sum of the probabilities of its particular cases. The relations hold for s > 0 and z_i ∈ (0, 1), but they remain valid for any real and complex values of s and z_i according to the theory of analytic functions. ◻
Corollary 4.
The PGF of the number of passages in a cycle ω ( z 0 , z 1 , z 2 , z 3 ) is determined by the equation
$$\omega(z) = \frac{\alpha_3 z_3 + \alpha_1 z_1\,\frac{\alpha_2 z_2+\alpha_3}{\alpha_2+\alpha_3}\,\bigl[1-\beta_1(\alpha_2+\alpha_3)\bigr]\,z_3 + \alpha_2 z_2\,\frac{\alpha_1 z_1+\alpha_3}{\alpha_1+\alpha_3}\,\bigl[1-\beta_2(\alpha_1+\alpha_3)\bigr]\,z_3}{\alpha-\alpha_1 z_1\beta_1(\alpha_2+\alpha_3)\,z_0-\alpha_2 z_2\beta_2(\alpha_1+\alpha_3)\,z_0}.$$
Proof. 
Solve Equation (4) with respect to ω(z, s) first, and let s = 0 in the obtained expression; this gives the statement. Incidentally, if one puts z = 1 in the expression obtained for ω(z, s), the result of Corollary 1 is recovered. In this sense Lemma 2 presents a more detailed analysis of the probabilities of what happens within a cycle. ◻
Corollary 5.
(a0) The average number of visits in state E 0 during a cycle equals
$$E[N_0] = \frac{\alpha_1\beta_1(\alpha_2+\alpha_3)+\alpha_2\beta_2(\alpha_1+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)};$$
(a1, a2) The average number of visits in state E i , ( i = 1 , 2 ) during a cycle equals
$$E[N_1] = \frac{\alpha_1+\alpha_2\,\frac{\alpha_1}{\alpha_1+\alpha_3}\,\beta_2(\alpha_1+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)};$$
$$E[N_2] = \frac{\alpha_2+\alpha_1\,\frac{\alpha_2}{\alpha_2+\alpha_3}\,\beta_1(\alpha_2+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)};$$
(a3) The average number of visits in state E 3 during a cycle equals
$$E[N_3] = 1.$$
Proof. 
It is well known that
$$E[N_i] = \frac{\partial}{\partial z_i}\,\omega(z_0,z_1,z_2,z_3)\Big|_{z=1},\qquad (i=0,1,2,3).$$
After taking the partial derivatives of the expression for ω(z) and letting all z_i equal one, solving the obtained equations with respect to E[N_i] gives the stated expressions. Again, the differentiation is simplified if one first multiplies both sides of the result of Corollary 4 by its denominator and then takes the derivatives. ◻
It is no wonder that E[N_3] = 1, since the system may fail just once during a cycle, and this failure is the sure end of each cycle.
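The differentiation recipe of the proof can be carried out mechanically with a computer algebra system. The sketch below (SymPy) differentiates the PGF of Corollary 4 at z = 1; the symbols b1 and b2 are shorthand for the constants β_1(α_2+α_3) and β_2(α_1+α_3). It recovers the expression (a0) for E[N_0] and confirms E[N_3] = 1; the derivatives with respect to z_1 and z_2 give the remaining mean numbers of passages.

```python
import sympy as sp

z0, z1, z2, z3 = sp.symbols('z0 z1 z2 z3', positive=True)
a1, a2, a3 = sp.symbols('alpha1 alpha2 alpha3', positive=True)
b1, b2 = sp.symbols('beta1 beta2', positive=True)   # beta_1(a2+a3) and beta_2(a1+a3)
alpha = a1 + a2 + a3

# PGF of the numbers of passages during a cycle (Corollary 4).
num = (a3 * z3
       + a1 * z1 * (a2 * z2 + a3) / (a2 + a3) * (1 - b1) * z3
       + a2 * z2 * (a1 * z1 + a3) / (a1 + a3) * (1 - b2) * z3)
den = alpha - a1 * z1 * b1 * z0 - a2 * z2 * b2 * z0
omega = num / den

at_one = {z0: 1, z1: 1, z2: 1, z3: 1}
EN0 = sp.simplify(sp.diff(omega, z0).subs(at_one))   # mean number of returns to E0
EN3 = sp.simplify(sp.diff(omega, z3).subs(at_one))   # simplifies to 1, in line with (a3)
print(EN0)
print(EN3)
```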
One might continue further, by finding the variances
$$Var[N_i] = \frac{\partial^2}{\partial z_i^2}\,\omega(z_0,z_1,z_2,z_3)\Big|_{z=1} + \frac{\partial}{\partial z_i}\,\omega(z_0,z_1,z_2,z_3)\Big|_{z=1} - \Bigl(\frac{\partial}{\partial z_i}\,\omega(z_0,z_1,z_2,z_3)\Big|_{z=1}\Bigr)^2,\qquad (i=0,1,2,3),$$
and the mixed moments
$$E[N_iN_j] = \frac{\partial^2}{\partial z_i\,\partial z_j}\,\omega(z_0,z_1,z_2,z_3)\Big|_{z=1},\qquad (i\neq j;\ i,j=0,1,2,3).$$
Finally, the correlation coefficients
$$\rho(N_i,N_j) = \frac{E[N_iN_j]-E[N_i]\,E[N_j]}{\sqrt{Var[N_i]\,Var[N_j]}}$$
can be determined. By doing this we get the next result.
Theorem 2.
(A0) The variance of the number of passages into the state E_0 during a cycle equals
$$Var(N_0) = \frac{\bigl[\alpha_1\beta_1(\alpha_2+\alpha_3)+\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr]\,\bigl[1+\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr]}{\bigl(\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr)^2};$$
(A1) The variance of the number of passages into the state E_1 during a cycle equals
$$Var(N_1) = \frac{\Bigl[\alpha_1+\alpha_2\,\frac{\alpha_1}{\alpha_1+\alpha_3}\,\beta_2(\alpha_1+\alpha_3)\Bigr]\Bigl[\alpha-\alpha_1-\alpha_2\,\frac{2\alpha_1+\alpha_3}{\alpha_1+\alpha_3}\,\beta_2(\alpha_1+\alpha_3)\Bigr]}{\bigl(\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr)^2};$$
(A2) The variance of the number of passages into the state E_2 during a cycle equals
$$Var(N_2) = \frac{\Bigl[\alpha_2+\alpha_1\,\frac{\alpha_2}{\alpha_2+\alpha_3}\,\beta_1(\alpha_2+\alpha_3)\Bigr]\Bigl[\alpha-\alpha_2-\alpha_1\,\frac{2\alpha_2+\alpha_3}{\alpha_2+\alpha_3}\,\beta_1(\alpha_2+\alpha_3)\Bigr]}{\bigl(\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr)^2};$$
(A3) The variance of the number of passages into the state E_3 during a cycle equals
$$Var(N_3) = 0;$$
(A01) The covariance between the numbers of passages in E_0 and E_1 during a cycle equals
$$Cov(N_0,N_1) = \frac{\partial^2\omega(z)}{\partial z_0\,\partial z_1}\Big|_{z=1} - E[N_0]\,E[N_1] = \frac{\alpha_1\beta_1(\alpha_2+\alpha_3)\,\bigl[1+\alpha_1\beta_1(\alpha_2+\alpha_3)+\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr]}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)};$$
(A02) The covariance between the numbers of passages in E_0 and E_2 during a cycle equals
$$Cov(N_0,N_2) = \frac{\partial^2\omega(z)}{\partial z_0\,\partial z_2}\Big|_{z=1} - E[N_0]\,E[N_2] = \frac{\alpha_2\beta_2(\alpha_1+\alpha_3)\,\bigl[1+\alpha_1\beta_1(\alpha_2+\alpha_3)+\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr]}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)};$$
(A12) The covariance between the numbers of passages in E_1 and E_2 during a cycle equals
$$\begin{aligned}
Cov(N_1,N_2) = \frac{\partial^2\omega(z)}{\partial z_1\,\partial z_2}\Big|_{z=1} - E[N_1]\,E[N_2]
={}& \frac{\alpha_1\beta_1(\alpha_2+\alpha_3)\,E[N_2]+\alpha_2\beta_2(\alpha_1+\alpha_3)\,E[N_1]}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)}\\
&+ \frac{\alpha_1\alpha_2\Bigl[\frac{1-\beta_1(\alpha_2+\alpha_3)}{\alpha_2+\alpha_3}+\frac{1-\beta_2(\alpha_1+\alpha_3)}{\alpha_1+\alpha_3}\Bigr]}{\bigl(\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)\bigr)^2} - E[N_1]\,E[N_2].
\end{aligned}$$
Proof. 
For the proof, a detailed differentiation is needed, followed by putting z = 1 and carefully carrying out the necessary algebraic calculations to get the declared results. We omit the details, since this is a routine operation that does not deserve inclusion. ◻
Comment. 
Having the LST or the PGF of a distribution, one can investigate its asymptotic behaviour for small or large values of the argument by applying Abelian or Tauberian theorems, as recommended in the work of Omey and Willenkens [15]. Some useful applications of this approach can be found in Dimitrov [16]. In this way several useful approximations, valid in a wide range of situations, could be found and used in practice instead of the detailed characteristics that follow from the exact relationships.

5. Sojourn Times during a Life Cycle

To calculate the sojourn times G_i in each state E_i during a life cycle we will use the expressions relating the numbers of visits N_i to a state and the individual sojourn times g_i (i = 0, 1, 2, 3) at each visit; g_i^{(k)} below denotes the independent copy of g_i corresponding to the k-th visit. It holds that
$$G_i = \sum_{k=1}^{N_i} g_i^{(k)}. \qquad (5)$$
For our purposes, we are interested in the average sojourn times E[G_i] in each state. Since we already know the distributions and the mean values of the numbers N_i, we only need the mean times E[g_i]. The use of the Wald identity
$$E[G_i] = E[N_i]\,E[g_i],$$
applied to (5) will give us the desired results.
Let us look at the average sojourn times in each of the states.
Each stay in the state E_0 is the minimum of three exponential random variables with parameters α_i (i = 1, 2, 3). Therefore
$$E[g_0] = \frac{1}{\alpha_1+\alpha_2+\alpha_3}.$$
Each stay g_i in the state E_i (i = 1, 2) is formed either by an uninterrupted repair of duration B_i, or by a repair interrupted by a failure of the other component or by a risk of type 3. We use the probability meaning of the LST g_i(s) to express these relationships, demonstrating it for the case of g_1(s). It holds that
$$g_1(s) = \beta_1(s+\alpha_2+\alpha_3) + \frac{\alpha_2+\alpha_3}{s+\alpha_2+\alpha_3}\,\bigl[1-\beta_1(s+\alpha_2+\alpha_3)\bigr].$$
This identity expresses the probability g_1(s) of no “catastrophes” during a stay in E_1 as the sum of the probabilities of the following particular cases: (1) none of the three competing risks (“catastrophes”, interruptions by risks of type 2 or 3) occurs before the repair B_1 is completed, the probability of which is β_1(s+α_2+α_3); and (2) one of the three competing risks occurs, the probability of which is 1 − β_1(s+α_2+α_3), and the first to come is either risk 2 or risk 3, the probability of which is (α_2+α_3)/(s+α_2+α_3).
Now, using that E[g_1] = −(d/ds) g_1(s)|_{s=0}, we get
$$E[g_1] = \frac{1}{\alpha_2+\alpha_3}\,\bigl[1-\beta_1(\alpha_2+\alpha_3)\bigr].$$
Similar calculations will show us that
$$E[g_2] = \frac{1}{\alpha_1+\alpha_3}\,\bigl[1-\beta_2(\alpha_1+\alpha_3)\bigr].$$
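The formula for E[g_1] is easy to verify by simulation, since an individual stay in E_1 is the minimum of the repair time B_1 and an independent exponential time with rate α_2 + α_3. The sketch below assumes an exponential B_1 with an illustrative rate μ_1; for this choice β_1(α_2+α_3) = μ_1/(μ_1+α_2+α_3).

```python
import numpy as np

rng = np.random.default_rng(3)
a2, a3, mu1 = 0.7, 0.2, 2.0               # illustrative rates; B1 ~ Exp(mu1) is an assumption

lam = a2 + a3
B1 = rng.exponential(1.0 / mu1, 500_000)  # repair times of component 1
Y = rng.exponential(1.0 / lam, 500_000)   # time to the first interrupting risk (type 2 or 3)
g1 = np.minimum(B1, Y)                    # individual stays in state E1

beta1 = mu1 / (mu1 + lam)                 # beta_1(alpha_2 + alpha_3) for an exponential repair
print(g1.mean(), (1.0 - beta1) / lam)     # simulated mean vs. E[g1] = [1 - beta_1(.)]/(a2 + a3)
```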
Combining the ideas and results in this section above with the results of Corollary 5, we come to the following:
Theorem 3.
(A0) The average sojourn time in state E 0 during a cycle equals
$$E[G_0] = \frac{\alpha_1\beta_1(\alpha_2+\alpha_3)+\alpha_2\beta_2(\alpha_1+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)}\cdot\frac{1}{\alpha_1+\alpha_2+\alpha_3};$$
(A1) The average sojourn time in state E 1 during a cycle equals
$$E[G_1] = \frac{\alpha_1+\alpha_2\,\frac{\alpha_1}{\alpha_1+\alpha_3}\,\beta_2(\alpha_1+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)}\cdot\frac{1}{\alpha_2+\alpha_3}\,\bigl[1-\beta_1(\alpha_2+\alpha_3)\bigr];$$
(A2) The average sojourn time in state E_2 during a cycle equals
$$E[G_2] = \frac{\alpha_2+\alpha_1\,\frac{\alpha_2}{\alpha_2+\alpha_3}\,\beta_1(\alpha_2+\alpha_3)}{\alpha-\alpha_1\beta_1(\alpha_2+\alpha_3)-\alpha_2\beta_2(\alpha_1+\alpha_3)}\cdot\frac{1}{\alpha_1+\alpha_3}\,\bigl[1-\beta_2(\alpha_1+\alpha_3)\bigr];$$
(A3) The average sojourn time in state E 3 during a cycle equals
$$E[G_3] = E[B_3] = b_3.$$
An interesting cross-check can be made by comparing the result of Corollary 3 with the last theorem. It must be true that
$$E[T] = E[G_0]+E[G_1]+E[G_2],$$
since both expressions represent the average work time during a life cycle.

6. Stationary Probabilities

The transitions between the macro states E_i of the considered process form a Markov chain with a finite number of states. According to the theory (Feller [7]), such chains always reach a stationary regime, and the stationary probabilities exist. Namely, if π_i(t) are the state probabilities at the instant t, then the limits
$$\pi_i = \lim_{t\to\infty}\pi_i(t),\qquad (i=0,1,2,3)$$
exist and are the stationary probabilities. We do not focus on the time-dependent probabilities π_i(t), but use the meaning of the stationary probabilities π_i: they are the long-run portions of time that the process spends in state E_i, no matter how many times the process changes its states. Hence:
Theorem 4.
(P0) The stationary probability of finding the process in state E_0, when both components are functioning, is
$$\pi_0 = \frac{E[G_0]}{E[T]+E[B_3]};$$
(P1) The stationary probability of finding the process in state E_1, when the first component is being repaired and only the second one is functioning, is
$$\pi_1 = \frac{E[G_1]}{E[T]+E[B_3]};$$
(P2) The stationary probability of finding the process in state E_2, when the second component is being repaired and only the first one is functioning, is
$$\pi_2 = \frac{E[G_2]}{E[T]+E[B_3]};$$
(P3) The stationary probability of finding the process in state E_3, when both components 1 and 2 are not functioning and the whole system is under repair, is
$$\pi_3 = \frac{E[B_3]}{E[T]+E[B_3]},$$
where E[G_i] and E[T] are determined by the expressions in Theorem 3 and Corollary 3.
Proof. 
The proof is a simple consequence of the rule
$$\pi_i = \frac{E[G_i]}{\sum_{j=0}^{3}E[G_j]},\qquad (i=0,1,2,3),$$
which is a consequence of the meaning of the stationary probabilities for regenerative processes. ◻
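The regenerative meaning of the π_i is easy to illustrate by simulation: run many life cycles, accumulate the time spent in each state, and take the long-run time fractions. The sketch below assumes illustrative rates and exponential repair times for all three repairs; the resulting uptime share π_0 + π_1 + π_2 can also be compared with Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(11)
a1, a2, a3 = 0.5, 0.7, 0.2          # shock rates (illustrative)
mu1, mu2, mu3 = 2.0, 3.0, 1.25      # assumed exponential repair rates for B1, B2, B3

G = np.zeros(4)                     # accumulated time in E0, E1, E2, E3 over all cycles
for _ in range(50_000):
    while True:                     # one work period T of a life cycle ...
        G[0] += rng.exponential(1.0 / (a1 + a2 + a3))        # sojourn in E0
        u = rng.random() * (a1 + a2 + a3)
        if u < a3:                                           # common shock ends the work period
            break
        i = 1 if u < a3 + a1 else 2                          # which component failed
        repair = rng.exponential(1.0 / (mu1 if i == 1 else mu2))
        risk = rng.exponential(1.0 / ((a2 + a3) if i == 1 else (a1 + a3)))
        G[i] += min(repair, risk)                            # sojourn in E1 or E2
        if risk < repair:                                    # the working component fails too
            break
    G[3] += rng.exponential(1.0 / mu3)                       # ... followed by the repair B3

pi = G / G.sum()                    # long-run time fractions = stationary probabilities (Theorem 4)
print("pi =", pi)
print("uptime share:", pi[:3].sum())    # compare with E[T]/(E[T] + b3) from Theorem 1
```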

7. Conclusions

The probability interpretation method for LSTs and PGFs was used for the analysis of a heterogeneous, double redundant, hot-standby renewable system under the Marshall–Olkin failure model with repairs. A detailed analysis of the processes within a cycle uses the exponential character of the times between changes of the states and allows one to obtain explicit forms of the reliability and maintenance characteristics involved. The inner dependence between the components’ failures is explicitly reflected in these characteristics.
We believe that our approach revives the power of some old and recently infrequently used probability meanings of transform functions in probability analysis. We encourage their future use in contemporary research.
Markov chain characteristics also have such meanings, and this fact can be successfully used. We hope our analysis is a good example in this direction.
In our opinion, this approach can be successfully applied to studying n-component systems with various modifications of the Marshall–Olkin type of maintenance models with renewals, as well as to modeling k-out-of-n reliability systems under assumptions similar to ours.

Author Contributions

Conceptualization, B.D. and V.R.; Investigation, B.D. and T.M.; Methodology, V.R. All authors have read and agreed to the published version of the manuscript.

Funding

The publication has been prepared with the support of the “RUDN University Program 5-100” (recipients: Prof. V. Rykov, who performed the mathematical model development, and Dr. T. Milovanova, who performed the analytical calculations).

Acknowledgments

The publication has been prepared with the support of the “RUDN University Program 5-100” (recipients: V. Rykov, who performed the mathematical model development, and T. Milovanova, who performed the analytical calculations).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barlow, R.E.; Proschan, F. Statistical Theory of Reliability and Life Testing: Probability Models; To Begin With: Silver Spring, MD, USA, 1981.
  2. Marshall, A.; Olkin, I. A multivariate exponential distribution. J. Am. Stat. Assoc. 1967, 62, 30–44.
  3. Li, X.; Pellerey, F. Generalized Marshall–Olkin distributions and related bivariate aging properties. J. Multivar. Anal. 2011, 102, 1399–1409.
  4. Lin, J.; Li, X. Multivariate generalized Marshall–Olkin distributions and copulas. Methodol. Comput. Appl. Probab. 2014, 16, 53–78.
  5. Pinto, J.; Kolev, N. Extended Marshall–Olkin model and its dual version. In Springer Series in Mathematics & Statistics 141; Cherubini, U., Durante, F., Mulinacci, S., Eds.; Springer: Berlin, Germany, 2015; pp. 87–113.
  6. Kozyrev, D.; Kolev, N.; Rykov, V. Reliability Function of Renewable System under Marshall–Olkin Failure Model. Reliab. Theory Appl. 2018, 13, 39–46.
  7. Feller, W. An Introduction to Probability Theory and its Applications; John Wiley & Sons: New York, NY, USA; London, UK; Sydney, Australia, 1966; Volume II.
  8. Danielyan, E.; Dimitrov, B. Service with priorities and preparation times. In Mathematical Questions in Production Control; Moscow State University: Moscow, Russia, 1970; pp. 165–178. (In Russian)
  9. Danielyan, E.; Dimitrov, B. Service with Changing Priorities and Preparation Times; Scientific Notices of the University of Erevan; University of Erevan: Yerevan, Armenian Soviet Socialist Republic, 1971; No. 1; pp. 3–10. (In Russian)
  10. Dimitrov, B.; Danielyan, E. Several Limit Theorems in Reliability and Queueing Theory; Mathematical Aspects of Industrial Control; Moscow State University: Moscow, Russia, 1970; No. 2; pp. 179–183. (In Russian)
  11. Jaiswal, N.K. Priority Queues; Academic Press: New York, NY, USA, 1968.
  12. Kesten, H.; Runnenburg, J.T. Priority in Waiting Line Problems; Koninklijke Nederlandse Akademie van Wetenschappen: Amsterdam, The Netherlands, 1957; Volume 60, pp. 312–336.
  13. Klimov, G.P. Stochastic Queuing Systems; Nauka: Moscow, Russia, 1966. (In Russian)
  14. Gnedenko, B.; Danielyan, E.; Klimov, G.; Matveev, V.; Dimitrov, B. Prioritetnye Sistemy Obslujivania; Moscow State University: Moscow, Russia, 1973.
  15. Omey, E.; Willenkens, E. Abelian and Tauberian Theorems for the Laplace Transform of Functions in Several Variables. J. Multivar. Anal. 1989, 30, 292–306.
  16. Dimitrov, B. Asymptotic Expansions of Characteristics for Queuing Systems of the Type M/G/1; Bulgarian Academy of Sciences: Sofia, Bulgaria, 1974; Volume XV, pp. 237–263. (In Bulgarian)
