1 Introduction

Fuzziness, which exists widely in real life and in various systems, is a kind of uncertainty that is entirely different from randomness [1]. Fuzzy methods have broad application backgrounds in the study of trustworthiness, for example: (1) in fuzzy systems, the input, the output and their coupling states can be fuzzy; (2) human behaviors also exhibit various kinds of fuzziness; (3) every piece of software is unique, and the trustworthiness behaviors of software are fuzzy in nature; (4) in control systems, we may consider fuzzy control, trustworthy design and intelligent control, etc.; (5) in expert system research, it is necessary to take into consideration the consistency, completeness, independence and redundancy of knowledge bases, but it is also important to consider fuzzy expert systems and the propagation of non-trustworthiness factors. Therefore, it is significant to study fuzzy trustworthiness systems and their probability representation theory, which will greatly benefit production and everyday practice.

The construction of fuzzy trustworthiness systems and their universal approximation are hot topics in the research of fuzzy systems and fuzzy control theory. The most familiar construction of a fuzzy trustworthiness system is divided into four processes [2]: (1) fuzzification of the input variables; (2) construction of the inference relationship; (3) fuzzy inference; (4) defuzzification of the output variable set.

In the construction of fuzzy trustworthiness systems, most of the literature so far has considered only singleton fuzzification of the input variable, with fuzzy inference carried out by the CRI reasoning method [3, 4] or the triple I method [5]. The fuzzy implication operator must be determined before the fuzzy inference relationship is constructed and fuzzy reasoning is performed, and the choice of fuzzy implication operator has a huge impact on the fuzzy trustworthiness system. References [6-9] point out that the CRI reasoning method and the triple I method affect the fuzzy trustworthiness system only when conjunction-type fuzzy implication operators, such as the Mamdani implication and the Larsen implication, are used. The common defuzzification methods are: (a) the center-average defuzzification method, (b) the center-of-gravity defuzzification method, and (c) the maximum defuzzification method. So far, researchers have constructed fuzzy systems by using the center-average defuzzification method [10-14] and the maximum defuzzification method [15-17]. Because some complex integrals are involved, researchers have not yet given the specific expression of the fuzzy trustworthiness system constructed by the center-of-gravity method. How to obtain the specific expression of this fuzzy trustworthiness system is one motivation of this paper.

As we know, the fuzzy trustworthiness system constructed by the center-of-gravity method can be approximately reduced to some form of interpolation [18], and this fuzzy trustworthiness system has a meaning in probability theory: it is the best approximation of the system in the least-squares sense [19]. However, how to determine the probability density function corresponding to a fuzzy implication operator has not been resolved, and this is another motivation of this paper.

In order to obtain the probability density function corresponding to a fuzzy implication operator, we start from the input-output data and, by imposing restrictions on the fuzzy reasoning relationship, obtain the probability density functions corresponding to the bounded product implication and the Larsen square implication. We find that different fuzzy implication operators lead to different probability density functions, but the corresponding random variables have the same mathematical expectation and almost the same variance and covariance. Further, we derive the corresponding center-of-gravity fuzzy trustworthiness systems from these two probability distributions and give sufficient conditions for their universal approximation.

This article is organized as follows: Sect. 2 is the preliminary; Sect. 3 presents the probability distributions and marginal probability distributions of the bounded product implication and the Larsen square implication; Sect. 4 discusses the numerical characteristics of these two distributions; Sect. 5 gives the center-of-gravity fuzzy trustworthiness systems of these two probability distributions and sufficient conditions for their universal approximation; Sect. 6 is an overview of application analysis of fuzzy trustworthiness systems; and we conclude in Sect. 7.

2 Preliminary

Suppose \(\{(x_{i},y_{i})\}_{1\le i \le n}\) is a group of input-output data satisfying

$$\begin{aligned} a=x_{1}<x_{2}<\cdots <x_{n}=b,\quad c=y_{1}<y_{2}<\cdots <y_{n}=d, \end{aligned}$$

or

$$\begin{aligned} a=x_{1}<x_{2}<\cdots <x_{n}=b,\quad c=y_{n}<y_{n-1}<\cdots <y_{1}=d. \end{aligned}$$

We construct two-phase triangular waves (triangular membership functions) from these data, namely

$$\begin{aligned} A_{1}(x)= {\left\{ \begin{array}{ll} \displaystyle \frac{x_{2}-x}{x_{2}-x_{1}}, \quad x\in [x_{1},x_{2}]\\ 0, \quad other \end{array}\right. },\quad A_{n}(x)= {\left\{ \begin{array}{ll} \displaystyle \frac{x-x_{n-1}}{x_{n}-x_{n-1}}, \quad x\in [x_{n-1},x_{n}]\\ 0, \quad other \end{array}\right. }, \end{aligned}$$

for \(i=2,3,\ldots ,n-1\),

$$\begin{aligned} A_{i}(x)= {\left\{ \begin{array}{ll} \displaystyle \frac{x-x_{i-1}}{x_{i}-x_{i-1}}, \quad x\in [x_{i-1},x_{i}]\\ \displaystyle \frac{x_{i+1}-x}{x_{i+1}-x_{i}}, \quad x\in [x_{i},x_{i+1}]\\ 0, \quad other \end{array}\right. },\quad B_{i}(y)= {\left\{ \begin{array}{ll} \displaystyle \frac{y-y_{i-1}}{y_{i}-y_{i-1}}, \quad y\in [y_{i-1},y_{i}]\\ \displaystyle \frac{y_{i+1}-y}{y_{i+1}-y_{i}}, \quad y\in [y_{i},y_{i+1}]\\ 0, \quad other \end{array}\right. },\\ B_{1}(y)= {\left\{ \begin{array}{ll} \displaystyle \frac{y_{2}-y}{y_{2}-y_{1}}, \quad y\in [y_{1},y_{2}]\\ 0, \quad other \end{array}\right. },\quad B_{n}(y)= {\left\{ \begin{array}{ll} \displaystyle \frac{y-y_{n-1}}{y_{n}-y_{n-1}}, \quad y\in [y_{n-1},y_{n}]\\ 0, \quad other \end{array}\right. }, \end{aligned}$$

then \(x_{i}\) and \(y_{i}\) are the peak points of \(A_{i}\) and \(B_{i}\) respectively, namely \(A_{i}(x_{i})=1,B_{i}(y_{i})=1\,(i=1,2,\ldots ,n)\), and (see the computational sketch after the list below)

  • when \(x\in [x_{i},x_{i+1}]\),    \(A_{i}(x)+A_{i+1}(x)=1,\quad A_{j}(x)=0(j \ne i,i+1),\)

  • when \(y\in [y_{i},y_{i+1}]\),    \(B_{i}(y)+B_{i+1}(y)=1,\quad B_{j}(y)=0(j \ne i,i+1).\)
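
To make the construction concrete, here is a minimal computational sketch (the data values and helper names are purely illustrative assumptions) that builds the triangular membership functions \(A_{i}\) and checks the peak and partition-of-unity properties listed above.

```python
import numpy as np

def tri_memberships(nodes):
    """Build the two-phase triangular membership functions A_1, ..., A_n peaked at the nodes."""
    n = len(nodes)
    def make(i):
        def A(x):
            x = np.asarray(x, dtype=float)
            out = np.zeros_like(x)
            if i > 0:                      # rising edge on [x_{i-1}, x_i]
                m = (x >= nodes[i - 1]) & (x <= nodes[i])
                out[m] = (x[m] - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
            if i < n - 1:                  # falling edge on [x_i, x_{i+1}]
                m = (x >= nodes[i]) & (x <= nodes[i + 1])
                out[m] = (nodes[i + 1] - x[m]) / (nodes[i + 1] - nodes[i])
            return out
        return A
    return [make(i) for i in range(n)]

# illustrative (assumed) data a = x_1 < x_2 < ... < x_n = b
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
A = tri_memberships(xs)
x = np.linspace(xs[0], xs[-1], 1001)
print([float(A[i](xs[i:i + 1])[0]) for i in range(len(xs))])   # peaks: A_i(x_i) = 1
print(float(np.max(np.abs(sum(Ai(x) for Ai in A) - 1.0))))     # sum of memberships is 1 everywhere
```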

Then we have the fuzzy inference rules

$$\begin{aligned} if\ x\ is\ A_{i},\ \quad then\ y\ is\ B_{i}\quad (i=1,2,\ldots ,n). \end{aligned}$$
(1)

Let \(\theta \) be the fuzzy implication operator; then from (1) we can get the fuzzy relationship \(R(x,y)=\displaystyle \mathop {\vee }\nolimits _{i=1}^{n}\theta (A_{i}(x),B_{i}(y)).\)

Let \(q(x,y)={\left\{ \begin{array}{ll} R(x,y),(x,y)\in X\times Y\\ 0, \quad other \end{array}\right. },\) and \(H(2,n,\theta ,\vee ) \triangleq \int ^{+\infty }_{-\infty }\int ^{+\infty }_{-\infty }q(x,y)dxdy,\) when \(X=[a,b],Y=[c,d]\), we have

$$\begin{aligned} H(2,n,\theta ,\vee ) = \int ^{b}_{a}\int ^{d}_{c}R(x,y)dxdy. \end{aligned}$$
(2)

We call \(H(2,n,\theta ,\vee )\) the H function with parameters \(2,n,\theta ,\vee \), where “2” indicates that q(x,y) is a binary function, n is the number of inference rules, \(\theta \) is the fuzzy implication operator, and \(\vee =\) “max”.

If \(H(2,n,\theta ,\vee )>0\), let

$$\begin{aligned} f(x,y)\triangleq \frac{q(x,y)}{H(2,n,\theta ,\vee )}, \end{aligned}$$
(3)

Clearly, (a) \(f(x,y)\ge 0\); (b) \(\int ^{+\infty }_{-\infty }\int ^{+\infty }_{-\infty }f(x,y)dxdy=1\). So f(x,y) can be regarded as the joint probability density function of a random vector \((\xi ,\eta )\) [19].
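
As a concrete illustration of q(x,y), the H function and the density f, the following sketch evaluates \(H(2,n,\theta ,\vee )\) by numerical integration and checks that f integrates to 1. The data values are assumed, and the bounded product implication \(\theta (a,b)=(a+b-1)\vee 0\) is chosen only as an example.

```python
import numpy as np

# assumed monotone input-output data on X = [a, b], Y = [c, d]
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])
n = len(xs)

def A(i, x):  # triangular membership A_i: linear interpolation of the i-th unit vector
    return np.interp(x, xs, np.eye(n)[i])

def B(i, y):
    return np.interp(y, ys, np.eye(n)[i])

def theta(a, b):  # fuzzy implication operator; bounded product as an illustrative choice
    return np.maximum(a + b - 1.0, 0.0)

def R(x, y):  # fuzzy relationship obtained from the n inference rules
    return np.max([theta(A(i, x), B(i, y)) for i in range(n)], axis=0)

gx, gy = np.meshgrid(np.linspace(xs[0], xs[-1], 601), np.linspace(ys[0], ys[-1], 601))
H = np.trapz(np.trapz(R(gx, gy), gy[:, 0], axis=0), gx[0])   # H(2, n, theta, v), formula (2)
f = R(gx, gy) / H                                            # joint density (3) on the grid
print(H, np.trapz(np.trapz(f, gy[:, 0], axis=0), gx[0]))     # the second value is ~1
```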

Suppose R(x,y) is the fuzzy relationship determined by the inference rules (1), and \(A^{*}\) is a fuzzy singleton of the input variable, namely \(A^{*}(x^{\prime })={\left\{ \begin{array}{ll} 1,x^{\prime }=x\\ 0,x^{\prime }\ne x \end{array}\right. }\). Let \(B^{*}=A^{*}\circ R\), namely \(B^{*}(y)=\displaystyle \mathop {\vee }_{x^{\prime }\in X}(A^{*}(x^{\prime })\wedge R(x^{\prime },y))=R(x,y)\). Then

$$\begin{aligned} \bar{S}(x)=\frac{\int ^{d}_{c}yB^{*}(y)dy}{\int ^{d}_{c}B^{*}(y)dy} \end{aligned}$$
(4)

is a center-of-gravity method fuzzy trustworthiness system [18].
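
For the singleton-input case, the center-of-gravity output (4) can be evaluated directly by numerical integration, as in the sketch below (same assumed data and the same illustrative bounded product implication as in the previous sketch).

```python
import numpy as np

# center-of-gravity output (4): B*(y) = R(x, y) for a singleton input x
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])     # assumed data
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])
n = len(xs)
A = lambda i, x: np.interp(x, xs, np.eye(n)[i])
B = lambda i, y: np.interp(y, ys, np.eye(n)[i])
theta = lambda a, b: np.maximum(a + b - 1.0, 0.0)     # illustrative implication choice

def S_bar(x, ny=2001):
    y = np.linspace(ys[0], ys[-1], ny)
    Bstar = np.max([theta(A(i, x), B(i, y)) for i in range(n)], axis=0)
    return np.trapz(y * Bstar, y) / np.trapz(Bstar, y)

print([round(float(S_bar(x)), 3) for x in xs])   # compare with ys: the system smooths rather than interpolating exactly
```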

3 The Probability Distributions of Several Single-Input and Single-Output Fuzzy Trustworthiness Systems

If \(\{(x_{i},y_{i})\}_{1\le i\le n}\) satisfies:

$$\begin{aligned} a=x_{1}<x_{2}<\cdots <x_{n}=b, \quad c=y_{1}<y_{2}<\cdots <y_{n}=d. \end{aligned}$$

Establishing fuzzy relations according to the fuzzy inference rules (1) is one of the four processes of constructing a fuzzy trustworthiness system. If the traditional method is used, the fuzzy relation is \(R(x,y)=\displaystyle \mathop {\vee }\nolimits _{i=1}^{n}\theta (A_{i}(x),B_{i}(y))\). Since the fuzzy implication is a conjunction-type implication, we have the following result:

if \(x\in [x_{i},x_{i+1}]\), then \(R(x,y)=\theta (A_{i}(x),B_{i}(y))\vee \theta (A_{i+1}(x),B_{i+1}(y))\).

Because the data are monotonic, we make some adjustments to the fuzzy reasoning relationship: when \(x\in [x_{i},x_{i+1}]\), let

$$\begin{aligned} R(x,y)={\left\{ \begin{array}{ll} \theta (A_{i}(x),B_{i}(y))\vee \theta (A_{i+1}(x),B_{i+1}(y)),\quad y\in [y_{i},y_{i+1}]\\ 0, \quad other. \end{array}\right. } \end{aligned}$$
(5)

We discuss the reasoning relationship determined by (5); then we have the following probabilistic results:

Theorem 1

If \(\theta (a,b)=T_{m}(a,b)=(a+b-1)\vee 0\), let \(T=\displaystyle \frac{1}{3}\displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\), \(I_{i}=[x_{i},x_{i+1}]\), \(J_{i}=[y_{i},y_{i+1}]\), \(D_{i}=\{(x,y)\in I_{i}\times J_{i} \mid y\le A_{i+1}(x)y_{i}+A_{i}(x)y_{i+1}\}\) and \(E_{i}=I_{i}\times J_{i}-D_{i}\), then

  (1) The probability density function f(x,y) (called the bounded product distribution) determined by formula (3) is

    $$\begin{aligned} f(x,y)={\left\{ \begin{array}{ll} \displaystyle \frac{1}{T}(A_{i}(x)+B_{i}(y)-1),\quad (x,y)\in D_{i}\\ \displaystyle \frac{1}{T}(A_{i+1}(x)+B_{i+1}(y)-1),\quad (x,y)\in E_{i}\\ 0, \quad other \end{array}\right. } \end{aligned}$$
    (6)
  (2) The marginal density functions of f(x,y) are

    $$\begin{aligned} f_{\xi }(x)=\displaystyle \frac{1}{2T}(y_{i+1}-y_{i})(1-2A_{i}(x)A_{i+1}(x)),\quad x\in [x_{i},x_{i+1}]\ (i=1,2,\ldots ,n-1).\nonumber \\ \end{aligned}$$
    (7)
    $$\begin{aligned} f_{\eta }(y)=\displaystyle \frac{1}{2T}(x_{i+1}-x_{i})(1-2B_{i}(y)B_{i+1}(y)),\quad y\in [y_{i},y_{i+1}]\ (i=1,2,\ldots ,n-1).\nonumber \\ \end{aligned}$$
    (8)

Proof

(1) Because

$$\begin{aligned} B^{*}(y)= & {} (A_{i}(x)+B_{i}(y)-1)\vee (A_{i+1}(x)+B_{i+1}(y)-1)\vee 0\\= & {} (A_{i}(x)+B_{i}(y)-1)\vee (A_{i+1}(x)+B_{i+1}(y)-1), \end{aligned}$$

then

$$\begin{aligned} H(2,n,\theta ,\vee )= & {} \displaystyle \mathop {\sum }_{i=1}^{n-1}\int ^{x_{i+1}}_{x_{i}}\int ^{y_{i+1}}_{y_{i}}[(A_{i}(x)+B_{i}(y)-1)\vee (A_{i+1}(x)\\&+\,B_{i+1}(y)-1)]dydx. \end{aligned}$$

If \(A_{i}(x)+B_{i}(y)\ge 1\), then \(A_{i+1}(x)+B_{i+1}(y)\le 1\) and \(B_{i}(y)\ge 1-A_{i}(x)=A_{i+1}(x)\), namely \(\displaystyle \frac{y_{i+1}-y}{y_{i+1}-y_{i}}\ge A_{i+1}(x)\). Then \(y\le A_{i+1}(x)y_{i}+A_{i}(x)y_{i+1}\triangleq y^{*}_{i}\). So

$$\begin{aligned} \triangle _{i}&=\int ^{x_{i+1}}_{x_{i}}\int ^{y_{i+1}}_{y_{i}}[(A_{i}(x)+B_{i}(y)-1)\vee (A_{i+1}(x)+B_{i+1}(y)-1)]dydx\\&=\int ^{x_{i+1}}_{x_{i}}\left[ \int ^{y^{*}_{i}}_{y_{i}}(A_{i}(x)+B_{i}(y)-1)dy+\int ^{y_{i+1}}_{y^{*}_{i}}(A_{i+1}(x)+B_{i+1}(y)-1)dy\right] dx\\&=(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}(\displaystyle \frac{1}{2}-A_{i}(x)A_{i+1}(x))dx. \end{aligned}$$

Let \(A_{i+1}(x)=\displaystyle \frac{x-x_{i}}{x_{i+1}-x_{i}}=t\), then \(A_{i}(x)=1-t, dx=(x_{i+1}-x_{i})dt\). So

$$\begin{aligned} {\varDelta }_{i}=(x_{i+1}-x_{i})(y_{i+1}-y_{i})\int ^{1}_{0}\left[ \displaystyle \frac{1}{2}-t(1-t)\right] dt=\displaystyle \frac{1}{3}(x_{i+1}-x_{i})(y_{i+1}-y_{i}). \end{aligned}$$

Then \(H(2,n,\theta ,\vee )=\displaystyle \frac{1}{3}\sum \nolimits _{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\triangleq T.\) And we have

$$\begin{aligned} f(x,y)={\left\{ \begin{array}{ll} \displaystyle \frac{1}{T}(A_{i}(x)+B_{i}(y)-1),\quad \exists i,(x,y)\in D_{i}\\ \displaystyle \frac{1}{T}(A_{i+1}(x)+B_{i+1}(y)-1),\quad \exists i,(x,y)\in E_{i}\\ 0, \quad other\end{array}\right. } \end{aligned}$$

(2)

$$\begin{aligned} f_{\xi }(x)&=\displaystyle \frac{1}{T}\left[ \int ^{y^{*}_{i}}_{y_{i}}(A_{i}(x)+B_{i}(y)-1)dy+\int _{y^{*}_{i}}^{y_{i+1}}(A_{i+1}(x)+B_{i+1}(y)-1)dy\right] \\&=\displaystyle \frac{1}{T}\left[ (A_{i}(x)-1)(y^{*}_{i}-y_{i})+\int ^{y^{*}_{i}}_{y_{i}}B_{i}(y)dy+(A_{i+1}(x)-1)(y_{i+1}-y^{*}_{i})\right. \\&\quad \,\,\left. +\,\int _{y^{*}_{i}}^{y_{i+1}}B_{i+1}(y)dy\right] . \end{aligned}$$

Because \(y^{*}_{i}-y_{i}=(y_{i+1}-y_{i})A_{i}(x),y_{i+1}-y_{i}^{*}=(y_{i+1}-y_{i})A_{i+1}(x)\), we substitute \(B_{i+1}(y)=\displaystyle \frac{y-y_{i}}{y_{i+1}-y_{i}}=t\), and then we get

$$\begin{aligned} f_{\xi }(x)= & {} \displaystyle \frac{y_{i+1}-y_{i}}{T}\left[ (A_{i}(x)-1)A_{i}(x)+\int ^{A_{i}(x)}_{0}(1-t)dt+(A_{i+1}(x)-1)A_{i+1}(x)\right. \\&\left. +\int ^{1}_{A_{i}(x)}tdt\right] \\= & {} \displaystyle \frac{y_{i+1}-y_{i}}{T}\left[ A^{2}_{i}(x)-A_{i}(x)+A_{i}(x)-\displaystyle \frac{1}{2}A^{2}_{i}(x)+A^{2}_{i+1}(x)-A_{i+1}(x)\right. \\&\left. +\displaystyle \frac{1}{2}(1-A^{2}_{i}(x))\right] \\= & {} \displaystyle \frac{y_{i+1}-y_{i}}{2T}[1-2A_{i}(x)A_{i+1}(x)]. \end{aligned}$$

The others can be proved similarly.\(\square \)
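
A quick numerical check of Theorem 1 on assumed data: the sketch below evaluates the density (6) block by block, confirms that it integrates to 1 with \(T=\frac{1}{3}\sum _{i}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\), and compares the marginal formula (7) with a numerically integrated marginal at one test point. Data values and grid sizes are illustrative assumptions.

```python
import numpy as np

xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])       # assumed monotone data
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])
T = np.sum(np.diff(xs) * np.diff(ys)) / 3.0

def f_joint(x, y):
    """Bounded product density (6), built block by block from the adjusted relation (5)."""
    i = min(np.searchsorted(xs, x, side="right") - 1, len(xs) - 2)
    if not (ys[i] <= y <= ys[i + 1]):
        return 0.0
    Ai = (xs[i + 1] - x) / (xs[i + 1] - xs[i]); Ai1 = 1.0 - Ai
    Bi = (ys[i + 1] - y) / (ys[i + 1] - ys[i]); Bi1 = 1.0 - Bi
    return max(Ai + Bi - 1.0, Ai1 + Bi1 - 1.0, 0.0) / T

gx = np.linspace(xs[0], xs[-1], 401)
gy = np.linspace(ys[0], ys[-1], 1001)
total = np.trapz([np.trapz([f_joint(x, y) for y in gy], gy) for x in gx], gx)

x0 = 0.9                                        # a test point in (x_2, x_3)
i = np.searchsorted(xs, x0) - 1
Ai = (xs[i + 1] - x0) / (xs[i + 1] - xs[i])
marg_formula = (ys[i + 1] - ys[i]) * (1.0 - 2.0 * Ai * (1.0 - Ai)) / (2.0 * T)
marg_numeric = np.trapz([f_joint(x0, y) for y in gy], gy)
print(round(total, 4), round(marg_formula, 4), round(marg_numeric, 4))   # ~1, then two matching values
```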

Theorem 2

If \(\theta (a,b)=a^{2}b\), then

  (1) The probability density function f(x,y) (called the Larsen square distribution) determined by formula (3) is

    $$\begin{aligned} f(x,y)={\left\{ \begin{array}{ll} \displaystyle \frac{1}{C}A^{2}_{i}(x)B_{i}(y),\quad \exists i,(x,y)\in F_{i}\\ \displaystyle \frac{1}{C}A^{2}_{i+1}(x)B_{i+1}(y),\quad \exists i,(x,y)\in G_{i}\\ 0, \quad other \end{array}\right. } \end{aligned}$$
    (9)
  (2) The marginal density functions of f(x,y) are: for \(x\in [x_{i},x_{i+1}]\ (i=1,2,\ldots ,n-1)\),

    $$\begin{aligned} f_{\xi }(x)= & {} \displaystyle \frac{1}{2C}(y_{i+1}-y_{i})\displaystyle \frac{(1-A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}{1-2A_{i}(x)A_{i+1}(x)}, \quad \end{aligned}$$
    (10)
    $$\begin{aligned} f_{\eta }(y)= & {} \displaystyle \frac{1}{3C}(x_{i+1}-x_{i})\left( 1-\displaystyle \frac{B_{i}(y)B_{i+1}(y)}{1+2\sqrt{B_{i}(y)B_{i+1}(y)}}\right) (y\in [y_{i},y_{i+1}]),\qquad \end{aligned}$$
    (11)

    where \(F_{i}=\left\{ (x,y)\in I_{i}\times J_{i}\,|\,y\le \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)} y_{i}+ \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}y_{i+1}\right\} \), \(G_{i}=(I_{i}\times J_{i})-F_{i}\), \(K=\frac{1}{2}-\frac{1}{16}\pi \), and \(C=K\displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i}).\)

Proof

(1) \(H(2,n,\theta ,\vee )=\int ^{b}_{a}\int ^{d}_{c}p(x,y)dydx\), here when \(x\in [x_{i},x_{i+1}]\), we have \(p(x,y)=A^{2}_{i}(x)B_{i}(y)\vee A^{2}_{i+1}(x)B_{i+1}(y).\)

Then \(H(2,n,\theta ,\vee )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\int ^{x_{i+1}}_{x_{i}}\int ^{y_{i+1}}_{y_{i}}(A^{2}_{i}(x)B_{i}(y)\vee A^{2}_{i+1}(x)B_{i+1}(y))dydx\).

For \(x\in [x_{i},x_{i+1}]\), when \(A^{2}_{i}(x)B_{i}(y)\ge A^{2}_{i+1}(x)B_{i+1}(y)=A^{2}_{i}(x)(1-B_{i}(y))\), then

$$\begin{aligned} B_{i}(y)=\displaystyle \frac{y_{i+1}-y}{y_{i+1}-y_{i}}\ge \displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}, \end{aligned}$$

so

$$\begin{aligned} y\le \displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}y_{i}+\displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}y_{i+1}\triangleq \widetilde{y}_{i}. \end{aligned}$$

Then \( {\varDelta }=\int ^{y_{i+1}}_{y_{i}}(A^{2}_{i}(x)B_{i}(y)\vee A^{2}_{i+1}(x)B_{i+1}(y))dy=\int _{y_{i}}^{\widetilde{y}_{i}}A^{2}_{i}(x)B_{i}(y)dy+\int ^{y_{i+1}}_{\widetilde{y}_{i}}A^{2}_{i+1}(x)B_{i+1}(y)dy=A^{2}_{i}(x)\int _{y_{i}}^{\widetilde{y}_{i}}B_{i}(y)dy+A^{2}_{i+1}(x)\int ^{y_{i+1}}_{\widetilde{y}_{i}}B_{i+1}(y)dy.\)

$$\begin{aligned} \int _{y_{i}}^{\widetilde{y}_{i}}B_{i}(y)dy=\displaystyle \frac{1}{y_{i+1}-y_{i}}\left[ y_{i+1}y-\displaystyle \frac{1}{2}y^{2}\right] \mid _{y_{i}}^{\widetilde{y_{i}}}=\displaystyle \frac{(\widetilde{y}_{i}-y_{i})(2y_{i+1}-y_{i}-\widetilde{y}_{i})}{2(y_{i+1}-y_{i})}. \end{aligned}$$

And

$$\begin{aligned} \widetilde{y}_{i}-y_{i}= & {} \left( \displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}-1\right) y_{i}+\displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}y_{i+1}\\= & {} \displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+\,A^{2}_{i+1}(x)}(y_{i+1}-y_{i}).\\&2y_{i+1}-y_{i}-\widetilde{y}_{i}=(y_{i+1}-y_{i})+y_{i+1}-\widetilde{y}_{i}=y_{i+1}-y_{i}\\&+\displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}(y_{i+1}-y_{i})\\= & {} \displaystyle \frac{A^{2}_{i}(x)+2A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}(y_{i+1}-y_{i}). \end{aligned}$$

So

$$\begin{aligned} \int _{y_{i}}^{\widetilde{y}_{i}}B_{i}(y)dy= & {} \displaystyle \frac{A^{2}_{i}(x)(A^{2}_{i}(x)+2A^{2}_{i+1}(x))}{2(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}(y_{i+1}-y_{i}).\\ \int ^{y_{i+1}}_{\widetilde{y}_{i}}B_{i+1}(y)dy= & {} \displaystyle \frac{1}{y_{i+1}-y_{i}}\int ^{y_{i+1}}_{\widetilde{y}_{i}}(y-y_{i})dy=\displaystyle \frac{(y_{i+1}-\widetilde{y}_{i})(y_{i+1}-2y_{i}+\widetilde{y}_{i})}{2(y_{i+1}-y_{i})}. \end{aligned}$$

And

$$\begin{aligned} y_{i+1}-\widetilde{y}_{i}= & {} \displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}(y_{i+1}-y_{i}).\\ y_{i+1}-2y_{i}+\widetilde{y}_{i}= & {} (y_{i+1}-y_{i})+\widetilde{y}_{i}-y_{i}=(y_{i+1}-y_{i})\left[ 1+\displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\right] \\= & {} \displaystyle \frac{2A^{2}_{i}(x)+A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}(y_{i+1}-y_{i}). \end{aligned}$$

So \(\int ^{y_{i+1}}_{\widetilde{y}_{i}}B_{i+1}(y)dy=\displaystyle \frac{A^{2}_{i+1}(x)(2A^{2}_{i}(x)+A^{2}_{i+1}(x))}{2(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}(y_{i+1}-y_{i})\).

Then

$$\begin{aligned} {\varDelta }= & {} \left[ \displaystyle \frac{A^{4}_{i}(x)(A^{2}_{i}(x)+2A^{2}_{i+1}(x))}{2(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}+ \displaystyle \frac{A^{4}_{i+1}(x)(2A^{2}_{i}(x)+A^{2}_{i+1}(x))}{2(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}\right] (y_{i+1}-y_{i})\\= & {} \displaystyle \frac{y_{i+1}-y_{i}}{2}\bullet \displaystyle \frac{A^{6}_{i}(x)+A^{6}_{i+1}(x)+ 2A^{2}_{i}(x)A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))}{(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}} \end{aligned}$$

so

$$\begin{aligned} {\varDelta }^{*}&=\int ^{x_{i+1}}_{x_{i}}\int ^{y_{i+1}}_{y_{i}}(A^{2}_{i}(x)B_{i}(y)\vee A^{2}_{i+1}(x)B_{i+1}(y))dydx\\ {}&= \displaystyle \frac{y_{i+1}-y_{i}}{2}\int ^{x_{i+1}}_{x_{i}}\displaystyle \frac{A^{6}_{i}(x)+A^{6}_{i+1}(x)+ 2A^{2}_{i}(x)A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))}{(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}dx. \end{aligned}$$

Let \(A_{i+1}(x)=\displaystyle \frac{x-x_{i}}{x_{i+1}-x_{i}}=t\), then \(dx=(x_{i+1}-x_{i})dt, A_{i}(x)=1-t.\) Then

$$\begin{aligned} {\varDelta }^{*}=\displaystyle \frac{(x_{i+1}-x_{i})(y_{i+1}-y_{i})}{2}\int ^{1}_{0}\displaystyle \frac{t^{6}+(1-t)^{6}+ 2(1-t)^{2}t^{2}((1-t)^{2}+t^{2})}{((1-t)^{2}+t^{2})^{2}}dt \end{aligned}$$

It is easy to calculate that

$$\begin{aligned} K=\displaystyle \frac{1}{2}\int ^{1}_{0}\displaystyle \frac{t^{6}+(1-t)^{6}+ 2(1-t)^{2}t^{2}((1-t)^{2}+t^{2})}{((1-t)^{2}+t^{2})^{2}}dt=\displaystyle \frac{1}{2}-\displaystyle \frac{1}{16}\pi . \end{aligned}$$

Then \(H(2,n,\theta ,\vee )=K\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\triangleq C.\) So,

$$\begin{aligned} f(x,y)={\left\{ \begin{array}{ll} \displaystyle \frac{1}{C}A_{i}^{2}(x)B_{i}(y),\quad \exists i,(x,y)\in F_{i}\\ \displaystyle \frac{1}{C}A_{i+1}^{2}(x)B_{i+1}(y),\quad \exists i,(x,y)\in G_{i}\\ 0, \quad other\end{array}\right. } \end{aligned}$$

here \(F_{i}=\left\{ (x,y)\in I_{i}\times J_{i}|y\le \displaystyle \frac{A_{i+1}^{2}(x)}{A_{i}^{2}(x)+A_{i+1}^{2}(x)}y_{i}+\displaystyle \frac{A_{i}^{2}(x)}{A_{i}^{2}(x)+A_{i+1}^{2}(x)}y_{i+1}\right\} ,\) \(G_{i}=(I_{i}\times J_{i})-F_{i}.\)

(2)

$$\begin{aligned} f_{\xi }(x)&=\displaystyle \frac{1}{C}\left[ \int _{y_{i}}^{\widetilde{y}_{i}}A^{2}_{i}(x)B_{i}(y)dy+\int _{\widetilde{y}_{i}}^{{y}_{i+1}}A^{2}_{i+1}(x)B_{i+1}(y)dy\right] \\&=\displaystyle \frac{1}{C}\left[ A^{2}_{i}(x)\int _{y_{i}}^{\widetilde{y}_{i}}B_{i}(y)dy+A^{2}_{i+1}(x)\int _{\widetilde{y}_{i}}^{{y}_{i+1}}B_{i+1}(y)dy\right] , \end{aligned}$$

where \(\widetilde{y}_{i}=\displaystyle \frac{A^{2}_{i+1}(x)y_{i}+A^{2}_{i}(x)y_{i+1}}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}.\) Let \(B_{i+1}(y)=\displaystyle \frac{y-y_{i}}{y_{i+1}-y_{i}}=t,\) then \(dy=(y_{i+1}-y_{i})dt, B_{i}(y)=1-t; \) when \( y=y_{i}, t=0;\) when \( y=y_{i+1}, t=1;\) when \( y=\widetilde{y}_{i}, t=\displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\triangleq A^{*}_{i}(x).\)

Let \(A^{*}_{i+1}(x)=\displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\), then \(A^{*}_{i}(x)+A^{*}_{i+1}(x)=1\).

$$\begin{aligned} f_{\xi }(x)&=\displaystyle \frac{y_{i+1}-y_{i}}{C}[A^{2}_{i}(x)\int ^{A^{*}_{i}(x)}_{0}(1-t)dt+A^{2}_{i+1}(x)\int _{A^{*}_{i}(x)}^{1}tdt]\\&=\displaystyle \frac{y_{i+1}-y_{i}}{2C}[A^{2}_{i}(x)A^{*}_{i}(x)(1+A^{*}_{i+1}(x))+A^{2}_{i+1}(x)A^{*}_{i+1}(x)(1+A^{*}_{i}(x))]\\&=\displaystyle \frac{1}{2C}(y_{i+1}-y_{i})\displaystyle \frac{A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\\&=\displaystyle \frac{y_{i+1}-y_{i}}{2C}\cdot \displaystyle \frac{(1-A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}{1-2A_{i}(x)A_{i+1}(x)}. \end{aligned}$$

Because \((x,y){\in } F_{i}\Leftrightarrow y \le \displaystyle \frac{A^{2}_{i+1}(x)y_{i}+A^{2}_{i}(x)y_{i+1}}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\Leftrightarrow B_{i+1}(y)\le \displaystyle \frac{A^{2}_{i}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\Leftrightarrow \displaystyle \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)}\le \displaystyle \frac{B_{i}(y)}{B_{i+1}(y)}\Leftrightarrow \displaystyle \frac{A_{i+1}(x)}{1-A_{i+1}(x)} \le \displaystyle \frac{\sqrt{B_{i}(y)}}{\sqrt{B_{i+1}(y)}}\Leftrightarrow A_{i+1}(x)\le \displaystyle \frac{\sqrt{B_{i}(y)}}{\sqrt{B_{i}(y)}+\sqrt{B_{i+1}(y)}}\Leftrightarrow x\le \displaystyle \frac{\sqrt{B_{i}(y)}}{\sqrt{B_{i}(y)}+\sqrt{B_{i+1}(y)}}x_{i+1}+\displaystyle \frac{\sqrt{B_{i+1}(y)}}{\sqrt{B_{i}(y)}+\sqrt{B_{i+1}(y)}}x_{i}\triangleq \widetilde{x}_{i}.\)

Let \(\widetilde{B}_{i}(y)=\displaystyle \frac{\sqrt{B_{i}(y)}}{\sqrt{B_{i}(y)}+\sqrt{B_{i+1}(y)}}\), then

$$\begin{aligned} f_{\eta }(y)&=\displaystyle \frac{1}{C}\left[ \int ^{\widetilde{x}_{i}}_{x_{i}}A^{2}_{i}(x)B_{i}(y)dx+\int _{\widetilde{x}_{i}}^{x_{i+1}}A^{2}_{i+1}(x)B_{i+1}(y)dx \right] \\&=\displaystyle \frac{1}{C}\left[ B_{i}(y)\int ^{\widetilde{x}_{i}}_{x_{i}}A^{2}_{i}(x)dx+B_{i+1}(y)\int _{\widetilde{x}_{i}}^{x_{i+1}}A^{2}_{i+1}(x)dx \right] . \end{aligned}$$

Let \(A_{i+1}(x)=t\), then \(\widetilde{x}_{i}-x_{i}=\widetilde{B}_{i}(y)(x_{i+1}-x_{i}).\) So

$$\begin{aligned} f_{\eta }(y)&=\displaystyle \frac{x_{i+1}-x_{i}}{C}\left[ B_{i}(y)\int ^{\widetilde{B}_{i}(y)}_{0}(1-t)^{2}dt+B_{i+1}(y)\int _{\widetilde{B}_{i}(y)}^{1}t^{2}dt \right] \\&=\frac{x_{i+1}-x_{i}}{3C}\left\{ B_{i}(y)[1-\widetilde{B}^{3}_{i+1}(y)]+B_{i+1}(y)[1-\widetilde{B}^{3}_{i}(y)]\right\} \\&=\frac{1}{3C}(x_{i+1}-x_{i})\left( 1-\displaystyle \frac{B_{i}(y)B_{i+1}(y)}{1+2\sqrt{B_{i}(y)B_{i+1}(y)}}\right) . \end{aligned}$$

\(\square \)

Notes 1: In fact, Theorems 1 and 2 establish a method to construct probability distributions from known data structures.
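
As a small numerical sanity check of the constant \(K=\frac{1}{2}-\frac{1}{16}\pi \) appearing in Theorem 2, the integral from the proof can be evaluated by quadrature (a sketch; the grid size is an arbitrary assumption).

```python
import numpy as np

# numerical check of K = 1/2 - pi/16 from the proof of Theorem 2
t = np.linspace(0.0, 1.0, 200001)
num = t**6 + (1 - t)**6 + 2 * (1 - t)**2 * t**2 * ((1 - t)**2 + t**2)
den = ((1 - t)**2 + t**2)**2
K_numeric = 0.5 * np.trapz(num / den, t)
print(K_numeric, 0.5 - np.pi / 16)    # both approximately 0.3037
```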

4 The Numerical Characteristics of the Two Distributions

This section studies the numerical characteristics of the two distributions described in Sect. 3, including the mathematical expectation, variance and covariance. We have the following results.

Theorem 3

Suppose \((\xi ,\eta )\) obeys the bounded product distribution, then

$$\begin{aligned}&\begin{aligned} (1) E(\xi )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i},\quad E(\eta )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}, \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned}&\begin{aligned} (2) D(\xi )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) ^{2},\quad D(\eta )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) ^{2}, \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}&\begin{aligned} (3) \mathrm {Cov} (\xi ,\eta )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) \left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) , \end{aligned} \end{aligned}$$
(14)

where

$$\begin{aligned} \omega _{i}= & {} \displaystyle \frac{(y_{i+1}-y_{i})(x_{i+1}-x_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(y_{k+1}-y_{k})(x_{k+1}-x_{k})},\quad \overline{x}_{i}=\displaystyle \frac{1}{2}(x_{i+1}+x_{i}),\quad \overline{y}_{i}=\displaystyle \frac{1}{2}(y_{i+1}+y_{i}), \\ \overline{z}_{i}= & {} \frac{3}{5}\left( \displaystyle \frac{x_{i+1}y_{i+1}+x_{i}y_{i}}{2}\right) +\frac{2}{5}\left( \displaystyle \frac{x_{i+1}y_{i}+x_{i}y_{i+1}}{2}\right) . \end{aligned}$$

Proof

(1) \(E(\xi )=\displaystyle \frac{1}{2T} \mathop {\sum }\nolimits _{i=1}^{n-1}(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}x(1-2A_{i}(x)A_{i+1}(x))dx\)

$$\begin{aligned} \displaystyle \int ^{x_{i+1}}_{x_{i}}x(1-2A_{i}(x)A_{i+1}(x))dx= & {} (x_{i+1}-x_{i})\int ^{1}_{0}[x_{i}+(x_{i+1}-x_{i})t](1-2t+2t^{2})dt\\= & {} (x_{i+1}-x_{i})[x_{i}\int ^{1}_{0}(1-2t+2t^{2})dt\\&+\,(x_{i+1}-x_{i})\int ^{1}_{0}(t-2t^{2}+2t^{3})dt]\\= & {} \frac{1}{3}(x_{i+1}-x_{i})(x_{i+1}+x_{i}). \end{aligned}$$

Then

$$\begin{aligned} E(\xi )= & {} \displaystyle \frac{1}{2T}\cdot \frac{1}{3} \mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})(x_{i+1}-x_{i})(x_{i+1}+x_{i})\\= & {} \mathop {\sum }_{i=1}^{n-1}\frac{(y_{i+1}-y_{i})(x_{i+1}-x_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(y_{k+1}-y_{k})(x_{k+1}-x_{k})}\cdot \frac{(x_{i+1}+x_{i})}{2} =\mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}\overline{x}_{i}. \end{aligned}$$

Similarly,

$$\begin{aligned} E(\eta )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i} \end{aligned}$$

(2) \(E(\xi ^{2})=\displaystyle \frac{1}{2T}\displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}x^{2}(1-2A_{i}(x)A_{i+1}(x))dx\)

$$\begin{aligned} \displaystyle \int ^{x_{i+1}}_{x_{i}}x^{2}(1-2A_{i}(x)A_{i+1}(x))dx= & {} (x_{i+1}-x_{i})\int ^{1}_{0}[x_{i}+(x_{i+1}-x_{i})t]^{2}(1-2t+2t^{2})dt\\= & {} (x_{i+1}-x_{i})\int ^{1}_{0}[x^{2}_{i}+2x_{i}(x_{i+1}-x_{i})t\\&+\,(x_{i+1}-x_{i})^{2}t^{2}](1-2t+2t^{2})dt\\= & {} (x_{i+1}-x_{i})[x^{2}_{i}\int ^{1}_{0}(1-2t+2t^{2})dt\\&+\,2x_{i}(x_{i+1}-x_{i})\int ^{1}_{0}(t-2t^{2}+2t^{3})dt\\&+\,(x_{i+1}-x_{i})^{2}\int ^{1}_{0}(t^{2}-2t^{3}+2t^{4})dt]\\= & {} \frac{2}{3}(x_{i+1}-x_{i})x_{i}x_{i+1}+\frac{7}{30}(x_{i+1}-x_{i})^{3}. \end{aligned}$$

Then

$$\begin{aligned} E(\xi ^{2})= & {} \displaystyle \frac{1}{2T}\cdot \frac{2}{3} \mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})(x_{i+1}-x_{i})x_{i}x_{i+1}\\&+\,\frac{7}{30}\cdot \frac{1}{2T}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})(x_{i+1}-x_{i})^{3}\\= & {} \mathop {\sum }_{i=1}^{n-1}\frac{(y_{i+1}-y_{i})(x_{i+1}-x_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(y_{k+1}-y_{k})(x_{k+1}-x_{k})}x_{i}x_{i+1}\\&+\frac{7}{20}\mathop {\sum }_{i=1}^{n-1}\frac{(y_{i+1}-y_{i})(x_{i+1}-x_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(y_{k+1}-y_{k})(x_{k+1}-x_{k})}(x_{i+1}-x_{i})^{2}\\= & {} \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}+\frac{7}{20}\mathop {\sum }_{i=1}^{n-1}\omega _{i}(x_{i+1}-x_{i})^{2}. \end{aligned}$$

Let \(\displaystyle {\varDelta }_{n}=\displaystyle \mathop {\max }_{1\le i\le n-1}(x_{i+1}-x_{i})^{2}.\) Then

$$\begin{aligned} \frac{7}{20}\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}(x_{i+1}-x_{i})^{2}\le \frac{7}{20}{\varDelta }_{n}. \end{aligned}$$

When n is sufficiently large, and \({\varDelta }_{n}\) is sufficiently small,

$$\begin{aligned} \displaystyle E(\xi ^{2})\approx \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}. \end{aligned}$$

Then

$$\begin{aligned} D(\xi )\approx \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}-\left( \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) ^{2}. \end{aligned}$$

In a similar way, \(E(\eta ^{2})=\displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}+\frac{7}{20}\displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}(y_{i+1}-y_{i})^{2}\approx \displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}.\)

Then \(D(\eta )\approx \displaystyle \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}-\left( \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) ^{2}.\)

$$\begin{aligned} E(\xi \eta )= & {} \displaystyle \mathop {\sum }_{i=1}^{n-1}\mathop {\int \int }_{I_{i}\times J_{i}}xyf(x,y)dydx=\mathop {\sum }_{i=1}^{n-1}\int ^{x_{i+1}}_{x_{i}}x\int ^{y_{i+1}}_{y_{i}}yf(x,y)dydx\\= & {} \mathop {\sum }_{i=1}^{n-1}\frac{1}{T}\int ^{x_{i+1}}_{x_{i}}x\left[ \int ^{y^{*}_{i}}_{y_{i}}y(A_{i}(x)+B_{i}(y)-1)dy+\int _{y^{*}_{i}}^{y_{i+1}}y(A_{i+1}(x)\right. \\&\left. +\,B_{i+1}(y)-1)dy\right] dx\\&\int ^{y^{*}_{i}}_{y_{i}}y(A_{i}(x)+B_{i}(y)-1)dy=(y_{i+1}-y_{i})\\&\times \int ^{A_{i}(x)}_{0}[y_{i}+(y_{i+1}-y_{i})t][A_{i}(x)-1+1-t]dt\\= & {} (y_{i+1}-y_{i})[A_{i}(x)\int ^{A_{i}(x)}_{0}(y_{i}+(y_{i+1}-y_{i})t)dt\\&-\int ^{A_{i}(x)}_{0}(y_{i}t+(y_{i+1}-y_{i})t^{2})dt]\\= & {} (y_{i+1}-y_{i})\left[ \frac{1}{2}y_{i}A^{2}_{i}(x)+\frac{1}{6}(y_{i+1}-y_{i})A^{3}_{i}(x)\right] \\&\int _{y^{*}_{i}}^{y_{i+1}}y(A_{i+1}(x)+B_{i+1}(y)-1)dy=(y_{i+1}-y_{i})\\&\times \int ^{1}_{A_{i}(x)}[y_{i}+(y_{i+1}-y_{i})t][t-A_{i}(x)]dt\\= & {} (y_{i+1}-y_{i})\left[ \int ^{1}_{A_{i}(x)}y_{i}tdt\,-y_{i}A_{i}(x)(1-A_{i}(x))\right. \\&\left. +\int ^{1}_{A_{i}(x)}(y_{i+1}-y_{i})(t^{2}-A_{i}(x)t)dt\right] \\= & {} (y_{i+1}-y_{i})\left[ \frac{1}{3}y_{i+1}+\frac{1}{6}y_{i}+\frac{1}{2}y_{i}A^{2}_{i}(x)\right. \\&\left. -\frac{1}{2}(y_{i+1}+y_{i})A_{i}(x)+\frac{1}{6}(y_{i+1}-y_{i})A^{3}_{i}(x)\right] . \end{aligned}$$

Then

$$\begin{aligned} E(\xi \eta )= & {} \displaystyle \frac{1}{T}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}x\left[ \frac{1}{2}y_{i}A^{2}_{i}(x)+\frac{1}{6}(y_{i+1}-y_{i})A^{3}_{i}(x)+\frac{1}{3} y_{i+1}+\frac{1}{6}y_{i}\right. \\&\left. +\frac{1}{2}y_{i}A^{2}_{i}(x)-\frac{1}{2}(y_{i+1}+y_{i})A_{i}(x)+\frac{1}{6}(y_{i+1}-y_{i})A^{3}_{i}(x)\right] \\= & {} \displaystyle \frac{1}{T}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}x\Bigg [y_{i}A^{2}_{i}(x)+\frac{1}{3}(y_{i+1}-y_{i})A^{3}_{i}(x)\\&-\frac{1}{2}(y_{i+1}+y_{i})A_{i}(x) +\frac{1}{3} y_{i+1}+\frac{1}{6} y_{i}\Bigg ]dx. \end{aligned}$$

Let \(A_{i}(x)=\frac{x_{i+1}-x}{x_{i+1}-x_{i}}=t.\) Then \(dx=-(x_{i+1}-x_{i})dt.\) Then

$$\begin{aligned} E(\xi \eta )= & {} \displaystyle \frac{1}{T}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})(x_{i+1}-x_{i})\int ^{1}_{0}[x_{i+1}-(x_{i+1}-x_{i})t]\Bigg [y_{i}t^{2}+\frac{1}{3}(y_{i+1}-y_{i})t^{3}\\&-\frac{1}{2}(y_{i+1}+y_{i})t+\frac{1}{3}y_{i+1}+\frac{1}{6}y_{i}\Bigg ]dt\\= & {} 3\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\Bigg [x_{i+1}\int ^{1}_{0}\left( y_{i}t^{2}+\frac{1}{3}(y_{i+1}-y_{i})t^{3}-\frac{1}{2}(y_{i+1}+y_{i})t+\frac{1}{3}y_{i+1}+\frac{1}{6}y_{i}\right) dt\\&-(x_{i+1}-x_{i})\int ^{1}_{0}\left( y_{i}t^{3}+\frac{1}{3}(y_{i+1}-y_{i})t^{4}-\frac{1}{2}(y_{i+1}+y_{i})t^{2}+\left( \frac{1}{3}y_{i+1}+\frac{1}{6}y_{i}\right) t\right) dt\Bigg ]\\= & {} 3\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\Bigg [x_{i+1}\left( \frac{1}{3}y_{i}+\frac{1}{12}(y_{i+1}-y_{i})-\frac{1}{4}(y_{i+1}+y_{i})+\frac{1}{3}y_{i+1}+\frac{1}{6}y_{i}\right) \\&-(x_{i+1}-x_{i})\left( \frac{1}{4}y_{i}+\frac{1}{15}(y_{i+1}-y_{i})-\frac{1}{6}(y_{i+1}+y_{i})+\frac{1}{6}y_{i+1}+\frac{1}{12}y_{i}\right) \Bigg ]\\= & {} 3\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\left[ \frac{1}{10}(x_{i+1}y_{i+1}+x_{i}y_{i})+\frac{1}{15}(x_{i+1}y_{i}+x_{i}y_{i+1})\right] \\= & {} \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\left[ \left( \frac{x_{i+1}y_{i+1}+x_{i}y_{i}}{2}\right) -\frac{1}{5}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\right] \\= & {} \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}. \end{aligned}$$

Then \(\mathrm {Cov}(\xi ,\eta )=E(\xi \eta )-E(\xi )E(\eta )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) \left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) .\) \(\square \)
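
The closed-form expressions of Theorem 3 can be checked against a direct grid integration of the bounded product density, as in this sketch (assumed data; grid resolution chosen arbitrarily).

```python
import numpy as np

# Theorem 3 on assumed data: closed-form moments (12)-(14) vs grid integration
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])
dx, dy = np.diff(xs), np.diff(ys)
w = dx * dy / np.sum(dx * dy)                         # weights omega_i
xbar, ybar = (xs[:-1] + xs[1:]) / 2, (ys[:-1] + ys[1:]) / 2
zbar = 0.6 * (xs[1:] * ys[1:] + xs[:-1] * ys[:-1]) / 2 + 0.4 * (xs[1:] * ys[:-1] + xs[:-1] * ys[1:]) / 2
Ex_f, Ey_f = np.sum(w * xbar), np.sum(w * ybar)
Cov_f = np.sum(w * zbar) - Ex_f * Ey_f                # formula (14)

# grid integration of the bounded product density, block by block
T = np.sum(dx * dy) / 3.0
Ex = Ey = Exy = 0.0
for i in range(len(dx)):
    gx, gy = np.meshgrid(np.linspace(xs[i], xs[i+1], 201), np.linspace(ys[i], ys[i+1], 201))
    Ai = (xs[i+1] - gx) / dx[i]; Bi = (ys[i+1] - gy) / dy[i]
    f = np.maximum(Ai + Bi - 1.0, (1 - Ai) + (1 - Bi) - 1.0) / T
    block = lambda Z: np.trapz(np.trapz(Z, gy[:, 0], axis=0), gx[0])
    Ex += block(gx * f); Ey += block(gy * f); Exy += block(gx * gy * f)

print(round(Ex_f, 4), round(Ex, 4))                   # expectation: formula (12) vs integration
print(round(Cov_f, 5), round(Exy - Ex * Ey, 5))       # covariance: formula (14) vs integration
```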

Theorem 4

Suppose \((\xi ,\eta )\) obeys the Larsen square distribution, then:

$$\begin{aligned}&\begin{aligned} (1) E(\xi )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i},E(\eta )=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}. \end{aligned} \end{aligned}$$
(15)
$$\begin{aligned}&\begin{aligned} (2) D(\xi )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) ^{2},D(\eta )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) ^{2}, \end{aligned} \end{aligned}$$
(16)
$$\begin{aligned}&\begin{aligned} (3) \mathrm {Cov} (\xi ,\eta )\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) \left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) , \end{aligned} \end{aligned}$$
(17)

where \(\displaystyle \overline{x}_{i}=\frac{1}{2}(x_{i}+x_{i+1}),\overline{y}_{i}=\frac{1}{2}(y_{i}+y_{i+1}),\overline{z}_{i}=\frac{1}{2}(x_{i}y_{i}+x_{i+1}y_{i+1}).\)

Proof

$$\begin{aligned} E(\xi )=&\displaystyle \frac{1}{2C}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}x\frac{(1-A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}{1-2A_{i}(x)A_{i+1}(x)}dx\\ =&\frac{1}{2C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\int ^{1}_{0}[x_{i}+(x_{i+1}-x_{i})t]\frac{(1-t+t^{2})(1-3t+3t^{2})}{(1-2t+2t^{2})}dt\\ =&\frac{1}{2C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\left( \frac{1}{2}-\frac{\pi }{16}\right) (x_{i+1}+x_{i})\\ =&\displaystyle \mathop {\sum }_{i=1}^{n-1}\frac{(x_{i+1}-x_{i})(y_{i+1}-y_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(x_{k+1}-x_{k})(y_{k+1}-y_{k})}\frac{x_{i+1}+x_{i}}{2}=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\\ E(\eta )=&\frac{1}{3C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})\int ^{y_{i+1}}_{y_{i}}y(1-\frac{B_{i}(y)B_{i+1}(y)}{1+2\sqrt{B_{i}(y)B_{i+1}(y)}})dy\\ =&\frac{1}{3C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\int ^{1}_{0}[y_{i}+(y_{i+1}-y_{i})t]\left[ 1-\frac{(t-t^{2})}{1+2\sqrt{(t-t^{2})}}\right] dt\\ =&\frac{1}{3C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\left( \frac{3}{4}-\frac{3}{32}\pi \right) (y_{i+1}+y_{i})\\ =&\displaystyle \mathop {\sum }_{i=1}^{n-1}\frac{(x_{i+1}-x_{i})(y_{i+1}-y_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(x_{k+1}-x_{k})(y_{k+1}-y_{k})}\frac{y_{i+1}+y_{i}}{2}=\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i} \end{aligned}$$

(2) Let \(k=K=\displaystyle \frac{1}{2}-\frac{1}{16}\pi \), then

$$\begin{aligned} E(\xi ^{2})=&\displaystyle \frac{1}{2C}\mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\\&\times \int ^{1}_{0}[x_{i}+(x_{i+1}-x_{i})t]^{2}\frac{(1-t+t^{2})(1-3t+3t^{2})}{(1-2t+2t^{2})}dt\\ =&\displaystyle \frac{1}{2C}\mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\left( -\frac{1}{8}x_{i}x_{i+1}\pi +\frac{17}{30}x_{i}x_{i+1}+\frac{13}{60}x_{i}^{2}+\frac{13}{60}x_{i+1}^{2}\right) \\ =&\displaystyle \frac{1}{2K}\mathop {\sum }_{i=1}^{n-1}\frac{(x_{i+1}-x_{i})(y_{i+1}-y_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(x_{k+1}-x_{k})(y_{k+1}-y_{k})}\left[ 2\left[ \frac{1}{2}- \frac{1}{16}\pi \right] x_{i}x_{i+1}\right. \\&+\left. \frac{13}{60}(x_{i}^{2}+x_{i+1}^{2}-2x_{i}x_{i+1})\right] \\ =&\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}+\frac{13}{120k}\mathop {\sum }_{i=1}^{n-1}\omega _{i}(x_{i+1}-x_{i})^{2}\approx \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}. \end{aligned}$$

So, \(D(\xi )=E(\xi ^{2})-(E(\xi ))^{2}\approx \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}x_{i}x_{i+1}-\left( \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) ^{2}\).

$$\begin{aligned} E(\eta ^{2})=&\frac{1}{3C}\displaystyle \mathop {\sum }_{i=1}^{n-1}(x_{i+1}-x_{i})(y_{i+1}-y_{i})\int ^{1}_{0}[y_{i}+(y_{i+1}-y_{i})t]^{2}\left[ 1-\frac{t-t^{2}}{1+2\sqrt{t-t^{2}}}\right] dt\\ =&\frac{1}{3K}\displaystyle \mathop {\sum }_{i=1}^{n-1}\frac{(x_{i+1}-x_{i})(y_{i+1}-y_{i})}{\displaystyle \mathop {\sum }\nolimits _{k=1}^{n-1}(x_{k+1}-x_{k})(y_{k+1}-y_{k})}\left[ 3\left( \frac{1}{2}-\frac{1}{16}\pi \right) y_{i}y_{i+1}\right. \\&\left. +\,3\left( \frac{5}{36}-\frac{3}{256}\pi \right) (y_{i+1}-y_{i})^{2}\right] \\ =&\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}+\left( \frac{5}{36}-\frac{3}{256}\pi \right) \frac{1}{k}\mathop {\sum }_{i=1}^{n-1}\omega _{i}(y_{i+1}-y_{i})^{2}\\ \thickapprox&\displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}. \end{aligned}$$

So \(D(\eta )=E(\eta ^{2})-(E(\eta ))^{2}\thickapprox \displaystyle \mathop {\sum }_{i=1}^{n-1}\omega _{i}y_{i}y_{i+1}-\left( \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) ^{2}\).

$$\begin{aligned} E(\xi \eta )=&\displaystyle \mathop {\sum }_{i=1}^{n-1}\int ^{x_{i+1}}_{x_{i}}\int ^{y_{i+1}}_{y_{i}}xyf(x,y)dydx\\ =&\frac{1}{C}\displaystyle \mathop {\sum }_{i=1}^{n-1}[\int ^{x_{i+1}}_{x_{i}}x\int ^{\widetilde{y}_{i}}_{y_{i}}yA^{2}_{i}(x)B_{i}(y)dydx\\&+\int ^{x_{i+1}}_{x_{i}}x\int ^{y_{i+1}}_{\widetilde{y}_{i}}yA^{2}_{i+1}(x)B_{i+1}(y)dydx]\\ \triangle _{1}=&\int ^{x_{i+1}}_{x_{i}}x\int ^{\widetilde{y}_{i}}_{y_{i}}yA^{2}_{i}(x)B_{i}(y)dydx=\int ^{x_{i+1}}_{x_{i}}xA^{2}_{i}(x)\int ^{\widetilde{y}_{i}}_{y_{i}}yB_{i}(y)dydx\\ =&\int ^{x_{i+1}}_{x_{i}}xA^{2}_{i}(x)(y_{i+1}-y_{i})\int ^{A^{*}_{i}(x)}_{0}[y_{i}+(y_{i+1}-y_{i})t](1-t)dtdx\\ =&\,(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}[xA^{2}_{i}(x)[y_{i}(A^{*}_{i}(x)-\frac{1}{2}(A^{*}_{i}(x))^{2}]+(y_{i+1}-y_{i})\\&\times \left[ \frac{1}{2}(A^{*}_{i}(x))^{2} -\frac{1}{3}(A^{*}_{i}(x))^{3}\right] ]dx\\ =&\,\frac{7}{72}x_{i+1}y_{i}+\frac{19}{144}x_{i+1}y_{i+1}+\frac{1}{16}x_{i}y_{i+1}-\frac{1}{24}x_{i}y_{i}-\frac{\pi }{128}[x_{i}y_{i+1}+3x_{i+1}y_{i}\\&+5x_{i+1}y_{i+1}-5x_{i}y_{i}].\\ \triangle _{2}=&\int ^{x_{i+1}}_{x_{i}}x\int _{\widetilde{y}_{i}}^{y_{i+1}}yA^{2}_{i+1}(x)B_{i+1}(y)dydx\\&=\int ^{x_{i+1}}_{x_{i}}xA^{2}_{i+1}(x)\int _{\widetilde{y}_{i}}^{y_{i+1}}yB_{i+1}(y)dydx\\ =&\,(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}xA^{2}_{i+1}(x)\int _{A^{*}_{i}(x)}^{1}[y_{i}+(y_{i+1}-y_{i})t]tdtdx\\ =&\,(y_{i+1}-y_{i})\int ^{x_{i+1}}_{x_{i}}xA^{2}_{i+1}(x)[\frac{1}{2}y_{i}(1-(A^{*}_{i}(x))^{2})\\&+\frac{1}{3}(y_{i+1}-y_{i})(1-(A^{*}_{i}(x))^{3})]dx\\ =&\,\frac{7}{72}x_{i}y_{i+1}+\frac{19}{144}x_{i}y_{i}+\frac{1}{16}x_{i+1}y_{i}-\frac{1}{24}x_{i+1}y_{i+1}-\frac{\pi }{128}[x_{i+1}y_{i}+3x_{i}y_{i+1}\\&+5x_{i}y_{i}-5x_{i+1}y_{i+1}]. \end{aligned}$$

Then

$$\begin{aligned} \triangle _{1}+\triangle _{2}= & {} \displaystyle \frac{1}{16}(x_{i+1}y_{i}+x_{i}y_{i+1})+\frac{7}{72}(x_{i+1}y_{i}+x_{i}y_{i+1})+\frac{9}{144}(x_{i}y_{i}+x_{i+1}y_{i+1})\\&-\frac{1}{24}(x_{i}y_{i}+x_{i+1}y_{i+1})-\frac{\pi }{128}(4x_{i}y_{i+1}+4x_{i+1}y_{i})\\= & {} \left( \displaystyle \frac{1}{4}-\frac{\pi }{32}\right) (x_{i}y_{i}+x_{i+1}y_{i+1})+\frac{23}{144}(y_{i+1}-y_{i})(x_{i+1}-x_{i}). \end{aligned}$$

Then

$$\begin{aligned} E(\xi \eta )= & {} \displaystyle \frac{1}{C}\mathop {\sum }_{i=1}^{n-1}(y_{i+1}-y_{i})(x_{i+1}-x_{i})\left[ \left( \frac{1}{4}-\frac{\pi }{32}\right) (x_{i}y_{i}+x_{i+1}y_{i+1})\right. \\&\left. +\,\frac{23}{144}(y_{i+1}-y_{i})(x_{i+1}-x_{i})\right] \\= & {} \mathop {\sum }_{i=1}^{n-1}\omega _{i}\left[ \frac{x_{i}y_{i}+x_{i+1}y_{i+1}}{2}\right] +\frac{23}{144k}(y_{i+1}-y_{i})(x_{i+1}-x_{i})\\= & {} \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}+\frac{23}{144k}\mathop {\sum }_{i=1}^{n-1}\omega _{i}(y_{i+1}-y_{i})(x_{i+1}-x_{i})\approx \mathop {\sum }_{i=1}^{n-1}\omega _{i}\overline{z}_{i}. \end{aligned}$$

So \(\displaystyle \mathrm {Cov} (\xi ,\eta )\approx \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}\overline{z}_{i}-\left( \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}\overline{x}_{i}\right) \left( \mathop {\sum }\nolimits _{i=1}^{n-1}\omega _{i}\overline{y}_{i}\right) .\) \(\square \)

Notes 2: (1) From Theorems 3 and 4 we can see that the bounded product distribution and the Larsen square distribution have the same mathematical expectation and nearly the same variance and covariance.

(2) The above results are obtained under the assumption that \(a=x_{1}<x_{2}<\ldots <x_{n}=b,\ c=y_{1}<y_{2}<\ldots <y_{n}=d\). We can get similar conclusions when \(a=x_{1}<x_{2}<\ldots <x_{n}=b,\ d=y_{1}>y_{2}>\ldots >y_{n}=c\).
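
Notes 2(1) can be illustrated numerically: the two distributions share \(E(\xi )\) and the leading term \(\sum \omega _{i}x_{i}x_{i+1}\) of \(E(\xi ^{2})\), and their second-moment correction terms (with the coefficients derived in the proofs above) both vanish as the partition is refined. A small sketch with assumed, equally spaced data:

```python
import numpy as np

# shared leading term and the two (vanishing) correction terms of E(xi^2)
K = 0.5 - np.pi / 16
for n in (5, 11, 41, 161):
    xs = np.linspace(0.0, 3.0, n); ys = np.linspace(0.0, 2.5, n)   # assumed monotone data
    dx, dy = np.diff(xs), np.diff(ys)
    w = dx * dy / np.sum(dx * dy)
    common = np.sum(w * xs[:-1] * xs[1:])
    corr_bounded = (7.0 / 20.0) * np.sum(w * dx**2)          # bounded product distribution
    corr_larsen = 13.0 / (120.0 * K) * np.sum(w * dx**2)     # Larsen square distribution
    print(n, round(common, 4), round(corr_bounded, 5), round(corr_larsen, 5))
```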

5 Center-of-Gravity Fuzzy Trustworthiness System

Let f(x,y) be one of the probability density functions obtained above. From the discussion above,

$$\begin{aligned} \overline{S}(x)=\displaystyle \frac{\int ^{d}_{c}yB^{*}(y)dy}{\int ^{d}_{c}B^{*}(y)dy}=\frac{\int ^{d}_{c}yR(x,y)dy}{\int ^{d}_{c}R(x,y)dy}=\frac{\int ^{+\infty }_{-\infty }yf(x,y)dy}{\int ^{+\infty }_{-\infty }f(x,y)dy}, \end{aligned}$$
(18)

so we have the following theorem.

Theorem 5

If \(\theta \) is the bounded product implication, then for \(x\in [x_{i},x_{i+1}]\), \(\overline{S}_{T}(x)=A^{*}_{i}(x)y_{i}+A^{*}_{i+1}(x)y_{i+1}\), where

$$\begin{aligned} A^{*}_{i}(x)= & {} \displaystyle \frac{\displaystyle \frac{2}{3}A^{3}_{i+1}(x)-A_{i+1}(x)+\frac{2}{3}}{1-2A_{i}(x)A_{i+1}(x)} \end{aligned}$$
(19)
$$\begin{aligned} A^{*}_{i+1}(x)= & {} 1-A^{*}_{i}(x)=\displaystyle \frac{\displaystyle \frac{2}{3}A^{3}_{i}(x)-A_{i}(x)+\frac{2}{3}}{1-2A_{i}(x)A_{i+1}(x)} \end{aligned}$$
(20)

and \(\overline{S}_{T}(x_{i})=\displaystyle \frac{2}{3}y_{i}+\frac{1}{3}y_{i+1},\overline{S}_{T}(x_{i+1})=\displaystyle \frac{1}{3}y_{i}+\frac{2}{3}y_{i+1}\).

Proof

From the theorem above we can know that:

$$\begin{aligned} \int ^{+\infty }_{-\infty }f(x,y)dy=f_{\xi }(x)=\frac{y_{i+1}-y_{i}}{T}\left[ \frac{1}{2}-A_{i}(x)A_{i+1}(x)\right] \end{aligned}$$

and

$$\begin{aligned} \displaystyle \int ^{+\infty }_{-\infty }yf(x,y)dy= & {} \frac{1}{T}\left[ \int ^{y^{*}_{i}}_{y_{i}}y(A_{i}(x)+B_{i}(y)-1)dy\right. \\&\left. +\,\int _{y^{*}_{i}}^{y_{i+1}} y(A_{i+1}(x)+B_{i+1}(y)-1)dy\right] \\= & {} \frac{y_{i+1}-y_{i}}{T}\left[ \left( A^{2}_{i}(x)-\frac{1}{3}A^{3}_{i}(x)-\frac{1}{2}A_{i}(x)+\frac{1}{6}\right) y_{i}\right. \\&\left. +\left( \frac{1}{3}A^{3}_{i}(x)-\frac{1}{2}A_{i}(x)+\frac{1}{3}\right) y_{i+1}\right] \end{aligned}$$

so

$$\begin{aligned} \overline{S}_{T}(x)= & {} \displaystyle \frac{\displaystyle \int ^{+\infty }_{-\infty }yf(x,y)dy}{\displaystyle \int ^{+\infty }_{-\infty }f(x,y)dy}=\displaystyle \frac{\displaystyle \left( A^{2}_{i}(x)-\displaystyle \frac{1}{3}A^{3}_{i}(x)-\displaystyle \frac{1}{2}A_{i}(x)+\frac{1}{6}\right) y_{i}}{\displaystyle \frac{1}{2}-A_{i}(x) A_{i+1}(x)}\\&+\,\displaystyle \frac{\displaystyle \left( \frac{1}{3}A^{3}_{i}(x)-\displaystyle \frac{1}{2}A_{i}(x)+\displaystyle \frac{1}{3}\right) y_{i+1}}{\displaystyle \frac{1}{2}-A_{i}(x) A_{i+1}(x)}=A^{*}_{i}(x)y_{i}+A^{*}_{i+1}(x)y_{i+1}, \end{aligned}$$

and

$$\begin{aligned} A^{*}_{i}(x)+A^{*}_{i+1}(x)= & {} \displaystyle \frac{1}{\displaystyle \frac{1}{2}-A_{i}(x)A_{i+1}(x)}\left[ A^{2}_{i}(x)-\frac{1}{3}A^{3}_{i}(x)-\frac{1}{2}A_{i}(x)\right. \\&\left. +\,\frac{1}{6}+\frac{1}{3}A^{3}_{i}(x)-\frac{1}{2}A_{i}(x)+\frac{1}{3}\right] =\frac{\displaystyle \frac{1}{2}-A_{i}(x)(1-A_{i}(x))}{\displaystyle \frac{1}{2}-A_{i}(x)A_{i+1}(x)}=1. \end{aligned}$$

And because

$$\begin{aligned}&\displaystyle A^{2}_{i}(x)-\frac{1}{3}A^{3}_{i}(x)-\frac{1}{2}A_{i}(x)+\frac{1}{6}=(1-A_{i+1}(x))^{2}-\frac{1}{3}(1-A_{i+1}(x))^{3}\\&\quad -\,\frac{1}{2}(1-A_{i+1}(x))+\frac{1}{6}= \frac{1}{3}A^{3}_{i+1}(x)-\frac{1}{2}A_{i+1}(x)+\frac{1}{3} \end{aligned}$$

we have

$$\begin{aligned} A^{*}_{i}(x)=\frac{\displaystyle \frac{2}{3}A^{3}_{i+1}(x)-A_{i+1}(x)+\frac{2}{3}}{1-2A_{i}(x)A_{i+1}(x)},\quad A^{*}_{i+1}(x)=\frac{\displaystyle \frac{2}{3}A^{3}_{i}(x)-A_{i}(x)+\frac{2}{3}}{1-2A_{i}(x)A_{i+1}(x)}. \end{aligned}$$

\(\square \)
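
The system \(\overline{S}_{T}(x)\) of Theorem 5 is straightforward to implement from (19)-(20); the sketch below (assumed data) also confirms the node values \(\overline{S}_{T}(x_{i})=\frac{2}{3}y_{i}+\frac{1}{3}y_{i+1}\).

```python
import numpy as np

# the center-of-gravity system of Theorem 5, built from (19)-(20) on assumed data
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])

def S_T(x):
    i = min(np.searchsorted(xs, x, side="right") - 1, len(xs) - 2)
    Ai = (xs[i + 1] - x) / (xs[i + 1] - xs[i]); Ai1 = 1.0 - Ai
    Astar_i = (2.0 / 3.0 * Ai1**3 - Ai1 + 2.0 / 3.0) / (1.0 - 2.0 * Ai * Ai1)   # formula (19)
    return Astar_i * ys[i] + (1.0 - Astar_i) * ys[i + 1]

print([round(float(S_T(x)), 3) for x in xs])
# at the nodes the output is (2/3)y_i + (1/3)y_{i+1}, e.g. S_T(x_1) = 0.267 for these data
```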

We study the fuzzy trustworthiness system exported by Larsen square distribution below.

Theorem 6

\(\displaystyle \overline{S}_{L^{2}}(x)=\frac{\int ^{+\infty }_{-\infty }yf(x,y)dy}{\int ^{+\infty }_{-\infty }f(x,y)dy}=C^{*}_{i}(x)y_{i}+C^{*}_{i+1}(x)y_{i+1}\), where

$$\begin{aligned} C^{*}_{i+1}(x)=\displaystyle \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))}, \end{aligned}$$
(21)

\(C^{*}_{i}(x)+C^{*}_{i+1}(x)\equiv 1\) and \(\displaystyle \overline{S}_{L^{2}}(x_{i})=\frac{2}{3}y_{i}+\frac{1}{3}y_{i+1},\overline{S}_{L^{2}}(x_{i+1})=\frac{1}{3}y_{i}+\frac{2}{3}y_{i+1}\).

Proof

$$\begin{aligned}&\displaystyle \int ^{+\infty }_{-\infty }f(x,y)dy=\int ^{y_{i+1}}_{y_{i}}f(x,y)dy\\&\quad =\frac{y_{i+1}-y_{i}}{2C}\cdot \frac{(1-A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}{1-2A_{i}(x)A_{i+1}(x)},\\&\displaystyle \int ^{+\infty }_{-\infty }yf(x,y)dy=\frac{1}{C}\left[ \int ^{\widetilde{y}_{i}}_{y_{i}}yA^{2}_{i}(x)B_{i}(y)dy+\int _{\widetilde{y}_{i}}^{y_{i+1}}yA^{2}_{i+1}(x)B_{i+1}(y)dy\right] \\&\quad =\frac{y_{i+1}-y_{i}}{C}\Bigg [\left[ A^{2}_{i}(x)\left( A^{*}_{i}(x)-(A^{*}_{i}(x))^{2}+\frac{1}{3}(A^{*}_{i}(x))^{3}\right) \right. \\&\qquad \left. +A^{2}_{i+1}(x)\left( \frac{1}{2}(1-(A^{*}_{i}(x))^{2})-\frac{1}{3}(1-(A^{*}_{i}(x))^{3})\right) \right] y_{i}\\&\qquad +\left[ A^{2}_{i}(x)\left( \frac{1}{2}(A^{*}_{i}(x))^{2}-\frac{1}{3}(A^{*}_{i}(x))^{3}\right) +\frac{1}{3}A^{2}_{i+1}(x)(1-(A^{*}_{i}(x))^{3})\right] y_{i+1}\Bigg ]\\&\quad =\frac{y_{i+1}-y_{i}}{C}[D_{i}(x)y_{i}+D_{i+1}(x)y_{i+1}]. \end{aligned}$$

Then

$$\begin{aligned} D_{i}(x)+D_{i+1}(x)= & {} A^{2}_{i}(x)\left[ A^{*}_{i}(x)-\frac{1}{2}(A^{*}_{i}(x))^{2}\right] +\frac{1}{2}A^{2}_{i+1}(x)[1-(A^{*}_{i}(x))^{2}]\\= & {} A^{2}_{i}(x)A^{*}_{i}(x)+\frac{1}{2}A^{2}_{i+1}(x)-\frac{1}{2}(A^{*}_{i}(x))^{2}(A^{2}_{i}(x)+A^{2}_{i+1}(x))\\= & {} \frac{2A^{4}_{i}(x)+A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))-A^{4}_{i}(x)}{2(A^{2}_{i}(x)+A^{2}_{i+1}(x))}\\= & {} \frac{(1-A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}{2(1-2A_{i}(x)A_{i+1}(x))}. \end{aligned}$$

So, \(C^{*}_{i}(x)+C^{*}_{i+1}(x)=1\). And

$$\begin{aligned} D_{i+1}(x)= & {} A^{2}_{i}(x)\left[ \frac{1}{2}(A^{*}_{i}(x))^{2}-\frac{1}{3}(A^{*}_{i}(x))^{3}\right] +\frac{1}{3}A^{2}_{i+1}(x)(1-(A^{*}_{i}(x))^{3})\\= & {} \frac{3A^{2}_{i}(x)A^{4}_{i}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))-2A^{8}_{i}(x)+2A^{2}_{i+1}(x)((A^{2}_{i}(x)+A^{2}_{i+1}(x))^{3}-A^{6}_{i}(x))}{6(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{3}}\\= & {} \frac{A^{8}_{i}(x)+3A^{6}_{i}(x)A^{2}_{i+1}(x)+2A^{8}_{i+1}(x)+6A^{4}_{i}(x)A^{4}_{i+1}(x)+6A^{2}_{i}(x)A^{2}_{i+1}(x)}{6(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{3}}\\= & {} \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{6(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}, \end{aligned}$$

thus

$$\begin{aligned} C^{*}_{i+1}(x)= & {} \frac{2D_{i+1}(x)}{\displaystyle \frac{A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)}{(A^{2}_{i+1}(x)+A^{2}_{i}(x))}}\\= & {} \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))}\\= & {} \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{3(1-A_{i}(x)A_{i+1}(x))(1-2A_{i}(x)A_{i+1}(x))(1-3A_{i}(x)A_{i+1}(x))}. \end{aligned}$$

So \(\overline{S}_{L^{2}}(x)=C^{*}_{i}(x)y_{i}+C^{*}_{i+1}(x)y_{i+1}\), and \(\displaystyle C^{*}_{i+1}(x_{i+1})=\frac{2}{3},C^{*}_{i}(x_{i+1})=1-\frac{2}{3}=\frac{1}{3},C^{*}_{i+1}(x_{i})=\frac{1}{3},C^{*}_{i}(x_{i})=1-\frac{1}{3}=\frac{2}{3}\), so \(\overline{S}_{L^{2}}(x_{i})=\frac{2}{3}y_{i}+\frac{1}{3}y_{i+1},\overline{S}_{L^{2}}(x_{i+1})=\frac{1}{3}y_{i}+\frac{2}{3}y_{i+1}\). \(\square \)
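
Similarly, \(\overline{S}_{L^{2}}(x)\) of Theorem 6 follows directly from (21); a sketch on the same assumed data:

```python
import numpy as np

# the center-of-gravity system of Theorem 6, built from (21) on assumed data
xs = np.array([0.0, 0.5, 1.2, 2.0, 3.0])
ys = np.array([0.0, 0.8, 1.5, 1.9, 2.5])

def S_L2(x):
    i = min(np.searchsorted(xs, x, side="right") - 1, len(xs) - 2)
    Ai = (xs[i + 1] - x) / (xs[i + 1] - xs[i]); Ai1 = 1.0 - Ai
    num = Ai**6 + 2.0 * Ai1**2 * (Ai**2 + Ai1**2)**2
    den = 3.0 * (Ai**4 + Ai**2 * Ai1**2 + Ai1**4) * (Ai**2 + Ai1**2)
    C1 = num / den                                    # C*_{i+1}(x), formula (21)
    return (1.0 - C1) * ys[i] + C1 * ys[i + 1]

print([round(float(S_L2(x)), 3) for x in xs])   # node outputs coincide with those of S_T above
```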

We now study the universal approximation of the fuzzy trustworthiness systems \(\overline{S}_{T}(x)\) and \(\overline{S}_{L^{2}}(x)\). Suppose s(x) is a known system with \(s(x_{i})=y_{i}\). Let \(h=\displaystyle \mathop {\max }_{1\le i\le n-1}{\varDelta } x_{i}\), \(\Vert s\Vert _{\infty }=\displaystyle \mathop {\max }_{x\in [a,b]}|s(x)|\), and \(F_{1}(x)=A_{i}(x)y_{i}+A_{i+1}(x)y_{i+1}\ (x\in [x_{i},x_{i+1}])\). From [11] we know that \(\displaystyle \Vert s-F_{1}\Vert _{\infty }\le \frac{1}{8}\Vert s^{''}\Vert _{\infty }h^{2}\). So we have

Theorem 7

When \(\overline{S}(x)\in \{\overline{S}_{T}(x),\overline{S}_{L^{2}}(x)\}\),

$$\begin{aligned} \Vert \overline{S}(x)-F_{1}(x)\Vert \le \frac{1}{3}\Vert s^{'}\Vert _{\infty }h. \end{aligned}$$
(22)

Proof

(1) When \(\overline{S}(x)=\overline{S}_{T}(x)\),

$$\begin{aligned} A^{*}_{i}(x)=\frac{\displaystyle \frac{2}{3}A^{3}_{i+1}(x)-A_{i+1}(x)+\frac{2}{3}}{1-2A_{i}(x)A_{i+1}(x)}. \end{aligned}$$

Then

$$\begin{aligned} A^{*}_{i}(x)-A_{i}(x)= & {} \frac{\displaystyle \frac{2}{3}A^{3}_{i+1}(x)-A_{i+1}(x)+\frac{2}{3}-A_{i}(x)+2A^{2}_{i}(x)A_{i+1}(x)}{1-2A_{i}(x)A_{i+1}(x)}\\= & {} \frac{2A^{3}_{i+1}(x)-3A_{i+1}(x)+2-3A_{i}(x)+6A^{2}_{i}(x)A_{i+1}(x)}{3[1-2A_{i}(x)A_{i+1}(x)]}\\= & {} \frac{[1-2A_{i}(x)]^{3}}{3[1-2A_{i}(x)A_{i+1}(x)]}. \end{aligned}$$

Then

$$\begin{aligned} A^{*}_{i+1}(x)-A_{i+1}(x)= & {} -\frac{[1-2A_{i}(x)]^{3}}{3[1-2A_{i}(x)A_{i+1}(x)]},\\ \overline{S}_{T}(x)-F_{1}(x)= & {} \frac{[1-2A_{i}(x)]^{3}}{3[1-2A_{i}(x)A_{i+1}(x)]}(y_{i}-y_{i+1}). \end{aligned}$$

Then

$$\begin{aligned} |\overline{S}_{T}(x)-F_{1}(x)|=\frac{|[1-2A_{i}(x)]^{3}|}{3[1-2A_{i}(x)A_{i+1}(x)]}|y_{i+1}-y_{i}|\le \frac{1}{3}|y_{i+1}-y_{i}|\le \frac{1}{3}\Vert s^{'}\Vert _{\infty }h. \end{aligned}$$

(2)

$$\begin{aligned} C^{*}_{i+1}(x)= & {} \displaystyle \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))},\\ A^{*}_{i+1}(x)= & {} \frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)} \end{aligned}$$

then

$$\begin{aligned} C^{*}_{i+1}(x)-A^{*}_{i+1}(x)= & {} \displaystyle \frac{A^{6}_{i}(x)+2A^{2}_{i+1}(x)(A^{2}_{i}(x)+A^{2}_{i+1}(x))^{2}}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))}\\&-\,\frac{A^{2}_{i+1}(x)}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\\= & {} \frac{(A^{2}_{i}(x)-A^{2}_{i+1}(x))(A^{4}_{i}(x)+A^{4}_{i+1}(x))}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))}.\\ C^{*}_{i}(x)-A^{*}_{i}(x)= & {} -(C^{*}_{i+1}(x)-A^{*}_{i+1}(x)). \end{aligned}$$

It follows that

$$\begin{aligned} |C^{*}_{i+1}(x)-A^{*}_{i+1}(x)|=\displaystyle \frac{|A^{2}_{i}(x)-A^{2}_{i+1}(x)|(A^{4}_{i}(x)+A^{4}_{i+1}(x))}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x))(A^{2}_{i}(x)+A^{2}_{i+1}(x))}\le \frac{1}{3}. \end{aligned}$$

And

$$\begin{aligned} C^{*}_{i+1}(x)-A_{i+1}(x)= & {} C^{*}_{i+1}(x)-A^{*}_{i+1}(x)+A^{*}_{i+1}(x)-A_{i+1}(x)\\= & {} \displaystyle \frac{(A^{2}_{i}(x)-A^{2}_{i+1}(x))(A^{4}_{i}(x)+A^{4}_{i+1}(x))}{3(A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)) (A^{2}_{i}(x)+A^{2}_{i+1}(x))}\\&+\,\frac{A_{i}(x)A_{i+1}(x)[A_{i+1}(x)-A_{i}(x)]}{A^{2}_{i}(x)+A^{2}_{i+1}(x)}\\= & {} \frac{[A^{2}_{i}(x)-A^{2}_{i+1}(x)]}{3[A^{2}_{i}(x)+A^{2}_{i+1}(x)]}[\frac{A^{4}_{i}(x)+A^{4}_{i+1}(x)}{A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)}\\&-\,3A_{i}(x)A_{i+1}(x)]\\= & {} \frac{[A^{2}_{i}(x)-A^{2}_{i+1}(x)]}{3[A^{2}_{i}(x)+A^{2}_{i+1}(x)]}\left[ 1-\frac{A^{2}_{i}(x)A^{2}_{i+1}(x)}{A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)}\right. \\&\left. -\,3A_{i}(x)A_{i+1}(x)\right] \end{aligned}$$

Because \(\displaystyle \left| \frac{A^{2}_{i}(x)-A^{2}_{i+1}(x)}{3(A^{2}_{i}(x)+A^{2}_{i+1}(x))}\right| \le \frac{1}{3}\) and \(\displaystyle 1\ge 1-\frac{A^{2}_{i}(x)A^{2}_{i+1}(x)}{A^{4}_{i}(x)+A^{2}_{i}(x)A^{2}_{i+1}(x)+A^{4}_{i+1}(x)}-3A_{i}(x)A_{i+1}(x)\ge 1-\frac{1}{3}-\frac{3}{4}=-\frac{1}{12}\), the second factor lies in \([-1,1]\).

Thus \(\displaystyle |C^{*}_{i+1}(x)-A_{i+1}(x)|\le \frac{1}{3}.\) So

$$\begin{aligned} |\overline{S}_{L^{2}}(x)-F_{1}(x)|=|C^{*}_{i+1}(x)-A_{i+1}(x)||y_{i+1}-y_{i}|\le \frac{1}{3}\Vert s^{'}\Vert _{\infty }h \end{aligned}$$

Then \(\displaystyle \Vert \overline{S}_{L^{2}}(x)-F_{1}(x)\Vert _{\infty }\le \frac{1}{3}\Vert s^{'}\Vert _{\infty }h.\) \(\square \)

Theorem 8

When \(\overline{S}(x)\in \{\overline{S}_{T}(x),\overline{S}_{L^{2}}(x)\}\),

$$\begin{aligned} \displaystyle \Vert s-\overline{S}\Vert _{\infty }\le \frac{1}{8}\Vert s^{\prime \prime }\Vert _{\infty }h^{2}+\frac{1}{3}\Vert s^{\prime }\Vert _{\infty }h. \end{aligned}$$
(23)

Proof

$$\begin{aligned} \displaystyle \Vert s-\overline{S}\Vert _{\infty }\le \Vert s-F_{1}\Vert _{\infty }+\Vert \overline{S}-F_{1}\Vert _{\infty }\le \frac{1}{8}\Vert s^{\prime \prime }\Vert _{\infty }h^{2}+\frac{1}{3}\Vert s^{\prime }\Vert _{\infty }h. \end{aligned}$$

Notes 3 (1) \(\overline{S}_{T}(x)\) and \(\overline{S}_{L^{2}}(x)\) were obtained under the assumption that s(x) is a monotonic function. For a non-monotonic function, we can divide X into several intervals on which s(x) is monotonic. Because \(\overline{S}_{T}(x)\) and \(\overline{S}_{L^{2}}(x)\) can approximate s(x) on each monotonic interval, they can approximate s(x) on the entire X. For example, let \(\displaystyle s(x)=\sin x, X=[-\pi ,\pi ], X_{1}=[-\pi ,-\frac{\pi }{2}], X_{2}=(-\frac{\pi }{2},\frac{\pi }{2}),X_{3}=[\frac{\pi }{2},\pi ]\). Then s(x) is monotonic on each of \(X_{1},X_{2},X_{3}\). If \(\varepsilon =0.1\), then from Theorem 8 we can know that \(n=17\) in \(X_{1}\) and \(X_{3}\), and \(n=24\) in \(X_{2}\). So we let \(n=17\times 2+33=67\); then \(\overline{S}_{L^{2}}(x)\) can approximate s(x) with error not more than 0.1.

(2) From Theorem 8 we know that \(\overline{S}_{T}(x)\) and \(\overline{S}_{L^{2}}(x)\) have first-order approximation accuracy to s and share the same upper bound on the error estimate. \(\square \)

Example 1

Let \(s(x)=\sin x,x\in [-3,3]\). Then \(\Vert s''\Vert _{\infty }=\Vert s'\Vert _{\infty }=1\).

If \(\varepsilon =0.1,\) then from \(\displaystyle \frac{1}{8}\Vert s''\Vert _{\infty }h^{2}+\frac{1}{3}\Vert s'\Vert _{\infty }h<0.1\) we find that \(n=24\) suffices (taking equally spaced nodes, so that \(h=6/(n-1)\)).
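
A minimal sketch of this computation, assuming equally spaced nodes so that \(h=6/(n-1)\):

```python
# Example 1: smallest n with (1/8)h^2 + (1/3)h < 0.1 for s(x) = sin(x) on [-3, 3]
for n in range(2, 100):
    h = 6.0 / (n - 1)
    if h**2 / 8.0 + h / 3.0 < 0.1:
        print(n)           # prints 24
        break
```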

For \(\varepsilon =0.1\), Figs. 1 and 2 show the simulation results and the error curves of \(\overline{S}_{T}(x)\) and \(\overline{S}_{L^{2}}(x)\) with respect to s.

Fig. 1 The simulation result of \(\overline{S}_{T}(x)\)

Fig. 2 The simulation result of \(\overline{S}_{L^{2}}(x)\)

6 Application Analysis

The fuzzy trustworthiness theory and systems in this paper have wide application fields. They provide a new line of thinking for theoretical and applied studies in such fields as fuzzy logic and neural networks [20], factor neural networks [21-25], and fuzzy expert systems [26]. Fuzzy trustworthiness systems, together with their theory, can be used in the quantifiable description and measurement of trustworthy software [27-31], and this would be a new direction for extending research on software trustworthiness metric models. From the viewpoint of trustworthiness theory and fuzzy systems, network attack and defence [31-34], automatic design processes of engineering systems [35, 36] and the design of fuzzy controllers [37], etc., can also be explored, providing a new research method and direction for these fields.

7 Conclusion

This paper has studied the bounded product implication and the Larsen square implication and obtained two specific probability density functions. It is pointed out that, although different fuzzy implication operators yield different probability density function expressions, the resulting probability distributions have the same mathematical expectation and nearly the same variance and covariance. We also derived the center-of-gravity fuzzy trustworthiness systems of these two probability distributions and gave sufficient conditions for their universal approximation. As for applications, fuzzy trustworthiness systems can be applied to software trustworthiness metrics, network attack and defence, automatic design processes of engineering systems, the design of fuzzy controllers, etc.