Article

Slash Truncation Positive Normal Distribution and Its Estimation Based on the EM Algorithm

1
Departamento de Ciencias Matemáticas y Físicas, Facultad de Ingeniería, Universidad Católica de Temuco, Temuco 4780000, Chile
2
Departamento de Matemática, Facultad de Ingeniería, Universidad de Atacama, Copiapó 1530000, Chile
3
Departamento de Matemática, Facultad de Ciencias, Universidad Católica del Norte, Antofagasta 1240000, Chile
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(11), 2164; https://doi.org/10.3390/sym13112164
Submission received: 9 October 2021 / Revised: 31 October 2021 / Accepted: 3 November 2021 / Published: 11 November 2021

Abstract

In this paper, we present an extension of the truncated positive normal (TPN) distribution to model positive data with high kurtosis. The new model is defined as the quotient of two independent random variables: a TPN random variable (numerator) and a power of a standard uniform random variable (denominator). The resulting model has greater kurtosis than the TPN distribution. We studied some properties of the distribution, such as its moments, asymmetry, and kurtosis. Parameter estimation is based on the moments method, and maximum likelihood estimation uses the expectation-maximization (EM) algorithm. We performed simulation studies to assess parameter recovery and illustrate the model with a real data application related to body weight. The computational implementation of this work is included in the tpn package for R.

1. Introduction

The modeling of non-negative data has grown considerably, since many datasets have this characteristic. Distributions with support on the positive real line are widely used in the engineering and reliability fields for failure-time (also known as lifetime) data. The half-normal (HN) distribution is a very well-known model for non-negative data, discussed extensively in the literature. For instance, Rafiqullah et al. [1] used the HN model to analyze survival data related to breast cancer in Hispanic black and non-Hispanic black women. Bosch-Badia et al. [2] studied the applicability of the HN distribution to risk analysis traditionally performed using risk matrices. Tsizhmovska et al. [3] analyzed sentence lengths in public speaking, with the HN among the distributions considered.
Olmos et al. [4] generated an extension of the HN distribution, called the slashed half-normal (SHN) distribution, which captures atypical data but offers little flexibility. Cooray and Ananda [5] generalized the HN distribution, obtaining a new flexible model, denominated the generalized half-normal (GHN) distribution, which includes the HN model as a particular case. Despite the flexibility offered, a major difficulty remains, commonly related to limitations in accommodating atypical data. To overcome this obstacle, Olmos et al. [6] proposed an extension of the GHN model, named the slashed generalized half-normal (SGHN) distribution. The main aim of the authors was to generate a model with higher kurtosis that allows better modeling of positive data in the presence of outliers. Other authors have worked on a similar idea, e.g., Iriarte et al. [7], Reyes et al. [8], Olmos et al. [9], Segovia et al. [10], and Astorga et al. [11].
Gómez et al. [12] truncated the normal distribution, conditioning it to positive values; i.e., if X has a normal distribution, the authors studied X | X > 0 (see Johnson et al. [13]), creating a distribution that they named the truncated positive normal (TPN) distribution. A random variable (rv) Z follows a TPN distribution, denoted by $Z \sim \mathrm{TPN}(\sigma,\lambda)$, if its probability density function (pdf) is given by:
$$f(z;\sigma,\lambda) = \frac{1}{\sigma\,\Phi(\lambda)}\,\phi\!\left(\frac{z}{\sigma}-\lambda\right),\quad z>0, \qquad (1)$$
where $\phi$ and $\Phi$ denote the pdf and cdf of the standard normal model, respectively, $\sigma > 0$ is a scale parameter, and $\lambda \in \mathbb{R}$ is a shape parameter.
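As a quick numerical sanity check (ours, not part of the paper; the function name dtpn and the use of numpy/scipy are our assumptions), the TPN density above integrates to one for any σ > 0 and λ ∈ ℝ:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def dtpn(z, sigma, lam):
    """pdf of Z ~ TPN(sigma, lambda): phi(z/sigma - lam) / (sigma * Phi(lam)), z > 0."""
    return norm.pdf(z / sigma - lam) / (sigma * norm.cdf(lam))

# the density integrates to one for several (sigma, lambda) combinations
for sigma, lam in [(1.0, -1.0), (2.0, 0.0), (0.5, 3.0)]:
    total, _ = quad(dtpn, 0, np.inf, args=(sigma, lam))
    assert abs(total - 1.0) < 1e-7
```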
On the other hand, the slash distribution is defined stochastically as the quotient of two independent rvs, say Z and U, as follows:
$$X = \frac{Z}{U^{1/q}}, \qquad (2)$$
where $Z \sim N(0,1)$ and $U \sim U(0,1)$ are independent and $q>0$ is a shape parameter.
Olmos et al. [6] used this idea to propose an extension of the generalized half-normal model of Cooray and Ananda [5], called the slashed generalized half-normal (SGHN) distribution. The density function of this rv is as follows:
$$f(z;\sigma,\alpha,q) = \frac{q\,2^{q/(2\alpha)}\,\sigma^{q}\,\Gamma\!\left(\frac{q+\alpha}{2\alpha}\right)}{\sqrt{\pi}\,z^{q+1}}\,G\!\left(z^{2\alpha};\frac{q+\alpha}{2\alpha},\frac{1}{2\sigma^{2\alpha}}\right),\quad z>0, \qquad (3)$$
where $\sigma,\alpha,q>0$ and $G(\cdot\,;a,b)$ is the cumulative distribution function (cdf) of the gamma distribution with shape parameter a and rate parameter b. We denote $Z\sim \mathrm{SGHN}(\sigma,\alpha,q)$.
The objective of this paper is to propose an extension of the model proposed by Gómez et al. [12] using the “slash” procedure, utilizing a TPN ( σ , λ ) rv in the numerator. Thus, the new model, which we call the slash truncated positive normal (STPN), will become a direct competitor model for SGHN, since it creates heavier tails and, moreover, allows the fitting of atypical data.
The paper is organized as follows. Section 2 presents the pdf of the STPN distribution and some properties such as moments, the hazard function, and the kurtosis coefficient. Section 3 studies the inference for the proposed model. In particular, we discuss the moments estimator and the expectation-maximization (EM) [14] algorithm to find the maximum likelihood estimator. In addition, we offer the observed Fisher information using Louis' method [15]. Section 4 shows a simulation study to assess parameter recovery. Section 5 conducts a real data application, where the STPN is compared with other proposals in the literature. Finally, Section 6 presents the conclusions of the manuscript.

2. The Slash Truncation Positive Normal Model

In this section, we describe the stochastic representation of the STPN model, its pdf, and some basic properties of the model.

2.1. Stochastic Representation and Particular Cases

Definition 1.
An rv Y has an STPN distribution with parameters σ, λ, and q if it can be represented as the ratio:
$$Y = \frac{Z}{U^{1/q}}, \qquad (4)$$
where $U \sim U(0,1)$ and $Z \sim \mathrm{TPN}(\sigma,\lambda)$ are independent rvs, $\sigma>0$, $\lambda\in\mathbb{R}$, and $q>0$. We denote it as $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$.
By construction, the following models are particular cases for the STPN distribution:
  • $\mathrm{STPN}(\sigma,\lambda,q) \to \mathrm{TPN}(\sigma,\lambda)$, as $q\to+\infty$;
  • $\mathrm{STPN}(\sigma,\lambda=0,q) \equiv \mathrm{SHN}(\sigma,q)$;
  • $\mathrm{STPN}(\sigma,\lambda=0,q) \to \mathrm{HN}(\sigma)$, as $q\to+\infty$.
Figure 1 summarizes the relationships among the STPN and its particular cases.

2.2. Density Function

Proposition 1.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. Then, the pdf of Y is given by:
$$f_Y(y;\sigma,\lambda,q) = \frac{q}{\sigma\,\Phi(\lambda)}\int_{0}^{1}w^{q}\,\phi\!\left(\frac{yw}{\sigma}-\lambda\right)dw,\quad y>0, \qquad (5)$$
where $\sigma>0$ is a scale parameter, $\lambda\in\mathbb{R}$ is a shape parameter, and $q>0$ is a parameter related to the kurtosis of the distribution.
Proof. 
Using the representation in (4) and computing the Jacobian of the transformation for $Y = Z/U^{1/q}$ and $W = U^{1/q}$, we obtain:
$$Z = YW,\quad U = W^{q},\quad J = \begin{vmatrix}\frac{\partial z}{\partial y} & \frac{\partial z}{\partial w}\\ \frac{\partial u}{\partial y} & \frac{\partial u}{\partial w}\end{vmatrix} = \begin{vmatrix} w & y\\ 0 & q\,w^{q-1}\end{vmatrix} = q\,w^{q}.$$
Therefore,
$$f_{Y,W}(y,w) = |J|\,f_{Z,U}(yw,\,w^{q}) = q\,w^{q}\,f_{Z}(yw)\,f_{U}(w^{q}) = \frac{q}{\sigma\,\Phi(\lambda)}\,w^{q}\,\phi\!\left(\frac{yw}{\sigma}-\lambda\right),\quad 0<w<1,\; y>0.$$
Marginalizing with respect to variable W, we obtain the density function corresponding to the rv Y, that is,
$$f_{Y}(y;\sigma,\lambda,q) = \frac{q}{\sigma\,\Phi(\lambda)}\int_{0}^{1}w^{q}\,\phi\!\left(\frac{yw}{\sigma}-\lambda\right)dw.$$
An alternative way to obtain this pdf is by substituting $u = \frac{yw}{\sigma}-\lambda$, obtaining:
$$f_Y(y;\sigma,\lambda,q) = \frac{q\,\sigma^{q}}{(2\pi)^{1/2}\,\Phi(\lambda)\,y^{q+1}}\int_{-\lambda}^{y/\sigma-\lambda}(u+\lambda)^{q}\,e^{-u^{2}/2}\,du.$$
With $t = \frac{u^{2}}{2}$ in the last expression, we obtain:
$$f_Y(y;\sigma,\lambda,q) = \frac{q\,\sigma^{q}}{(2\pi)^{1/2}\,\Phi(\lambda)\,y^{q+1}}\sum_{k=0}^{q}\binom{q}{k}\lambda^{q-k}\,2^{(k-1)/2}\,\Gamma\!\left(\frac{k+1}{2}\right)\left[G\!\left(\frac{(y/\sigma-\lambda)^{2}}{2};\frac{k+1}{2},1\right)+(-1)^{k}\,G\!\left(\frac{\lambda^{2}}{2};\frac{k+1}{2},1\right)\right].$$
   □
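The one-dimensional integral form of Proposition 1 is easy to evaluate numerically. The following sketch (ours, not from the paper; dstpn_num is a hypothetical name and scipy is assumed) checks that the density integrates to one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def dstpn_num(y, sigma, lam, q):
    """STPN(sigma, lambda, q) pdf via the integral representation of Proposition 1."""
    inner, _ = quad(lambda w: w**q * norm.pdf(y * w / sigma - lam), 0, 1)
    return q * inner / (sigma * norm.cdf(lam))

# normalization check for one parameter combination
total, _ = quad(dstpn_num, 0, np.inf, args=(1.0, 2.0, 2.0))
assert abs(total - 1.0) < 1e-5
```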

2.3. Some Properties

In this section, we study some basic properties of the STPN distribution.
Proposition 2.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. Then, the cdf of Y is given by:
$$F_{Y}(y;\sigma,\lambda,q) = \frac{q}{\Phi(\lambda)}\int_{0}^{1}w^{q-1}\left[\Phi\!\left(\frac{yw}{\sigma}-\lambda\right)+\Phi(\lambda)-1\right]dw,\quad y>0.$$
Proof. 
It is immediate from the definition.    □
Proposition 3.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. Then, the hazard function is given by:
$$H_{Y}(y;\sigma,\lambda,q) = \frac{\dfrac{q}{\sigma\,\Phi(\lambda)}\displaystyle\int_{0}^{1}w^{q}\,\phi\!\left(\frac{yw}{\sigma}-\lambda\right)dw}{1-\dfrac{q}{\Phi(\lambda)}\displaystyle\int_{0}^{1}w^{q-1}\left[\Phi\!\left(\frac{yw}{\sigma}-\lambda\right)+\Phi(\lambda)-1\right]dw},\quad y>0.$$
Figure 2 shows the pdf, cdf, and hazard function for the STPN model with different combinations of parameters.
Proposition 4.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. If $q\to+\infty$, then Y converges in distribution to the rv $Z \sim \mathrm{TPN}(\sigma,\lambda)$.
Proof. 
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. Then, Y can be written as $Y = Z/U^{1/q}$, where $Z\sim\mathrm{TPN}(\sigma,\lambda)$ and $U\sim U(0,1)$. First, we study the convergence in probability of $U^{1/q}$. Since $W = U^{1/q}$ satisfies $W\sim \mathrm{Beta}(q,1)$, we have $E(W-1)^{2} = \frac{2}{(q+1)(q+2)}$. If $q\to+\infty$, then $E(W-1)^{2}\to 0$. Therefore,
$$W = U^{1/q}\xrightarrow{P}1,\quad\text{as } q\to+\infty,$$
where $\xrightarrow{P}$ denotes convergence in probability. Then, applying Slutsky's theorem [16] to $Y = Z/U^{1/q}$, we have:
$$Y\xrightarrow{D}Z,\quad\text{as } q\to+\infty,$$
where $\xrightarrow{D}$ denotes convergence in distribution. In other words, for large values of q, Y converges in distribution to the $\mathrm{TPN}(\sigma,\lambda)$ model.
   □
Proposition 5.
If $Y\,|\,T=t \sim \mathrm{TPN}(\sigma\,t^{-1/q},\lambda)$ and $T\sim U(0,1)$, then $Y\sim \mathrm{STPN}(\sigma,\lambda,q)$.
Proof. 
The marginal distribution of Y can be computed as:
$$f_{Y}(y;\sigma,\lambda,q) = \int_{0}^{1}f_{Y|T}(y\,|\,t)\,f_{T}(t)\,dt = \int_{0}^{1}\frac{t^{1/q}}{\sigma\,\Phi(\lambda)}\,\phi\!\left(\frac{y\,t^{1/q}}{\sigma}-\lambda\right)dt.$$
With the transformation $w=t^{1/q}$, we obtain Equation (5).    □
Remark 1.
Proposition 4 implies that for q + , the pdf of the STPN distribution converges to the pdf of the TPN model. Proposition 5 shows that the STPN distribution can also be seen as a scale mixture of the TPN model. This property is very important for obtaining random values from this model and for the application of an EM-type algorithm to estimate the parameters of the model.
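Both routes mentioned in Remark 1 can be used to draw values from the model. A minimal sketch (ours, assuming numpy/scipy; rtpn is a hypothetical helper that samples the TPN by inversion, and the mean formula follows from Corollary 1 with $\kappa_1(\lambda)=\lambda+\phi(\lambda)/\Phi(\lambda)$):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2164)

def rtpn(n, sigma, lam, rng):
    """TPN(sigma, lambda) draws by inversion: N(sigma*lam, sigma^2) truncated to (0, inf)."""
    u = rng.uniform(size=n)
    return sigma * (lam + norm.ppf(norm.cdf(-lam) + u * norm.cdf(lam)))

sigma, lam, q, n = 1.0, 1.0, 5.0, 200_000

# Representation (4): Y = Z / U^(1/q)
y1 = rtpn(n, sigma, lam, rng) / rng.uniform(size=n) ** (1 / q)
# Proposition 5: scale mixture, Y | T = t ~ TPN(sigma * t^(-1/q), lambda)
t = rng.uniform(size=n)
y2 = rtpn(n, sigma * t ** (-1 / q), lam, rng)

# E(Y) = q * sigma * (lam + phi(lam)/Phi(lam)) / (q - 1), for q > 1
mean_th = q * sigma * (lam + norm.pdf(lam) / norm.cdf(lam)) / (q - 1)
assert abs(y1.mean() - mean_th) < 0.02
assert abs(y2.mean() - mean_th) < 0.02
```

Both simulated samples agree with the theoretical mean, illustrating that the two representations describe the same distribution.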

2.4. Moments

The following proposition provides the moments of the STPN distribution.
Proposition 6.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$. Then, for $r = 1, 2, \ldots$ and $q > r$, the r-th moment of Y is given by:
$$\mu_r = E(Y^{r}) = \frac{q\,\sigma^{r}}{q-r}\,\kappa_r(\lambda),$$
where $\kappa_r(\lambda) = \frac{1}{\sqrt{2\pi}\,\Phi(\lambda)}\sum_{k=0}^{r}\binom{r}{k}\lambda^{r-k}\,2^{(k-1)/2}\left[\Gamma\!\left(\frac{k+1}{2}\right)+(-1)^{k}\,\gamma\!\left(\frac{k+1}{2},\frac{\lambda^{2}}{2}\right)\right]$, with $\gamma(\cdot,\cdot)$ the lower incomplete gamma function.
Proof. 
Using the stochastic representation given in Equation (4), we have that:
$$\mu_r = E(Y^{r}) = E(Z^{r}\,U^{-r/q}) = E(Z^{r})\,E(U^{-r/q}),$$
where $E(U^{-r/q}) = \frac{q}{q-r}$, for $q>r$, and $E(Z^{r}) = \sigma^{r}\,\kappa_r(\lambda)$ is the r-th moment of the $\mathrm{TPN}(\sigma,\lambda)$ model.    □
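The factorization above is easy to verify numerically. The sketch below (ours; scipy is assumed, and both densities are written directly from Equation (1) and Proposition 1) checks $E(Y^r) = \frac{q}{q-r}E(Z^r)$ by quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

sigma, lam, q = 1.5, 0.5, 6.0

def dtpn(z):
    # TPN(sigma, lambda) pdf
    return norm.pdf(z / sigma - lam) / (sigma * norm.cdf(lam))

def dstpn(y):
    # STPN(sigma, lambda, q) pdf via the integral of Proposition 1
    inner, _ = quad(lambda w: w**q * norm.pdf(y * w / sigma - lam), 0, 1)
    return q * inner / (sigma * norm.cdf(lam))

for r in (1, 2, 3):
    mz, _ = quad(lambda z: z**r * dtpn(z), 0, np.inf)   # E(Z^r)
    my, _ = quad(lambda t: t**r * dstpn(t), 0, np.inf)  # E(Y^r)
    assert abs(my - q / (q - r) * mz) < 1e-4 * my
```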
Corollary 1.
If $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$, then its first four moments are determined as follows:
  • $\mu_1 = E(Y) = \frac{q\,\sigma}{q-1}\,\kappa_1(\lambda)$, $q>1$;
  • $\mu_2 = E(Y^{2}) = \frac{q\,\sigma^{2}}{q-2}\,\kappa_2(\lambda)$, $q>2$;
  • $\mu_3 = E(Y^{3}) = \frac{q\,\sigma^{3}}{q-3}\,\kappa_3(\lambda)$, $q>3$;
  • $\mu_4 = E(Y^{4}) = \frac{q\,\sigma^{4}}{q-4}\,\kappa_4(\lambda)$, $q>4$.
Moreover,
$$\mathrm{Var}(Y) = \sigma^{2}q\left[\frac{\kappa_2(\lambda)}{q-2}-\frac{q\,\kappa_1^{2}(\lambda)}{(q-1)^{2}}\right],\quad q>2. \qquad (6)$$
Proof. 
It is immediate from Proposition 6.    □
Corollary 2.
Let $Y \sim \mathrm{STPN}(\sigma,\lambda,q)$; then, the asymmetry coefficient ($\beta_1$) and the kurtosis coefficient ($\beta_2$) are:
$$\beta_1 = \frac{\sqrt{q-2}\left\{(q-1)^{3}(q-2)\,\kappa_3 - 3q\,\kappa_1\kappa_2\,(q-1)^{2}(q-3) + 2q^{2}(q-2)(q-3)\,\kappa_1^{3}\right\}}{\sqrt{q}\,(q-3)\left\{(q-1)^{2}\kappa_2 - q(q-2)\,\kappa_1^{2}\right\}^{3/2}},\quad q>3, \qquad (7)$$
$$\beta_2 = \frac{(q-1)^{3}(q-2)^{2}A + 3(q-2)(q-3)(q-4)\,q^{2}B}{q\,(q-3)(q-4)\left\{(q-1)^{2}\kappa_2 - q(q-2)\,\kappa_1^{2}\right\}^{2}},\quad q>4, \qquad (8)$$
where $A = (q-1)(q-3)\,\kappa_4 - 4q(q-4)\,\kappa_1\kappa_3$, $B = 2(q-1)^{2}\kappa_1^{2}\kappa_2 - q(q-2)\,\kappa_1^{4}$, and $\kappa_r = \kappa_r(\lambda)$.
Proof. 
By the definition of the asymmetry and kurtosis coefficients, we have:
$$\beta_1 = \frac{\mu_3 - 3\mu_2\mu_1 + 2\mu_1^{3}}{(\mu_2-\mu_1^{2})^{3/2}}\quad\text{and}\quad \beta_2 = \frac{\mu_4 - 4\mu_1\mu_3 + 6\mu_1^{2}\mu_2 - 3\mu_1^{4}}{(\mu_2-\mu_1^{2})^{2}}.$$
Replacing $\mu_1$, $\mu_2$, $\mu_3$, and $\mu_4$ obtained in Corollary 1, we have the result.    □
Remark 2.
Proposition 6 shows that the moments of the STPN distribution depend essentially on the moments of the TPN distribution. Equations (6) and (8) show the effect of the parameter q on the model; a lower value of q produces greater variance and kurtosis. Table 1 shows some values of the kurtosis coefficient of the STPN distribution for different values of λ and q.
Figure 3 shows the mean, standard deviation, asymmetry coefficient, and kurtosis coefficient for the STPN ( σ = 1 , λ , q ) in terms of λ and q.

3. Inference

In this section, we discuss a classical approach to inference for the STPN distribution. In particular, we discuss the moments estimators and maximum likelihood (ML) estimation based on the EM algorithm.

3.1. Moments Estimators

The moments estimators result from the solution of the equations $E(Y^{j}) = \overline{Y^{j}}$, for $j=1,2,3$, where $\overline{Y^{j}} = n^{-1}\sum_{i=1}^{n}y_i^{j}$ denotes the j-th sample moment. Solving $E(Y)=\bar{Y}$, we have that:
$$\sigma = \frac{(q-1)\,\bar{Y}}{q\,[\lambda+\xi(\lambda)]}, \qquad (9)$$
where $\xi(\lambda) = \phi(\lambda)/\Phi(\lambda)$.
Replacing this expression, we have the following nonlinear equations:
$$\overline{Y^{2}} = \frac{\bar{Y}^{2}(q-1)^{2}\,[\lambda^{2}+\lambda\xi(\lambda)+1]}{q(q-2)\,[\lambda+\xi(\lambda)]^{2}}\quad\text{and}\quad \overline{Y^{3}} = \frac{\bar{Y}^{3}(q-1)^{3}\,[\lambda^{3}+\lambda^{2}\xi(\lambda)+3\lambda+2\xi(\lambda)]}{q^{2}(q-3)\,[\lambda+\xi(\lambda)]^{3}}.$$
These equations can be solved numerically. For instance, in R [17], the nleqslv function (from the nleqslv package) can be used to obtain the moments estimators $\hat\lambda_M$ and $\hat q_M$. The moments estimator $\hat\sigma_M$ is obtained by substitution in Equation (9).
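The same system can be solved with scipy in place of nleqslv. In the sketch below (ours, not from the paper), the population moments are plugged in for the sample moments, so the solver should recover the true parameter values exactly; the TPN moment expressions used are $E(Z)=\sigma(\lambda+\xi)$, $E(Z^{2})=\sigma^{2}(1+\lambda^{2}+\lambda\xi)$, and $E(Z^{3})=\sigma^{3}(\lambda^{3}+\lambda^{2}\xi+3\lambda+2\xi)$:

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def xi(lam):
    return norm.pdf(lam) / norm.cdf(lam)

# population moments of STPN(sigma, lambda, q) via Proposition 6
sigma, lam, q = 2.0, 1.0, 5.0
m1 = q * sigma * (lam + xi(lam)) / (q - 1)
m2 = q * sigma**2 * (1 + lam**2 + lam * xi(lam)) / (q - 2)
m3 = q * sigma**3 * (lam**3 + lam**2 * xi(lam) + 3 * lam + 2 * xi(lam)) / (q - 3)

def eqs(par):
    l, qq = par
    e2 = m1**2 * (qq - 1) ** 2 * (l**2 + l * xi(l) + 1) / (qq * (qq - 2) * (l + xi(l)) ** 2) - m2
    e3 = m1**3 * (qq - 1) ** 3 * (l**3 + l**2 * xi(l) + 3 * l + 2 * xi(l)) / (qq**2 * (qq - 3) * (l + xi(l)) ** 3) - m3
    return [e2, e3]

lam_hat, q_hat = fsolve(eqs, x0=[0.8, 4.5])
sigma_hat = (q_hat - 1) * m1 / (q_hat * (lam_hat + xi(lam_hat)))  # Equation (9)
assert abs(lam_hat - lam) < 1e-5 and abs(q_hat - q) < 1e-5 and abs(sigma_hat - sigma) < 1e-5
```

With real data, m1, m2, and m3 would be replaced by the sample moments.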

3.2. Maximum Likelihood Estimation

Given $y_1,\ldots,y_n$, a random sample from the $\mathrm{STPN}(\sigma,\lambda,q)$ distribution, the log-likelihood function for $\theta=(\sigma,\lambda,q)$ is given by:
$$\ell(\theta) = n\log(q) - n\log(\sigma) - n\log(\Phi(\lambda)) + \sum_{i=1}^{n}\log G(y_i),$$
where:
$$G(y_i) = G(y_i,\sigma,\lambda,q) = \int_{0}^{1}w^{q}\,\phi\!\left(\frac{y_i w}{\sigma}-\lambda\right)dw.$$
Differentiating with respect to the components of $\theta$, we obtain the following ML equations:
$$\sum_{i=1}^{n}\frac{G_1(y_i)}{G(y_i)} = \frac{n}{\sigma},\quad \sum_{i=1}^{n}\frac{G_2(y_i)}{G(y_i)} = n\,\xi(\lambda),\quad\text{and}\quad \sum_{i=1}^{n}\frac{G_3(y_i)}{G(y_i)} = -\frac{n}{q},$$
where $G_1(y_i) = \frac{\partial G(y_i)}{\partial\sigma}$, $G_2(y_i) = \frac{\partial G(y_i)}{\partial\lambda}$, and $G_3(y_i) = \frac{\partial G(y_i)}{\partial q}$. For $j>0$, we define:
$$a_i(j) = a_i(\sigma,\lambda,j) = \int_{0}^{1}w^{j}\,\phi\!\left(\frac{y_i w}{\sigma}-\lambda\right)dw\quad\text{and}\quad b_i(j) = b_i(\sigma,\lambda,j) = \int_{0}^{1}w^{j}\ln(w)\,\phi\!\left(\frac{y_i w}{\sigma}-\lambda\right)dw.$$
With those notations, the ML equations can also be written as:
$$\sum_{i=1}^{n}\frac{y_i^{2}}{\sigma^{3}}\frac{a_i(q+2)}{a_i(q)} - \sum_{i=1}^{n}\frac{y_i\lambda}{\sigma^{2}}\frac{a_i(q+1)}{a_i(q)} = \frac{n}{\sigma},\quad \sum_{i=1}^{n}\frac{y_i}{\sigma}\frac{a_i(q+1)}{a_i(q)} - n\lambda = n\,\xi(\lambda),\quad\text{and}\quad \sum_{i=1}^{n}\frac{b_i(q)}{a_i(q)} = -\frac{n}{q}.$$
Taking $x_i = y_i/\sigma$, $\omega_1(x_i) = \frac{a_i(q+2)}{a_i(q)}$, $\omega_2(x_i) = \frac{b_i(q)}{a_i(q)}$, and $\omega_3(x_i) = \frac{a_i(q+1)}{a_i(q)}$, the equations are equivalent to:
$$\sum_{i=1}^{n}x_i^{2}\,\omega_1(x_i) - \lambda\sum_{i=1}^{n}x_i\,\omega_3(x_i) = n,\quad \sum_{i=1}^{n}x_i\,\omega_3(x_i) - n\lambda = n\,\xi(\lambda),\quad\text{and}\quad \sum_{i=1}^{n}\omega_2(x_i) = -\frac{n}{q}.$$
The ML estimators can be obtained directly using numerical procedures. However, to increase the robustness of the procedure for obtaining those estimators, we also discuss an EM-type algorithm for estimation in the model.

3.3. EM Algorithm

The EM algorithm is a well-known tool for ML estimation in the presence of nonobserved (latent) data. For this particular problem, the algorithm takes advantage of the stochastic representation of the STPN model in Equation (4). Let W = U 1 / q . The representation of the model can be seen as Y i = Z i / W i , where W i Beta ( q , 1 ) .
In this context, the STPN distribution can also be written using the following hierarchical representation:
$$Y_i\,|\,W_i=w_i \stackrel{\mathrm{ind.}}{\sim} \mathrm{TPN}\!\left(\frac{\sigma}{w_i},\lambda\right),\qquad W_i \stackrel{\mathrm{ind.}}{\sim} \mathrm{Beta}(q,1),\quad i=1,\ldots,n.$$
In our context, $\mathbf{y} = [y_1,\ldots,y_n]^{\top}$ and $\mathbf{w} = [w_1,\ldots,w_n]^{\top}$ represent the observed and nonobserved data, respectively. The complete data are given by $\mathbf{y}_c = [\mathbf{y},\mathbf{w}]$. We also denote by $\ell_c(\theta\,|\,\mathbf{y}_c)$ the complete log-likelihood function, which, up to a constant, is given by:
$$\ell_c(\theta\,|\,\mathbf{y}_c) = n\left[\log q - \log(\sigma) - \log(\Phi(\lambda))\right] - \frac{n}{2}\lambda^{2} - \sum_{i=1}^{n}\frac{y_i^{2}w_i^{2}}{2\sigma^{2}} + \frac{\lambda}{\sigma}\sum_{i=1}^{n}y_i w_i + q\sum_{i=1}^{n}\log(w_i).$$
Note that $Q(\theta\,|\,\hat\theta^{(k)}) = E\!\left(\ell_c(\theta\,|\,\mathbf{y}_c)\,|\,\mathbf{y},\theta=\hat\theta^{(k)}\right)$, the expected value of $\ell_c(\theta)$ given the observed data, is:
$$Q(\theta\,|\,\hat\theta^{(k)}) = n\left[\log q - \log(\sigma) - \log(\Phi(\lambda))\right] - \frac{n}{2}\lambda^{2} - \sum_{i=1}^{n}\frac{y_i^{2}\,\widehat{w_i^{2}}^{(k)}}{2\sigma^{2}} + \frac{\lambda}{\sigma}\sum_{i=1}^{n}y_i\,\hat{w}_i^{(k)} + q\sum_{i=1}^{n}\widehat{\log w_i}^{(k)},$$
where $\hat{w}_i^{(k)} = E(w_i\,|\,y_i,\theta=\hat\theta^{(k)})$, $\widehat{w_i^{2}}^{(k)} = E(w_i^{2}\,|\,y_i,\theta=\hat\theta^{(k)})$, and $\widehat{\log w_i}^{(k)} = E(\log w_i\,|\,y_i,\theta=\hat\theta^{(k)})$. In our context, $\hat{w}_i^{(k)}$, $\widehat{w_i^{2}}^{(k)}$, and $\widehat{\log w_i}^{(k)}$ do not have a closed form; they therefore need to be computed numerically. In short, the k-th step of the EM algorithm is detailed as follows:
  • E-step: For $\hat\theta^{(k)} = (\hat\sigma^{(k)},\hat\lambda^{(k)},\hat q^{(k)})$, the value of the parameter vector at the k-th step, compute $\hat{w}_i^{(k)}$, $\widehat{w_i^{2}}^{(k)}$, and $\widehat{\log w_i}^{(k)}$, for $i=1,\ldots,n$;
  • CM-Step I: Given $\hat\lambda^{(k)}$ and $\hat{w}_1^{(k)},\ldots,\hat{w}_n^{(k)}$, update σ as follows:
    $$\hat\sigma^{(k+1)} = \frac{\sum_{i=1}^{n}y_i\,\hat{w}_i^{(k)}}{n\left[\xi(\hat\lambda^{(k)})+\hat\lambda^{(k)}\right]};$$
  • CM-Step II: Given $\hat{w}_1^{(k)},\ldots,\hat{w}_n^{(k)}$ and $\widehat{w_1^{2}}^{(k)},\ldots,\widehat{w_n^{2}}^{(k)}$, update λ as the solution of the nonlinear equation:
    $$\frac{n\sum_{i=1}^{n}y_i^{2}\,\widehat{w_i^{2}}^{(k)}}{\left(\sum_{i=1}^{n}y_i\,\hat{w}_i^{(k)}\right)^{2}} = \frac{1+\hat\lambda^{(k+1)}\,\xi(\hat\lambda^{(k+1)})+\left(\hat\lambda^{(k+1)}\right)^{2}}{\left[\xi(\hat\lambda^{(k+1)})+\hat\lambda^{(k+1)}\right]^{2}};$$
  • CM-Step III: Given $\widehat{\log w_1}^{(k)},\ldots,\widehat{\log w_n}^{(k)}$, update q as follows:
    $$\hat q^{(k+1)} = -\frac{n}{\sum_{i=1}^{n}\widehat{\log w_i}^{(k)}}.$$
The E-, CM-I, CM-II, and CM-III steps are repeated until an ad hoc criterion is satisfied. For instance, we considered $\ell(\hat\theta^{(k+1)}) - \ell(\hat\theta^{(k)}) < \epsilon$, for a fixed $\epsilon$; in other words, the algorithm stops when the increase in the observed log-likelihood between successive steps falls below a determined value. The initial values for the algorithm can be obtained, for instance, using the moments estimators $\hat\sigma_M$, $\hat\lambda_M$, and $\hat q_M$.
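The following sketch (ours, not the tpn implementation) illustrates the EM iteration in Python: the E-step expectations are obtained by numerical integration and, for simplicity, the CM steps are replaced by a direct numerical maximization of Q, which preserves the monotonicity of the observed log-likelihood (a generalized EM step):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(13)
n, sigma, lam, q = 150, 1.0, 1.0, 3.0
# STPN draws via representation (4): Y = Z/U^(1/q), Z ~ TPN(sigma, lambda)
z = sigma * (lam + norm.ppf(norm.cdf(-lam) + rng.uniform(size=n) * norm.cdf(lam)))
y = z / rng.uniform(size=n) ** (1 / q)

def loglik(s, l, qq):
    """Observed log-likelihood, with G(y_i) computed by quadrature."""
    g = [quad(lambda w: w**qq * norm.pdf(yi * w / s - l), 0, 1)[0] for yi in y]
    return n * (np.log(qq) - np.log(s) - norm.logcdf(l)) + np.sum(np.log(g))

def e_step(s, l, qq):
    """Conditional expectations of w_i, w_i^2 and log(w_i) given y_i."""
    out = np.empty((n, 3))
    for i, yi in enumerate(y):
        kern = lambda w: w**qq * norm.pdf(yi * w / s - l)
        g = quad(kern, 0, 1)[0]
        out[i] = [quad(lambda w: f(w) * kern(w), 0, 1)[0] / g
                  for f in (lambda w: w, lambda w: w**2, np.log)]
    return out.T

par = np.array([np.log(np.std(y)), 0.5, np.log(2.0)])  # (log sigma, lambda, log q)
ll0 = loglik(np.exp(par[0]), par[1], np.exp(par[2]))
for _ in range(10):
    ew, ew2, elog = e_step(np.exp(par[0]), par[1], np.exp(par[2]))
    def negQ(p):
        s, l, qq = np.exp(p[0]), p[1], np.exp(p[2])
        return -(n * (np.log(qq) - np.log(s) - norm.logcdf(l)) - n * l**2 / 2
                 - np.sum(y**2 * ew2) / (2 * s**2) + l / s * np.sum(y * ew)
                 + qq * np.sum(elog))
    par = minimize(negQ, par, method="Nelder-Mead").x
assert loglik(np.exp(par[0]), par[1], np.exp(par[2])) >= ll0 - 1e-6
```

The log-parametrization of σ and q keeps the optimizer inside the parameter space; the final assertion checks the defining property of the algorithm, namely that the observed log-likelihood does not decrease across iterations.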

3.4. Observed Fisher Information Matrix

The variance of the estimators can be estimated based on the observed Fisher information matrix, say $I(\theta) = -\partial^{2}\ell(\theta)/\partial\theta\,\partial\theta^{\top}$. In particular, we have that:
$$I(\theta)^{1/2}\left(\hat\theta-\theta\right)\xrightarrow{D}N_{3}(\mathbf{0}_{3},\mathrm{I}_{3}),\quad\text{as } n\to+\infty,$$
where $N_3(\mathbf{0}_3,\mathrm{I}_3)$ denotes the standard trivariate normal distribution. The computation of $I(\theta)$ is not trivial, because it involves the derivatives of functions that depend on integrals. Taking advantage of the complete log-likelihood function, $I(\theta)$ can also be approximated by Louis' method [15] as follows:
$$I(\hat\theta) = \sum_{i=1}^{n}E\!\left[B_i(\theta)\,|\,\mathbf{y},\theta=\hat\theta\right] - \sum_{i=1}^{n}E\!\left[S_i(\theta)S_i^{\top}(\theta)\,|\,\mathbf{y},\theta=\hat\theta\right] - \sum_{\substack{1\le i,j\le n\\ i\neq j}}E\!\left[S_i(\theta)\,|\,\mathbf{y},\theta=\hat\theta\right]E\!\left[S_j^{\top}(\theta)\,|\,\mathbf{y},\theta=\hat\theta\right],$$
where $S_i(\theta)$ and $B_i(\theta) = -\partial S_i(\theta)/\partial\theta^{\top}$ denote, respectively, the score vector and the negative second-derivative matrix of the complete log-likelihood for the i-th observation.
The details of the components of $I(\theta)$ are provided in Appendix A.

3.5. Computational Aspects

The EM algorithm and Louis’ method to obtain the ML estimators and their standard errors for the STPN distribution are included in the tpn package [18] from R [17]. The following function can be used to obtain these results:
est.stpn(y, sigma0=NULL, lambda0=NULL, q0=NULL, prec = 0.001, max.iter = 1000)
where y is the response variable; sigma0, lambda0, and q0 are the initial values for the algorithm (by default, they are not specified); prec is the precision for the parameters; and max.iter is the maximum number of iterations for the algorithm. The tpn package also includes the functions dstpn, pstpn, and rstpn, which compute the pdf and the cdf of, and generate random values from, the STPN distribution, respectively.

4. Simulation

In this section, we study the performance of the ML estimators based on the EM algorithm for the STPN distribution under different scenarios. We considered two values for σ (2 and 10), three values for λ (−1, 1, and 3), two values for q (1.5 and 3), and four sample sizes (50, 100, 200, and 500). For each combination of σ, λ, q, and n (48 combinations in total), we drew 1000 replicates and used the tpn package to estimate the parameters based on the EM algorithm, with the standard deviations estimated via Louis' method for the observed Fisher information matrix. Table 2 summarizes the mean of the estimated bias for the 1000 replicates (bias), the mean of the standard errors (SE), the root of the estimated mean-squared error (RMSE), and the estimated coverage probability based on the asymptotic distribution of the ML estimator using a 95% confidence level (CP). Note that the bias and RMSE terms are reduced when the sample size is increased, suggesting that the estimators are consistent even in finite samples. The SE and RMSE terms become closer when the sample size is increased, suggesting that the standard errors are also consistently estimated. Finally, the CP terms converge to the nominal value when the sample size is increased, suggesting that the asymptotic distribution of the ML estimators also works well in finite samples.

5. Application

In this section, we present a real data application in order to illustrate the performance of the STPN model in comparison with other proposals in the literature. For this, a comparison was conducted with the TPN distribution and the model proposed by Gómez et al. [19], a generalization of the TPN model denominated the generalized TPN (GTPN) distribution. The density function of the GTPN model is given by:
$$f(y;\sigma,\lambda,\alpha) = \frac{\alpha\,y^{\alpha-1}}{\sigma^{\alpha}\,\Phi(\lambda)}\,\phi\!\left(\left(\frac{y}{\sigma}\right)^{\alpha}-\lambda\right),$$
with $y>0$, $\sigma,\alpha>0$, and $\lambda\in\mathbb{R}$.
A real body fat dataset was considered, which contains weight and various body circumference measurements (see http://lib.stat.cmu.edu/datasets/bodyfat (accessed on 8 October 2021)); for illustration purposes, the weight variable (measured in pounds (lbs)) was chosen for the application. The basic statistics in Table 3 show high kurtosis for this variable, suggesting the use of a distribution with heavy tails such as the STPN.
Table 4 shows the estimated parameters for the three models considered. Based on the AIC [20] and BIC [21], the STPN model provides a better fit. In addition, Figure 4 shows the histogram of the data and the estimated pdfs for all the models, where the better performance of the STPN model can be seen. In order to check the better fit of the STPN model in comparison with the rest of the models, we also computed the quantile residuals (QRs). If the model is appropriate for the data, the QRs should behave as a sample from the standard normal distribution. This assumption can be validated with traditional normality tests such as the Anderson–Darling (AD), Cramér–von Mises (CVM), and Shapiro–Wilk (SW) tests. Figure 5 suggests that the STPN model provides a better fit for this dataset.
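Quantile residuals are straightforward to compute from the cdf of the fitted model. The sketch below (ours, assuming scipy; pstpn_num is a hypothetical numerical version of the cdf of Proposition 2) simulates data from the model itself, so the resulting QRs should look standard normal:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def pstpn_num(y, sigma, lam, q):
    """STPN cdf via Proposition 2 (numerical integration)."""
    f = lambda w: w**(q - 1) * (norm.cdf(y * w / sigma - lam) + norm.cdf(lam) - 1)
    return q / norm.cdf(lam) * quad(f, 0, 1)[0]

rng = np.random.default_rng(7)
sigma, lam, q, n = 1.0, 2.0, 4.0, 2000
z = sigma * (lam + norm.ppf(norm.cdf(-lam) + rng.uniform(size=n) * norm.cdf(lam)))
y = z / rng.uniform(size=n) ** (1 / q)  # STPN(1, 2, 4) draws

# quantile residuals: Phi^{-1}(F(y_i)); approximately N(0, 1) when the model is correct
u = np.clip([pstpn_num(yi, sigma, lam, q) for yi in y], 1e-12, 1 - 1e-12)
qr = norm.ppf(u)
assert abs(qr.mean()) < 0.15 and abs(qr.std() - 1.0) < 0.1
```

In practice, the fitted parameter estimates replace the true values, and the QRs are then submitted to the AD, CVM, and SW normality tests.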

6. Conclusions

This study presented a new distribution with positive support, denominated the slash truncation positive normal (STPN) distribution. This distribution serves as a more general model than the TPN model, increasing the kurtosis in order to improve the modeling of positive datasets with high kurtosis. The basic properties of the model were analyzed, and a simulation study was conducted implementing the EM algorithm. Finally, an application with real data was performed, showing that the new model performs better than competing models.

Author Contributions

Conceptualization, H.J.G. and D.I.G.; Data curation, H.J.G.; Formal analysis, D.I.G. and K.I.S.; Investigation, K.I.S.; Methodology, H.J.G., D.I.G. and K.I.S.; Software, H.J.G. and D.I.G.; Supervision, D.I.G. All authors have read and agreed to the published version of the manuscript.

Funding

The research of Hector J. Gómez was supported by Proyecto de Investigación de Facultad de Ingeniería, Universidad Católica de Temuco, UCT-FDI032020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used in Section 5 were duly referenced.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we detail the terms involved in the observed Fisher information matrix presented in Section 3.4. Let $\hat{w_i} = E[w_i\,|\,y_i]$, $\widehat{w_i^{2}} = E[w_i^{2}\,|\,y_i]$, $\widehat{w_i^{3}} = E[w_i^{3}\,|\,y_i]$, $\widehat{w_i^{4}} = E[w_i^{4}\,|\,y_i]$, $\widehat{\log(w_i)} = E[\log(w_i)\,|\,y_i]$, $\widehat{\log^{2}(w_i)} = E[\log^{2}(w_i)\,|\,y_i]$, $\widehat{\log(w_i)^{*}} = E[w_i\log(w_i)\,|\,y_i]$, and $\widehat{\log(w_i)^{(2)}} = E[w_i^{2}\log(w_i)\,|\,y_i]$.
We also define $B_i = E[B_i(\theta)\,|\,\mathbf{y},\theta=\hat\theta]$, $D_i = E[S_i(\theta)S_i^{\top}(\theta)\,|\,\mathbf{y},\theta=\hat\theta]$, and $F_{ij} = E[S_i(\theta)\,|\,\mathbf{y},\theta=\hat\theta]\,E[S_j^{\top}(\theta)\,|\,\mathbf{y},\theta=\hat\theta]$.
The elements of $B_i$ are $B_{i,1,1} = -\frac{1}{\sigma^{2}} + \frac{3y_i^{2}\widehat{w_i^{2}}}{\sigma^{4}} - \frac{2\lambda y_i\hat{w_i}}{\sigma^{3}}$, $B_{i,1,2} = B_{i,2,1} = \frac{y_i\hat{w_i}}{\sigma^{2}}$, $B_{i,1,3} = B_{i,3,1} = 0$, $B_{i,2,2} = 1 - \xi(\lambda)\,[\lambda+\xi(\lambda)]$, $B_{i,2,3} = B_{i,3,2} = 0$, and $B_{i,3,3} = \frac{1}{q^{2}}$.
The elements of $D_i$ are:
$$\begin{aligned}
D_{i,1,1} &= \frac{1}{\sigma^{2}} - \frac{2y_i^{2}\widehat{w_i^{2}}}{\sigma^{4}} + \frac{2\lambda y_i\hat{w_i}}{\sigma^{3}} + \frac{y_i^{4}\widehat{w_i^{4}}}{\sigma^{6}} - \frac{2\lambda y_i^{3}\widehat{w_i^{3}}}{\sigma^{5}} + \frac{\lambda^{2}y_i^{2}\widehat{w_i^{2}}}{\sigma^{4}},\\
D_{i,1,2} &= D_{i,2,1} = \frac{\xi(\lambda)+\lambda}{\sigma} + \frac{y_i\hat{w_i}}{\sigma^{2}}\,[\lambda^{2}-1+\lambda\xi(\lambda)] - \frac{y_i^{2}\widehat{w_i^{2}}}{\sigma^{3}}\,[\xi(\lambda)+2\lambda] + \frac{y_i^{3}\widehat{w_i^{3}}}{\sigma^{4}},\\
D_{i,1,3} &= D_{i,3,1} = -\frac{1}{\sigma q} - \frac{\widehat{\log(w_i)}}{\sigma} + \frac{y_i^{2}\widehat{w_i^{2}}}{q\sigma^{3}} + \frac{y_i^{2}\,\widehat{\log(w_i)^{(2)}}}{\sigma^{3}} - \frac{\lambda y_i\hat{w_i}}{q\sigma^{2}} - \frac{\lambda y_i\,\widehat{\log(w_i)^{*}}}{\sigma^{2}},\\
D_{i,2,2} &= \xi^{2}(\lambda) + 2\lambda\xi(\lambda) + \lambda^{2} - \frac{2\,[\xi(\lambda)+\lambda]\,y_i\hat{w_i}}{\sigma} + \frac{y_i^{2}\widehat{w_i^{2}}}{\sigma^{2}},\\
D_{i,2,3} &= D_{i,3,2} = -\frac{\xi(\lambda)+\lambda}{q} - [\xi(\lambda)+\lambda]\,\widehat{\log(w_i)} + \frac{y_i\hat{w_i}}{q\sigma} + \frac{y_i\,\widehat{\log(w_i)^{*}}}{\sigma},\\
D_{i,3,3} &= \frac{1}{q^{2}} + \frac{2\,\widehat{\log(w_i)}}{q} + \widehat{\log^{2}(w_i)}.
\end{aligned}$$
Finally, the elements of $F_{ij}$ are given by:
$$\begin{aligned}
F_{i,j,1,1} &= \frac{1}{\sigma^{2}} + \frac{y_i^{2}y_j^{2}\,\widehat{w_i^{2}}\,\widehat{w_j^{2}}}{\sigma^{6}} + \frac{\lambda^{2}y_iy_j\,\hat{w_i}\hat{w_j}}{\sigma^{4}} - \frac{y_i^{2}\widehat{w_i^{2}}}{\sigma^{4}} - \frac{y_j^{2}\widehat{w_j^{2}}}{\sigma^{4}} - \frac{\lambda y_j^{2}y_i\,\widehat{w_j^{2}}\,\hat{w_i}}{\sigma^{5}} - \frac{\lambda y_i^{2}y_j\,\widehat{w_i^{2}}\,\hat{w_j}}{\sigma^{5}} + \frac{\lambda y_i\hat{w_i}}{\sigma^{3}} + \frac{\lambda y_j\hat{w_j}}{\sigma^{3}},\\
F_{i,j,1,2} &= \frac{\xi(\lambda)+\lambda}{\sigma} - \frac{y_j\hat{w_j}}{\sigma^{2}} - \frac{[\xi(\lambda)+\lambda]\,y_i^{2}\widehat{w_i^{2}}}{\sigma^{3}} + \frac{y_i^{2}y_j\,\widehat{w_i^{2}}\,\hat{w_j}}{\sigma^{4}} + \frac{\lambda\,[\xi(\lambda)+\lambda]\,y_i\hat{w_i}}{\sigma^{2}} - \frac{\lambda y_iy_j\,\hat{w_i}\hat{w_j}}{\sigma^{3}},\\
F_{i,j,1,3} &= -\frac{1}{\sigma q} + \frac{y_i^{2}\widehat{w_i^{2}}}{q\sigma^{3}} - \frac{\lambda y_i\hat{w_i}}{q\sigma^{2}} + \left[-\frac{1}{\sigma} + \frac{y_i^{2}\widehat{w_i^{2}}}{\sigma^{3}} - \frac{\lambda y_i\hat{w_i}}{\sigma^{2}}\right]\widehat{\log(w_j)},\\
F_{i,j,2,1} &= \frac{\xi(\lambda)+\lambda}{\sigma} - \frac{y_i\hat{w_i}}{\sigma^{2}} - \frac{[\xi(\lambda)+\lambda]\,y_j^{2}\widehat{w_j^{2}}}{\sigma^{3}} + \frac{y_j^{2}y_i\,\widehat{w_j^{2}}\,\hat{w_i}}{\sigma^{4}} + \frac{\lambda\,[\xi(\lambda)+\lambda]\,y_j\hat{w_j}}{\sigma^{2}} - \frac{\lambda y_jy_i\,\hat{w_j}\hat{w_i}}{\sigma^{3}},\\
F_{i,j,2,2} &= [\xi(\lambda)+\lambda]^{2} - \frac{[\xi(\lambda)+\lambda]\,[y_i\hat{w_i}+y_j\hat{w_j}]}{\sigma} + \frac{y_iy_j\,\hat{w_i}\hat{w_j}}{\sigma^{2}},\\
F_{i,j,2,3} &= -\frac{\xi(\lambda)+\lambda}{q} - [\xi(\lambda)+\lambda]\,\widehat{\log(w_j)} + \frac{y_i\hat{w_i}}{q\sigma} + \frac{y_i\hat{w_i}\,\widehat{\log(w_j)}}{\sigma},\\
F_{i,j,3,1} &= -\frac{1}{\sigma q} + \frac{y_j^{2}\widehat{w_j^{2}}}{q\sigma^{3}} - \frac{\lambda y_j\hat{w_j}}{q\sigma^{2}} + \left[-\frac{1}{\sigma} + \frac{y_j^{2}\widehat{w_j^{2}}}{\sigma^{3}} - \frac{\lambda y_j\hat{w_j}}{\sigma^{2}}\right]\widehat{\log(w_i)},\\
F_{i,j,3,2} &= -\frac{\xi(\lambda)+\lambda}{q} - [\xi(\lambda)+\lambda]\,\widehat{\log(w_i)} + \frac{y_j\hat{w_j}}{q\sigma} + \frac{y_j\hat{w_j}\,\widehat{\log(w_i)}}{\sigma},\\
F_{i,j,3,3} &= \frac{1}{q^{2}} + \widehat{\log(w_i)}\,\widehat{\log(w_j)} + \frac{\widehat{\log(w_i)}+\widehat{\log(w_j)}}{q}.
\end{aligned}$$

Appendix B

In this section, we present the R code used to estimate the parameters of the STPN model in the real data application presented in Section 5.
require(tpn)
y<-c(154.25, 173.25, 154.00, 184.75, 184.25, 210.25, 181.00, 176.00, 191.00, 198.25,
 186.25, 216.00, 180.50, 205.25, 187.75, 162.75, 195.75, 209.25, 183.75, 211.75,
 179.00, 200.50, 140.25, 148.75, 151.25, 159.25, 131.50, 148.00, 133.25, 160.75,
 182.00, 160.25, 168.00, 218.50, 247.25, 191.75, 202.25, 196.75, 363.15, 203.00,
 262.75, 205.00, 217.00, 212.00, 125.25, 164.25, 133.50, 148.50, 135.75, 127.50,
 158.25, 139.25, 137.25, 152.75, 136.25, 198.00, 181.50, 201.25, 202.50, 179.75,
 216.00, 178.75, 193.25, 178.00, 205.50, 183.50, 151.50, 154.75, 155.25, 156.75,
 167.50, 146.75, 160.75, 125.00, 143.00, 148.25, 162.50, 177.75, 161.25, 171.25,
 163.75, 150.25, 190.25, 170.75, 168.00, 167.00, 157.75, 160.00, 176.75, 176.00,
 177.00, 179.75, 165.25, 192.50, 184.25, 224.50, 188.75, 162.50, 156.50, 197.00,
 198.50, 173.75, 172.75, 196.75, 177.00, 165.50, 200.25, 203.25, 194.00, 168.50,
 170.75, 183.25, 178.25, 163.00, 175.25, 158.00, 177.25, 179.00, 191.00, 187.50,
 206.50, 185.25, 160.25, 151.50, 161.00, 167.00, 177.50, 152.25, 192.25, 165.25,
 171.75, 171.25, 197.00, 157.00, 168.25, 186.00, 166.75, 187.75, 168.25, 212.75,
 176.75, 173.25, 167.00, 159.75, 188.15, 156.00, 208.50, 206.50, 143.75, 223.00,
 152.25, 241.75, 146.00, 156.75, 200.25, 171.50, 205.75, 182.50, 136.50, 177.25,
 151.25, 196.00, 184.25, 140.00, 218.75, 217.00, 166.25, 224.75, 228.25, 172.75,
 152.25, 125.75, 177.25, 176.25, 226.75, 145.25, 151.00, 241.25, 187.25, 234.75,
 219.25, 118.50, 145.75, 159.25, 170.50, 167.50, 232.75, 210.50, 202.25, 185.00,
 153.00, 244.25, 193.50, 224.75, 162.75, 180.00, 156.25, 168.00, 167.25, 170.75,
 178.25, 150.00, 200.50, 184.00, 223.00, 208.75, 166.00, 195.00, 160.50, 159.75,
 140.50, 216.25, 168.25, 194.75, 172.75, 219.00, 149.25, 154.50, 199.25, 154.50,
 153.25, 230.00, 161.75, 142.25, 179.75, 126.50, 169.50, 198.50, 174.50, 167.75,
 147.75, 182.25, 175.50, 161.75, 157.75, 168.75, 191.50, 219.15, 155.25, 189.75,
 127.50, 224.50, 234.25, 227.75, 199.50, 155.50, 215.50, 134.25, 201.00, 186.75,
 190.75, 207.50)
est.stpn(y)

References

  1. Rafiqullah, H.M.; Saxena, A.; Vera, V.; Abdool-Ghany, F.; Gabbidon, K.; Perea, N.; Shauna-Jeanne Stewart, T.; Ramamoorthy, V. Black Hispanic and Black Non-Hispanic Breast Cancer Survival Data Analysis with Half-normal Model Application. Asian Pac. J. Cancer Prev. 2014, 15, 9453–9458. [Google Scholar]
  2. Bosch-Badia, M.T.; Montllor-Serrats, J.; Tarrazon-Rodon, M.A. Risk Analysis through the Half-Normal Distribution. Mathematics 2020, 8, 2080. [Google Scholar] [CrossRef]
  3. Tsizhmovska, N.L.; Martyushev, L.M. Principle of Least Effort and Sentence Length in Public Speaking. Entropy 2021, 23, 1023. [Google Scholar] [CrossRef] [PubMed]
  4. Olmos, N.M.; Varela, H.; Gómez, H.W.; Bolfarine, H. An extension of the half-normal distribution. Stat. Pap. 2012, 53, 875–886. [Google Scholar] [CrossRef]
  5. Cooray, K.; Ananda, M.M.A. A generalization of the half-normal distribution with applications to lifetime data. Comm. Stat. Theory Methods 2007, 36, 1323–2157. [Google Scholar] [CrossRef]
  6. Olmos, N.M.; Varela, H.; Gómez, H.W.; Bolfarine, H. An extension of the generalized half-normal distribution. Stat. Pap. 2014, 55, 967–981. [Google Scholar] [CrossRef]
  7. Iriarte, Y.; Gómez, H.W.; Varela, H.; Bolfarine, H. Slashed Rayleigh distribution. Rev. Colomb. Estad. 2015, 38, 31–44. [Google Scholar] [CrossRef]
  8. Reyes, J.; Barranco-Chamorro, I.; Gallardo, D.I.; Gómez, H.W. Generalized modified slash Birnbaum-Saunders distribution. Symmetry 2018, 10, 724. [Google Scholar] [CrossRef] [Green Version]
  9. Olmos, N.M.; Venegas, O.; Gómez, Y.M.; Iriarte, Y.A. Confluent hypergeometric slashed-Rayleigh distribution: Properties, estimation and applications. J. Comput. Appl. Math. 2020, 328, 112548. [Google Scholar] [CrossRef]
  10. Segovia, F.A.; Gómez, Y.M.; Venegas, O.; Gómez, H.W. A Power Maxwell Distribution with Heavy Tails and Applications. Mathematics 2020, 8, 1116. [Google Scholar] [CrossRef]
  11. Astorga, J.M.; Reyes, J.; Santoro, K.I.; Venegas, O.; Gómez, H.W. A Reliability Model Based on the Incomplete Generalized Integro-Exponential Function. Mathematics 2020, 8, 1537. [Google Scholar] [CrossRef]
  12. Gómez, H.J.; Olmos, N.M.; Varela, H.; Bolfarine, H. Inference for a truncated positive normal distribution. Appl. Math. J. Chin. Univ. 2018, 33, 163–176. [Google Scholar] [CrossRef]
  13. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Continuous Univariate Distributions, 2nd ed.; Wiley: New York, NY, USA, 1995; Volume 2. [Google Scholar]
  14. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum Likelihood from Incomplete Data via the EM Algorithm. J. R. Stat. Soc. Ser. B 1977, 39, 1–38. [Google Scholar]
  15. Louis, T.A. Finding the observed information matrix when using the EM algorithm. J. R. Stat. Soc. Ser. B Methodol. 1982, 44, 226–233. [Google Scholar]
  16. Casella, G.; Berger, R.L. Statistical Inference; Duxbury: Pacific Grove, CA, USA, 2002. [Google Scholar]
  17. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021; Available online: https://www.R-project.org/ (accessed on 8 October 2021).
  18. Gallardo, D.I.; Gómez, H.J. tpn: Truncated Positive Normal Model and Extensions. R Package Version 1.0. 2021. Available online: https://cran.r-project.org/web/packages/tpn/index.html (accessed on 8 October 2021).
  19. Gómez, H.J.; Gallardo, D.I.; Venegas, O. Generalized truncation positive normal distribution. Symmetry 2019, 11, 1361. [Google Scholar] [CrossRef] [Green Version]
  20. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  21. Schwarz, G. Estimating the dimension of a model. Ann. Stat. 1978, 6, 461–464. [Google Scholar] [CrossRef]
Figure 1. Particular cases for the STPN distribution.
Figure 2. pdf, cdf, and hazard function for the STPN ( σ = 1 , λ = 2 , q ) model with different combinations of q and the STPN ( σ = 1 , λ , q = 2 ) model with different combinations of λ . (a) pdf of STPN ( σ = 1 , λ = 2 , q ) . (b) pdf of STPN ( σ = 1 , λ , q = 2 ) . (c) cdf of STPN ( σ = 1 , λ = 2 , q ) . (d) cdf of STPN ( σ = 1 , λ , q = 2 ) . (e) hazard function of STPN ( σ = 1 , λ = 2 , q ) . (f) hazard function of STPN ( σ = 1 , λ , q = 2 ) .
Figure 3. (a) Mean; (b) standard deviation; (c) asymmetry coefficient; (d) kurtosis coefficient for the STPN( λ , σ = 1 , q ) model.
Figure 4. Fit of the distributions for the weight dataset.
Figure 5. Quantile residuals (QRs) for the fitted models in the weight dataset. The p-values of the Anderson-Darling (AD), Cramér-von Mises (CVM), and Shapiro-Wilk (SW) normality tests are also reported to check whether the QRs come from the standard normal distribution. (a) qq-plot STPN. (b) qq-plot TPN. (c) qq-plot GTPN.
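As a hedged illustration (not the authors' code), the diagnostic behind Figure 5 can be sketched as follows: the quantile residuals are r_i = Φ⁻¹(F̂(x_i)), where F̂ is the fitted cdf, and the AD, CVM, and SW tests then check whether the r_i behave like standard normal draws. The `fitted_cdf` argument below is a placeholder for the cdf of any of the three fitted models; the exponential toy example is only there to make the sketch runnable.

```python
import numpy as np
from scipy import stats

def quantile_residuals(x, fitted_cdf):
    """Quantile residuals: r_i = Phi^{-1}(F_hat(x_i))."""
    u = np.clip(fitted_cdf(x), 1e-10, 1 - 1e-10)  # guard against 0/1
    return stats.norm.ppf(u)

def normality_checks(r):
    """AD, CVM, and SW checks on the quantile residuals."""
    ad = stats.anderson(r, dist="norm")    # normality with estimated mean/sd
    cvm = stats.cramervonmises(r, "norm")  # against N(0,1) specifically
    sw = stats.shapiro(r)                  # Shapiro-Wilk p-value
    return ad.statistic, cvm.pvalue, sw.pvalue

# Toy check: residuals from a correctly specified model are ~ N(0,1).
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=300)
r = quantile_residuals(x, stats.expon(scale=2.0).cdf)
ad_stat, cvm_p, sw_p = normality_checks(r)
```

Under a correctly specified model, F̂(X) is uniform, so the residuals are exactly standard normal; large deviations (small p-values) signal model misfit.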
Table 1. Some values for the kurtosis coefficients of the STPN distribution for different values of λ and q.

          q = 5    q = 7    q = 10   q = 15   +∞ (TPN)
λ = −5    19.68    10.28     8.58     8.04     7.76
λ = −2    16.63     8.22     6.73     6.26     6.02
λ = −1    14.88     7.02     5.64     5.22     5.00
λ =  0    13.19     5.72     4.45     4.06     3.87
λ =  1    12.70     4.82     3.54     3.18     3.00
λ =  2    15.23     4.93     3.34     2.93     2.76
λ =  5    35.37    10.07     4.84     3.44     2.99
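The heavy tails behind Table 1 come from the quotient representation described in the abstract: X = Z / U^(1/q) with Z ~ TPN(σ, λ) and U standard uniform, so smaller q inflates the tail. A minimal simulation sketch, assuming the TPN(σ, λ) parameterization of Gómez et al. [12] in which X/σ is a unit-variance normal with mean λ truncated to (0, ∞):

```python
import numpy as np
from scipy import stats

def rstpn(n, sigma, lam, q, rng):
    """Draw from STPN(sigma, lambda, q) via the quotient representation:
    Z ~ TPN(sigma, lambda), U ~ Uniform(0, 1), X = Z / U**(1/q)."""
    # TPN(sigma, lambda): sigma * (N(lambda, 1) truncated to (0, inf))
    z = sigma * stats.truncnorm.rvs(a=-lam, b=np.inf, loc=lam, scale=1,
                                    size=n, random_state=rng)
    u = rng.uniform(size=n)
    return z / u ** (1.0 / q)

rng = np.random.default_rng(2021)
x_heavy = rstpn(100_000, sigma=1, lam=1, q=5, rng=rng)   # heavier tail
x_light = rstpn(100_000, sigma=1, lam=1, q=15, rng=rng)  # closer to TPN
# Smaller q inflates the tail: E[X] = q/(q-1) * E[Z] for q > 1, so the
# q = 5 sample should show a larger mean and dispersion than q = 15.
```

This matches the pattern in Table 1, where each row's kurtosis decreases toward the TPN value as q grows.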
Table 2. Recovery parameters for the STPN distribution based on 1000 replicates for different combinations of parameters and sample size.

                          n = 50                      n = 100                     n = 200                     n = 500
σ    λ    q    est.   bias    SE  RMSE    CP     bias    SE  RMSE    CP     bias    SE  RMSE    CP     bias    SE  RMSE    CP
2    −1   1.5  σ̂     −0.43  1.98  1.06  0.71    −0.12  1.98  1.09  0.80     0.04  1.64  1.01  0.88     0.19  1.17  0.81  0.92
               λ̂      0.76  2.29  1.40  0.77     0.33  2.10  1.21  0.83     0.11  1.69  1.02  0.90    −0.14  1.20  0.83  0.94
               q̂      0.09  0.57  0.54  0.94     0.08  0.42  0.43  0.93     0.06  0.29  0.32  0.96     0.03  0.18  0.16  0.97
          3    σ̂     −0.70  1.28  0.97  0.62    −0.34  1.30  0.86  0.78    −0.09  1.13  0.78  0.86     0.07  0.78  0.66  0.94
               λ̂      0.94  1.64  1.35  0.72     0.45  1.49  1.08  0.83     0.19  1.18  0.85  0.90    −0.05  0.81  0.68  0.94
               q̂     −0.30  2.11  2.01  0.81    −0.11  1.50  1.03  0.88     0.08  1.23  0.93  0.93     0.09  0.72  0.67  0.93
     1    1.5  σ̂      0.21  1.18  1.14  0.89     0.20  0.80  0.86  0.94     0.09  0.49  0.50  0.95     0.06  0.29  0.31  0.96
               λ̂      0.06  0.76  0.72  0.94    −0.04  0.53  0.52  0.96    −0.02  0.35  0.35  0.97    −0.02  0.22  0.22  0.95
               q̂      0.23  0.63  0.72  0.96     0.10  0.33  0.39  0.96     0.04  0.21  0.25  0.96     0.03  0.13  0.14  0.95
          3    σ̂     −0.01  0.83  0.67  0.88     0.09  0.58  0.53  0.94     0.07  0.39  0.41  0.95     0.03  0.23  0.24  0.95
               λ̂      0.11  0.60  0.56  0.95     0.01  0.41  0.39  0.96    −0.01  0.29  0.30  0.96    −0.01  0.18  0.18  0.96
               q̂      1.07  4.79  5.13  0.92     0.46  1.53  1.36  0.95     0.28  0.81  0.89  0.96     0.11  0.42  0.46  0.97
     3    1.5  σ̂      0.12  0.59  0.83  0.94     0.04  0.37  0.40  0.95     0.02  0.26  0.25  0.96     0.03  0.17  0.17  0.95
               λ̂      0.13  0.70  0.76  0.95     0.07  0.45  0.63  0.95     0.02  0.31  0.30  0.96    −0.02  0.19  0.19  0.95
               q̂      0.14  0.39  0.50  0.96     0.06  0.22  0.23  0.97     0.03  0.16  0.16  0.95     0.02  0.10  0.10  0.95
          3    σ̂      0.01  0.45  0.48  0.95     0.04  0.31  0.31  0.96     0.02  0.21  0.21  0.96     0.01  0.13  0.13  0.95
               λ̂      0.16  0.57  0.63  0.97     0.02  0.37  0.37  0.96     0.02  0.25  0.25  0.95     0.01  0.16  0.16  0.95
               q̂      0.50  1.55  2.01  0.96     0.22  0.67  0.78  0.96     0.11  0.43  0.48  0.96     0.05  0.26  0.25  0.97
10   −1   1.5  σ̂     −2.41  8.09  5.04  0.70    −1.37  7.73  4.63  0.76    −1.00  5.71  3.71  0.83     0.05  4.56  3.03  0.90
               λ̂      0.84  1.89  1.34  0.75     0.52  1.71  1.08  0.81     0.34  1.25  0.85  0.86     0.04  0.97  0.66  0.92
               q̂      0.08  0.56  0.55  0.92     0.07  0.41  0.45  0.93     0.03  0.27  0.27  0.94     0.02  0.17  0.16  0.97
          3    σ̂     −3.69  5.49  4.83  0.62    −2.29  5.30  3.86  0.76    −0.93  4.73  3.14  0.85     0.11  3.58  2.87  0.90
               λ̂      0.99  1.45  1.33  0.71     0.58  1.27  0.98  0.82     0.28  1.03  0.74  0.88     0.04  0.75  0.60  0.92
               q̂     −0.30  2.05  2.03  0.81    −0.19  1.33  0.95  0.87     0.07  1.18  0.92  0.92     0.15  0.76  0.73  0.94
     1    1.5  σ̂      1.24  6.09  5.89  0.89     0.87  3.89  4.52  0.93     0.37  2.40  2.66  0.93     0.25  1.42  1.49  0.95
               λ̂      0.02  0.77  0.72  0.95    −0.02  0.52  0.52  0.96    −0.00  0.35  0.37  0.95    −0.02  0.21  0.22  0.96
               q̂      0.23  0.59  0.68  0.97     0.12  0.36  0.45  0.95     0.04  0.21  0.23  0.96     0.02  0.13  0.13  0.95
          3    σ̂     −0.11  3.87  3.17  0.89     0.26  2.84  2.60  0.92     0.28  1.91  2.02  0.94     0.14  1.14  1.21  0.95
               λ̂      0.12  0.59  0.54  0.95     0.03  0.41  0.39  0.94     0.00  0.28  0.28  0.95    −0.00  0.18  0.18  0.95
               q̂      1.28  5.65  6.16  0.91     0.41  1.55  1.75  0.94     0.30  0.87  1.03  0.96     0.10  0.42  0.45  0.97
     3    1.5  σ̂      0.44  2.74  3.03  0.93     0.22  1.85  1.91  0.95     0.15  1.27  1.36  0.94     0.09  0.81  0.85  0.95
               λ̂      0.14  0.67  0.89  0.96     0.05  0.44  0.47  0.96     0.02  0.30  0.31  0.96    −0.00  0.19  0.20  0.95
               q̂      0.14  0.35  0.42  0.98     0.06  0.23  0.24  0.96     0.03  0.15  0.16  0.96     0.01  0.10  0.10  0.95
          3    σ̂      0.22  2.24  2.36  0.94     0.07  1.50  1.57  0.94     0.00  1.03  1.06  0.95     0.01  0.65  0.65  0.95
               λ̂      0.11  0.55  0.59  0.97     0.07  0.37  0.40  0.97     0.04  0.25  0.26  0.95     0.01  0.16  0.16  0.95
               q̂      0.69  1.98  3.14  0.95     0.21  0.67  0.82  0.96     0.10  0.42  0.45  0.96     0.04  0.25  0.26  0.96
Table 3. Descriptive statistics for the weight dataset.

Dataset           n     X̄       S²       b₁    b₂
Weight measured   252   178.9   863.72   1.2   8.14
Table 4. Estimated parameters and their standard errors (in parentheses) for the STPN, TPN, and GTPN models for the weight dataset. The AIC and BIC are also presented.

Estimated   STPN             TPN              GTPN
σ           20.775 (1.811)   29.331 (1.306)   0.321 (0.088)
λ           7.825 (0.593)    6.100 (0.279)    14.689 (0.636)
q           11.250 (2.132)   -                -
α           -                -                0.426 (0.015)
AIC         2394.456         2421.978         2405.150
BIC         2405.044         2429.037         2415.738
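For reference, the criteria in Table 4 follow the usual definitions AIC = 2k − 2ℓ [20] and BIC = k log n − 2ℓ [21], with k the number of parameters and n = 252 here. A quick sketch; the log-likelihood value below is back-solved from the reported STPN AIC (k = 3), not taken from the paper:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*loglik."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*log(n) - 2*loglik."""
    return k * math.log(n) - 2 * loglik

# STPN fit: k = 3 parameters (sigma, lambda, q), n = 252 observations.
# loglik back-solved from the reported AIC = 2394.456.
loglik_stpn = -(2394.456 - 2 * 3) / 2   # = -1194.228
print(round(aic(loglik_stpn, 3), 3))       # 2394.456, matches Table 4
print(round(bic(loglik_stpn, 3, 252), 3))  # 2405.044, matches Table 4
```

With these definitions the STPN attains the smallest AIC and BIC of the three models, which is the basis for preferring it in the application.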
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Gómez, H.J.; Gallardo, D.I.; Santoro, K.I. Slash Truncation Positive Normal Distribution and Its Estimation Based on the EM Algorithm. Symmetry 2021, 13, 2164. https://doi.org/10.3390/sym13112164
