Article

Mean Equality Tests for High-Dimensional and Higher-Order Data with k-Self Similar Compound Symmetry Covariance Structure

1 Departamento de Matemática, Facultad de Ciencias Económicas, Universidad Nacional de Cuyo, Mendoza 5500, Argentina
2 Department of Management Science and Statistics, The University of Texas at San Antonio, San Antonio, TX 78249, USA
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(2), 291; https://doi.org/10.3390/sym14020291
Submission received: 30 September 2021 / Revised: 2 January 2022 / Accepted: 8 January 2022 / Published: 1 February 2022
(This article belongs to the Special Issue Symmetry in Multivariate Analysis)

Abstract: An extension of the $D^2$ test statistic for testing the equality of means for high-dimensional and $k$-th order array-variate data using the $k$-self similar compound symmetry ($k$-SSCS) covariance structure is derived. Data of $k$-th order appear in many scientific fields, including agricultural, medical, environmental and engineering applications. We discuss a key property of the $k$-SSCS covariance structure, namely that it is of the Jordan algebra type. We formally show that our $D^2$ test statistic for $k$-th order data generalizes the $D^2$ test statistics for second-order data and for third-order data, respectively. We also derive the $D^2$ test statistic for third-order data and illustrate its application using a medical dataset from a clinical trial study of the eye disease glaucoma. The new test statistic is very efficient for high-dimensional data, where estimation of the unstructured variance-covariance matrix is not feasible due to small sample sizes.

1. Introduction

We study the hypothesis testing problem of the equality of means for high-dimensional and higher-order (multi-dimensional array) data. Standard multivariate techniques such as Hotelling's $T^2$, with one big unstructured variance-covariance matrix (assuming a large sample size), do not work for these higher-order data, as Hotelling's $T^2$ cannot incorporate any higher-order information into the test statistic and thus draws wrong conclusions [1]. Higher-order data are formed by representing the additional associations that are inherent in repetition across several dimensions. To obtain a better understanding of higher-order data, we first share a simple and interesting example:
  • Traditional multivariate (vector-variate) data are first-order data. Consider, for example, a clinical trial study of glaucoma, where several factors ($m_1$ of them), such as intraocular pressure (IOP) and central corneal thickness (CCT), are effective in the diagnosis of glaucoma. This is an illustration of $(m_1 \times 1)$ vector-variate first-order data.
  • When the first-order data are measured at various locations/sites or time points, the data become two-dimensional matrix-variate data, which we call second-order data. These data are also recognized as multivariate repeated measures data or doubly multivariate data, e.g., multivariate spatial data or multivariate temporal data. In the above clinical trial study, an ophthalmologist or optometrist diagnoses glaucoma by measuring IOP and CCT ($m_1$) in both eyes ($m_2$). Thus, the vector-variate first-order dataset discussed above becomes an $(m_1 \times m_2)$ matrix-variate second-order dataset when the $m_1$ variables are measured repeatedly over another dimension.
  • When the second-order data are measured at various sites or over various time points, the data become three-dimensional array-variate data, which we call third-order data. These are also recognized as triply multivariate data, e.g., multivariate spatio-temporal data or multivariate spatio-spatio data. In the previous example, if the IOP and CCT ($m_1 = 2$) are measured in both eyes ($m_2 = 2$) as well as over, say, three time points ($m_3 = 3$), the dataset becomes a third-order dataset.
  • When the third-order data are measured in various directions, the data become four-dimensional $(m_1 \times m_2 \times m_3 \times m_4)$ array-variate fourth-order data, e.g., multivariate directo-spatio-temporal data or multivariate directo-spatio-spatio data.
  • When the fourth-order data are measured at various depths, the data become five-dimensional $(m_1 \times m_2 \times m_3 \times m_4 \times m_5)$ array-variate fifth-order data, and so on, e.g., multivariate deptho-directo-spatio-temporal data.
In the above glaucoma example, the dataset has $m_1$ variables, and these $m_1$ variables are repeatedly measured over various dimensions, adding higher-order information to the dataset. Now, the question is: what is this higher-order information? Higher-order information is embedded in the higher-order covariance structures that are formed by signifying the additional associations inherent in the repetition of the variables across several dimensions. The other question is: how can we measure and capture the higher-order information? For this, one needs to understand how to read these structured higher-order data and how to use an appropriate variance-covariance structure to incorporate the higher-order information that is integral to such data.
Higher-order data have been studied by many authors over the last 20 years using various variance-covariance structures to reduce the number of unknown parameters, which is very important for high-dimensional data. Second-order data have been studied using the matrix-variate normal distribution [2,3]. Second-order data can also be analyzed vectorially using a two-separable (Kronecker product) variance-covariance structure [4,5], or a block compound symmetry (BCS) structure, also named a block exchangeable (BE) or 2-SSCS covariance structure [6]. The two-separable covariance structure for second-order data has two covariance matrices, one for each order of the data; in other words, one covariance matrix for the within-subject information and the other for the between-subject information. Combining the covariance structures of within-subject and between-subject information results in a second-order model for second-order data. If this information is not properly taken care of, the test statistic will end up yielding wrong conclusions [1]. To obtain a picture of third-order data, see [7]. Manceur and Dutilleul [7] used the tensor normal distribution with a doubly separable covariance structure. The 2-SSCS and 3-SSCS covariance structures are useful tools for the analyses of second- and third-order datasets, respectively. Manceur and Dutilleul [7] also studied fourth-order data with a four-separable covariance structure. In the same way, $k$-th order data can be analyzed vectorially with some structured variance-covariance matrix that integrates the higher-order information into the model, e.g., the $k$-separable covariance structure [8,9]. However, the $k$-separable covariance structure may not be appropriate for all datasets; thus, in this article we investigate another structure, namely the $k$-SSCS covariance structure (defined in Section 3), for $k$-th order data. See [10].
High-dimensionality requires exploiting the structural properties of the data to reduce the number of estimated parameters and thus obtain more accurate conclusions for $k$-th order data, and the $k$-SSCS covariance structure is one such structural property. For example, for the third-order glaucoma data, the number of unknown parameters in the $(12 \times 12)$-dimensional unstructured variance-covariance matrix is 78, whereas the number of unknown parameters for the 3-SSCS covariance structure is just 9 (see the sketch below); the latter may thus help in providing the correct information about the true association of the structured third-order data. The data quickly become high-dimensional as the order of the data increases; the variance-covariance matrix then becomes singular for small samples, and testing of the mean is not possible. This necessitates the development of new statistical methods with a suitable structured variance-covariance matrix, which can integrate the existing correlation information of the higher-order data into the test statistic and can take care of the high-dimensionality of the data as well.
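To make the parameter-count comparison concrete, here is a minimal Python sketch (the function names are ours, not from the paper) that counts the unknown parameters of the unstructured and $k$-SSCS covariance matrices for arbitrary dimensions $(m_1, \ldots, m_k)$:

```python
import numpy as np

def unstructured_params(dims):
    """Unknown parameters in an unstructured covariance matrix of the
    vectorized k-th order array: p(p+1)/2 with p = m_1 * ... * m_k."""
    p = int(np.prod(dims))
    return p * (p + 1) // 2

def k_sscs_params(dims):
    """Unknown parameters in a k-SSCS covariance: k symmetric m_1 x m_1
    component matrices U_{k,1}, ..., U_{k,k}."""
    m1, k = dims[0], len(dims)
    return k * m1 * (m1 + 1) // 2

# Third-order glaucoma data: m_1 = 2 variables, m_2 = 2 eyes, m_3 = 3 times.
print(unstructured_params((2, 2, 3)))  # 78
print(k_sscs_params((2, 2, 3)))        # 9
```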
Rao [11,12] introduced the 2-SSCS covariance structure while classifying genetically different groups. Olkin and Press [13] examined a circular stationary model. The problem of estimation in balanced multilevel models with a block circular symmetric covariance structure was studied by Liang et al. [14]. Olkin [15] studied the hypothesis testing problem of the equality of the mean vectors of multiple populations of second-order data using a 2-SSCS covariance structure, which is reminiscent of a model of Wilks [16]. Arnold [17] studied normal testing problems in which the mean is patterned and the variance-covariance matrix has a 2-SSCS structure, as well as the multivariate analysis of variance problem under a 2-SSCS structure. Arnold [18] later developed linear models with a 2-SSCS structure as the error matrix for one matrix-variate observation. Roy et al. [19] and Žežula et al. [1] studied hypothesis testing problems on the mean for second-order data using a 2-SSCS covariance structure. There are a few studies on third-order data using the 3-SSCS covariance structure; see Leiva and Roy [20] for classification problems and Roy and Fonseca [21] for linear models with a 3-SSCS covariance structure on the error vectors. Recently, Žežula et al. [22] studied the mean value test for third-order data using a 3-SSCS covariance structure.
A majority of the above-mentioned authors studied only second-order matrix-variate data and used a 2-SSCS covariance structure, where the exchangeability (invariance) property is present in one factor. However, we now obtain datasets with more than one factor, and the assumption of exchangeability on the levels of the factors is appropriate for these datasets. A $k$-SSCS structured matrix results from the exchangeability property of the $k-1$ factors of a dataset. Employing a 2-SSCS covariance structure would be wrong for datasets with more than one factor. One may construct a second-order dataset from a $k$-th order dataset by summing the observations; however, this would result in a loss of detailed information about particular characteristics that may be of interest. One may also consider matricization of the $k$-th order data to second-order data and then use the 2-SSCS covariance structure, but then, once again, all the correlation information would be wiped out. So, the development of new statistical methods to handle $k$-th order data using a $k$-SSCS variance-covariance matrix is in demand.
The aim of this paper is to derive a test statistic for the mean of high-dimensional $k$-th order data using the $k$-SSCS covariance matrix by generalizing the $D^2$ test statistic developed in Žežula et al. [1]. In doing so, we exploit the distributions of the eigenblocks of the $k$-SSCS covariance matrix. We obtain the test statistic for the one sample case, the paired samples case and the two independent samples case. We show in Remark 2 that our generalized $D^2$ test statistic for $k$-th order data reduces to the test statistic for second-order data, and we also derive in Remark 2 the test statistic for third-order data, which is largely motivated by the work of Žežula et al. [22].
This article is organized as follows. In Section 2, we set up some preliminaries about matrix notations and definitions related to block matrices. Section 3 contains the definition of the $k$-SSCS covariance matrix and its properties, such as the Jordan algebra property. Section 4 discusses the estimation of the eigenblocks and their distributions. The test for the mean of one population is proposed in Section 5. Tests for the equality of means of two populations are proposed in Section 6, and an example of a dataset exemplifying our proposed method is presented in Section 7. Finally, Section 8 concludes with some discussion and the scope for future research.

2. Preliminaries

Let $m_g$, for $g = 1, \ldots, k$, be natural numbers greater than $1$, and let $p_{i,j}$ be given by:
$$p_{i,j} = \begin{cases} \prod_{g=i}^{j} m_g & \text{if } 0 \le j-i < k \\ 1 & \text{if } j-i = -1 \\ 0 & \text{if } j-i = -2, \end{cases} \qquad (1)$$
with $i = 1, \ldots, k$. We denote by $F_g$ the set $F_g = \{1, \ldots, m_g\}$, for $g = 1, \ldots, k$.
Definition 1.
We say that a matrix $A_k$ of dimension $p_{1,k} \times p_{1,k}$ is a $k$-th order block matrix according to the factorization $p_{1,k} = \prod_{g=1}^{k} m_g$ to point out that it can be expressed in $k$ different "natural" partitioned matrix forms, that is:
$$A_k = \left( A_{f_k, \ldots, f_{j+1};\, f_k^*, \ldots, f_{j+1}^*} \right)_{f_k, f_k^* \in F_k;\ \ldots;\ f_{j+1}, f_{j+1}^* \in F_{j+1}}, \quad \text{with } p_{1,j} \times p_{1,j} \text{ blocks}, \quad j = 0, \ldots, k-1.$$
Note that for the case $j = 0$, the matrix $A_k$ is a $p_{1,k} \times p_{1,k}$ matrix partitioned into $1 \times 1$ blocks. Clearly, both $m_1 \ge 2$ and $m_2 \ge 2$ for second-order data; $m_1 \ge 2$, $m_2 \ge 2$ and $m_3 \ge 2$ for third-order data; and so on. Next, we define matrix operators that will be useful tools in working with these $k$-th order block matrices, where $k \ge 2$. Let $\mathcal{M}_{p_{1,g}}$ denote the set of $p_{1,g} \times p_{1,g}$ matrices.
Definition 2.
Let $\mathrm{BS}_{p_{1,g}}$ and $\mathrm{BT}_{p_{1,g}}$ denote the $p_{1,g}$-Sum and $p_{1,g}$-Trace block operators from $\mathcal{M}_{p_{1,h}}$ to $\mathcal{M}_{p_{1,g}}$ for $1 \le g \le h \le k$, respectively, where $\mathcal{M}_{p_{1,h}}$ will always be evident from the context. These block operators applied to a matrix
$$G_k = \left( G_{f,f^*} \right)_{f, f^* \in F_{h,g+1}}, \quad \text{of dimension } p_{1,g}\, p_{g+1,h} \times p_{1,g}\, p_{g+1,h}, \quad F_{h,g+1} = F_h \times \cdots \times F_{g+1} = \times_{j=1}^{h-g} F_{h+1-j},$$
with $p_{1,g} \times p_{1,g}$ blocks $G_{f,f^*}$, give the following $p_{1,g} \times p_{1,g}$ matrices:
$$\mathrm{BS}_{p_{1,g}}\left( G_k \right) = \sum_{f \in F_{h,g+1}}\ \sum_{f^* \in F_{h,g+1}} G_{f,f^*} \quad \text{and} \quad \mathrm{BT}_{p_{1,g}}\left( G_k \right) = \sum_{f \in F_{h,g+1}} G_{f,f}.$$
The subindex $p_{1,g}$ in these block matrix operators represents the $p_{1,g} \times p_{1,g}$-dimensional blocks in a partitioned square matrix $G$, and thus their use results in $p_{1,g} \times p_{1,g}$-dimensional matrices. Many useful properties of these block operators, which we will use later in this article, are examined in Leiva and Roy [10]. For any natural number $a > 1$, we use the following additional notations:
$$Q_a = I_a - P_a, \qquad (2)$$
$$\text{and} \quad P_a = \tfrac{1}{a} J_a, \qquad (3)$$
where $J_a = 1_a 1_a'$, with $1_a$ the $a \times 1$ vector of ones, and $I_a = \left( e_{a,1}, \ldots, e_{a,a} \right)$ the $a \times a$ identity matrix with $e_{a,i}$ the $i$th column vector of $I_a$. Observe that $P_a$ and $Q_a$ are idempotent matrices and mutually orthogonal to each other, that is:
$$P_a^2 = P_a, \quad Q_a^2 = Q_a \quad \text{and} \quad P_a Q_a = 0.$$
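As an illustration only, here is a minimal numpy sketch of the block operators and of the $P_a$, $Q_a$ projectors (the function names `block_sum` and `block_trace` are ours); it also checks the idempotence and orthogonality properties stated above:

```python
import numpy as np

def block_trace(G, b):
    """BT_b(G): sum of the b x b diagonal blocks of G (G is (c*b) x (c*b))."""
    c = G.shape[0] // b
    return sum(G[i*b:(i+1)*b, i*b:(i+1)*b] for i in range(c))

def block_sum(G, b):
    """BS_b(G): sum of ALL b x b blocks of G."""
    c = G.shape[0] // b
    return sum(G[i*b:(i+1)*b, j*b:(j+1)*b] for i in range(c) for j in range(c))

def P(a):
    """P_a = (1/a) J_a, the averaging projector."""
    return np.full((a, a), 1.0 / a)

def Q(a):
    """Q_a = I_a - P_a, the centering projector."""
    return np.eye(a) - P(a)

# P_a and Q_a are idempotent and mutually orthogonal:
a = 4
assert np.allclose(P(a) @ P(a), P(a))
assert np.allclose(Q(a) @ Q(a), Q(a))
assert np.allclose(P(a) @ Q(a), np.zeros((a, a)))
```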
For a fixed natural number $k \ge 2$, let $R_{k,j+1}$ be the $p_{1,k} \times p_{1,k}$ matrix
$$R_{k,j+1} = R_{k,j+1}^* \otimes I_{m_1}, \qquad (4)$$
where the symbol $\otimes$ represents the Kronecker product operator and, for each $j = 1, \ldots, k-1$,
$$R_{k,j+1}^* = \left( \bigotimes_{h=1}^{k-j-1} I_{m_{k+1-h}} \right) \otimes Q_{m_{j+1}} \otimes \left( \bigotimes_{h=k-j+1}^{k-1} P_{m_{k+1-h}} \right) = I_{p_{j+2,k}} \otimes Q_{m_{j+1}} \otimes P_{m_j, m_2}, \qquad (5)$$
a $p_{2,k} \times p_{2,k}$ matrix, with
$$P_{m_i, m_{i^*}} = \bigotimes_{h=k-i+1}^{k-i^*+1} P_{m_{k+1-h}} = \begin{cases} P_{m_i} \otimes P_{m_{i-1}} \otimes \cdots \otimes P_{m_{i^*}} & \text{if } i \ge i^* \\ 1 & \text{if } i < i^*, \end{cases}$$
and $\bigotimes_{h=1}^{0} I_{m_{k+1-h}} = 1 = \bigotimes_{h=k}^{k-1} P_{m_{k+1-h}}$ (empty Kronecker products are taken to be $1$). Also, let $R_{k,k+1}$ be the $p_{1,k} \times p_{1,k}$ matrix such that
$$R_{k,k+1} = R_{k,k+1}^* \otimes I_{m_1}, \qquad (6)$$
where
$$R_{k,k+1}^* = \bigotimes_{h=1}^{k-1} P_{m_{k+1-h}} = P_{m_k, m_2}, \qquad (7)$$
a $p_{2,k} \times p_{2,k}$ matrix.

3. Properties of the Self Similar Compound Symmetry Covariance Matrix

Let $x_{r;f}$ be an $m_1$-variate vector of measurements on the $r$th replicate (individual) at the factor combination $f = \left( f_k, \ldots, f_2 \right) \in F = F_{k,2} = F_k \times \cdots \times F_2 = \times_{g=k}^{2} F_g$. Let $x_r$ be the $p_{1,k} = \prod_{j=1}^{k} m_j$-variate vector of all measurements corresponding to the $r$th sample unit of the population, that is, $x_r = \left( x'_{r;1,\ldots,1}, \ldots, x'_{r;m_k,\ldots,m_2} \right)'$. Thus, the unstructured covariance matrix $\Gamma_k$ has $q = p_{1,k}\left( p_{1,k} + 1 \right)/2$ unknown parameters, which can be large even for moderate values of the $m_j$'s. Consequently, if the data are high-dimensional, the $k$-SSCS covariance matrix (defined below in Definition 3), with $k\, m_1\left( m_1 + 1 \right)/2$ unknown parameters, is a good choice if the exchangeable feature is present in the data.
Definition 3.
We say that $x_r$ has a $k$-SSCS covariance matrix if $\Gamma_k = \mathrm{cov}\left( x_r \right)$ is of the form:
$$\Gamma_k = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( U_{k,j} - U_{k,j+1} \right) + J_{p_{2,k}} \otimes U_{k,k}, \qquad (8)$$
where $U_{k,j}$, for $j = 1, \ldots, k$, are $m_1 \times m_1$ matrices called SSCS-component matrices, with the convention that $J_{p_{2,1}}$ is equal to the real number $1$.
An alternative expression to (8) is:
$$\Gamma_k = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes T_{k,j} + \left( \bigotimes_{h=1}^{k-1} J_{m_{k+1-h}} \right) \otimes T_{k,k}, \qquad (9)$$
with
$$T_{k,k} = U_{k,k}, \qquad (10)$$
and $T_{k,j} = U_{k,j} - U_{k,j+1}$ for $j = 1, \ldots, k-1$; or, equivalently, $U_{k,k} = T_{k,k}$ and $U_{k,i} = \sum_{h=i}^{k} T_{k,h}$ for $i = 1, \ldots, k-1$. $\qquad (11)$
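The following hedged numpy sketch (the function name `gamma_k` is ours) assembles $\Gamma_k$ from given component matrices $U_{k,1}, \ldots, U_{k,k}$ exactly as in (8), with the $m_1$ variables as the innermost Kronecker factor:

```python
import numpy as np

def gamma_k(U, m):
    """k-SSCS matrix (8) from component matrices U = [U_1, ..., U_k]
    (each m_1 x m_1) and dimensions m = (m_1, ..., m_k)."""
    k = len(m)
    J = lambda a: np.ones((a, a))
    # p(i, j) = m_i * ... * m_j, with the empty product equal to 1
    p = lambda i, j: int(np.prod(m[i-1:j])) if j >= i else 1
    G = np.kron(J(p(2, k)), U[k-1])                  # J_{p_{2,k}} (x) U_{k,k}
    for j in range(1, k):                            # j = 1, ..., k-1
        G = G + np.kron(np.eye(p(j+1, k)),
                        np.kron(J(p(2, j)), U[j-1] - U[j]))
    return G

# A 3-SSCS matrix for m = (2, 2, 3) is 12 x 12 but has only 9 free parameters.
m = (2, 2, 3)
U = [np.array([[4.0, 1.0], [1.0, 5.0]]),
     np.array([[2.0, 0.5], [0.5, 2.0]]),
     np.array([[1.0, 0.2], [0.2, 1.0]])]
print(gamma_k(U, m).shape)  # (12, 12)
```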
The covariance matrix $\Gamma_k$ given in (8) is called a $k$-self similar compound symmetry covariance matrix for the following reason. Consider the $p_{1,k}$-dimensional vector $x = \left( x'_{1,\ldots,1}, \ldots, x'_{m_k,\ldots,m_2} \right)'$ with a $k$-SSCS covariance matrix $\Gamma_k$ and, for each fixed $g \in \{2, \ldots, k-1\}$, the partition of $x$ into $p_{1,g}$-subvectors. Then the corresponding covariance matrix $\Gamma_k$, partitioned into $p_{1,g} \times p_{1,g}$ submatrices, is the $(k+1-g)$-SSCS matrix $\Gamma^*_{k+1-g}$ (see Leiva and Roy [10]):
$$\Gamma_k = \Gamma^*_{k+1-g} = \sum_{j=1}^{k-g} I_{p_{j+1,k+1-g}} \otimes J_{p_{2,j}} \otimes \left( U^*_{k+1-g,j} - U^*_{k+1-g,j+1} \right) + J_{p_{2,k+1-g}} \otimes U^*_{k+1-g,k+1-g},$$
where $U^*_{k+1-g,1}$ is the $g$-SSCS matrix given by:
$$U^*_{k+1-g,1} = \sum_{f=1}^{g-1} I_{p_{f+1,g}} \otimes J_{p_{2,f}} \otimes \left( U_{k,f} - U_{k,f+1} \right) + J_{p_{2,g}} \otimes U_{k,g}, \quad \text{and} \quad U^*_{k+1-g,j} = \left( \bigotimes_{h=1}^{g-1} J_{m_{g+1-h}} \right) \otimes U_{k,g+j-1} \ \text{for } j \in \{2, \ldots, k+1-g\}.$$
The existence of $\Gamma_k^{-1}$ can be proved, and its expression derived, using the principle of mathematical induction. For the expression of $\Gamma_k^{-1}$, we need the matrices $\Delta_{k,j}$ for $j = 1, \ldots, k$, which are defined as follows:
$$\Delta_{k,j} = \sum_{i=1}^{j} p_{2,i} \left( U_{k,i} - U_{k,i+1} \right), \qquad (12)$$
where $U_{k,k+1} = 0$ and $p_{2,1} = 1$. Note that
$$\Delta_{k,j} = \begin{cases} U_{k,1} - U_{k,2} & \text{if } j = 1 \\ \Delta_{k,j-1} + p_{2,j} \left( U_{k,j} - U_{k,j+1} \right) & \text{if } j = 2, \ldots, k. \end{cases} \qquad (13)$$
It can be proved that if the matrices $\Delta_{k,j}$, $j = 1, \ldots, k$, are non-singular, then $\Gamma_k^{-1}$ exists and is given by:
$$\Gamma_k^{-1} = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \frac{1}{p_{2,j}} \left( \Delta_{k,j}^{-1} - \Delta_{k,j-1}^{-1} \right) + J_{p_{2,k}} \otimes \frac{1}{p_{2,k}} \left( \Delta_{k,k}^{-1} - \Delta_{k,k-1}^{-1} \right)$$
(see Leiva and Roy [10]), where the symbol $\Delta_{k,0}^{-1}$ denotes the $m_1 \times m_1$ zero matrix $\left( \Delta_{k,0}^{-1} = 0_{m_1 \times m_1} \right)$. It is worthwhile to note that the structure of $\Gamma_k^{-1}$ is the same as the structure of $\Gamma_k$; that is, it has the $k$-SSCS structure given in (9) with (10) and (11):
$$\Gamma_k^{-1} = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes T_{k,j} + J_{p_{2,k}} \otimes T_{k,k},$$
where, in this formula for $\Gamma_k^{-1}$,
$$T_{k,j} = \frac{1}{p_{2,j}} \left( \Delta_{k,j}^{-1} - \Delta_{k,j-1}^{-1} \right), \quad \text{for each } j = 1, \ldots, k.$$
Using similar inductive arguments, it can be proved that:
$$\left| \Gamma_k \right| = \prod_{j=1}^{k} \left| \Delta_{k,j} \right|^{\, p_{j+1,k} - p_{j+2,k}},$$
where the matrices $\Delta_{k,j}$ are given by (12), and it is assumed that $p_{k+1,k} = 1$ and $p_{k+2,k} = 0$. The matrices $\Delta_{k,j}$, $j = 1, \ldots, k$, are the $k$ eigenblocks of the $k$-SSCS covariance structure; see Lemma 4 of Leiva and Roy [10] for a proof. The matrix $\Gamma_k$ can be written as the following sum of $k$ orthogonal parts:
$$\Gamma_k = \sum_{j=1}^{k} R_{k,j+1}^* \otimes \Delta_{k,j}, \qquad (14)$$
and if $\Gamma_k^{-1}$ exists, then it can be written as:
$$\Gamma_k^{-1} = \sum_{j=1}^{k} R_{k,j+1}^* \otimes \Delta_{k,j}^{-1}, \qquad (15)$$
where $R_{k,j+1}^*$ is given in (5) for each $j = 1, \ldots, k-1$, and, for $j = k$, $R_{k,k+1}^*$ is given in (7).
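The decomposition (15) has an important computational consequence: inverting $\Gamma_k$ only requires inverting the $k$ small eigenblocks. A minimal sketch for $k = 3$, assuming the $\Delta_{3,j}$ are available (the helper names are ours):

```python
import numpy as np

def P(a): return np.ones((a, a)) / a        # P_a = J_a / a
def Q(a): return np.eye(a) - P(a)           # Q_a = I_a - P_a

def R_star_3(m2, m3):
    """R*_{3,j+1}, j = 1, 2, 3, from (5) and (7), for third-order data."""
    return [np.kron(np.eye(m3), Q(m2)),     # j = 1: I_{m_3} (x) Q_{m_2}
            np.kron(Q(m3), P(m2)),          # j = 2: Q_{m_3} (x) P_{m_2}
            np.kron(P(m3), P(m2))]          # j = 3: P_{m_3} (x) P_{m_2}

def gamma3_inv(deltas, m1, m2, m3):
    """Gamma_3^{-1} = sum_j R*_{3,j+1} (x) Delta_{3,j}^{-1}, Equation (15):
    a (m1*m2*m3)-dimensional inverse at the cost of three m1 x m1 inverses."""
    return sum(np.kron(R, np.linalg.inv(D))
               for R, D in zip(R_star_3(m2, m3), deltas))
```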
The conventional Hotelling's $T^2$ statistic for testing the mean is based on the unbiased estimate of the unstructured variance-covariance matrix, which follows a Wishart distribution. Nevertheless, the unbiased estimate of the $k$-SSCS covariance matrix does not follow a Wishart distribution, and thus the test statistic for testing the equality of means does not follow Hotelling's $T^2$ distribution. We therefore make a canonical transformation of the data to block diagonalize the $k$-SSCS covariance matrix, show that scalar multiples of the estimates of the diagonal blocks (eigenblocks) follow independent Wishart distributions, and use this property to our advantage to obtain test statistics for the mean of $k$-th order data $(k \ge 2)$. We see from Leiva and Roy [10] that the $k$-SSCS matrix $\Gamma_k$ given by (8) can be transformed into an $m_1 \times m_1$-block diagonal matrix (the blocks on the diagonal are $m_1 \times m_1$ matrices) by pre- and post-multiplying $\Gamma_k$ by appropriate orthogonal matrices.
For $1 \le j < g < i \le k$, let $I_{m_i, m_j}$ denote the $m_i m_{i-1} \cdots m_j$-dimensional identity matrix, that is:
$$I_{m_i, m_j} = I_{m_i m_{i-1} \cdots m_j} = I_{m_i} \otimes I_{m_{i-1}} \otimes \cdots \otimes I_{m_j},$$
and let
$$H_{m_i, m_g} = H_{m_i} \otimes H_{m_{i-1}} \otimes \cdots \otimes H_{m_g}$$
be a $p_{g,i} \times p_{g,i}$ matrix, where $H_{m_h}$ is an $m_h \times m_h$ Helmert matrix for each $h = 2, \ldots, k$; i.e., each $H_{m_h}$ is an orthogonal matrix whose first column is proportional to $1_{m_h}$. Then:
$$L_h = H_{m_h, m_2} \otimes I_{m_1}$$
is an orthogonal matrix (note that the $L_h$ are not functions of any of the $U_i$'s), and in particular
$$L_k = H_{m_k, m_2} \otimes I_{m_1} = \left( I_{m_k} \otimes L_{k-1} \right) \left( H_{m_k} \otimes I_{m_{k-1}, m_1} \right). \qquad (16)$$
Lemma 4 of Leiva and Roy [10] states and proves the block diagonalization result for the $k$-SSCS matrix $\Gamma_k$ using the orthogonal matrix $L_k$ defined in (16), that is:
$$L_k' \Gamma_k L_k = \mathrm{diag}\left( D_f;\ f = \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F \right) = \mathrm{diag}\left( D_{f_k, f_{k-1}, \ldots, f_2};\ \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F_k \times F_{k-1} \times \cdots \times F_2 \right), \qquad (17)$$
where, for each $j = 1, \ldots, k$, the $m_1 \times m_1$ diagonal blocks $D_f = D_{f_k, f_{k-1}, \ldots, f_2}$ are given by:
$$D_{f_k, f_{k-1}, \ldots, f_2} = \Delta_{k,j} \quad \text{if } f_2 = 1, \ldots, f_j = 1,\ f_{j+1} \ne 1, \qquad (18)$$
where the condition $f_{k+1} \ne 1$ is not taken into consideration, that is:
$$D_{f_k, f_{k-1}, \ldots, f_2} = \begin{cases} \Delta_{k,k} & \text{if } f_2 = 1, \ldots, f_{k-1} = 1, f_k = 1 \\ \Delta_{k,k-1} & \text{if } f_2 = 1, \ldots, f_{k-1} = 1, f_k \ne 1 \\ \Delta_{k,k-2} & \text{if } f_2 = 1, \ldots, f_{k-2} = 1, f_{k-1} \ne 1 \\ \quad \vdots & \\ \Delta_{k,2} & \text{if } f_2 = 1, f_3 \ne 1 \\ \Delta_{k,1} & \text{if } f_2 \ne 1. \end{cases}$$
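The block diagonalization (17) is easy to verify numerically. Below is a hedged sketch for $k = 3$: `helmert` builds an orthogonal matrix whose first column is proportional to the vector of ones, $L_3 = H_{m_3} \otimes H_{m_2} \otimes I_{m_1}$, and $L_3' \Gamma_3 L_3$ comes out block diagonal with $m_1 \times m_1$ blocks (all names are ours; the $U$ values are arbitrary illustrative numbers):

```python
import numpy as np

def helmert(a):
    """a x a orthogonal matrix with first column proportional to 1_a."""
    H = np.zeros((a, a))
    H[:, 0] = 1.0 / np.sqrt(a)
    for j in range(1, a):
        H[:j, j] = 1.0
        H[j, j] = -float(j)
        H[:, j] /= np.sqrt(j * (j + 1.0))
    return H

m1, m2, m3 = 2, 2, 3
U1 = np.array([[4.0, 1.0], [1.0, 5.0]])
U2 = np.array([[2.0, 0.5], [0.5, 2.0]])
U3 = np.array([[1.0, 0.2], [0.2, 1.0]])
J = lambda a: np.ones((a, a))
# Gamma_3 from (8): I_{m2 m3} (x) (U1-U2) + I_{m3} (x) J_{m2} (x) (U2-U3) + J_{m2 m3} (x) U3
G3 = (np.kron(np.eye(m2 * m3), U1 - U2)
      + np.kron(np.eye(m3), np.kron(J(m2), U2 - U3))
      + np.kron(J(m2 * m3), U3))
L3 = np.kron(helmert(m3), np.kron(helmert(m2), np.eye(m1)))  # L_3 as in (16)
D = L3.T @ G3 @ L3
off = D.copy()
for i in range(0, D.shape[0], m1):
    off[i:i+m1, i:i+m1] = 0.0
print(np.allclose(off, 0.0))  # True: only the m1 x m1 diagonal blocks survive
```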
Thus, $\Delta_{k,j}$, for $j = 1, \ldots, k$, are the $k$ eigenblocks of the $k$-SSCS covariance matrix $\Gamma_k$. We will obtain the estimators of the eigenblocks $\Delta_{k,j}$, $j = 1, \ldots, k$, in Section 4. In the following subsection, we briefly show that the $k$-SSCS covariance structure is of the Jordan algebra type.

k-SSCS Covariance Structure Is of the Jordan Algebra Type

The $k$-SSCS covariance structure is of the Jordan algebra type (Jordan et al. [23]). Let $\mathcal{G}_{p_{1,k}}$ be the set of all $k$-SSCS $p_{1,k} \times p_{1,k}$ matrices. It is clear that, under the usual matrix addition and scalar multiplication, $\mathcal{G}_{p_{1,k}}$ is a subspace of the linear vector space $\mathcal{S}_{p_{1,k}}$ of $p_{1,k} \times p_{1,k}$ symmetric matrices. For any natural number $k \ge 2$, it is easy to prove the following proposition:
Proposition 1.
If $\Gamma_k$ is a $k$-SSCS matrix given by (9) (or, equivalently, by (8)), then $\Gamma_k^2 = \Gamma_k \Gamma_k$ is also a $k$-SSCS matrix, given by:
$$\Gamma_k^2 = \Gamma_k \Gamma_k = \Gamma_k^* = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes T_{k,j}^* + \left( \bigotimes_{h=1}^{k-1} J_{m_{k+1-h}} \right) \otimes T_{k,k}^*,$$
where
$$T_{k,k}^* = \sum_{j=1}^{k-1} p_{2,j} \left( T_{k,j} T_{k,k} + T_{k,k} T_{k,j} \right) + p_{2,k} T_{k,k}^2 =: U_{k,k}^*,$$
and, for $h = k-1, k-2, \ldots, 2$, $T_{k,h}^* =: U_{k,h}^* - U_{k,h+1}^*$, with
$$U_{k,h}^* = \sum_{j=1}^{h-1} p_{2,j} \left( T_{k,j} T_{k,h} + T_{k,h} T_{k,j} \right) + p_{2,k} U_{k,h}^2 + U_{k,h+1}^*,$$
and where $T_{k,1}^* =: U_{k,1}^* - U_{k,2}^* = \left( U_{k,1} - U_{k,2} \right)^2 = T_{k,1}^2$; that is, $U_{k,1}^* = T_{k,1}^2 + U_{k,2}^*$.
Therefore, we conclude that $\mathcal{G}_{p_{1,k}}$ is a Jordan algebra. See Lemma 4.1 on Page 10 in Malley [24], which states that $\mathcal{G}_{p_{1,k}}$ is a Jordan algebra if and only if $\Gamma_k^2 = \Gamma_k \Gamma_k \in \mathcal{G}_{p_{1,k}}$ for all $\Gamma_k \in \mathcal{G}_{p_{1,k}}$. See Roy et al. [25] and Kozioł et al. [26] for proofs that the 2-SSCS and 3-SSCS covariance structures are of the Jordan algebra type.

4. Estimators of the Eigenblocks

Let $\left\{ x_r : r = 1, \ldots, n \right\}$ be $p_{1,k} \times 1$ random vectors partitioned into $m_1 \times 1$ subvectors as follows:
$$x_r = \left( x'_{r;f} : f = \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F = F_{k,2} = \times_{j=k}^{2} F_j \right)' = \left( x'_{r; f_k, f_{k-1}, \ldots, f_2} : f_j \in F_j = \{1, \ldots, m_j\},\ j = k, k-1, \ldots, 2 \right)'.$$
The vectors $\left\{ x_r : r = 1, \ldots, n \right\}$ are a random sample from a population with distribution $N_{p_{1,k}}\left( \mu; \Gamma_k \right)$, where $\Gamma_k$ is a positive definite $k$-SSCS structured covariance matrix as given in (8) in Definition 3. Let $X$ be the $n \times p_{1,k}$ sample data matrix:
$$X = \left( x_1, \ldots, x_n \right)' = \left( X_{\cdot;1,\ldots,1}, \ldots, X_{\cdot;f_k,\ldots,f_2}, \ldots, X_{\cdot;m_k,\ldots,m_2} \right),$$
with $n \times m_1$ blocks
$$X_{\cdot;f} = X_{\cdot;f_k, f_{k-1}, \ldots, f_2} = \left( x_{1;f}, \ldots, x_{n;f} \right)' = \left( x'_{r;f} \right)_{r=1}^{n}.$$
In this section, we prove that certain unbiased estimators (to be defined) of the matrix parameters $U_{k,j}$, $j = 1, \ldots, k$, can be written as functions of the usual sample variance-covariance matrix $S$ as follows:
$$S = \frac{1}{n-1}\, X' Q_n X = \frac{1}{n-1} \begin{pmatrix} X'_{\cdot;1,\ldots,1} \\ \vdots \\ X'_{\cdot;m_k,\ldots,m_2} \end{pmatrix} Q_n \left( X_{\cdot;1,\ldots,1}, \ldots, X_{\cdot;m_k,\ldots,m_2} \right) = \left( S_{f,f^*} \right)_{f, f^* \in F},$$
where $Q_n$ is given in (2) with (3). Now the sample mean $\bar{x}$ can be expressed as:
$$\bar{x} = \frac{1}{n}\, X' 1_n = \left( \frac{1}{n}\, X'_{\cdot;f}\, 1_n \right)_{f \in F} = \left( \frac{1}{n} \sum_{r=1}^{n} x_{r;f} \right)_{f \in F} = \left( \bar{x}_f \right)_{f \in F}.$$
Thus, the $m_1 \times m_1$ blocks $S_{f,f^*}$ of $S$ can be expressed as:
$$S_{f,f^*} = \frac{1}{n-1}\, X'_{\cdot;f}\, Q_n\, X_{\cdot;f^*} = \frac{1}{n-1} \sum_{r=1}^{n} \left( x_{r;f} - \bar{x}_f \right) \left( x_{r;f^*} - \bar{x}_{f^*} \right)', \quad f, f^* \in F.$$
Since $S$ is an unbiased estimator of $\Gamma_k$, we have:
$$E\left( S_{f;f^*} \right) = E\left( S_{f_k, \ldots, f_2;\, f_k^*, \ldots, f_2^*} \right) = \begin{cases} U_{k,1} & \text{if } f_2 = f_2^*, \ldots, f_k = f_k^* \\ U_{k,j} & \text{if } f_j \ne f_j^*,\ f_{j+1} = f_{j+1}^*, \ldots, f_k = f_k^*, \ \text{for } j = 2, \ldots, k. \end{cases}$$
Therefore, to find a better unbiased estimator of $U_{k,j}$, we average all of the above random matrices that are unbiased estimators of the same $U_{k,j}$. The unbiased estimators $\hat{U}_{k,j}$ of $U_{k,j}$ for each $j = 1, \ldots, k$ are derived in Lemma 5 in Leiva and Roy [10], with $q_{k,j}$ defined in Lemma 3 in Leiva and Roy [10] as:
$$q_{k,j} = \begin{cases} p_{2,k} & \text{if } j = 1 \\ p_{2,k} \left( m_j - 1 \right) p_{2,j-1} & \text{if } j = 2, \ldots, k. \end{cases} \qquad (19)$$
Unbiased estimators of the eigenblocks $\Delta_{k,j}$ can be obtained from (13). Then, using (14), the unbiased estimator of $\Gamma_k$ can be obtained as the following orthogonal sum:
$$\hat{\Gamma}_k = \sum_{j=1}^{k} R_{k,j+1}^* \otimes \hat{\Delta}_{k,j},$$
and if $\hat{\Gamma}_k^{-1}$ exists, it can be obtained from (15) as follows:
$$\hat{\Gamma}_k^{-1} = \sum_{j=1}^{k} R_{k,j+1}^* \otimes \hat{\Delta}_{k,j}^{-1},$$
where $R_{k,j+1}^*$ is given in (5) for each $j = 1, \ldots, k-1$, and, for $j = k$, $R_{k,k+1}^*$ is given in (7).
The computation of the unbiased estimates of the component matrices $U_{k,j}$ for each $j = 1, \ldots, k$ is easy, as all of them have explicit solutions. At this point, we want to mention that, for the $k$-separable covariance structure, the estimation of the component matrices is not easy, as the MLEs satisfy implicit equations and are therefore not tractable analytically. Now, from Theorem 1 of Leiva and Roy [10], we see that multiples of the unbiased estimators $\hat{\Delta}_{k,j}$ of the eigenblocks, for $j = 1, \ldots, k$, have Wishart distributions. Specifically,
$$(n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right) \hat{\Delta}_{k,j} = (n-1)\, \mathrm{BT}_{p_{1,1}}\left( R_{k,j+1}\, S\, R_{k,j+1} \right), \quad j = 1, \ldots, k-1,$$
$$\text{and} \quad (n-1)\, \hat{\Delta}_{k,k} = (n-1)\, \mathrm{BT}_{p_{1,1}}\left( R_{k,k+1}\, S\, R_{k,k+1} \right), \qquad (20)$$
where $R_{k,j+1} = R_{k,j+1}^* \otimes I_{m_1}$ is given by (4) with (5), and (6) with (7), are independent, and
$$(n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right) \hat{\Delta}_{k,j} \sim W_{m_1}\left( (n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right);\ \Delta_{k,j} \right) \ \text{for } j = 1, \ldots, k-1, \quad \text{and} \quad (n-1)\, \hat{\Delta}_{k,k} \sim W_{m_1}\left( n-1;\ \Delta_{k,k} \right).$$
From Corollary 1 of Leiva and Roy [10], the 2-SSCS covariance matrix for second-order data or multivariate repeated measures data has two eigenblocks, $\Delta_{2,2}$ and $\Delta_{2,1}$, the latter with multiplicity $m_2 - 1$, and their distributions are as follows:
$$(n-1)\left( m_2 - 1 \right) \hat{\Delta}_{2,1} \sim W_{m_1}\left( (n-1)(m_2 - 1),\ \Delta_{2,1} \right), \quad \text{and} \quad (n-1)\, \hat{\Delta}_{2,2} \sim W_{m_1}\left( n-1,\ \Delta_{2,2} \right).$$
The 3-SSCS covariance matrix for third-order data has three eigenblocks, $\Delta_{3,3}$, $\Delta_{3,2}$ with multiplicity $m_3 - 1$, and $\Delta_{3,1}$ with multiplicity $m_3(m_2 - 1)$, and their distributions are as follows:
$$(n-1)\, m_3 \left( m_2 - 1 \right) \hat{\Delta}_{3,1} \sim W_{m_1}\left( (n-1)\, m_3 (m_2 - 1),\ \Delta_{3,1} \right), \quad (n-1)\left( m_3 - 1 \right) \hat{\Delta}_{3,2} \sim W_{m_1}\left( (n-1)(m_3 - 1),\ \Delta_{3,2} \right), \quad \text{and} \quad (n-1)\, \hat{\Delta}_{3,3} \sim W_{m_1}\left( n-1,\ \Delta_{3,3} \right).$$
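For illustration, the following hedged numpy sketch computes the unbiased eigenblock estimates $\hat{\Delta}_{k,j}$ directly from a sample covariance matrix $S$ via (20), for arbitrary dimensions; it exploits the fact that $\mathrm{tr}\left( R^*_{k,j+1} \right)$ equals the multiplicity $p_{j+2,k}(m_{j+1}-1)$ (and $1$ for $j = k$). All names are ours:

```python
import numpy as np

def eigenblock_estimates(S, m):
    """Delta_hat_{k,j} = BT_{m_1}(R_{k,j+1} S R_{k,j+1}) / multiplicity,
    following (20), for dims m = (m_1, ..., m_k); S is p_{1,k} x p_{1,k}."""
    m1, k = m[0], len(m)
    P = lambda a: np.ones((a, a)) / a
    Q = lambda a: np.eye(a) - P(a)

    def block_trace(G, b):
        c = G.shape[0] // b
        return sum(G[i*b:(i+1)*b, i*b:(i+1)*b] for i in range(c))

    deltas = []
    for j in range(1, k + 1):
        # R*_{k,j+1}: identity factors above level j+1, Q at level j+1, P below
        Rstar = np.eye(1)
        for i in range(k, 1, -1):                 # factors m_k, ..., m_2
            if i > j + 1:
                A = np.eye(m[i-1])
            elif i == j + 1:
                A = Q(m[i-1])
            else:
                A = P(m[i-1])
            Rstar = np.kron(Rstar, A)
        R = np.kron(Rstar, np.eye(m1))            # R_{k,j+1} = R* (x) I_{m_1}
        mult = round(np.trace(Rstar))             # p_{j+2,k}(m_{j+1}-1); 1 if j = k
        deltas.append(block_trace(R @ S @ R, m1) / mult)
    return deltas
```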

5. Test for the Mean

5.1. One Sample Test

Using the notation and assumptions of Section 4, let $X = \left( x_1, \ldots, x_n \right)'$ be an $n \times p_{1,k}$-dimensional data matrix formed from the random sample $x_1, \ldots, x_n$ from $N_{p_{1,k}}\left( \mu; \Gamma_k \right)$. Let $\bar{x}$ be the sample mean; then $\bar{x} \sim N_{p_{1,k}}\left( \mu, \frac{1}{n} \Gamma_k \right)$. We are interested in testing the following hypothesis:
$$H_0: \mu = \mu_0 \quad \text{vs.} \quad H_1: \mu \ne \mu_0, \qquad (21)$$
for known $\mu_0$. For testing hypothesis (21), we use the test statistic $D^2$ defined as:
$$D^2 = n \left( \bar{x} - \mu_0 \right)' \hat{\Gamma}_k^{-1} \left( \bar{x} - \mu_0 \right). \qquad (22)$$
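In code, once $\hat{\Gamma}_k$ (or its inverse via (15)) is available, the statistic is a one-liner; a minimal sketch (the function name is ours):

```python
import numpy as np

def d2_statistic(X, mu0, gamma_hat):
    """D^2 = n (xbar - mu0)' Gamma_hat^{-1} (xbar - mu0), Equation (22);
    X is the n x p_{1,k} data matrix, gamma_hat a k-SSCS estimate of Gamma_k."""
    n = X.shape[0]
    diff = X.mean(axis=0) - mu0
    return n * diff @ np.linalg.solve(gamma_hat, diff)
```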

5.1.1. Distribution of the Test Statistic $D^2$ under $H_0$

Now, let $L_k = H_{m_k, m_2} \otimes I_{m_1}$ be the matrix given in (16). We use here the following canonical transformation:
$$z = L_k' \left( \bar{x} - \mu_0 \right) = \left( H_{m_k, m_2}' \otimes I_{m_1} \right) \left( \bar{x} - \mu_0 \right).$$
Therefore, $z = \left( z'_{1,\ldots,1}, \ldots, z'_{m_k,\ldots,m_2} \right)' = L_k' \left( \bar{x} - \mu_0 \right) \sim N_{p_{1,k}}\left( 0, \Omega_k \right)$ under $H_0$. According to (17) with (18), we have:
$$\Omega_k = \frac{1}{n}\, D_k = L_k' \left( \frac{1}{n}\, \Gamma_k \right) L_k = \frac{1}{n} \left( H_{m_k, m_2}' \otimes I_{m_1} \right) \Gamma_k \left( H_{m_k, m_2} \otimes I_{m_1} \right) = \frac{1}{n}\, \mathrm{diag}\left( D_{f_k, f_{k-1}, \ldots, f_2};\ \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F_k \times F_{k-1} \times \cdots \times F_2 \right),$$
where, for each $j = 1, \ldots, k$, the diagonal $m_1 \times m_1$ blocks $D_f = D_{f_k, f_{k-1}, \ldots, f_2}$ are given by:
$$D_{f_k, f_{k-1}, \ldots, f_2} = \Delta_{k,j} \quad \text{if } f_2 = 1, \ldots, f_j = 1,\ f_{j+1} \ne 1,$$
where the condition $f_{k+1} \ne 1$ is not taken into consideration, and the $m_1 \times 1$ component vectors $z_{f_k, \ldots, f_2}$, with $f = \left( f_k, \ldots, f_2 \right) \in F$, are independent. The distribution of $z_{f_k, \ldots, f_2}$ under $H_0$ is given by:
$$z_{f_k, \ldots, f_2} \sim N_{m_1}\left( 0, \tfrac{1}{n}\, \Delta_{k,j} \right) \quad \text{if}$$
$$f_{2,j} = \left( f_2, \ldots, f_j \right) = 1_{j-1},\ f_{j+1} \in F_{j+1} \setminus \{1\}, \ \text{and} \ f_{j+2,k} = \left( f_{j+2}, \ldots, f_k \right) \in F_{j+2} \times \cdots \times F_k.$$
Since
$$H_{m_i}' I_{m_i} H_{m_i} = I_{m_i}, \quad H_{m_i}' P_{m_i} H_{m_i} = e_{m_i,1}\, e_{m_i,1}' = \mathrm{diag}\left( 1, \underbrace{0, \ldots, 0}_{m_i - 1} \right), \quad \text{and} \quad H_{m_i}' Q_{m_i} H_{m_i} = \sum_{j=2}^{m_i} e_{m_i,j}\, e_{m_i,j}' = \mathrm{diag}\left( 0, \underbrace{1, \ldots, 1}_{m_i - 1} \right),$$
for $j = 1, \ldots, k-1$ we have:
$$H_{m_k, m_2}' R_{k,j+1}^* H_{m_k, m_2} = I_{p_{j+2,k}} \otimes \left( \sum_{i=2}^{m_{j+1}} e_{m_{j+1},i}\, e_{m_{j+1},i}' \right) \otimes e_{p_{2,j},1}\, e_{p_{2,j},1}',$$
a diagonal matrix of zeros and ones, and for $j = k$ we have:
$$H_{m_k, m_2}' R_{k,k+1}^* H_{m_k, m_2} = H_{m_k, m_2}' P_{m_k, m_2} H_{m_k, m_2} = e_{p_{2,k},1}\, e_{p_{2,k},1}' = \mathrm{diag}\left( 1, \underbrace{0, \ldots, 0}_{p_{2,k} - 1} \right).$$
Therefore, using (20), the statistic $D^2$ in (22) can be written as:
$$D^2 = n \left( \bar{x} - \mu_0 \right)' \hat{\Gamma}_k^{-1} \left( \bar{x} - \mu_0 \right) = n\, z' \left[ \sum_{j=1}^{k} \left( H_{m_k,m_2}' R_{k,j+1}^* H_{m_k,m_2} \right) \otimes \hat{\Delta}_{k,j}^{-1} \right] z,$$
and since the matrices $H_{m_k,m_2}' R_{k,j+1}^* H_{m_k,m_2}$ are diagonal 0/1 matrices that select disjoint groups of the component vectors $z_{f_k,\ldots,f_2}$, we obtain
$$D^2 =: \sum_{j=1}^{k} T_{0j}^2, \qquad (23)$$
where, for $j = 1, \ldots, k-1$,
$$T_{0j}^2 = \mathrm{tr}\left[ n \underbrace{\sum_{f_k=1}^{m_k} \cdots \sum_{f_{j+2}=1}^{m_{j+2}} \sum_{f_{j+1}=2}^{m_{j+1}}}_{k-j \text{ sums}} z_{f_k, \ldots, f_{j+1}, \underbrace{1, \ldots, 1}_{j-1 \text{ ones}}}\ z'_{f_k, \ldots, f_{j+1}, 1, \ldots, 1}\ \hat{\Delta}_{k,j}^{-1} \right], \qquad (24)$$
and, for $j = k$,
$$T_{0k}^2 = n\, \mathrm{tr}\left[ z_{\underbrace{1, \ldots, 1}_{k-1 \text{ ones}}}\ z'_{1, \ldots, 1}\ \hat{\Delta}_{k,k}^{-1} \right].$$
Note that the subsets of vectors involved in $T_{01}^2, \ldots, T_{0k}^2$, respectively, form a partition of the set of independent vectors $\left\{ z_{f_k, \ldots, f_2} : \left( f_k, \ldots, f_2 \right) \in F \right\}$. Therefore, $T_{01}^2, \ldots, T_{0k}^2$ are mutually independent. Moreover, for $j = 1, \ldots, k$,
$$H_j = n \underbrace{\sum_{f_k=1}^{m_k} \cdots \sum_{f_{j+2}=1}^{m_{j+2}} \sum_{f_{j+1}=2}^{m_{j+1}}}_{k-j \text{ sums}} z_{f_k, \ldots, f_{j+1}, \underbrace{1, \ldots, 1}_{j-1 \text{ ones}}}\ z'_{f_k, \ldots, f_{j+1}, 1, \ldots, 1} \sim W_{m_1}\left( k_{(j)};\ \Delta_{k,j} \right), \qquad (25)$$
where
$$k_{(j)} = p_{j+2,k} \left( m_{j+1} - 1 \right),$$
and
$$E_j = (n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right) \hat{\Delta}_{k,j} \sim W_{m_1}\left( d_j;\ \Delta_{k,j} \right),$$
with
$$d_j = (n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right).$$
Therefore, $T_{0j}^2$ given by (24) reduces to
$$T_{0j}^2 = \mathrm{tr}\left[ H_j \left( \frac{E_j}{(n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right)} \right)^{-1} \right] = (n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right) \mathrm{tr}\left( H_j E_j^{-1} \right) = d_j\, \mathrm{tr}\left( H_j E_j^{-1} \right),$$
and has a Lawley–Hotelling trace (LH-trace) distribution, denoted by $T_0^2\left( m_1;\ p_{j+2,k}(m_{j+1}-1),\ d_j \right)$, if $d_j = (n-1)\, p_{j+2,k}(m_{j+1}-1) \ge m_1$. Note that, using (25), the case $j = k$ reduces to $H_k = n\, z_{1,\ldots,1}\, z'_{1,\ldots,1} \sim W_{m_1}\left( 1, \Delta_{k,k} \right)$ and $E_k = (n-1)\, \hat{\Delta}_{k,k} \sim W_{m_1}\left( n-1;\ \Delta_{k,k} \right)$. Then, $T_{0k}^2$ has the LH-trace distribution $T_0^2\left( m_1; 1, n-1 \right)$ if $n-1 \ge m_1$.
Thus, the distribution of $D^2$ given by (23) is the convolution of $k$ independent LH-trace distributions:
$$\ast_{j=1}^{k}\ T_0^2\left( m_1;\ p_{j+2,k} \left( m_{j+1} - 1 \right),\ d_j = (n-1)\, p_{j+2,k} \left( m_{j+1} - 1 \right) \ge m_1 \right).$$
The critical values of this distribution can be obtained using simulations. However, the LH-trace distribution is usually approximated by an $F$ distribution, and we use here the second approximation suggested in McKeon [27]. For the $j$th case, i.e., for $j = 1, \ldots, k-1$, let us use the notations $m_{(j)} = d_j - m_1 - 1 = (n-1)\, p_{j+2,k}(m_{j+1}-1) - m_1 - 1$, $k_{(j)} = p_{j+2,k}(m_{j+1}-1)$, and
$$B_j = \frac{\left( m_{(j)} + k_{(j)} \right)}{\left( m_{(j)} - 2 \right)} \cdot \frac{\left( m_{(j)} + m_1 \right)}{\left( m_{(j)} + 1 \right)} = \frac{\left( n\, p_{j+2,k}(m_{j+1}-1) - m_1 - 1 \right) \left( (n-1)\, p_{j+2,k}(m_{j+1}-1) - 1 \right)}{\left( (n-1)\, p_{j+2,k}(m_{j+1}-1) - m_1 - 3 \right) \left( (n-1)\, p_{j+2,k}(m_{j+1}-1) - m_1 \right)}.$$
Then, the distribution
$$T_0^2\left( m_1;\ p_{j+2,k}(m_{j+1}-1),\ (n-1)\, p_{j+2,k}(m_{j+1}-1) \right)$$
of $T_{0j}^2 = d_j\, \mathrm{tr}\left( H_j E_j^{-1} \right)$ can be approximated by $g_j\, F_{K_j, D_j}$, where $K_j = k_{(j)}\, m_1 = p_{j+2,k}(m_{j+1}-1)\, m_1$, $D_j = 4 + \dfrac{K_j + 2}{B_j - 1}$, and
$$g_j = \frac{d_j\, K_j}{m_{(j)}} \cdot \frac{D_j - 2}{D_j} = \frac{(n-1)\, p_{j+2,k}^2 \left( m_{j+1} - 1 \right)^2 m_1}{(n-1)\, p_{j+2,k}(m_{j+1}-1) - m_1 - 1} \cdot \frac{D_j - 2}{D_j}.$$
Finally, for $j = k$, the distribution
$$T_0^2\left( m_1;\ 1,\ n-1 \right)$$
is the usual Hotelling $T_{m_1, n-1}^2$; that is, it has the exact distribution
$$\frac{(n-1)\, m_1}{n - m_1}\, F_{m_1,\, n - m_1}.$$
This means that the distribution of $D^2$ can be approximated by the convolution of the above $k$ distributions ($k-1$ approximate $F$ distributions and one exact $F$ distribution), where its critical values are obtained by the method suggested by Dyer [28].
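The following hedged sketch implements McKeon's second approximation and approximates the critical value of the convolution by Monte Carlo, simulating each LH-trace term as a scaled $F$ variate (exact Hotelling for the term with hypothesis degrees of freedom 1). The function names and the simulation size are our choices:

```python
import numpy as np

def mckeon_f(m1, k_j, d_j):
    """McKeon's F approximation to T_0^2(m1; k_j, d_j): returns (g, K, D)
    such that T_0^2 is approximately g * F(K, D)."""
    mj = d_j - m1 - 1
    B = (mj + k_j) * (mj + m1) / ((mj - 2.0) * (mj + 1.0))
    K = k_j * m1
    D = 4.0 + (K + 2.0) / (B - 1.0)
    g = d_j * K / mj * (D - 2.0) / D
    return g, K, D

def d2_critical_value(m1, ks, ds, alpha=0.05, nsim=200_000, seed=1):
    """Upper-alpha point of the convolution of independent LH-trace laws
    T_0^2(m1; ks[j], ds[j]), each simulated as a scaled F variate."""
    rng = np.random.default_rng(seed)
    total = np.zeros(nsim)
    for k_j, d_j in zip(ks, ds):
        if k_j == 1:
            # exact: T_0^2(m1; 1, d) = d*m1/(d-m1+1) * F(m1, d-m1+1)
            total += d_j * m1 / (d_j - m1 + 1) * rng.f(m1, d_j - m1 + 1, nsim)
        else:
            g, K, D = mckeon_f(m1, k_j, d_j)
            total += g * rng.f(K, D, nsim)
    return np.quantile(total, 1.0 - alpha)

# Dimensions of the glaucoma example in Section 7: m1 = 2 and the three
# LH-trace terms T_0^2(2; 3, 87), T_0^2(2; 2, 58), T_0^2(2; 1, 29).
print(d2_critical_value(2, [3, 2, 1], [87, 58, 29]))
```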
Remark 1.
The statistic $T_{0j}^2$, $j = 1, \ldots, k-1$, has the LH-trace distribution $T_0^2\left( m_1;\ p_{j+2,k}(m_{j+1}-1),\ d_j \right)$ if $d_j = (n-1)\, p_{j+2,k}(m_{j+1}-1) \ge m_1$. We also note that $T_{0k}^2$ has the LH-trace distribution $T_0^2\left( m_1; 1, n-1 \right)$ if $n-1 \ge m_1$. Now, for $k$-th order data, $m_j \ge 2$ for all $j = 1, \ldots, k$, and $k \ge 2$ (see Definition 1). Moreover, $k > j$, so that $k - j \ge 1$, hence $k - j - 2 \ge -1$ and thus $p_{j+2,k} \ge 1$. Since $m_{j+1} \ge 2$, we have $(n-1)\, p_{j+2,k}(m_{j+1}-1) \ge m_1$ whenever $n-1 \ge m_1$. Therefore, the only constraint needed on the sample size for all $T_{0j}^2$, $j = 1, \ldots, k$, to follow LH-trace distributions is $n-1 \ge m_1$, i.e., $n \ge m_1 + 1$, regardless of any $m_j$, $j = 2, \ldots, k$. In essence, the minimum sample size needed to compute the $D^2$ test statistic is $m_1 + 1$, whereas the minimum sample size needed to compute Hotelling's $T^2$ test statistic is $p_{1,k} + 1$, where $p_{1,k}$ is the full dimension of the observations. For this reason, one cannot compute Hotelling's $T^2$ test statistic for a small sample dataset where $n \le p_{1,k}$, which is doable for the $D^2$ test statistic.
We will now discuss some special cases of the $D^2$ statistic in the following remark.
Remark 2.
For second-order data or multivariate repeated measures data, $D^2 = T_{01}^2 + T_{02}^2$. Now, $T_{02}^2$ is distributed as the LH-trace distribution $T_0^2\left( m_1; 1; n-1 \right)$, and $T_{01}^2$ is distributed as the LH-trace distribution
$$T_0^2\left( m_1;\ p_{1+2,k}\left( m_{1+1} - 1 \right);\ (n-1)\, p_{1+2,k}\left( m_{1+1} - 1 \right) \right), \quad \text{or} \quad T_0^2\left( m_1;\ p_{3,2}\left( m_2 - 1 \right);\ (n-1)\, p_{3,2}\left( m_2 - 1 \right) \right) \ \text{as } k = 2, \quad \text{or} \quad T_0^2\left( m_1;\ m_2 - 1;\ (n-1)\left( m_2 - 1 \right) \right) \ \text{as } p_{3,2} = 1 \text{ by } (1).$$
Thus, $T_{01}^2$ is distributed as the LH-trace distribution $T_0^2\left( m_1;\ m_2 - 1;\ (n-1)(m_2 - 1) \right)$.
So, we see that this test exactly matches the test obtained by Žežula et al. [1] for multivariate repeated measures data (second-order data) with the 2-SSCS or BCS covariance structure. Therefore, we can say that our mean test statistic in this article is an extension, or generalization, of Žežula et al.'s [1] mean test statistic to $k$-th order data with the $k$-SSCS covariance structure.
We will now derive the mean test statistic for third-order data with the 3-SSCS covariance structure. For third-order data, $D^2 = T_{01}^2 + T_{02}^2 + T_{03}^2$. Now, $T_{03}^2$ is distributed as the LH-trace distribution $T_0^2\left( m_1; 1; n-1 \right)$, and $T_{02}^2$ is distributed as the LH-trace distribution
$$T_0^2\left( m_1;\ p_{2+2,k}\left( m_{2+1} - 1 \right);\ (n-1)\, p_{2+2,k}\left( m_{2+1} - 1 \right) \right), \quad \text{or} \quad T_0^2\left( m_1;\ p_{4,3}\left( m_3 - 1 \right);\ (n-1)\, p_{4,3}\left( m_3 - 1 \right) \right) \ \text{as } k = 3, \quad \text{or} \quad T_0^2\left( m_1;\ m_3 - 1;\ (n-1)\left( m_3 - 1 \right) \right) \ \text{as } p_{4,3} = 1 \text{ by } (1),$$
which can be approximated by $g_2\, F_{K_2, D_2}$, where $K_2 = \left( m_3 - 1 \right) m_1$, $D_2 = 4 + \dfrac{K_2 + 2}{B_2 - 1}$, and $g_2 = \dfrac{d_2\, K_2}{m_{(2)}} \cdot \dfrac{D_2 - 2}{D_2} = \dfrac{(n-1)\left( m_3 - 1 \right)^2 m_1}{(n-1)\left( m_3 - 1 \right) - m_1 - 1} \cdot \dfrac{D_2 - 2}{D_2}$, with $d_2 = (n-1)\left( m_3 - 1 \right)$, $m_{(2)} = d_2 - m_1 - 1$, and $B_2 = \dfrac{\left( m_{(2)} + m_3 - 1 \right)\left( m_{(2)} + m_1 \right)}{\left( m_{(2)} - 2 \right)\left( m_{(2)} + 1 \right)}$; and $T_{01}^2$ is distributed as the LH-trace distribution
$$T_0^2\left( m_1;\ p_{1+2,k}\left( m_{1+1} - 1 \right);\ (n-1)\, p_{1+2,k}\left( m_{1+1} - 1 \right) \right), \quad \text{or} \quad T_0^2\left( m_1;\ p_{3,3}\left( m_2 - 1 \right);\ (n-1)\, p_{3,3}\left( m_2 - 1 \right) \right) \ \text{as } k = 3, \quad \text{or} \quad T_0^2\left( m_1;\ m_3\left( m_2 - 1 \right);\ (n-1)\, m_3\left( m_2 - 1 \right) \right) \ \text{as } p_{3,3} = m_3 \text{ by } (1),$$
which can be approximated by $g_1\, F_{K_1, D_1}$, where $K_1 = m_3\left( m_2 - 1 \right) m_1$, $D_1 = 4 + \dfrac{K_1 + 2}{B_1 - 1}$, and $g_1 = \dfrac{d_1\, K_1}{m_{(1)}} \cdot \dfrac{D_1 - 2}{D_1} = \dfrac{(n-1)\, m_3^2\left( m_2 - 1 \right)^2 m_1}{(n-1)\, m_3\left( m_2 - 1 \right) - m_1 - 1} \cdot \dfrac{D_1 - 2}{D_1}$, with $d_1 = (n-1)\, m_3\left( m_2 - 1 \right)$, $m_{(1)} = d_1 - m_1 - 1$, and $B_1 = \dfrac{\left( m_{(1)} + m_3\left( m_2 - 1 \right) \right)\left( m_{(1)} + m_1 \right)}{\left( m_{(1)} - 2 \right)\left( m_{(1)} + 1 \right)}$.
So, one can easily derive the test statistic for $j$-th order data, for $j = 2, \ldots, k$, from our generalized $D^2$ statistic. The distribution of $D^2$ under $H_0$ for second-order data with the 2-SSCS covariance structure is discussed in detail in Žežula et al. [1]. We discuss the distribution of $D^2$ under $H_0$ for third-order data in detail in the following section.

5.1.2. Distribution of the Statistic $D^2$ under $H_0$ for Third-Order Data with 3-SSCS Covariance Structure

This section is adapted from the work of Žežula et al. [22]. However, we use a much simpler, more straightforward approach so that practitioners and analysts can appreciate and apply the method easily. Let $L_3 = H_{m_3} \otimes H_{m_2} \otimes I_{m_1} = H_{m_3, m_2} \otimes I_{m_1}$ be a matrix such that, for each $j = 2, 3$, $H_{m_j}$ is an $m_j \times m_j$ Helmert matrix, that is, an $m_j \times m_j$ orthogonal matrix with first column proportional to the $m_j \times 1$ vector of ones. We use here the following canonical transformation:
$$z = L_3' \left( \bar{x} - \mu_0 \right) = \left( H_{m_3, m_2}' \otimes I_{m_1} \right) \left( \bar{x} - \mu_0 \right).$$
Therefore, $z = \left( z'_{1,1}, \ldots, z'_{m_3, m_2} \right)' = L_3' \left( \bar{x} - \mu_0 \right) \sim N_{p_{1,3}}\left( 0, \Omega_3 \right)$, where
$$\Omega_3 = \frac{1}{n}\, D_3 = L_3' \left( \frac{1}{n}\, \Gamma_3 \right) L_3 = \frac{1}{n} \left( H_{m_3, m_2}' \otimes I_{m_1} \right) \Gamma_3 \left( H_{m_3, m_2} \otimes I_{m_1} \right) = \frac{1}{n}\, \mathrm{diag}\left( D_{f_3, f_2};\ \left( f_3, f_2 \right) \in F = F_3 \times F_2 \right),$$
and where, for each $j = 1, 2, 3$, the diagonal $m_1 \times m_1$ blocks $D_f = D_{f_3, f_2}$ are given by
$$D_{f_3, f_2} = \Delta_{3,j} \quad \text{if } \begin{cases} f_2 \ne 1 & (j = 1) \\ f_2 = 1,\ f_3 \ne 1 & (j = 2) \\ f_2 = 1,\ f_3 = 1 & (j = 3), \end{cases}$$
and the $m_1 \times 1$ component vectors $z_{f_3, f_2}$, with $f = \left( f_3, f_2 \right) \in F$, are independent, with distributions (under the null hypothesis) $z_{f_3, f_2} \sim N_{m_1}\left( 0, \tfrac{1}{n}\, \Delta_{3,j} \right)$ if
$$f_{2,j} = \left( f_2, \ldots, f_j \right) = 1_{j-1},\ f_{j+1} \in F_{j+1} \setminus \{1\}, \ \text{and} \ f_{j+2,3} = \left( f_{j+2}, \ldots, f_3 \right) \in F_{j+2} \times \cdots \times F_3.$$
Therefore, particularizing $D^2$, given by (23), for $k = 3$, we have
$$D^2 = \mathrm{tr}\left[ n \sum_{f_3=1}^{m_3} \sum_{f_2=2}^{m_2} z_{f_3, f_2}\, z'_{f_3, f_2}\ \hat{\Delta}_{3,1}^{-1} \right] + \mathrm{tr}\left[ n \sum_{f_3=2}^{m_3} z_{f_3, 1}\, z'_{f_3, 1}\ \hat{\Delta}_{3,2}^{-1} \right] + \mathrm{tr}\left[ n\, z_{1,1}\, z'_{1,1}\ \hat{\Delta}_{3,3}^{-1} \right] =: T_{01}^2 + T_{02}^2 + T_{03}^2. \qquad (26)$$
Since the subsets of vectors involved in $T_{01}^2$, $T_{02}^2$, $T_{03}^2$, respectively, form a partition of the set of independent vectors $\left\{ z_{f_3, f_2} : \left( f_3, f_2 \right) \in F_3 \times F_2 = F \right\}$, the statistics $T_{01}^2$, $T_{02}^2$, $T_{03}^2$ are mutually independent. Moreover, for $j = 1$,
$$H_1 = n \sum_{f_3=1}^{m_3} \sum_{f_2=2}^{m_2} z_{f_3, f_2}\, z'_{f_3, f_2} \sim W_{m_1}\left( m_3\left( m_2 - 1 \right);\ \Delta_{3,1} \right) = W_{m_1}\left( k_{(1)};\ \Delta_{3,1} \right), \quad \text{and} \quad E_1 = (n-1)\, m_3\left( m_2 - 1 \right) \hat{\Delta}_{3,1} \sim W_{m_1}\left( (n-1)\, m_3\left( m_2 - 1 \right);\ \Delta_{3,1} \right) = W_{m_1}\left( d_1;\ \Delta_{3,1} \right),$$
and
$$T_{01}^2 = \mathrm{tr}\left[ n \sum_{f_3=1}^{m_3} \sum_{f_2=2}^{m_2} z_{f_3, f_2}\, z'_{f_3, f_2}\ \hat{\Delta}_{3,1}^{-1} \right] = \mathrm{tr}\left[ H_1 \left( \frac{E_1}{(n-1)\, m_3\left( m_2 - 1 \right)} \right)^{-1} \right] = (n-1)\, m_3\left( m_2 - 1 \right) \mathrm{tr}\left( H_1 E_1^{-1} \right) = d_1\, \mathrm{tr}\left( H_1 E_1^{-1} \right)$$
has an LH-trace distribution, denoted by $T_0^2\left( m_1;\ m_3\left( m_2 - 1 \right),\ d_1 \right)$, if $d_1 = (n-1)\, m_3\left( m_2 - 1 \right) > m_1$. Similarly, for $j = 2$,
$$H_2 = n \sum_{f_3=2}^{m_3} z_{f_3, 1}\, z'_{f_3, 1} \sim W_{m_1}\left( m_3 - 1;\ \Delta_{3,2} \right), \quad \text{and} \quad E_2 = (n-1)\left( m_3 - 1 \right) \hat{\Delta}_{3,2} \sim W_{m_1}\left( (n-1)\left( m_3 - 1 \right);\ \Delta_{3,2} \right),$$
and
$$T_{02}^2 = \mathrm{tr}\left[ n \sum_{f_3=2}^{m_3} z_{f_3, 1}\, z'_{f_3, 1}\ \hat{\Delta}_{3,2}^{-1} \right] = \mathrm{tr}\left[ H_2 \left( \frac{E_2}{(n-1)\left( m_3 - 1 \right)} \right)^{-1} \right] = (n-1)\left( m_3 - 1 \right) \mathrm{tr}\left( H_2 E_2^{-1} \right) = d_2\, \mathrm{tr}\left( H_2 E_2^{-1} \right)$$
has an LH-trace distribution, denoted by $T_0^2\left( m_1;\ m_3 - 1,\ d_2 \right)$, if $d_2 = (n-1)\left( m_3 - 1 \right) > m_1$. Note that the case $j = 3$ reduces to $H_3 = n\, z_{1,1}\, z'_{1,1} \sim W_{m_1}\left( 1, \Delta_{3,3} \right)$ and $E_3 = (n-1)\, \hat{\Delta}_{3,3} \sim W_{m_1}\left( n-1;\ \Delta_{3,3} \right)$; then $T_{03}^2$ has the LH-trace distribution $T_0^2\left( m_1; 1, d_3 \right)$ if $d_3 = n-1 > m_1$.
Therefore, the distribution of $D^2 = \sum_{j=1}^{3} T_{0j}^2$ is the following convolution of three independent LH-trace distributions:
$$\ast_{j=1}^{3}\ T_0^2\left( m_1;\ p_{j+2,3}\left( m_{j+1} - 1 \right),\ d_j \right) = T_0^2\left( m_1;\ m_3\left( m_2 - 1 \right),\ d_1 \right) \ast T_0^2\left( m_1;\ m_3 - 1,\ d_2 \right) \ast T_0^2\left( m_1;\ 1,\ d_3 \right),$$
if, for $j = 1, 2, 3$, $d_j = (n-1)\, p_{j+2,3}\left( m_{j+1} - 1 \right) \ge m_1$. The critical values of this distribution can be obtained using simulations. The LH-trace distribution is usually approximated by an $F$ distribution, as mentioned before; however, we use here the second approximation suggested in McKeon [27].
For $j = 1$, denoting $m_{(1)} = d_1 - m_1 - 1 = (n-1)\, m_3\left( m_2 - 1 \right) - m_1 - 1$ and $k_{(1)} = m_3\left( m_2 - 1 \right)$, and
$$B_1 = \frac{\left( m_{(1)} + k_{(1)} \right)}{\left( m_{(1)} - 2 \right)} \cdot \frac{\left( m_{(1)} + m_1 \right)}{\left( m_{(1)} + 1 \right)} = \frac{\left( n\, m_3\left( m_2 - 1 \right) - m_1 - 1 \right) \left( (n-1)\, m_3\left( m_2 - 1 \right) - 1 \right)}{\left( (n-1)\, m_3\left( m_2 - 1 \right) - m_1 - 3 \right) \left( (n-1)\, m_3\left( m_2 - 1 \right) - m_1 \right)},$$
the distribution
$$T_0^2\left( m_1;\ m_3\left( m_2 - 1 \right),\ (n-1)\, m_3\left( m_2 - 1 \right) \right)$$
of $T_{01}^2 = d_1\, \mathrm{tr}\left( H_1 E_1^{-1} \right)$ can be approximated by $g_1\, F_{K_1, D_1}$, where $K_1 = m_3\left( m_2 - 1 \right) m_1$, $D_1 = 4 + \dfrac{K_1 + 2}{B_1 - 1}$, and $g_1 = \dfrac{d_1\, K_1}{m_{(1)}} \cdot \dfrac{D_1 - 2}{D_1} = \dfrac{(n-1)\, m_3^2\left( m_2 - 1 \right)^2 m_1}{(n-1)\, m_3\left( m_2 - 1 \right) - m_1 - 1} \cdot \dfrac{D_1 - 2}{D_1}$.
For $j = 2$, denoting $m_{(2)} = d_2 - m_1 - 1 = (n-1)\left( m_3 - 1 \right) - m_1 - 1$ and $k_{(2)} = m_3 - 1$, and
$$B_2 = \frac{\left( m_{(2)} + k_{(2)} \right)}{\left( m_{(2)} - 2 \right)} \cdot \frac{\left( m_{(2)} + m_1 \right)}{\left( m_{(2)} + 1 \right)} = \frac{\left( n\left( m_3 - 1 \right) - m_1 - 1 \right) \left( (n-1)\left( m_3 - 1 \right) - 1 \right)}{\left( (n-1)\left( m_3 - 1 \right) - m_1 - 3 \right) \left( (n-1)\left( m_3 - 1 \right) - m_1 \right)},$$
the distribution
$$T_0^2\left( m_1;\ m_3 - 1,\ d_2 \right)$$
of $T_{02}^2 = d_2\, \mathrm{tr}\left( H_2 E_2^{-1} \right)$ can be approximated by $g_2\, F_{K_2, D_2}$, where $K_2 = \left( m_3 - 1 \right) m_1$, $D_2 = 4 + \dfrac{K_2 + 2}{B_2 - 1}$, and $g_2 = \dfrac{d_2\, K_2}{m_{(2)}} \cdot \dfrac{D_2 - 2}{D_2} = \dfrac{(n-1)\left( m_3 - 1 \right)^2 m_1}{(n-1)\left( m_3 - 1 \right) - m_1 - 1} \cdot \dfrac{D_2 - 2}{D_2}$.
Finally, for our last case, corresponding to $j = 3$, the distribution $T_0^2\left( m_1; 1, n-1 \right)$ is the usual Hotelling $T_{m_1, n-1}^2$; that is, it has the exact distribution
$$\frac{(n-1)\, m_1}{n - m_1}\, F_{m_1,\, n - m_1}.$$
This means that the distribution of $D^2$ can be approximated by the convolution of the above three $F$ distributions (two approximate $F$ distributions and one exact $F$ distribution), where its critical values are obtained by the method suggested by Dyer [28].
Now, we need to perform the convolution of three distribution functions. Since convolution is associative, for three distribution functions $F_1$, $F_2$ and $F_3$ we have $\left( F_1 \ast F_2 \right) \ast F_3 = F_1 \ast \left( F_2 \ast F_3 \right)$, so we can dispense with the parentheses and write $F_1 \ast F_2 \ast F_3$. In the following section, we present the unbiased estimates of the eigenblocks for a 3-SSCS covariance matrix.

5.2. The Expressions of the Estimators of the $\Delta$'s for the Case $k = 3$

1.
From Lemma 5 in Leiva and Roy [10], the unbiased estimators $\hat{U}_{3,j}$ of $U_{3,j}$, for each $j = 1, 2, 3$, can be written as follows:
$$\hat{U}_{3,1} = \frac{1}{p_{2,3}}\, \mathrm{BT}_{p_{1,1}}\left( S \right) = \frac{1}{p_{2,3}} \sum_{f \in F} S_{f;f} = \frac{1}{(n-1)\, q_{3,1}} \sum_{f_3 \in F_3} \sum_{f_2 \in F_2} \sum_{r=1}^{n} \left( x_{r; f_3, f_2} - \bar{x}_{f_3, f_2} \right) \left( x_{r; f_3, f_2} - \bar{x}_{f_3, f_2} \right)', \qquad (27)$$
$$\hat{U}_{3,2} = \frac{\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,2}}\left( S \right) \right) - \mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,1}}\left( S \right) \right)}{q_{3,2}} = \frac{\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,2}}\left( S \right) \right) - \mathrm{BT}_{p_{1,1}}\left( S \right)}{q_{3,2}} = \frac{1}{(n-1)\, q_{3,2}} \sum_{f_3 \in F_3}\ \sum_{\substack{f_2, f_2^* \in F_2 \\ f_2 \ne f_2^*}}\ \sum_{r=1}^{n} \left( x_{r; f_3, f_2} - \bar{x}_{f_3, f_2} \right) \left( x_{r; f_3, f_2^*} - \bar{x}_{f_3, f_2^*} \right)', \qquad (28)$$
and
$$\hat{U}_{3,3} = \frac{\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,3}}\left( S \right) \right) - \mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,2}}\left( S \right) \right)}{q_{3,3}} = \frac{1}{(n-1)\, q_{3,3}}\ \sum_{\substack{f_3, f_3^* \in F_3 \\ f_3 \ne f_3^*}}\ \sum_{f_2 \in F_2} \sum_{f_2^* \in F_2} \sum_{r=1}^{n} \left( x_{r; f_3, f_2} - \bar{x}_{f_3, f_2} \right) \left( x_{r; f_3^*, f_2^*} - \bar{x}_{f_3^*, f_2^*} \right)', \qquad (29)$$
where $q_{3,j}$, $j = 1, 2, 3$, are given in (19). Therefore, an unbiased estimator of $\Gamma_3$ is given by:
$$\hat{\Gamma}_3 = \sum_{j=1}^{2} I_{p_{j+1,3}} \otimes J_{p_{2,j}} \otimes \left( \hat{U}_{3,j} - \hat{U}_{3,j+1} \right) + J_{p_{2,3}} \otimes \hat{U}_{3,3}.$$
Since the $k$-SSCS matrix $\Gamma_k$ in (9) is of the Jordan algebra type, following Kozioł et al. [26] one can show that the above estimate $\hat{\Gamma}_3$ is the best unbiased, consistent and complete estimator of $\Gamma_3$.
2.
For each $j = 1, 2, 3$, an unbiased estimator $\hat{\Delta}_{3,j}$ of $\Delta_{3,j}$ is given by:
$$\hat{\Delta}_{3,j} = \sum_{i=1}^{j} p_{2,i} \left( \hat{U}_{3,i} - \hat{U}_{3,i+1} \right),$$
where $\hat{U}_{3,4} = 0$ and $p_{2,1} = 1$; or, equivalently:
$$\hat{\Delta}_{3,j} = \begin{cases} \hat{U}_{3,1} - \hat{U}_{3,2} & \text{if } j = 1 \\ \hat{\Delta}_{3,j-1} + p_{2,j} \left( \hat{U}_{3,j} - \hat{U}_{3,j+1} \right) & \text{if } j = 2, 3. \end{cases}$$
The above unbiased estimators admit the following expressions as functions of $S$:
$$\hat{\Delta}_{3,j} = \frac{m_{j+1} \cdot \mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,j}}\left( S \right) \right) - \mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,j+1}}\left( S \right) \right)}{p_{2,3} \left( m_{j+1} - 1 \right)},$$
for $j = 1, 2$, where $\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,1}}\left( S \right) \right) = \mathrm{BT}_{p_{1,1}}\left( S \right)$ and $\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,3}}\left( S \right) \right) = \mathrm{BS}_{p_{1,1}}\left( S \right)$, and an unbiased estimator $\hat{\Delta}_{3,3}$ of $\Delta_{3,3}$ is given by:
$$\hat{\Delta}_{3,3} = \frac{1}{p_{2,3}}\, \mathrm{BS}_{p_{1,1}}\left( S \right).$$
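The explicit formulas above translate directly into code. A hedged sketch for third-order data (the function name is ours), assuming the columns of the data matrix are ordered with the $m_1$ variables innermost, then $f_2$, then $f_3$:

```python
import numpy as np

def sscs3_estimates(X, m1, m2, m3):
    """Unbiased estimates U_hat_{3,j} (Formulas (27)-(29)) and eigenblocks
    Delta_hat_{3,j} for third-order data; X is n x (m1*m2*m3)."""
    S = np.cov(X, rowvar=False)                # (n-1)-denominator covariance
    B = S.reshape(m3, m2, m1, m3, m2, m1)      # B[f3, f2, :, g3, g2, :]
    # U_hat_{3,1}: average of the m1 x m1 diagonal blocks (f3 = g3, f2 = g2)
    U1 = sum(B[f3, f2, :, f3, f2, :]
             for f3 in range(m3) for f2 in range(m2)) / (m3 * m2)
    # U_hat_{3,2}: average of blocks with f3 = g3, f2 != g2
    U2 = sum(B[f3, f2, :, f3, g2, :]
             for f3 in range(m3)
             for f2 in range(m2) for g2 in range(m2) if f2 != g2) \
         / (m3 * m2 * (m2 - 1))
    # U_hat_{3,3}: average of blocks with f3 != g3
    U3 = sum(B[f3, f2, :, g3, g2, :]
             for f3 in range(m3) for g3 in range(m3) if f3 != g3
             for f2 in range(m2) for g2 in range(m2)) \
         / (m3 * (m3 - 1) * m2 * m2)
    # Eigenblocks via the recursion (13) with p_{2,2} = m2, p_{2,3} = m2*m3
    D1 = U1 - U2
    D2 = D1 + m2 * (U2 - U3)
    D3 = D2 + m2 * m3 * U3
    return (U1, U2, U3), (D1, D2, D3)
```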

6. Test for the Equality of Two Means

6.1. Paired Observation Model

In this section, we consider that on each of the $n$ individuals a $p_{1,k}$-variate vector is measured at two different times (e.g., before and after a treatment). These measurements are $k$-th order (array-variate) measurements on each individual. To be more precise, for each $f = \left( f_k, \ldots, f_2 \right) \in F$, let $v_{r; f_k, \ldots, f_2}$ and $w_{r; f_k, \ldots, f_2}$ be the paired $m_1$-dimensional vectors measured at the $\left( f_k, \ldots, f_2 \right)$ site of the $r$th individual, for $r = 1, \ldots, n$. Let $u_r$ be the partitioned $2 p_{1,k}$-variate vector $u_r = \left( v_r', w_r' \right)' = \left( v'_{r,1}, \ldots, v'_{r,m_k}, w'_{r,1}, \ldots, w'_{r,m_k} \right)'$, where $v_{r; f_k} = \left( v'_{r; f_k, 1}, \ldots, v'_{r; f_k, m_{k-1}} \right)'$ and $w_{r; f_k} = \left( w'_{r; f_k, 1}, \ldots, w'_{r; f_k, m_{k-1}} \right)'$; $v_{r; f_k, f_{k-1}} = \left( v'_{r; f_k, f_{k-1}, 1}, \ldots, v'_{r; f_k, f_{k-1}, m_{k-2}} \right)'$ and $w_{r; f_k, f_{k-1}} = \left( w'_{r; f_k, f_{k-1}, 1}, \ldots, w'_{r; f_k, f_{k-1}, m_{k-2}} \right)'$; and so on down to $v_{r; f_k, \ldots, f_3} = \left( v'_{r; f_k, \ldots, f_3, 1}, \ldots, v'_{r; f_k, \ldots, f_3, m_2} \right)'$ and $w_{r; f_k, \ldots, f_3} = \left( w'_{r; f_k, \ldots, f_3, 1}, \ldots, w'_{r; f_k, \ldots, f_3, m_2} \right)'$, where $v_{r; f_k, \ldots, f_3, f_2} = \left( v_{r; f_k, \ldots, f_2, 1}, \ldots, v_{r; f_k, \ldots, f_2, m_1} \right)'$ and $w_{r; f_k, \ldots, f_3, f_2} = \left( w_{r; f_k, \ldots, f_2, 1}, \ldots, w_{r; f_k, \ldots, f_2, m_1} \right)'$ are the $m_1$ paired measurements taken on the $r$th individual, for $r = 1, \ldots, n$. We assume that $u_1, \ldots, u_n$ are i.i.d. $N_{2 p_{1,k}}\left( \mu_u, \Gamma_u \right)$, where i.i.d. stands for independent and identically distributed, $\mu_u = \left( \mu_v', \mu_w' \right)'$, and $\Gamma_u$ is the partitioned $2 p_{1,k} \times 2 p_{1,k}$ matrix
$$\Gamma_u = \begin{pmatrix} \Gamma_v & \Gamma_{vw} \\ \Gamma_{wv} & \Gamma_w \end{pmatrix},$$
with $p_{1,k} \times p_{1,k}$ blocks, where
$$\Gamma_v = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( U_{v,j} - U_{v,j+1} \right) + J_{p_{2,k}} \otimes U_{v,k}, \quad \Gamma_w = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( U_{w,j} - U_{w,j+1} \right) + J_{p_{2,k}} \otimes U_{w,k},$$
and
$$\Gamma_{vw} = \Gamma_{wv}' = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( U_{vw,j} - U_{vw,j+1} \right) + J_{p_{2,k}} \otimes U_{vw,k}.$$
The matrices $\Gamma_{vw}$ and $\Gamma_{wv}$ account for the linear dependence among the considered $m_1$ paired measurements. Particular cases of $\Gamma_{vw}$ could be of interest, e.g., $U_{vw,1} = \cdots = U_{vw,k}$, that is, $\Gamma_{vw} = \Gamma_{wv} = J_{p_{2,k}} \otimes U_{vw,k}$ (see, for example, when $k = 2$, Definition 2 on Page 388 in Leiva [6]). Under this setup, we are interested in testing the following hypothesis:
$$H_0: \mu_v = \mu_w \quad \text{vs.} \quad H_1: \mu_v \ne \mu_w.$$
If we define $x = v - w$, the above hypothesis is equivalent to
$$H_0: \mu_x = 0 \quad \text{vs.} \quad H_1: \mu_x \ne 0,$$
as $\mu_x = E\left( v - w \right) = \mu_v - \mu_w$. Moreover, $x_r = v_r - w_r$, $r = 1, \ldots, n$, are i.i.d. $N\left( \mu_x; \Gamma \right)$, where $\mu_x = \mu_v - \mu_w$ and
$$\Gamma = \mathrm{cov}\left( x \right) = \mathrm{cov}\left( v - w \right) = \Gamma_v + \Gamma_w - \Gamma_{vw} - \Gamma_{wv} = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( U_{x,j} - U_{x,j+1} \right) + J_{p_{2,k}} \otimes U_{x,k},$$
where $U_{x,j} = U_{v,j} + U_{w,j} - U_{vw,j} - U_{vw,j}'$ for $j = 1, \ldots, k$.
Assuming $\Gamma$ is a positive definite matrix and that $n > p_{1,k}$, one may consider the likelihood ratio test for the above hypothesis testing problem for $k$-th order multivariate data, assuming the mean vectors $\mu_v$ and $\mu_w$ are unstructured. Note that this testing problem reduces to the one sample mean case of the previous section with $\mu_0 = 0$. Therefore, all the results obtained in the previous section are valid for this case. Following the same logic as in Remark 1, the needed sample size for the test is $n \ge m_1 + 1$, regardless of any $m_j$, $j = 2, \ldots, k$.

6.2. Independent Observation Model

In this section, we consider the case where we have two independent samples: one random sample of size $n_1$ of $p_{1,k}$-variate vectors $\left\{ v_{r_1} : r_1 = 1, \ldots, n_1 \right\}$, i.i.d. $N_{p_{1,k}}\left( \mu_v, \Gamma_v \right)$, with $v_{r_1} = \left( v'_{r_1,1}, \ldots, v'_{r_1,m_k} \right)'$, where $v_{r_1; f_k} = \left( v'_{r_1; f_k, 1}, \ldots, v'_{r_1; f_k, m_{k-1}} \right)'$, $v_{r_1; f_k, f_{k-1}} = \left( v'_{r_1; f_k, f_{k-1}, 1}, \ldots, v'_{r_1; f_k, f_{k-1}, m_{k-2}} \right)'$, ..., and $v_{r_1; f_k, \ldots, f_3} = \left( v'_{r_1; f_k, \ldots, f_3, 1}, \ldots, v'_{r_1; f_k, \ldots, f_3, m_2} \right)'$, with $v_{r_1; f_k, \ldots, f_3, f_2} = \left( v_{r_1; f_k, \ldots, f_2, 1}, \ldots, v_{r_1; f_k, \ldots, f_2, m_1} \right)'$ being the $m_1$ measurements taken on the $r_1$th individual, for $r_1 = 1, \ldots, n_1$; and another random sample of size $n_2$ of $p_{1,k}$-variate vectors $\left\{ w_{r_2} : r_2 = 1, \ldots, n_2 \right\}$, i.i.d. $N_{p_{1,k}}\left( \mu_w, \Gamma_w \right)$, partitioned in exactly the same way, with $w_{r_2; f_k, \ldots, f_3, f_2} = \left( w_{r_2; f_k, \ldots, f_2, 1}, \ldots, w_{r_2; f_k, \ldots, f_2, m_1} \right)'$ being the $m_1$ measurements taken on the $r_2$th individual, for $r_2 = 1, \ldots, n_2$, for $f = \left( f_k, \ldots, f_2 \right) \in F$.
Our interest is in testing the following hypothesis:
$$H_0: \mu_v = \mu_w \quad \text{vs.} \quad H_1: \mu_v \ne \mu_w, \qquad (30)$$
under the assumption that $\Gamma_v = \Gamma_w = \Gamma = \Gamma_k$ is an unknown $k$-SSCS covariance matrix of the form (8). Let $V = \left( v_1, \ldots, v_{n_1} \right)'$ and $W = \left( w_1, \ldots, w_{n_2} \right)'$ denote the corresponding two sample data matrices. We know that the sample means $\bar{v} = \frac{1}{n_1} \sum_{r_1=1}^{n_1} v_{r_1}$ and $\bar{w} = \frac{1}{n_2} \sum_{r_2=1}^{n_2} w_{r_2}$ are independent of the covariance matrix estimators $S_1 = \frac{1}{n_1 - 1}\, V' Q_{n_1} V$ and $S_2 = \frac{1}{n_2 - 1}\, W' Q_{n_2} W$, respectively. Therefore, they are also independent of $S^{\mathrm{pl}}$, the pooled unbiased estimator of $\Gamma$ (a convex linear combination of unbiased estimators of $\Gamma$), which is given by:
$$S^{\mathrm{pl}} = \frac{n_1 - 1}{n_1 + n_2 - 2}\, S_1 + \frac{n_2 - 1}{n_1 + n_2 - 2}\, S_2 = \left( S^{\mathrm{pl}}_{f; f^*} \right)_{f, f^* \in F},$$
where
$$S^{\mathrm{pl}}_{f; f^*} = \frac{\sum_{r_1=1}^{n_1} \left( v_{r_1; f} - \bar{v}_f \right) \left( v_{r_1; f^*} - \bar{v}_{f^*} \right)' + \sum_{r_2=1}^{n_2} \left( w_{r_2; f} - \bar{w}_f \right) \left( w_{r_2; f^*} - \bar{w}_{f^*} \right)'}{n_1 + n_2 - 2}.$$
Now,
$$\bar{v} - \bar{w} \sim N_{p_{1,k}}\left( \mu_v - \mu_w,\ \frac{1}{n_{\mathrm{pl}}}\, \Gamma_k \right), \quad \text{where } n_{\mathrm{pl}} = \frac{n_1 n_2}{n_1 + n_2}.$$
We know that under $H_0$:
$$\bar{v} - \bar{w} \sim N_{p_{1,k}}\left( 0,\ \frac{1}{n_{\mathrm{pl}}}\, \Gamma_k \right), \quad \text{and} \quad S^{\mathrm{pl}} \sim W_{p_{1,k}}\left( n_1 + n_2 - 2,\ \frac{1}{n_1 + n_2 - 2}\, \Gamma_k \right).$$
Due to the exchangeable form of $\Gamma_k$, it is clear that we again have:
$$E\left( S^{\mathrm{pl}}_{f; f^*} \right) = E\left( S^{\mathrm{pl}}_{f_k, \ldots, f_2;\, f_k^*, \ldots, f_2^*} \right) = \begin{cases} U_{k,1} & \text{if } f_2 = f_2^*, \ldots, f_k = f_k^* \\ U_{k,j} & \text{if } f_j \ne f_j^*,\ f_{j+1} = f_{j+1}^*, \ldots, f_k = f_k^*, \ \text{for } j = 2, \ldots, k. \end{cases} \qquad (31)$$
Note that each of the following expressions is the arithmetic mean of all submatrices of $S^{\mathrm{pl}}$ which, according to (31), have the same expectation. It is easy to prove that, for each $j = 1, \ldots, k$, an unbiased estimator $\hat{U}^{\mathrm{pl}}_{k,j}$ of $U_{k,j}$ is given, for $j = 1$, by:
$$\hat{U}^{\mathrm{pl}}_{k,1} = \frac{1}{p_{2,k}}\, \mathrm{BT}_{p_{1,1}}\left( S^{\mathrm{pl}} \right) = \frac{1}{p_{2,k}} \sum_{f \in F} S^{\mathrm{pl}}_{f; f} = \frac{n_1 - 1}{n_1 + n_2 - 2} \cdot \frac{1}{p_{2,k}} \sum_{f \in F} S_{1: f; f} + \frac{n_2 - 1}{n_1 + n_2 - 2} \cdot \frac{1}{p_{2,k}} \sum_{f \in F} S_{2: f; f},$$
and, for $j = 2, \ldots, k$, after some algebraic simplification, by:
$$\hat{U}^{\mathrm{pl}}_{k,j} = \frac{\mathrm{BS}_{p_{1,1}}\left( \mathrm{BT}_{p_{1,j}}\left( S^{\mathrm{pl}} \right) - \mathrm{BT}_{p_{1,j-1}}\left( S^{\mathrm{pl}} \right) \right)}{q_{k,j}} = \frac{n_1 - 1}{n_1 + n_2 - 2} \cdot \frac{1}{q_{k,j}} \sum_{f_k \in F_k} \cdots \sum_{f_{j+1} \in F_{j+1}}\ \sum_{\substack{f_j, f_j^* \in F_j \\ f_j \ne f_j^*}}\ \sum_{f_{j-1}, f_{j-1}^* \in F_{j-1}} \cdots \sum_{f_2, f_2^* \in F_2} S_{1: f; f^*} + \frac{n_2 - 1}{n_1 + n_2 - 2} \cdot \frac{1}{q_{k,j}} \sum_{f_k \in F_k} \cdots \sum_{f_{j+1} \in F_{j+1}}\ \sum_{\substack{f_j, f_j^* \in F_j \\ f_j \ne f_j^*}}\ \sum_{f_{j-1}, f_{j-1}^* \in F_{j-1}} \cdots \sum_{f_2, f_2^* \in F_2} S_{2: f; f^*},$$
with $f = \left( f_k, \ldots, f_{j+1}, f_j, f_{j-1}, \ldots, f_2 \right)$ and $f^* = \left( f_k, \ldots, f_{j+1}, f_j^*, f_{j-1}^*, \ldots, f_2^* \right)$,
where $q_{k,j}$ is given in (19). Therefore, we can use the following unbiased estimators of the variance and covariance matrices:
$$\hat{\Gamma}^{\mathrm{pl}} = \hat{\Gamma}^{\mathrm{pl}}_k = \sum_{j=1}^{k-1} I_{p_{j+1,k}} \otimes J_{p_{2,j}} \otimes \left( \hat{U}^{\mathrm{pl}}_{k,j} - \hat{U}^{\mathrm{pl}}_{k,j+1} \right) + J_{p_{2,k}} \otimes \hat{U}^{\mathrm{pl}}_{k,k},$$
and
$$\hat{\Omega}^{\mathrm{pl}}_k = \frac{1}{n_{\mathrm{pl}}}\, \hat{D}^{\mathrm{pl}}_k = L_k' \left( \frac{1}{n_{\mathrm{pl}}}\, \hat{\Gamma}^{\mathrm{pl}}_k \right) L_k = \frac{1}{n_{\mathrm{pl}}} \left( H_{m_k,m_2}' \otimes I_{m_1} \right) \hat{\Gamma}^{\mathrm{pl}}_k \left( H_{m_k,m_2} \otimes I_{m_1} \right) = \frac{1}{n_{\mathrm{pl}}}\, \mathrm{diag}\left( \hat{D}^{\mathrm{pl}}_{f_k, f_{k-1}, \ldots, f_2};\ \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F_k \times F_{k-1} \times \cdots \times F_2 \right),$$
where, for each $j = 1, \ldots, k$, the diagonal $m_1 \times m_1$ blocks $\hat{D}^{\mathrm{pl}}_f = \hat{D}^{\mathrm{pl}}_{f_k, f_{k-1}, \ldots, f_2}$ are given by:
$$\hat{D}^{\mathrm{pl}}_{f_k, f_{k-1}, \ldots, f_2} = \hat{\Delta}^{\mathrm{pl}}_{k,j} \quad \text{if } f_2 = 1, \ldots, f_j = 1,\ f_{j+1} \ne 1,$$
where the condition $f_{k+1} \ne 1$ is not taken into consideration, and where
$$\hat{\Delta}^{\mathrm{pl}}_{k,j} = \sum_{i=1}^{j} p_{2,i} \left( \hat{U}^{\mathrm{pl}}_{k,i} - \hat{U}^{\mathrm{pl}}_{k,i+1} \right),$$
with $\hat{U}^{\mathrm{pl}}_{k,k+1} = 0$; or, equivalently:
$$\hat{\Delta}^{\mathrm{pl}}_{k,j} = \begin{cases} \hat{U}^{\mathrm{pl}}_{k,1} - \hat{U}^{\mathrm{pl}}_{k,2} & \text{if } j = 1 \\ \hat{\Delta}^{\mathrm{pl}}_{k,j-1} + p_{2,j} \left( \hat{U}^{\mathrm{pl}}_{k,j} - \hat{U}^{\mathrm{pl}}_{k,j+1} \right) & \text{if } j = 2, \ldots, k. \end{cases}$$
The usual likelihood ratio test of (30) is to reject $H_0$ if:
$$T^2 = \frac{n_1 n_2}{n_1 + n_2} \left( \bar{v} - \bar{w} \right)' \left( S^{\mathrm{pl}} \right)^{-1} \left( \bar{v} - \bar{w} \right) > c^2.$$
Since $\left( n_1 + n_2 - 2 \right) S^{\mathrm{pl}} = \left( n_1 - 1 \right) S_1 + \left( n_2 - 1 \right) S_2 \sim W_{p_{1,k}}\left( n_1 + n_2 - 2;\ \Gamma_k \right)$, the statistic $T^2$ has the usual structure
$$T^2 = \underbrace{\left[ \left( \tfrac{1}{n_1} + \tfrac{1}{n_2} \right)^{-1/2} \left( \bar{v} - \bar{w} - \left( \mu_v - \mu_w \right) \right) \right]'}_{\text{Normal}_{p_{1,k}}} \underbrace{\left[ \frac{\left( n_1 + n_2 - 2 \right) S^{\mathrm{pl}}}{n_1 + n_2 - 2} \right]^{-1}}_{\left( \text{Wishart}/\text{d.f.} \right)^{-1}} \underbrace{\left[ \left( \tfrac{1}{n_1} + \tfrac{1}{n_2} \right)^{-1/2} \left( \bar{v} - \bar{w} - \left( \mu_v - \mu_w \right) \right) \right]}_{\text{Normal}_{p_{1,k}}}.$$
Nevertheless, we cannot use the above result, as in our case $\hat{\Gamma}^{\mathrm{pl}}_k$ is a structured estimator of $\Gamma_k$. However, by Theorem 1 of Leiva and Roy [10], we know that the random matrices $\left\{ (n-1)\, p_{j+2,k}\left( m_{j+1} - 1 \right) \hat{\Delta}_{k,j} = (n-1)\, \mathrm{BT}_{p_{1,1}}\left( R_{k,j+1}\, S\, R_{k,j+1} \right) : j = 1, \ldots, k-1 \right\}$ and $(n-1)\, \hat{\Delta}_{k,k} = (n-1)\, \mathrm{BT}_{p_{1,1}}\left( R_{k,k+1}\, S\, R_{k,k+1} \right)$, where $R_{k,j+1} = R_{k,j+1}^* \otimes I_{m_1}$ is given by (4) with (5), and (6) with (7), are independent. In the pooled case, accordingly,
$$\left( n_1 + n_2 - 2 \right) p_{j+2,k}\left( m_{j+1} - 1 \right) \hat{\Delta}^{\mathrm{pl}}_{k,j} \sim W_{m_1}\left( \left( n_1 + n_2 - 2 \right) p_{j+2,k}\left( m_{j+1} - 1 \right);\ \Delta_{k,j} \right) \ \text{for } j = 1, \ldots, k-1, \quad \text{and} \quad \left( n_1 + n_2 - 2 \right) \hat{\Delta}^{\mathrm{pl}}_{k,k} \sim W_{m_1}\left( n_1 + n_2 - 2;\ \Delta_{k,k} \right), \quad \text{where } p_{k+1,k} = 1.$$
Since the estimators $\left\{ \hat{\Delta}^{\mathrm{pl}}_{k,j} : j = 1, \ldots, k \right\}$ are functions of $S^{\mathrm{pl}}$, they are independent of $\bar{v} - \bar{w}$. Therefore, using a procedure similar to the one sample case, where we used the transformation $z = L_k' \left( \bar{x} - \mu_0 \right) = \left( H_{m_k} \otimes \cdots \otimes H_{m_2} \otimes I_{m_1} \right)' \left( \bar{x} - \mu_0 \right)$, we now use the following transformation:
$$d = \left( d'_{1,\ldots,1}, \ldots, d'_{m_k,\ldots,m_2} \right)' = L_k' \left( \bar{v} - \bar{w} \right) = \left( H_{m_k} \otimes \cdots \otimes H_{m_2} \otimes I_{m_1} \right)' \left( \bar{v} - \bar{w} \right).$$
According to the previous result, under $H_0$, $d = L_k' \left( \bar{v} - \bar{w} \right) \sim N_{p_{1,k}}\left( 0, \Omega_k \right)$, where
$$\Omega_k = \frac{1}{n_{\mathrm{pl}}}\, D_k = L_k' \left( \frac{1}{n_{\mathrm{pl}}}\, \Gamma_k \right) L_k = \frac{1}{n_{\mathrm{pl}}} \left( H_{m_k,m_2}' \otimes I_{m_1} \right) \Gamma_k \left( H_{m_k,m_2} \otimes I_{m_1} \right) = \frac{1}{n_{\mathrm{pl}}}\, \mathrm{diag}\left( D_{f_k, f_{k-1}, \ldots, f_2};\ \left( f_k, f_{k-1}, \ldots, f_2 \right) \in F_k \times F_{k-1} \times \cdots \times F_2 \right),$$
where, for each $j = 1, \ldots, k$, the diagonal $m_1 \times m_1$ blocks $D_f = D_{f_k, f_{k-1}, \ldots, f_2}$ are given by:
$$D_{f_k, f_{k-1}, \ldots, f_2} = \Delta_{k,j} \quad \text{if } f_2 = 1, \ldots, f_j = 1,\ f_{j+1} \ne 1.$$
Using a similar result to the one used in the one sample case, we obtain the statistic $D^2_{\mathrm{pl}}$ as follows:
$$D^2_{\mathrm{pl}} = \frac{n_1 n_2}{n_1 + n_2} \left( \bar{v} - \bar{w} \right)' \left( \hat{\Gamma}^{\mathrm{pl}} \right)^{-1} \left( \bar{v} - \bar{w} \right) = n_{\mathrm{pl}}\, d' \left( \hat{D}^{\mathrm{pl}}_k \right)^{-1} d = \sum_{j=1}^{k} \mathrm{tr}\left[ n_{\mathrm{pl}} \underbrace{\sum_{f_k=1}^{m_k} \cdots \sum_{f_{j+2}=1}^{m_{j+2}} \sum_{f_{j+1}=2}^{m_{j+1}}}_{k-j \text{ sums}} d_{f_k, \ldots, f_{j+1}, \underbrace{1, \ldots, 1}_{j-1 \text{ ones}}}\ d'_{f_k, \ldots, f_{j+1}, 1, \ldots, 1} \left( \hat{\Delta}^{\mathrm{pl}}_{k,j} \right)^{-1} \right] =: \sum_{j=1}^{k} T_{0j}^2,$$
where, for $j = k$, the term reduces to $n_{\mathrm{pl}}\, \mathrm{tr}\left[ d_{1,\ldots,1}\, d'_{1,\ldots,1} \left( \hat{\Delta}^{\mathrm{pl}}_{k,k} \right)^{-1} \right]$. Then, the distribution of $D^2_{\mathrm{pl}} = \sum_{j=1}^{k} T_{0j}^2$ is the convolution of $k$ independent LH-trace distributions:
$$\ast_{j=1}^{k}\ T_0^2\left( m_1;\ p_{j+2,k}\left( m_{j+1} - 1 \right),\ d_j = \left( n_1 + n_2 - 2 \right) p_{j+2,k}\left( m_{j+1} - 1 \right) \ge m_1 \right).$$
The only condition needed on the sample sizes in order to have the above convolution of $k$ independent LH-trace distributions is $n_1 + n_2 \ge m_1 + 2$. However, LH-trace distributions are usually approximated by $F$ distributions (we use here the second approximation suggested in McKeon [27]). This means that the distribution of $D^2_{\mathrm{pl}}$ can be approximated by the convolution of $k$ $F$ distributions, where its critical values are obtained by the method suggested by Dyer [28].
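A short sketch of the two-sample quantities (the names are ours); `gamma_pl_hat` is assumed to be the pooled $k$-SSCS estimate $\hat{\Gamma}^{\mathrm{pl}}_k$ assembled from the $\hat{U}^{\mathrm{pl}}_{k,j}$ as above:

```python
import numpy as np

def pooled_cov(V, W):
    """S_pl: convex combination of the two sample covariance matrices."""
    n1, n2 = V.shape[0], W.shape[0]
    S1 = np.cov(V, rowvar=False)
    S2 = np.cov(W, rowvar=False)
    return ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)

def pooled_d2(V, W, gamma_pl_hat):
    """D^2_pl = (n1 n2 / (n1+n2)) (vbar - wbar)' Gamma_pl_hat^{-1} (vbar - wbar)."""
    n1, n2 = V.shape[0], W.shape[0]
    diff = V.mean(axis=0) - W.mean(axis=0)
    return (n1 * n2 / (n1 + n2)) * diff @ np.linalg.solve(gamma_pl_hat, diff)
```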

7. An Example

We apply our proposed extended $D^2$ test statistic to a third-order ($k = 3$) medical dataset as described in the Introduction, where the interest is in testing the equality of the mean of a population of glaucoma patients to a target mean of another population of glaucoma patients [29]. Several studies have shown that central corneal thickness (CCT) plays a major role in the diagnosis of glaucoma. Intraocular pressure (IOP) is positively correlated with CCT and may therefore affect diagnosis. Therefore, CCT should be measured along with IOP in all patients for verification of glaucoma. CCT and IOP vary from individual to individual, from right eye to left eye, and from time to time. We have a sample of 30 glaucoma patients. Measurements on IOP and CCT were taken from both eyes (sites) and were observed over three time points at an interval of three months. Clearly, then, this is a third-order dataset with $m_1 = 2$, $m_2 = 2$, and $m_3 = 3$. This dataset was studied by Leiva and Roy [20] under the assumption of a 3-SSCS covariance structure. Here, we also assume that this dataset has a 3-SSCS covariance structure. The $(2 \times 1)$-dimensional partitioned sample mean vector of our sample of 30 glaucoma patients is presented in Table 1.
Additionally, using Formulas (27)–(29) presented in Section 5.2, the unbiased estimates $\widehat{U}_{3,1}$, $\widehat{U}_{3,2}$, and $\widehat{U}_{3,3}$ are
$$\widehat{U}_{3,1} = \begin{pmatrix} 12.230 & 12.061 \\ 12.061 & 426.155 \end{pmatrix}, \qquad \widehat{U}_{3,2} = \begin{pmatrix} 5.826 & 6.939 \\ 6.939 & 164.156 \end{pmatrix}, \qquad \widehat{U}_{3,3} = \begin{pmatrix} 3.528 & 9.268 \\ 9.268 & 288.684 \end{pmatrix},$$
respectively. Using the above estimates, the unbiased estimate of $\Gamma_3$ is the $(12\times 12)$ matrix assembled from these three blocks (displayed as an image in the published article).
The $(2\times 2)$ diagonal blocks $\widehat{U}_{3,1}$ represent the estimate of the variance-covariance matrix of the two response variables IOP and CCT at any given eye and any given time point, whereas the $(2\times 2)$ off-diagonal blocks $\widehat{U}_{3,2}$ represent the estimate of the covariance matrix of IOP and CCT between the two eyes at any given time point. The $(2\times 2)$ off-diagonal blocks $\widehat{U}_{3,3}$ represent the estimate of the covariance matrix of IOP and CCT between any two time points.
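This block layout can be reproduced numerically with Kronecker products. A minimal numpy sketch, assuming the standard doubly exchangeable (3-SSCS) composition $\widehat{\Gamma}_3 = I_3\otimes I_2\otimes \widehat{U}_{3,1} + I_3\otimes(J_2-I_2)\otimes \widehat{U}_{3,2} + (J_3-I_3)\otimes J_2\otimes \widehat{U}_{3,3}$, which matches the block description above:
```python
import numpy as np

U1 = np.array([[12.230, 12.061], [12.061, 426.155]])
U2 = np.array([[ 5.826,  6.939], [ 6.939, 164.156]])
U3 = np.array([[ 3.528,  9.268], [ 9.268, 288.684]])

I2, J2 = np.eye(2), np.ones((2, 2))
I3, J3 = np.eye(3), np.ones((3, 3))

# 3-SSCS: U1 on the diagonal blocks, U2 between eyes at the same time point,
# U3 between any two distinct time points
Gamma3 = (np.kron(I3, np.kron(I2, U1))
          + np.kron(I3, np.kron(J2 - I2, U2))
          + np.kron(J3 - I3, np.kron(J2, U3)))   # the 12 x 12 estimate
```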
Iester et al. [29] reported the mean and standard deviation (SD) of the IOP and CCT measurements for both eyes from 794 Italian Caucasian glaucoma patients (see Table 2). We take these as the means of IOP and CCT at the first time point and then randomly generate four samples within three standard errors (SD of the mean) of these reported means to represent the means of IOP and CCT for the left and right eyes in the third and sixth months, respectively. These randomly generated means of IOP and CCT for the left and right eyes at the three time points, written in vector form, are reported in Table 3, and we take this vector as the targeted mean $\mu_0$ in (21). The sample mean vector in Table 1 appears to be very different from the targeted population mean vector $\mu_0$ in Table 3.
The aim of our study is to see whether our sample of 30 glaucoma patients has the same mean vector as the Italian Caucasian glaucoma patients. Our main intention in analyzing this glaucoma dataset is to illustrate the use of our new hypothesis testing procedures rather than to give any insight into the dataset itself.
The calculated $D^2$ statistic (26), whose null distribution is the convolution of three independent L–H trace distributions, $T_0^2(2;3,87)$, $T_0^2(2;2,58)$, and $T_0^2(2;1,29)$ (the first two approximated by F distributions and the last an exact F distribution), is 317.2971, and the corresponding p-value is 0. So, we reject the null hypothesis that the population mean of our dataset is equal to that of the Italian Caucasian glaucoma patients; this conclusion was expected from the data.
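As a rough numerical cross-check of this result, the simulation sketch from the previous section (with the hypothetical helper `sim_T02` defined there) reproduces the effectively zero p-value:
```python
# Glaucoma example: convolution of T_0^2(2;3,87), T_0^2(2;2,58), T_0^2(2;1,29)
null_draws = sim_T02(2, 3, 87) + sim_T02(2, 2, 58) + sim_T02(2, 1, 29)
p_value = np.mean(null_draws >= 317.2971)   # ~0: far beyond any simulated null draw
```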

8. Conclusions and Discussion

We study tests of hypotheses of equality of means for one population as well as for two populations for high-dimensional and higher-order data with a k-SSCS covariance structure. Such a structure is a natural and credible assumption in many research studies. The MLEs and the unbiased estimates of the matrix parameters of the k-SSCS covariance structure have closed-form solutions. On the other hand, the MLEs and the unbiased estimates of the matrix parameters of the k-separable covariance structure are not tractable and are computationally intensive. So, the k-SSCS covariance structure is a desirable covariance structure for k-th order data. Aghaian et al. [30] examined differences in the CCT of 801 subjects, establishing that the CCT of Japanese participants was significantly lower than that of Caucasians, Chinese, Filipinos, and Hispanics, and greater than that of African Americans. African American individuals have thinner corneas than white individuals [31]. So, CCT and IOP in glaucoma patients vary with race, and our result confirms this fact. Our proposed new hypothesis testing procedures are well suited to high-dimensional array-variate data, which are ubiquitous in this century. In discriminant analysis [32], the first step is to test the equality of means for the two populations. Therefore, the new method developed in this article will have important applications in the analysis of modern multivariate datasets with higher-order structure. Our new method can be extended to non-normal datasets. In addition, it can be extended to testing the equality of means for more than two populations and to simultaneous hypothesis testing in models with a k-SSCS covariance structure.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors gratefully acknowledge the funding support from the Department of Management Science and Statistics, Carlos Alvarez College of Business, The University of Texas at San Antonio, San Antonio, Texas for paying the APC. The authors are also thankful to the editor and three anonymous referees for their careful reading, valuable comments, and suggestions that led to a much-improved version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Žežula, I.; Klein, D.; Roy, A. Testing of multivariate repeated measures data with block exchangeable covariance structure. Test 2018, 27, 360–378.
2. Dutilleul, P. The mle algorithm for the matrix normal distribution. J. Stat. Comput. Simul. 1999, 64, 105–123.
3. Gupta, A.K. On a multivariate statistical classification model. In Multivariate Statistical Analysis; Gupta, R.P., Ed.; North-Holland: Amsterdam, The Netherlands, 1980; pp. 83–93.
4. Lu, N.; Zimmerman, D. The likelihood ratio test for a separable covariance matrix. Stat. Probab. Lett. 2005, 73, 449–457.
5. Kollo, T.; von Rosen, D. Advanced Multivariate Statistics with Matrices; Springer: Dordrecht, The Netherlands, 2005.
6. Leiva, R. Linear discrimination with equicorrelated training vectors. J. Multivar. Anal. 2007, 98, 384–409.
7. Manceur, A.M.; Dutilleul, P. Maximum likelihood estimation for the tensor normal distribution: Algorithm, minimum sample size, and empirical bias and dispersion. J. Comput. Appl. Math. 2013, 239, 37–49.
8. Leiva, R.; Roy, A. Classification of higher-order data with separable covariance and structured multiplicative or additive mean models. Commun. Stat. Theory Methods 2014, 43, 989–1012.
9. Ohlson, M.; Ahmad, M.R.; von Rosen, D. The multilinear normal distribution: Introduction and some basic properties. J. Multivar. Anal. 2013, 113, 37–47.
10. Leiva, R.; Roy, A. Self Similar Compound Symmetry Covariance Structure. J. Stat. Theory Pract. 2021, 15, 70.
11. Rao, C.R. Familial correlations or the multivariate generalizations of the intraclass correlation. Curr. Sci. 1945, 14, 66–67.
12. Rao, C.R. Discriminant functions for genetic differentiation and selection. Sankhya 1953, 12, 229–246.
13. Olkin, I.; Press, S.J. Testing and estimation for a circular stationary model. Ann. Math. Stat. 1969, 40, 1358–1373.
14. Liang, Y.; von Rosen, D.; von Rosen, T. On estimation in hierarchical models with block circular covariance structures. Ann. Inst. Stat. Math. 2015, 67, 773–791.
15. Olkin, I. Inference for a Normal Population when the Parameters Exhibit Some Structure. In Reliability and Biometry; SIAM: Philadelphia, PA, USA, 1974; pp. 759–773.
16. Wilks, S.S. Sample criteria for testing equality of means, equality of variances, and equality of covariances in a normal multivariate distribution. Ann. Math. Stat. 1946, 17, 257–281.
17. Arnold, S.F. Application of the theory of products of problems to certain patterned covariance matrices. Ann. Stat. 1973, 1, 682–699.
18. Arnold, S.F. Linear models with exchangeably distributed errors. J. Am. Stat. Assoc. 1979, 74, 194–199.
19. Roy, A.; Leiva, R.; Žežula, I.; Klein, D. Testing of equality of mean vectors for paired doubly multivariate observations in blocked compound symmetric covariance matrix setup. J. Multivar. Anal. 2015, 137, 50–60.
20. Leiva, R.; Roy, A. Linear discrimination for three-level multivariate data with separable additive mean vector and doubly exchangeable covariance structure. Comput. Stat. Data Anal. 2012, 56, 1644–1661.
21. Roy, A.; Fonseca, M. Linear models with doubly exchangeable distributed errors. Commun. Stat. Theory Methods 2012, 41, 2545–2569.
22. Žežula, I.; Klein, D.; Roy, A. Mean Value Test for Three-Level Multivariate Observations with Doubly Exchangeable Covariance Structure. In Recent Developments in Multivariate and Random Matrix Analysis; Holgersson, T., Singull, M., Eds.; Springer Nature: Cham, Switzerland, 2020; pp. 335–349.
23. Jordan, P.; von Neumann, J.; Wigner, E.P. On an algebraic generalization of the quantum mechanical formalism. Ann. Math. 1934, 35, 29–64.
24. Malley, J.D. Statistical Applications of Jordan Algebras; Lecture Notes in Statistics; Springer: New York, NY, USA, 1994.
25. Roy, A.; Zmyślony, R.; Fonseca, M.; Leiva, R. Optimal estimation for doubly multivariate data in blocked compound symmetric covariance structure. J. Multivar. Anal. 2016, 144, 81–90.
26. Kozioł, A.; Roy, A.; Zmyślony, R.; Leiva, R.; Fonseca, M. Best unbiased estimates for parameters of three-level multivariate data with doubly exchangeable covariance structure. Linear Algebra Appl. 2017, 535, 87–104.
27. McKeon, J.J. F approximations to the distribution of Hotelling's $T_0^2$. Biometrika 1974, 61, 381–383.
28. Dyer, D. The convolution of generalized F distributions. J. Am. Stat. Assoc. 1982, 77, 184–189.
29. Iester, M.; Telani, S.; Frezzotti, P.; Manni, G.; Uva, M.; Figus, M.; Perdicchi, A. Differences in central corneal thickness between the paired eyes and the severity of the glaucomatous damage. Eye 2012, 26, 1424–1430.
30. Aghaian, E.; Choe, J.E.; Lin, S.; Stamper, R.L. Central corneal thickness of Caucasians, Chinese, Hispanics, Filipinos, African Americans, and Japanese in a glaucoma clinic. Ophthalmology 2004, 111, 2211–2219.
31. Brandt, J.D.; Beiser, J.A.; Kass, M.A.; Gordon, M.O.; Ocular Hypertension Treatment Study (OHTS) Group. Central corneal thickness in the Ocular Hypertension Treatment Study (OHTS). Ophthalmology 2001, 108, 1779–1788.
32. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 6th ed.; Pearson Prentice Hall: Hoboken, NJ, USA, 2007.
Table 1. The (2 × 1)-dimensional sample partitioned mean vector in our sample of 30 glaucoma patients.

t  s  (IOP, CCT)
1  1  (24.333, 527.367)
1  2  (23.567, 534.633)
2  1  (20.233, 525.333)
2  2  (19.567, 532.500)
3  1  (19.233, 527.133)
3  2  (18.933, 534.867)
Table 2. IOP and CCT measurements from 794 Italian Caucasian glaucoma patients.

           Mean     SD
Right IOP   16.16    3.55
Right CCT  545.68   35.82
Left IOP    16.28    3.31
Left CCT   546.89   36.09
Table 3. The (2 × 1)-dimensional targeted partitioned mean vector $\mu_0$ for the Italian Caucasian glaucoma patients.

t  s  (IOP, CCT)
1  1  (16.16, 545.68)
1  2  (16.28, 546.89)
2  1  (15.97, 546.18)
2  2  (16.25, 550.30)
3  1  (16.20, 546.90)
3  2  (16.07, 549.64)