Article

Partially Coupled Stochastic Gradient Estimation for Multivariate Equation-Error Systems

1 Jiangsu Key Laboratory of Media Design and Software Technology, School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
2 School of Automation, Wuxi University, Wuxi 214105, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2955; https://doi.org/10.3390/math10162955
Submission received: 5 July 2022 / Revised: 4 August 2022 / Accepted: 12 August 2022 / Published: 16 August 2022
(This article belongs to the Section Engineering Mathematics)

Abstract

This paper studies the problem of identifying the unknown parameters of multivariate equation-error autoregressive systems. First, the original identification model is decomposed into several sub-identification models according to the number of system outputs. Then, exploiting the fact that the sub-identification models share a common information vector and a common parameter vector, the coupling identification concept is used to propose a partially coupled generalized stochastic gradient algorithm. Furthermore, by expanding the scalar innovation of each subsystem model into an innovation vector, a partially coupled multi-innovation generalized stochastic gradient algorithm is proposed. Finally, numerical simulations indicate that the proposed algorithms are effective and achieve good parameter estimation performance.

1. Introduction

Parameter estimation is an important part of the field of system identification: when the model structure of a system is known, the unknown parameters in the model are identified from the input and output data of the system [1,2,3]. Parameter estimation has been used in many fields in recent years, including chemistry, mechanics and engineering [4,5,6]. For example, in chemical engineering, Khalik et al. applied a parameter estimation approach using current/voltage data to obtain physically meaningful parameters of the Doyle–Fuller–Newman model for lithium-ion batteries [7]. In mechanics, Padmanabhan et al. estimated terramechanics parameters by applying dynamic Bayesian estimation techniques to measurements from simple single-wheel tests [8]. In engineering, Calasan et al. proposed two algorithms for transformer parameter estimation to improve the estimation process and prevent inaccuracies and mismatch with the real parameters of the transformer [9].
Among the many objects of parameter estimation, multivariate systems form a very common class [10,11,12]; indeed, the majority of industrial processes are multivariate [13,14]. Estimating the unknown parameters of multivariate systems is not easy, because such systems are quite complex: they may involve many variables, coupling between variables, or time delays [15,16,17]. In recent years, the identification of multivariate systems has attracted the attention of many scholars. Shafin et al. studied angle and delay estimation for 3D massive MIMO systems under a parametric channel model, which is crucial for such systems to realize the predicted capacity gains [18]. Kawaria et al. designed a Levy shuffled frog leaping algorithm with high parameter estimation efficiency to estimate the parameters of multiple-input multiple-output bilinear systems [19]. Roy et al. developed an online plant-parameter identification method for multi-input multi-output linear time-invariant systems [20].
The coupling identification concept is a useful identification method developed in recent decades. It is suitable for parameter estimation problems in multivariate systems whose subsystems share common parameters [21,22]. Compared with traditional identification algorithms for multivariate systems, algorithms based on the coupling concept have the advantage of a lower computational load. Cui et al. combined the Kalman filtering principle and the coupling identification concept to derive a Kalman filtering-based partially coupled recursive least squares algorithm for jointly estimating the parameters and the states of a multivariable state-space system; their algorithm had high computational efficiency [23]. The multi-innovation identification theory is another effective method that has been used to improve parameter estimation accuracy in recent years [24,25,26]. It utilizes not only the data of the current time instant but also the data of previous time instants [27,28,29]. Chaudhary et al. presented a multi-innovation fractional least mean square adaptive algorithm for input-nonlinear systems by expanding the scalar innovation into a vector innovation using the multi-innovation identification theory [30]. This method can also be applied to the parameter estimation of multivariate systems.
We have studied the parameter identification problems of multivariate systems in the past. In [31], the colored noise of the original system was filtered into white noise by using the data filtering technique. In [32], the original identification system was decomposed by the decomposition method into two sub-identification systems, one containing the system model parameter vector and the other containing the noise model parameter matrix. The coupling identification concept used in this paper differs from these two methods: it decomposes the original system according to the number of system outputs, yielding several subsystems whose parameter vectors and information vectors are partially coupled. This has a significant advantage in reducing the computational cost of the algorithm, because it simplifies the original complex system. Therefore, this paper presents new identification methods for estimating the parameters of the multivariate equation-error autoregressive system, with high computational efficiency and high parameter estimation accuracy. The main contributions of this paper are as follows.
  • This paper decomposes the multivariate equation-error autoregressive system into several sub-identification models according to the number of the system outputs.
  • A multivariate partially coupled generalized stochastic gradient (M-PC-GSG) algorithm is proposed for the multivariate equation-error system by utilizing the coupling identification concept, which can reduce the computation amounts compared with the traditional stochastic gradient algorithm.
  • A multivariate partially coupled multi-innovation generalized stochastic gradient (M-PC-MI-GSG) algorithm is proposed by using the multi-innovation identification theory, which has higher parameter estimation accuracy than the M-PC-GSG algorithm.
The rest of this paper is organized as follows. Section 2 presents a multivariate equation-error autoregressive system and describes its identification difficulties. Section 3 proposes a partially coupled generalized stochastic gradient algorithm and gives its schematic diagram. Section 4 proposes a partially coupled multi-innovation generalized stochastic gradient algorithm. Section 5 presents two numerical examples to indicate that the proposed algorithms are effective. Finally, we offer some concluding remarks in Section 6.

2. System Description and Identification Model

First of all, we introduce the notation used in this paper. $I_m$ denotes an identity matrix of size $m \times m$; $\mathbf{1}_n$ stands for an $n$-dimensional column vector whose entries are all 1, that is, $\mathbf{1}_n = [1, 1, \ldots, 1]^{\mathsf T} \in \mathbb{R}^n$; $\mathbf{1}_{m \times n}$ represents an $m \times n$ matrix whose entries are all 1. The norm of a matrix $X$ is defined by $\|X\|^2 := \mathrm{tr}[XX^{\mathsf T}]$; the superscript $\mathsf T$ denotes the matrix/vector transpose. The symbol $\otimes$ denotes the Kronecker product: for $A := [a_{ij}] \in \mathbb{R}^{m \times n}$ and $B := [b_{ij}] \in \mathbb{R}^{p \times q}$, $A \otimes B = [a_{ij}B] \in \mathbb{R}^{(mp) \times (nq)}$; in general, $A \otimes B \neq B \otimes A$. Finally, $\mathrm{col}[X]$ denotes the vector formed by stacking the columns of $X$ in order: for $X := [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$ with $x_i \in \mathbb{R}^m$ ($i = 1, 2, \ldots, n$), $\mathrm{col}[X] := [x_1^{\mathsf T}, x_2^{\mathsf T}, \ldots, x_n^{\mathsf T}]^{\mathsf T} \in \mathbb{R}^{mn}$.
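The norm, Kronecker product and col operators above can be checked numerically; the following NumPy sketch (our illustration, with arbitrary example matrices) mirrors these definitions.

```python
import numpy as np

# Small matrices to exercise the notation; the values are arbitrary.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)

# ||A||^2 := tr[A A^T]  (the squared Frobenius norm)
norm_sq = np.trace(A @ A.T)

# col[A]: stack the columns of A into one long vector
col_A = A.flatten(order="F")  # column-major order stacks columns

# Kronecker product A (x) B = [a_ij * B]; in general A (x) B != B (x) A
kron_AB = np.kron(A, B)
kron_BA = np.kron(B, A)
```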
Consider the following multivariate equation-error autoregressive system,
$y(t) = \Phi_s(t)\theta + C^{-1}(z)v(t), \quad (1)$
where $y(t) := [y_1(t), y_2(t), \ldots, y_m(t)]^{\mathsf T} \in \mathbb{R}^m$ is the system output vector, $\Phi_s(t) \in \mathbb{R}^{m \times n}$ is the system information matrix consisting of the input-output data, $\theta \in \mathbb{R}^n$ is the system parameter vector to be identified, $v(t) := [v_1(t), v_2(t), \ldots, v_m(t)]^{\mathsf T} \in \mathbb{R}^m$ is a white noise vector with zero mean, and $C(z) \in \mathbb{R}^{m \times m}$ is a polynomial matrix in the unit backward shift operator $z^{-1}$ [$z^{-1}y(t) = y(t-1)$]:
$C(z) := I_m + C_1 z^{-1} + C_2 z^{-2} + \cdots + C_{n_c} z^{-n_c}, \quad C_i \in \mathbb{R}^{m \times m}.$
Define the noise model
$w(t) := C^{-1}(z)v(t) \in \mathbb{R}^m. \quad (2)$
Assume that the orders $m$, $n$ and $n_c$ are known, and that $y(t) = 0$, $\Phi_s(t) = 0$, $w(t) = 0$ and $v(t) = 0$ for $t \leqslant 0$.
Define the parameter matrix $\gamma$ and the information vector $\phi(t)$ as
$\gamma^{\mathsf T} := [C_1, C_2, \ldots, C_{n_c}] \in \mathbb{R}^{m \times (m n_c)}, \quad \phi(t) := [-w^{\mathsf T}(t-1), -w^{\mathsf T}(t-2), \ldots, -w^{\mathsf T}(t-n_c)]^{\mathsf T} \in \mathbb{R}^{m n_c}. \quad (3)$
From Equation (2), we have
$w(t) = [I_m - C(z)]w(t) + v(t) = (-C_1 z^{-1} - C_2 z^{-2} - \cdots - C_{n_c} z^{-n_c})w(t) + v(t) = -C_1 w(t-1) - C_2 w(t-2) - \cdots - C_{n_c} w(t-n_c) + v(t) = \gamma^{\mathsf T}\phi(t) + v(t).$
Then, the multivariate equation-error autoregressive system in (1) can be transformed into the following identification model,
$y(t) = \Phi_s(t)\theta + w(t) \quad (4)$
$\phantom{y(t)} = \Phi_s(t)\theta + \gamma^{\mathsf T}\phi(t) + v(t). \quad (5)$
For the identification model in (5), the objective of this paper is to identify the unknown parameters $\theta$ and $\gamma$ by developing suitable identification methods. The observable data are $y(t)$ and $\Phi_s(t)$. Certainly, the most direct identification method is to combine the parameter vector $\theta$ and the parameter matrix $\gamma$ into a new parameter vector $\vartheta$, and the information matrix $\Phi_s(t)$ and the information vector $\phi(t)$ into a new information matrix $\Phi(t)$; then we obtain the following identification model:
$y(t) = \Phi(t)\vartheta + v(t), \quad (6)$
$\Phi(t) := [\Phi_s(t), \; I_m \otimes \phi^{\mathsf T}(t)] \in \mathbb{R}^{m \times n_0}, \quad n_0 := n + m^2 n_c, \quad (7)$
$\vartheta := \begin{bmatrix} \theta \\ \mathrm{col}[\gamma] \end{bmatrix} \in \mathbb{R}^{n_0}. \quad (8)$
However, the information matrix $\Phi(t)$ in model (6) contains a large number of zero entries because it is built from the Kronecker product, which leads to redundant computations in the identification process. Therefore, it is necessary to find an alternative method with a lower computational load to estimate the identification model in (5). In this paper, two efficient algorithms with good performance are proposed to solve this problem by applying the coupling identification concept and the multi-innovation identification theory.
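To see the redundancy concretely: the block $I_m \otimes \phi^{\mathsf T}(t)$ inside the stacked information matrix is block-diagonal, so only a fraction $1/m$ of its entries can be nonzero. A small NumPy illustration with hypothetical orders $m = 3$, $n_c = 2$ (our own example values):

```python
import numpy as np

m, nc = 3, 2                                  # hypothetical orders
phi = np.arange(1.0, m * nc + 1.0)            # stand-in for phi(t) in R^{m*nc}

# I_m (x) phi^T(t): an m x (m^2 * nc) matrix with one row block per output,
# all off-diagonal blocks being zero
block = np.kron(np.eye(m), phi.reshape(1, -1))

nonzero_fraction = np.count_nonzero(block) / block.size  # equals 1/m
```

The decomposition in Section 3 avoids carrying these structural zeros through the recursions.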
Throughout, $\hat{X}(t)$ denotes the estimate of $X$ at time $t$; for example, $\hat{c}_i(t)$, $\hat{\gamma}(t)$ and $\hat{w}(t)$ are the estimates of $c_i$, $\gamma$ and $w(t)$ at time $t$, respectively.

3. The Partially Coupled Stochastic Gradient Algorithm

First of all, according to the number of system outputs, we decompose the identification model in (5) into $m$ sub-identification models:
$\begin{bmatrix} y_1(t) \\ y_2(t) \\ \vdots \\ y_m(t) \end{bmatrix} = \begin{bmatrix} \psi_1^{\mathsf T}(t) \\ \psi_2^{\mathsf T}(t) \\ \vdots \\ \psi_m^{\mathsf T}(t) \end{bmatrix}\theta + \begin{bmatrix} \gamma_1^{\mathsf T} \\ \gamma_2^{\mathsf T} \\ \vdots \\ \gamma_m^{\mathsf T} \end{bmatrix}\phi(t) + \begin{bmatrix} v_1(t) \\ v_2(t) \\ \vdots \\ v_m(t) \end{bmatrix}, \quad (9)$
where $\gamma_i^{\mathsf T} \in \mathbb{R}^{1 \times m n_c}$ is the $i$th row of the parameter matrix $\gamma^{\mathsf T}$, and $\psi_i^{\mathsf T}(t) \in \mathbb{R}^{1 \times n}$ is the $i$th row of the information matrix $\Phi_s(t)$:
$\gamma^{\mathsf T} := [\gamma_1, \gamma_2, \ldots, \gamma_m]^{\mathsf T} \in \mathbb{R}^{m \times (m n_c)}, \quad \Phi_s(t) := [\psi_1(t), \psi_2(t), \ldots, \psi_m(t)]^{\mathsf T} \in \mathbb{R}^{m \times n}.$
For the $m$ sub-identification models in (9), it can be seen that the parameter vector $\theta$ and the information vector $\phi(t)$ are common to all $m$ subsystems. Thus, Equation (5) is an identification model in which the parameter vectors and the information vectors are partially coupled. Equation (9) can be written as
$y_i(t) = \psi_i^{\mathsf T}(t)\theta + \gamma_i^{\mathsf T}\phi(t) + v_i(t) = \psi_i^{\mathsf T}(t)\theta + \phi^{\mathsf T}(t)\gamma_i + v_i(t), \quad i = 1, 2, \ldots, m. \quad (10)$
For the $m$ sub-identification models in (10), define the gradient criterion function
$J_1(\theta, \gamma_i) := \big[y_i(t) - \psi_i^{\mathsf T}(t)\theta - \phi^{\mathsf T}(t)\gamma_i\big]^2, \quad i = 1, 2, \ldots, m.$
Let $1/r_{\theta,i}(t)$ and $1/r_{\gamma,i}(t)$ ($i = 1, 2, \ldots, m$) denote the step sizes. Using the negative gradient search [33] to minimize $J_1(\theta, \gamma_i)$, we obtain the gradient relations:
$\hat{\theta}(t) = \hat{\theta}(t-1) + \frac{\psi_i(t)}{r_{\theta,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}(t-1) - \phi^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad (11)$
$r_{\theta,i}(t) = r_{\theta,i}(t-1) + \|\psi_i(t)\|^2, \quad r_{\theta,i}(0) = 1, \quad (12)$
$\hat{\gamma}_i(t) = \hat{\gamma}_i(t-1) + \frac{\phi(t)}{r_{\gamma,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}(t-1) - \phi^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad (13)$
$r_{\gamma,i}(t) = r_{\gamma,i}(t-1) + \|\phi(t)\|^2, \quad r_{\gamma,i}(0) = 1. \quad (14)$
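For completeness, the update (11) is a single negative-gradient step on $J_1$ (a standard derivation, with the conventional factor 2 absorbed into the step size):

$$\frac{\partial J_1(\theta, \gamma_i)}{\partial \theta}\bigg|_{\theta = \hat{\theta}(t-1)} = -2\,\psi_i(t)\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}(t-1) - \phi^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big],$$

so setting $\hat{\theta}(t) = \hat{\theta}(t-1) - \frac{1}{2 r_{\theta,i}(t)}\frac{\partial J_1}{\partial \theta}$ yields (11); the update (13) for $\hat{\gamma}_i(t)$ follows in the same way, with $\phi(t)$ in place of $\psi_i(t)$ and $1/r_{\gamma,i}(t)$ as the step size.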
The problem with the relations (11)–(14) is that the estimates $\hat{\theta}(t)$ and $\hat{\gamma}_i(t)$ cannot be computed, because $\phi(t)$ contains the unmeasurable noise terms $w(t-i)$. The way to solve this problem is to replace the unmeasurable variables $w(t-i)$ with their corresponding estimates $\hat{w}(t-i)$. Thus, the estimate of $\phi(t)$ can be computed as
$\hat{\phi}(t) := [-\hat{w}^{\mathsf T}(t-1), -\hat{w}^{\mathsf T}(t-2), \ldots, -\hat{w}^{\mathsf T}(t-n_c)]^{\mathsf T} \in \mathbb{R}^{m n_c}. \quad (15)$
It can be seen that the parameter vector $\theta$ is estimated $m$ times in (11) and (12), which leads to many redundant estimates. Using the subsystem estimate $\hat{\theta}_i$ in place of $\hat{\theta}$ in (11) can reduce this redundancy. At the same time, replacing the unknown information vector $\phi(t)$ with its estimate $\hat{\phi}(t)$, we obtain the new gradient relations:
$\hat{\theta}_i(t) = \hat{\theta}_i(t-1) + \frac{\psi_i(t)}{r_{\theta,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_i(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad (16)$
$r_{\theta,i}(t) = r_{\theta,i}(t-1) + \|\psi_i(t)\|^2, \quad r_{\theta,i}(0) = 1, \quad (17)$
$\hat{\gamma}_i(t) = \hat{\gamma}_i(t-1) + \frac{\hat{\phi}(t)}{r_{\gamma,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_i(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad (18)$
$r_{\gamma,i}(t) = r_{\gamma,i}(t-1) + \|\hat{\phi}(t)\|^2, \quad r_{\gamma,i}(0) = 1. \quad (19)$
Additionally, according to Equation (4), we have
$\hat{w}(t) = y(t) - \Phi_s(t)\hat{\theta}_m(t). \quad (20)$
It is generally believed that the parameter estimate $\hat{\theta}_{i-1}(t)$ of the $(i-1)$th subsystem at time $t$ is closer to the true value $\theta$ than the parameter estimate $\hat{\theta}_i(t-1)$ of the $i$th subsystem at time $t-1$. We therefore replace $\hat{\theta}_i(t-1)$ on the right-hand side of Equation (16) with $\hat{\theta}_{i-1}(t)$ and, for $i = 1$, replace $\hat{\theta}_1(t-1)$ with $\hat{\theta}_m(t-1)$. Combining this with Equations (15) and (20), we obtain the multivariate partially coupled generalized stochastic gradient (M-PC-GSG) algorithm:
$\hat{\theta}_1(t) = \hat{\theta}_m(t-1) + \frac{\psi_1(t)}{r_{\theta,1}(t)}\big[y_1(t) - \psi_1^{\mathsf T}(t)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_1(t-1)\big], \quad (21)$
$r_{\theta,1}(t) = r_{\theta,m}(t-1) + \|\psi_1(t)\|^2, \quad (22)$
$\hat{\gamma}_1(t) = \hat{\gamma}_1(t-1) + \frac{\hat{\phi}(t)}{r_{\gamma,1}(t)}\big[y_1(t) - \psi_1^{\mathsf T}(t)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_1(t-1)\big], \quad (23)$
$r_{\gamma,1}(t) = r_{\gamma,m}(t-1) + \|\hat{\phi}(t)\|^2, \quad (24)$
$\hat{\theta}_i(t) = \hat{\theta}_{i-1}(t) + \frac{\psi_i(t)}{r_{\theta,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad i = 2, 3, \ldots, m, \quad (25)$
$r_{\theta,i}(t) = r_{\theta,i-1}(t) + \|\psi_i(t)\|^2, \quad (26)$
$\hat{\gamma}_i(t) = \hat{\gamma}_i(t-1) + \frac{\hat{\phi}(t)}{r_{\gamma,i}(t)}\big[y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1)\big], \quad (27)$
$r_{\gamma,i}(t) = r_{\gamma,i-1}(t) + \|\hat{\phi}(t)\|^2, \quad (28)$
$\Phi_s(t) = [\psi_1(t), \psi_2(t), \ldots, \psi_m(t)]^{\mathsf T}, \quad (29)$
$\hat{\phi}(t) = [-\hat{w}^{\mathsf T}(t-1), -\hat{w}^{\mathsf T}(t-2), \ldots, -\hat{w}^{\mathsf T}(t-n_c)]^{\mathsf T}, \quad (30)$
$\hat{w}(t) = y(t) - \Phi_s(t)\hat{\theta}_m(t), \quad (31)$
$\hat{\gamma}(t) = [\hat{\gamma}_1(t), \hat{\gamma}_2(t), \ldots, \hat{\gamma}_m(t)]. \quad (32)$
The schematic diagram of the M-PC-GSG algorithm in (21)–(32) is shown in Figure 1, and the computation procedures are listed as follows.
  • Let $t = 1$. Set the initial values $\hat{\theta}_m(0) = \mathbf{1}_n/p_0$, $r_{\theta,i}(0) = r_{\gamma,i}(0) = 1$, $\hat{\gamma}_i(0) = \mathbf{1}_{m n_c}/p_0$ ($i = 1, 2, \ldots, m$), $\hat{w}(t-j) = 0$ ($j = 0, 1, \ldots, n_c$) and $p_0 = 10^6$, and set the data length $K$.
  • Collect the observation data Φ s ( t ) and y ( t ) , read ψ i ( t ) from Φ s ( t ) using (29).
  • Construct ϕ ^ ( t ) using (30).
  • Compute the step-size r θ , 1 ( t ) and r γ , 1 ( t ) using (22) and (24).
  • Update the parameter estimates $\hat{\theta}_1(t)$ and $\hat{\gamma}_1(t)$ using (21) and (23).
  • When i = 2 , 3 , , m , compute r θ , i ( t ) and r γ , i ( t ) using (26) and (28), and update the parameter estimates θ ^ i ( t ) and γ ^ i ( t ) using (25) and (27).
  • Compute w ^ ( t ) using (31) and construct γ ^ ( t ) using (32).
  • If $t < K$, increase $t$ by 1 and go to Step 2; otherwise, obtain the parameter estimates $\hat{\theta}(t)$ and $\hat{\gamma}(t)$ and stop.
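The steps above can be sketched in code. The following Python implementation (our sketch: the toy system, its parameter values and the noise level are hypothetical, not the paper's examples) runs the M-PC-GSG recursions (21)–(32) on simulated data with $m = 2$, $n = 3$, $n_c = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, nc = 2, 3, 1                            # hypothetical orders
theta_true = np.array([0.5, -0.3, 0.2])       # illustrative true parameters
C1 = np.array([[0.2, 0.1],                    # noise model C(z) = I2 + C1 z^-1
               [0.0, 0.2]])

K, p0 = 50000, 1e6
theta = np.ones(n) / p0                       # theta_hat_m(0) = 1_n / p0
gam = np.ones((m, m * nc)) / p0               # row i holds gamma_hat_i^T
r_th = r_ga = 1.0                             # the r's chain across i and t
w_hat = np.zeros(m * nc)                      # stacked past w estimates, newest first
w_old = np.zeros(m * nc)                      # stacked past true w, newest first

for t in range(K):
    Phi_s = rng.standard_normal((m, n))       # measurable information matrix
    v = 0.1 * rng.standard_normal(m)
    w = -C1 @ w_old[:m] + v                   # w(t) = -C1 w(t-1) + v(t), nc = 1
    y = Phi_s @ theta_true + w
    phi = -w_hat                              # phi_hat(t) stacks -w_hat(t-j)
    th = theta                                # chain starts at theta_hat_m(t-1)
    for i in range(m):
        psi = Phi_s[i]
        r_th += psi @ psi                     # (22), (26)
        r_ga += phi @ phi                     # (24), (28)
        e = y[i] - psi @ th - phi @ gam[i]    # subsystem innovation
        th = th + psi / r_th * e              # (21), (25)
        gam[i] = gam[i] + phi / r_ga * e      # (23), (27)
    theta = th                                # theta_hat_m(t)
    w_new = y - Phi_s @ theta                 # w_hat(t) by (31)
    w_hat = np.concatenate([w_new, w_hat])[: m * nc]
    w_old = np.concatenate([w, w_old])[: m * nc]

err = np.linalg.norm(theta - theta_true) / np.linalg.norm(theta_true)
```

With these settings the relative error falls well below its initial value of about 1; stochastic gradient convergence is slow, which is exactly what the multi-innovation extension in Section 4 addresses.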

4. The Partially Coupled Multi-Innovation Stochastic Gradient Algorithm

In this section, we apply the multi-innovation identification theory to the M-PC-GSG algorithm to further improve the parameter estimation accuracy. We introduce an innovation length $p \geqslant 1$ to expand the scalar innovation of each subsystem model into an innovation vector. Based on the M-PC-GSG algorithm in (21)–(32), define the subsystem stacked information matrices $\Lambda_1(p,t)$, $\Lambda_i(p,t)$ ($i = 2, 3, \ldots, m$), $\Xi(p,t)$ and the subsystem stacked output vectors $Y_1(p,t)$ and $Y_i(p,t)$ ($i = 2, 3, \ldots, m$) as
$\Lambda_1(p,t) := [\psi_1(t), \psi_1(t-1), \ldots, \psi_1(t-p+1)] \in \mathbb{R}^{n \times p},$
$\Lambda_i(p,t) := [\psi_i(t), \psi_i(t-1), \ldots, \psi_i(t-p+1)] \in \mathbb{R}^{n \times p}, \quad i = 2, 3, \ldots, m,$
$\Xi(p,t) := [\hat{\phi}(t), \hat{\phi}(t-1), \ldots, \hat{\phi}(t-p+1)] \in \mathbb{R}^{(m n_c) \times p},$
$Y_1(p,t) := [y_1(t), y_1(t-1), \ldots, y_1(t-p+1)]^{\mathsf T} \in \mathbb{R}^p,$
$Y_i(p,t) := [y_i(t), y_i(t-1), \ldots, y_i(t-p+1)]^{\mathsf T} \in \mathbb{R}^p, \quad i = 2, 3, \ldots, m.$
Next, define the subsystem innovation scalars $e_1(t)$ and $e_i(t)$ ($i = 2, 3, \ldots, m$) in the M-PC-GSG algorithm (21)–(32) as
$e_1(t) := y_1(t) - \psi_1^{\mathsf T}(t)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_1(t-1),$
$e_i(t) := y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1), \quad i = 2, 3, \ldots, m.$
According to the multi-innovation identification theory, expand the subsystem innovation scalars e 1 ( t ) and e i ( t ) ( i = 2 , 3 , , m ) into the subsystem innovation vectors E 1 ( p , t ) and E i ( p , t ) ( i = 2 , 3 , , m ) :
$E_1(p,t) := \begin{bmatrix} e_1(t) \\ e_1(t-1) \\ \vdots \\ e_1(t-p+1) \end{bmatrix} = \begin{bmatrix} y_1(t) - \psi_1^{\mathsf T}(t)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_1(t-1) \\ y_1(t-1) - \psi_1^{\mathsf T}(t-1)\hat{\theta}_m(t-2) - \hat{\phi}^{\mathsf T}(t-1)\hat{\gamma}_1(t-2) \\ \vdots \\ y_1(t-p+1) - \psi_1^{\mathsf T}(t-p+1)\hat{\theta}_m(t-p) - \hat{\phi}^{\mathsf T}(t-p+1)\hat{\gamma}_1(t-p) \end{bmatrix} \in \mathbb{R}^p, \quad (33)$
$E_i(p,t) := \begin{bmatrix} e_i(t) \\ e_i(t-1) \\ \vdots \\ e_i(t-p+1) \end{bmatrix} = \begin{bmatrix} y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1) \\ y_i(t-1) - \psi_i^{\mathsf T}(t-1)\hat{\theta}_{i-1}(t-1) - \hat{\phi}^{\mathsf T}(t-1)\hat{\gamma}_i(t-2) \\ \vdots \\ y_i(t-p+1) - \psi_i^{\mathsf T}(t-p+1)\hat{\theta}_{i-1}(t-p+1) - \hat{\phi}^{\mathsf T}(t-p+1)\hat{\gamma}_i(t-p) \end{bmatrix} \in \mathbb{R}^p, \quad i = 2, 3, \ldots, m. \quad (34)$
It is generally accepted that the estimates $\hat{\theta}_m(t-1)$ and $\hat{\gamma}_i(t-1)$ ($i = 1, 2, \ldots, m$) at time $t-1$ are closer to the true values $\theta$ and $\gamma_i$ than the estimates $\hat{\theta}_m(t-j)$ and $\hat{\gamma}_i(t-j)$ at time $t-j$ ($j \geqslant 2$). Similarly, the estimate $\hat{\theta}_{i-1}(t)$ at time $t$ is closer to the true value $\theta$ than the estimate $\hat{\theta}_{i-1}(t-j)$ at time $t-j$ ($j \geqslant 2$). Therefore, replacing the terms $\hat{\theta}_m(t-j)$, $\hat{\gamma}_i(t-j)$ and $\hat{\theta}_{i-1}(t-j)$ ($j \geqslant 2$) in (33) and (34) with $\hat{\theta}_m(t-1)$, $\hat{\gamma}_i(t-1)$ and $\hat{\theta}_{i-1}(t)$, the subsystem innovation vectors $E_1(p,t)$ and $E_i(p,t)$ ($i = 2, 3, \ldots, m$) are modified into
$E_1(p,t) := \begin{bmatrix} y_1(t) - \psi_1^{\mathsf T}(t)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_1(t-1) \\ y_1(t-1) - \psi_1^{\mathsf T}(t-1)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t-1)\hat{\gamma}_1(t-1) \\ \vdots \\ y_1(t-p+1) - \psi_1^{\mathsf T}(t-p+1)\hat{\theta}_m(t-1) - \hat{\phi}^{\mathsf T}(t-p+1)\hat{\gamma}_1(t-1) \end{bmatrix} = Y_1(p,t) - \Lambda_1^{\mathsf T}(p,t)\hat{\theta}_m(t-1) - \Xi^{\mathsf T}(p,t)\hat{\gamma}_1(t-1) \in \mathbb{R}^p, \quad (35)$
$E_i(p,t) := \begin{bmatrix} y_i(t) - \psi_i^{\mathsf T}(t)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t)\hat{\gamma}_i(t-1) \\ y_i(t-1) - \psi_i^{\mathsf T}(t-1)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t-1)\hat{\gamma}_i(t-1) \\ \vdots \\ y_i(t-p+1) - \psi_i^{\mathsf T}(t-p+1)\hat{\theta}_{i-1}(t) - \hat{\phi}^{\mathsf T}(t-p+1)\hat{\gamma}_i(t-1) \end{bmatrix} = Y_i(p,t) - \Lambda_i^{\mathsf T}(p,t)\hat{\theta}_{i-1}(t) - \Xi^{\mathsf T}(p,t)\hat{\gamma}_i(t-1) \in \mathbb{R}^p, \quad i = 2, 3, \ldots, m. \quad (36)$
Thus, based on the M-PC-GSG algorithm in (21)–(32), we obtain the following multivariate partially coupled multi-innovation generalized stochastic gradient (M-PC-MI-GSG) algorithm with innovation length $p$:
$\hat{\theta}_1(t) = \hat{\theta}_m(t-1) + \frac{\Lambda_1(p,t)}{r_{\theta,1}(t)}E_1(p,t), \quad (37)$
$Y_1(p,t) = [y_1(t), y_1(t-1), \ldots, y_1(t-p+1)]^{\mathsf T}, \quad (38)$
$\Lambda_1(p,t) = [\psi_1(t), \psi_1(t-1), \ldots, \psi_1(t-p+1)], \quad (39)$
$\Xi(p,t) = [\hat{\phi}(t), \hat{\phi}(t-1), \ldots, \hat{\phi}(t-p+1)], \quad (40)$
$E_1(p,t) = Y_1(p,t) - \Lambda_1^{\mathsf T}(p,t)\hat{\theta}_m(t-1) - \Xi^{\mathsf T}(p,t)\hat{\gamma}_1(t-1), \quad (41)$
$r_{\theta,1}(t) = r_{\theta,m}(t-1) + \|\psi_1(t)\|^2, \quad (42)$
$\hat{\gamma}_1(t) = \hat{\gamma}_1(t-1) + \frac{\Xi(p,t)}{r_{\gamma,1}(t)}E_1(p,t), \quad (43)$
$r_{\gamma,1}(t) = r_{\gamma,m}(t-1) + \|\hat{\phi}(t)\|^2, \quad (44)$
$\hat{\theta}_i(t) = \hat{\theta}_{i-1}(t) + \frac{\Lambda_i(p,t)}{r_{\theta,i}(t)}E_i(p,t), \quad i = 2, 3, \ldots, m, \quad (45)$
$Y_i(p,t) = [y_i(t), y_i(t-1), \ldots, y_i(t-p+1)]^{\mathsf T}, \quad (46)$
$\Lambda_i(p,t) = [\psi_i(t), \psi_i(t-1), \ldots, \psi_i(t-p+1)], \quad (47)$
$E_i(p,t) = Y_i(p,t) - \Lambda_i^{\mathsf T}(p,t)\hat{\theta}_{i-1}(t) - \Xi^{\mathsf T}(p,t)\hat{\gamma}_i(t-1), \quad (48)$
$r_{\theta,i}(t) = r_{\theta,i-1}(t) + \|\psi_i(t)\|^2, \quad (49)$
$\hat{\gamma}_i(t) = \hat{\gamma}_i(t-1) + \frac{\Xi(p,t)}{r_{\gamma,i}(t)}E_i(p,t), \quad (50)$
$r_{\gamma,i}(t) = r_{\gamma,i-1}(t) + \|\hat{\phi}(t)\|^2, \quad (51)$
$\Phi_s(t) = [\psi_1(t), \psi_2(t), \ldots, \psi_m(t)]^{\mathsf T}, \quad (52)$
$\hat{\phi}(t) = [-\hat{w}^{\mathsf T}(t-1), -\hat{w}^{\mathsf T}(t-2), \ldots, -\hat{w}^{\mathsf T}(t-n_c)]^{\mathsf T}, \quad (53)$
$\hat{w}(t) = y(t) - \Phi_s(t)\hat{\theta}_m(t), \quad (54)$
$\hat{\gamma}(t) = [\hat{\gamma}_1(t), \hat{\gamma}_2(t), \ldots, \hat{\gamma}_m(t)]. \quad (55)$
The computation procedures of the M-PC-MI-GSG algorithm in (37)–(55) are listed as follows.
  • Let $t = 1$ and choose an innovation length $p$. Set the initial values $\hat{\theta}_m(0) = \mathbf{1}_n/p_0$, $r_{\theta,i}(0) = r_{\gamma,i}(0) = 1$, $\hat{\gamma}_i(0) = \mathbf{1}_{m n_c}/p_0$ ($i = 1, 2, \ldots, m$), $\hat{w}(t-j) = 0$ ($j = 0, 1, \ldots, n_c$) and $p_0 = 10^6$, and set the data length $K$.
  • Collect the observation data Φ s ( t ) and y ( t ) , read ψ i ( t ) from Φ s ( t ) using (52).
  • Construct ϕ ^ ( t ) using (53).
  • Construct Y 1 ( p , t ) , Λ 1 ( p , t ) and Ξ ( p , t ) using (38)–(40).
  • Compute E 1 ( p , t ) using (41).
  • Compute r θ , 1 ( t ) and r γ , 1 ( t ) using (42) and (44).
  • Update the parameter estimates $\hat{\theta}_1(t)$ and $\hat{\gamma}_1(t)$ using (37) and (43).
  • When i = 2 , 3 , , m , construct Y i ( p , t ) and Λ i ( p , t ) using (46) and (47).
  • Compute E i ( p , t ) using (48).
  • Compute r θ , i ( t ) and r γ , i ( t ) using (49) and (51).
  • Update the parameter estimates θ ^ i ( t ) and γ ^ i ( t ) using (45) and (50).
  • Compute w ^ ( t ) using (54) and construct γ ^ ( t ) using (55).
  • If $t < K$, increase $t$ by 1 and go to Step 2; otherwise, obtain the parameter estimates $\hat{\theta}(t)$ and $\hat{\gamma}(t)$ and stop.
It is easy to see that the M-PC-MI-GSG algorithm with innovation length $p = 1$ reduces to the M-PC-GSG algorithm. Since the past data of the system are used more fully, the M-PC-MI-GSG algorithm achieves higher parameter estimation accuracy than the M-PC-GSG algorithm.
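The stacked construction and the $p = 1$ reduction noted above can be checked numerically. The sketch below (random stand-in data; only the shapes follow Section 4, and the chosen sizes are our own) builds one stacked innovation vector and one multi-innovation update, and verifies that the first component of $E_1(p,t)$ is exactly the scalar innovation used by the M-PC-GSG algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, nc, p = 3, 2, 1, 4                  # hypothetical orders, innovation length p = 4

Lam1 = rng.standard_normal((n, p))        # Lambda_1(p,t) = [psi_1(t), ..., psi_1(t-p+1)]
Xi = rng.standard_normal((m * nc, p))     # Xi(p,t) = [phi_hat(t), ..., phi_hat(t-p+1)]
Y1 = rng.standard_normal(p)               # Y_1(p,t): stacked outputs
theta_hat = rng.standard_normal(n)        # stand-in for theta_hat_m(t-1)
gamma_hat = rng.standard_normal(m * nc)   # stand-in for gamma_hat_1(t-1)

# E_1(p,t) = Y_1 - Lambda_1^T theta_hat_m(t-1) - Xi^T gamma_hat_1(t-1)
E1 = Y1 - Lam1.T @ theta_hat - Xi.T @ gamma_hat

# theta_hat_1(t) = theta_hat_m(t-1) + Lambda_1(p,t) E_1(p,t) / r_theta_1(t),
# here with r_theta_m(t-1) taken as 1 for illustration
r = 1.0 + Lam1[:, 0] @ Lam1[:, 0]
theta_new = theta_hat + Lam1 @ E1 / r

# The first component of E_1(p,t) is the scalar innovation e_1(t) of M-PC-GSG
e1 = Y1[0] - Lam1[:, 0] @ theta_hat - Xi[:, 0] @ gamma_hat
```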

5. The Simulation Examples

In this section, two numerical simulations are given to demonstrate the performance of the newly proposed algorithms.
Example 1.
Consider the following multivariate equation-error autoregressive system,
$y(t) = \Phi_s(t)\theta + C^{-1}(z)v(t),$
$\Phi_s(t) = \begin{bmatrix} \sin(u_1(t-2))y_1(t-1) & \cos(u_1(t-1))y_1(t-2) & \cos(u_1(t-2))y_2(t-1) & \sin(u_1(t-2))y_2(t-2) \\ \sin(u_2(t-2))y_2(t-1) & \cos(u_2(t-1))y_2(t-2) & \cos(u_2(t-2))y_1(t-1) & \sin(u_2(t-1))y_1(t-2) \end{bmatrix},$
$\theta = [\theta_1, \theta_2, \theta_3, \theta_4]^{\mathsf T} = [0.42, 0.93, 0.56, 0.31]^{\mathsf T},$
$C(z) = I_2 + \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix} z^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0.25 & 0.68 \\ 0.33 & 0.44 \end{bmatrix} z^{-1}.$
The parameter vector to be estimated is
$\vartheta = [\theta_1, \theta_2, \theta_3, \theta_4, c_{11}, c_{12}, c_{21}, c_{22}]^{\mathsf T} = [0.42, 0.93, 0.56, 0.31, 0.25, 0.68, 0.33, 0.44]^{\mathsf T}.$
In this simulation, $u(t) = [u_1(t), u_2(t)]^{\mathsf T} \in \mathbb{R}^2$ is the input vector, taken as a random sequence with zero mean and unit variance; $y(t) = [y_1(t), y_2(t)]^{\mathsf T} \in \mathbb{R}^2$ is the output vector; $v(t) = [v_1(t), v_2(t)]^{\mathsf T} \in \mathbb{R}^2$ is a white noise vector with zero mean; and $\sigma_1^2$ and $\sigma_2^2$ are the variances of $v_1(t)$ and $v_2(t)$. Taking the noise variances $\sigma_1^2 = 0.40^2$ and $\sigma_2^2 = 0.30^2$, we use the M-PC-GSG algorithm (i.e., the M-PC-MI-GSG algorithm with $p = 1$) and the M-PC-MI-GSG algorithm with $p = 2$, $p = 4$ and $p = 8$ to estimate the parameters of this example system. The parameter estimates and their errors $\delta := \|\hat{\vartheta}(t) - \vartheta\|/\|\vartheta\|$ are shown in Table 1, and the parameter estimation errors versus $t$ are shown in Figure 2. The parameter estimates $\hat{\theta}_1(t)$, $\hat{\theta}_2(t)$, $\hat{\theta}_3(t)$, $\hat{\theta}_4(t)$ and $\hat{c}_{11}(t)$, $\hat{c}_{12}(t)$, $\hat{c}_{21}(t)$, $\hat{c}_{22}(t)$ versus $t$ of the M-PC-MI-GSG algorithm with $p = 8$ are shown in Figure 3 and Figure 4.
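As an illustration of how the training data in Example 1 can be generated, the sketch below simulates the system as printed above (the entries of $\Phi_s(t)$ and the parameter values are taken verbatim from the example; the sample size and random seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([0.42, 0.93, 0.56, 0.31])
C1 = np.array([[0.25, 0.68],
               [0.33, 0.44]])
sigma = np.array([0.40, 0.30])                # noise standard deviations

K = 200
u = rng.standard_normal((K, 2))               # zero-mean, unit-variance input
y = np.zeros((K, 2))
w_prev = np.zeros(2)

for t in range(2, K):
    u1, u2 = u[t - 1], u[t - 2]               # u(t-1), u(t-2)
    y1, y2 = y[t - 1], y[t - 2]               # y(t-1), y(t-2)
    Phi_s = np.array([
        [np.sin(u2[0]) * y1[0], np.cos(u1[0]) * y2[0],
         np.cos(u2[0]) * y1[1], np.sin(u2[0]) * y2[1]],
        [np.sin(u2[1]) * y1[1], np.cos(u1[1]) * y2[1],
         np.cos(u2[1]) * y1[0], np.sin(u1[1]) * y2[0]],
    ])
    v = sigma * rng.standard_normal(2)
    w = -C1 @ w_prev + v                      # C(z) w(t) = v(t) with n_c = 1
    y[t] = Phi_s @ theta + w
    w_prev = w
```

Feeding the pairs $(\Phi_s(t), y(t))$ so generated into the M-PC-GSG or M-PC-MI-GSG recursions reproduces the kind of experiment reported in Table 1.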
Example 2.
Consider another multivariate equation-error autoregressive system,
$y(t) = \Phi_s(t)\theta + C^{-1}(z)v(t),$
$\Phi_s(t) = \begin{bmatrix} u_1(t-1)y_1(t-2) & u_1(t-2)u_2(t-2) & u_2(t-1)\cos(t) & u_1(t-2)y_1(t-1) & y_1(t-2)y_2(t-2) & y_2(t-1)\sin(t) \\ \cos^2(t-1)u_2(t-2) & \sin(t-1) & \cos(u_1(t-1)) & \sin^2(t-1)y_2(t-2) & \cos(t-1) & \sin(y_1(t-1)) \end{bmatrix},$
$\theta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6]^{\mathsf T} = [0.36, 0.22, 0.34, 0.45, 0.25, 0.11]^{\mathsf T},$
$C(z) = I_2 + \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix} z^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0.41 & 0.48 \\ 0.35 & 0.31 \end{bmatrix} z^{-1}.$
The parameter vector to be estimated is
$\vartheta = [\theta_1, \theta_2, \theta_3, \theta_4, \theta_5, \theta_6, c_{11}, c_{12}, c_{21}, c_{22}]^{\mathsf T} = [0.36, 0.22, 0.34, 0.45, 0.25, 0.11, 0.41, 0.48, 0.35, 0.31]^{\mathsf T}.$
Here, the simulation conditions are similar to those of Example 1. Taking the noise variances $\sigma_1^2 = \sigma_2^2 = 0.50^2$, we use the M-PC-GSG algorithm (i.e., the M-PC-MI-GSG algorithm with $p = 1$) and the M-PC-MI-GSG algorithm with $p = 3$ and $p = 5$ to estimate the parameters of this example system. The parameter estimates and their errors are shown in Table 2, and the parameter estimation errors versus $t$ are shown in Figure 5. For model validation, we use the estimated model obtained by the M-PC-MI-GSG algorithm with $p = 5$ to predict the system outputs for 200 samples from $t = 3001$ to $t = 3200$. The true output $y_1(t)$ and the predicted output $\hat{y}_1(t)$, together with their errors, are shown in Figure 6; the true output $y_2(t)$ and the predicted output $\hat{y}_2(t)$, together with their errors, are shown in Figure 7.
From Table 1 and Table 2 and Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, the following conclusions are obtained.
  • From Table 1, Table 2, Figure 2 and Figure 5, it can be seen that the parameter estimation errors of the M-PC-GSG and M-PC-MI-GSG algorithms become smaller as the data length $t$ increases, which means that the proposed algorithms are effective for parameter estimation of the multivariate equation-error autoregressive system.
  • Figure 2 and Figure 5 show that the M-PC-MI-GSG algorithm achieves higher parameter estimation accuracy than the M-PC-GSG algorithm under the same noise variances and the same data length. Introducing the innovation length $p$ effectively improves the parameter estimation accuracy of the M-PC-GSG algorithm, and the parameter estimates become smoother as the innovation length $p$ increases.
  • It can be seen from Figure 3 and Figure 4 that the M-PC-MI-GSG algorithm can obtain accurate parameter estimates.
  • From Figure 6 and Figure 7, we can see that the predicted outputs of the M-PC-MI-GSG algorithm are very close to the true outputs, which indicates that the estimated model can capture the dynamics of the system.
Remark 1.
To show the advantages of the identification performance of the algorithm proposed in this paper, we compare it with the forgetting factor stochastic gradient identification method proposed in [34]. Applying that method to the multivariate equation-error autoregressive systems considered here yields the multivariate forgetting factor generalized stochastic gradient (M-FF-GSG) algorithm. The M-FF-GSG algorithm is compared with the M-PC-MI-GSG algorithm with $p = 5$ through simulation, under the same experimental conditions as Example 2. The comparison results are shown in Figure 8: the algorithm proposed in this paper has a faster identification speed and higher estimation accuracy.

6. Conclusions

The coupling identification concept is an emerging method in the field of system identification in recent decades, and it is usually applied to the parameter estimation of multivariate systems. Its main idea is to exploit the parameter-coupling structure: the parameters of each subsystem model are identified separately and the resulting estimates are then coupled, which greatly reduces the amount of calculation in the estimation process. This paper combines the coupling identification concept with the stochastic gradient identification method to propose a new identification algorithm for multivariate equation-error systems. The proposed algorithm has a lower computational load than the traditional stochastic gradient identification algorithm. Additionally, the multi-innovation identification theory is a promising identification method that makes full use of past data to identify the unknown parameters. Building on the partially coupled stochastic gradient algorithm, this paper then introduces the innovation length by applying the multi-innovation identification theory, and proposes the partially coupled multi-innovation stochastic gradient algorithm, which achieves higher parameter estimation accuracy.
The proposed coupling- and multi-innovation-based identification methods can be extended to other multivariate systems with different structures and disturbance noises, and the idea of the algorithms can be utilized whenever the identification model contains coupled terms. Future research directions include applying the proposed algorithms to actual engineering production systems to improve the computational efficiency and accuracy of system identification in practice. Additionally, the methods in this paper can be combined with other mathematical tools and statistical strategies to study the performance of parameter estimation algorithms for other linear or nonlinear systems with colored noises.

Author Contributions

Writing—original draft, P.M.; Writing—review and editing, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61873111), the Fundamental Research Funds for the Central Universities (No. JUSRP121071), the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (No. 22KJB120009), and the Start-up Fund for Introducing Talent of Wuxi University (No. 2021r045).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
M-PC-GSG: the multivariate partially coupled generalized stochastic gradient algorithm
M-PC-MI-GSG: the multivariate partially coupled multi-innovation generalized stochastic gradient algorithm
M-FF-GSG: the multivariate forgetting factor generalized stochastic gradient algorithm

References

  1. Na, J.; Yang, J.; Wu, X.; Guo, Y. Robust adaptive parameter estimation of sinusoidal signals. Automatica 2015, 53, 376–384.
  2. Wang, J.; Efimov, D.; Bobtsov, A.A. On robust parameter estimation in finite-time without persistence of excitation. IEEE Trans. Autom. Control 2020, 65, 1731–1738.
  3. Xu, L. Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J. Comput. Appl. Math. 2015, 288, 33–43.
  4. Mo, D.; Duarte, M.F. Compressive parameter estimation via K-median. Signal Process. 2018, 142, 36–48.
  5. Demirli, R.; Saniie, J. Asymmetric Gaussian chirplet model and parameter estimation for generalized echo representation. J. Frankl. Inst. 2014, 351, 907–921.
  6. Upadhyay, R.K.; Paul, C.; Mondal, A.; Vishwakarma, G.K. Estimation of biophysical parameters in a neuron model under random fluctuations. Appl. Math. Comput. 2018, 329, 364–373.
  7. Khalik, Z.; Donkers, M.C.F.; Sturm, J.; Bergveld, H.J. Parameter estimation of the Doyle–Fuller–Newman model for Lithium-ion batteries by parameter normalization, grouping, and sensitivity analysis. J. Power Sources 2021, 499, 229901.
  8. Padmanabhan, C.; Gupta, S.; Mylswamy, A. Estimation of terramechanics parameters of wheel-soil interaction model using particle filtering. J. Terramechanics 2018, 79, 79–95.
  9. Calasan, M.P.; Jovanovic, A.; Rubezic, V.; Mujicic, D.; Deriszadeh, A. Notes on parameter estimation for single-phase transformer. IEEE Trans. Ind. Appl. 2020, 56, 3710–3718.
  10. Zhang, X.; El Korso, M.N.; Pesavento, M. MIMO radar target localization and performance evaluation under SIRP clutter. Signal Process. 2017, 130, 217–232.
  11. Kulikova, M.V.; Tsyganova, J.V.; Kulikov, G.Y. UD-based pairwise and MIMO Kalman-like filtering for estimation of econometric model structures. IEEE Trans. Autom. Control 2020, 65, 4472–4479.
  12. Weijtjens, W.; Sitter, G.D.; Devriendt, C.; Guillaume, P. Operational modal parameter estimation of MIMO systems using transmissibility functions. Automatica 2014, 50, 559–564. [Google Scholar] [CrossRef]
  13. Wang, H.; Xu, L.W.; Yang, Z.Q.; Gulliver, T.A. Low-complexity MIMO-FBMC sparse channel parameter estimation for industrial big data communications. IEEE Trans. Ind. Inform. 2021, 17, 3422–3430. [Google Scholar] [CrossRef]
  14. Cerone, V.; Razza, V.; Regruto, D. Set-membership errors-in-variables identification of MIMO linear systems. Automatica 2018, 90, 25–37. [Google Scholar] [CrossRef]
  15. Liu, L.; Wang, Y.; Wang, C.; Ding, F.; Hayat, T. Maximum likelihood recursive least squares estimation for multivariate equation-error ARMA systems. J. Frankl. Inst. 2018, 355, 7609–7625. [Google Scholar] [CrossRef]
  16. Cecilio, I.M.; Ottewill, J.R.; Fretheim, H.; Thornhill, N.F. Multivariate detection of transient disturbances for uni- and multirate systems. IEEE Trans. Control Syst. Technol. 2015, 23, 1477–1493. [Google Scholar] [CrossRef]
  17. Ma, J.; Xiong, W.; Chen, J.; Feng, D. Hierarchical identification for multivariate Hammerstein systems by using the modified Kalman filter. IET Control Theory Appl. 2017, 11, 857–869. [Google Scholar] [CrossRef]
  18. Shafin, R.; Liu, L.J.; Li, Y.; Wang, A.D.; Zhang, J.Z. Angle and delay estimation for 3-D massive MIMO/FD-MIMO systems based on parametric channel modeling. IEEE Trans. Wirel. Commun. 2017, 16, 5370–5383. [Google Scholar] [CrossRef]
  19. Kawaria, N.; Patidar, R.; George, N.V. Parameter estimation of MIMO bilinear systems using a Levy shuffled frog leaping algorithm. Soft Comput. 2017, 21, 3849–3858. [Google Scholar] [CrossRef]
  20. Roy, S.B.; Bhasin, S.; Kar, I.N. Combined MRAC for unknown MIMO LTI systems with parameter convergence. IEEE Trans. Autom. Control 2018, 63, 283–290. [Google Scholar] [CrossRef]
  21. Ma, H.; Zhang, X.; Liu, Q.; Ding, F.; Jin, X.B.; Alsaedi, A.; Hayat, T. Partially-coupled gradient-based iterative algorithms for multivariable output-error-like systems with autoregressive moving average noises. IET Control Theory Appl. 2020, 14, 2613–2627. [Google Scholar] [CrossRef]
  22. Huang, W.; Ding, F.; Hayat, T.; Alsaedi, A. Coupled stochastic gradient identification algorithms for multivariate output-error systems using the auxiliary model. Int. J. Control Autom. Syst. 2017, 15, 1622–1631. [Google Scholar] [CrossRef]
  23. Cui, T.; Chen, F.Y.; Ding, F.; Sheng, J. Combined estimation of the parameters and states for a multivariable state-space system in presence of colored noise. Int. J. Adapt. Control Signal Process. 2020, 34, 590–613. [Google Scholar] [CrossRef]
  24. Xu, L.; Yang, E.F. Auxiliary model multiinnovation stochastic gradient parameter estimation methods for nonlinear sandwich systems. Int. J. Robust Nonlinear Control 2021, 31, 148–165. [Google Scholar] [CrossRef]
  25. Xu, L.; Sheng, J. Separable multi-innovation stochastic gradient estimation algorithm for the nonlinear dynamic responses of systems. Int. J. Adapt. Control Signal Process. 2020, 34, 937–954. [Google Scholar] [CrossRef]
  26. Jin, Q.B.; Wang, Z.; Liu, X.P. Auxiliary model-based interval-varying multi-innovation least squares identification for multivariable OE-like systems with scarce measurements. J. Process Control 2015, 35, 154–168. [Google Scholar] [CrossRef]
  27. Zhang, G.Q.; Zhang, X.K.; Pang, H.S. Multi-innovation auto-constructed least squares identification for 4 DOF ship manoeuvring modelling with full-scale trial data. ISA Trans. 2015, 58, 186–195. [Google Scholar] [CrossRef]
  28. Wang, C.; Zhu, L. Parameter identification of a class of nonlinear systems based on the multi-innovation identification theory. J. Frankl. Inst. 2015, 352, 4624–4637. [Google Scholar] [CrossRef]
  29. Pan, J.; Jiang, X.; Wan, X.K.; Ding, W.F. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int. J. Control Autom. Syst. 2017, 15, 1189–1197. [Google Scholar] [CrossRef]
  30. Chaudhary, N.I.; Raja, M.A.Z.; He, Y.G.; Khan, Z.A.; Machado, J.A.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
  31. Ma, P.; Ding, F. New gradient based identification methods for multivariate pseudo-linear systems using the multi-innovation and the data filtering. J. Frankl. Inst. 2017, 354, 1568–1583. [Google Scholar] [CrossRef]
  32. Ma, P.; Ding, F.; Alsaedi, A.; Hayat, T. Decomposition-based gradient estimation algorithms for multivariate equation-error autoregressive systems using the multi-innovation theory. Circuits Syst. Signal Process. 2018, 37, 1846–1862. [Google Scholar] [CrossRef]
  33. Ding, F.; Xu, L.; Meng, D.D.; Jin, X.B.; Alsaedi, A.; Hayat, T. Gradient estimation algorithms for the parameter identification of bilinear systems using the auxiliary model. J. Comput. Appl. Math. 2020, 369, 112575. [Google Scholar] [CrossRef]
  34. Ji, Y.; Kang, Z. Three-stage forgetting factor stochastic gradient parameter estimation methods for a class of nonlinear systems. Int. J. Robust Nonlinear Control 2021, 31, 971–987. [Google Scholar] [CrossRef]
Figure 1. The schematic diagram of the M-PC-GSG algorithm.
Figure 2. The M-PC-MI-GSG estimation errors versus t.
Figure 3. Parameter estimates θ̂1(t), θ̂2(t), θ̂3(t), θ̂4(t) versus t.
Figure 4. Parameter estimates ĉ11(t), ĉ12(t), ĉ21(t), ĉ22(t) versus t.
Figure 5. The M-PC-MI-GSG estimation errors versus t.
Figure 6. The true output, the predicted output of y1(t) and the prediction error.
Figure 7. The true output, the predicted output of y2(t) and the prediction error.
Figure 8. The comparison of algorithms M-FF-GSG with M-PC-MI-GSG.
Table 1. Parameter estimates and errors (σ1² = 0.40², σ2² = 0.30²).

Algorithms        t        100        200        500        1000       2000       3000       True Values
M-PC-GSG          θ1       0.19022    0.28281    0.33788    0.36637    0.36787    0.36879    0.42000
                  θ2       0.77045    0.84529    0.88400    0.91050    0.90956    0.90832    0.93000
                  θ3       0.42624    0.49137    0.54550    0.57360    0.57695    0.57314    0.56000
                  θ4       0.25045    0.25999    0.29032    0.31802    0.32379    0.32917    0.31000
                  c11     −0.16279   −0.13896   −0.12959   −0.13205   −0.13608   −0.13540   −0.25000
                  c12     −0.06264    0.01024    0.07393    0.10879    0.14383    0.16385    0.68000
                  c21     −0.18783   −0.21131   −0.22760   −0.22982   −0.23502   −0.23697   −0.33000
                  c22      0.06255    0.05905    0.07305    0.09172    0.10850    0.12021    0.44000
                  δ (%)   60.04604   53.54881   48.52141   45.66110   43.08714   41.59002
M-PC-MI-GSG       θ1       0.24413    0.35242    0.38045    0.39205    0.38805    0.38809    0.42000
(p = 2)           θ2       0.80411    0.89433    0.91071    0.92629    0.92058    0.91797    0.93000
                  θ3       0.49437    0.54833    0.57802    0.58115    0.58224    0.57102    0.56000
                  θ4       0.30721    0.28679    0.29391    0.32301    0.32347    0.32964    0.31000
                  c11     −0.05715   −0.06016   −0.08608   −0.10747   −0.13040   −0.13492   −0.25000
                  c12      0.12099    0.22052    0.30492    0.34234    0.38260    0.40361    0.68000
                  c21     −0.23138   −0.26019   −0.26756   −0.26447   −0.26951   −0.27084   −0.33000
                  c22      0.11762    0.11798    0.15107    0.18454    0.20984    0.22701    0.44000
                  δ (%)   47.51695   39.87369   33.61530   30.01753   26.59647   24.80351
M-PC-MI-GSG       θ1       0.26824    0.39655    0.39206    0.40527    0.40117    0.40344    0.42000
(p = 4)           θ2       0.81764    0.91822    0.91009    0.93027    0.92223    0.92506    0.93000
                  θ3       0.53011    0.56224    0.57291    0.58006    0.58163    0.56221    0.56000
                  θ4       0.33781    0.28934    0.27962    0.32670    0.31900    0.33021    0.31000
                  c11     −0.02859   −0.07525   −0.14398   −0.17985   −0.21158   −0.20979   −0.25000
                  c12      0.40063    0.49444    0.57042    0.57794    0.60699    0.61726    0.68000
                  c21     −0.27689   −0.30473   −0.29946   −0.29170   −0.29725   −0.29909   −0.33000
                  c22      0.22503    0.21310    0.25734    0.29682    0.32439    0.34271    0.44000
                  δ (%)   30.60589   22.77807   16.20218   12.94875    9.87303    8.55799
M-PC-MI-GSG       θ1       0.22875    0.41123    0.39312    0.41067    0.40847    0.41377    0.42000
(p = 8)           θ2       0.82538    0.91675    0.89962    0.92660    0.91916    0.93461    0.93000
                  θ3       0.50062    0.56129    0.56967    0.58480    0.57905    0.55554    0.56000
                  θ4       0.34784    0.29455    0.27015    0.32493    0.31639    0.33324    0.31000
                  c11     −0.08032   −0.14448   −0.22262   −0.25289   −0.27162   −0.25101   −0.25000
                  c12      0.60874    0.66577    0.71758    0.67480    0.69550    0.69273    0.68000
                  c21     −0.31148   −0.33276   −0.30958   −0.29949   −0.30552   −0.31357   −0.33000
                  c22      0.32292    0.26569    0.31977    0.36227    0.39067    0.40749    0.44000
                  δ (%)   20.99172   13.61264    9.45336    5.90521    4.39162    3.04341
Table 2. Parameter estimates and errors (σ1² = σ2² = 0.50²).

Algorithms        t        100        200        500        1000       2000       3000       True Values
M-PC-GSG          θ1      −0.38109   −0.34290   −0.35197   −0.35358   −0.35240   −0.35140   −0.36000
                  θ2       0.17154    0.19202    0.22318    0.20571    0.20894    0.20290    0.22000
                  θ3       0.52049    0.46240    0.42063    0.40418    0.38885    0.38130    0.34000
                  θ4       0.36772    0.39041    0.40431    0.43059    0.44228    0.45084    0.45000
                  θ5       0.32582    0.28799    0.27265    0.25708    0.26433    0.26468    0.25000
                  θ6      −0.05956   −0.02720   −0.02478   −0.02008   −0.00934   −0.00876    0.11000
                  c11     −0.32562   −0.33413   −0.35093   −0.36678   −0.37198   −0.37339   −0.41000
                  c12     −0.42387   −0.42801   −0.43950   −0.44549   −0.45522   −0.45784   −0.48000
                  c21     −0.07449   −0.04447    0.00357    0.05541    0.08891    0.10599    0.35000
                  c22      0.13315    0.09440    0.04896    0.00007   −0.03206   −0.04982   −0.31000
                  δ (%)   62.46630   55.62215   48.66825   41.81593   37.21977   34.99402
M-PC-MI-GSG       θ1      −0.34351   −0.30662   −0.36709   −0.35952   −0.35025   −0.35053   −0.36000
(p = 3)           θ2       0.13616    0.21845    0.24620    0.20140    0.21882    0.20575    0.22000
                  θ3       0.46447    0.34810    0.32286    0.33605    0.34131    0.33736    0.34000
                  θ4       0.53467    0.51841    0.49198    0.50896    0.49490    0.49143    0.45000
                  θ5       0.34052    0.24404    0.24150    0.23199    0.26443    0.26536    0.25000
                  θ6      −0.08458   −0.00717   −0.00412    0.01422    0.05155    0.05032    0.11000
                  c11     −0.40747   −0.39750   −0.42113   −0.42870   −0.41996   −0.41446   −0.41000
                  c12     −0.44953   −0.45067   −0.47122   −0.47509   −0.49160   −0.49275   −0.48000
                  c21     −0.00019    0.05273    0.13849    0.22633    0.25865    0.27774    0.35000
                  c22     −0.10619   −0.14079   −0.18170   −0.23788   −0.25384   −0.26009   −0.31000
                  δ (%)   45.05598   34.25524   25.50852   16.96953   12.12756   10.74341
M-PC-MI-GSG       θ1      −0.35745   −0.30970   −0.38425   −0.36288   −0.34050   −0.34946   −0.36000
(p = 5)           θ2       0.12136    0.23250    0.25401    0.20572    0.22745    0.20850    0.22000
                  θ3       0.40415    0.28647    0.30611    0.33618    0.34589    0.33781    0.34000
                  θ4       0.51978    0.48285    0.46617    0.48824    0.46462    0.46024    0.45000
                  θ5       0.38782    0.22572    0.22904    0.22473    0.27031    0.26832    0.25000
                  θ6      −0.02008    0.06621    0.05334    0.06901    0.10837    0.09507    0.11000
                  c11     −0.41690   −0.39157   −0.43170   −0.43710   −0.42419   −0.41720   −0.41000
                  c12     −0.46440   −0.45497   −0.48430   −0.48122   −0.50181   −0.49994   −0.48000
                  c21      0.09086    0.14050    0.23086    0.31994    0.33401    0.34857    0.35000
                  c22     −0.24541   −0.23841   −0.25163   −0.30483   −0.29672   −0.29278   −0.31000
                  δ (%)   32.60246   22.30989   14.47746    6.91170    4.31185    3.73785
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Ma, P.; Wang, L. Partially Coupled Stochastic Gradient Estimation for Multivariate Equation-Error Systems. Mathematics 2022, 10, 2955. https://doi.org/10.3390/math10162955

