

The generalized anti-reflexive solutions for a class of matrix equations (BX = C, XD = E)

Fan-Liang Li^I,* (*corresponding author); Xi-Yan Hu^II; Lei Zhang^II

^I Institute of Mathematics and Physics, School of Sciences, Central South University of Forestry and Technology, Changsha, 410004, P.R. China. E-mail: lfl302@tom.com

^II College of Mathematics and Econometrics, Hunan University, Changsha, 410082, P.R. China. E-mails: xyhu@hnu.cn / lzhang@hnu.cn

ABSTRACT

In this paper, the generalized anti-reflexive solution of the matrix equations (BX = C, XD = E), which arise in the left and right inverse eigenpairs problem, is considered. Using the special properties of generalized anti-reflexive matrices, the necessary and sufficient conditions for solvability and a general expression for the solution are obtained. Furthermore, the related optimal approximation problem to a given matrix over the solution set is solved. In addition, an algorithm and an example for obtaining the unique optimal approximation solution are given.

Mathematical subject classification: 15A24, 15A09.

Key words: matrix equations, generalized anti-reflexive matrices, optimal approximation.

1 Introduction

Left and right inverse eigenpairs problem is a special inverse eigenvalue problem. That is: given partial left and right eigenpairs (eigenvalue and corresponding eigenvector) of a matrix A, (γ_j, y_j), j = 1,…,l, and (λ_i, x_i), i = 1,…,h, together with a special matrix set S ⊆ R^{n×n} (in this paper, R^{n×n} denotes the set of all n × n real matrices), where h < n, l < n, find A ∈ S such that

Ax_i = λ_i x_i, i = 1,…,h;  y_j^H A = γ_j y_j^H, j = 1,…,l.

The prototype of this problem initially arose in the perturbation analysis of matrix eigenvalues and in recursive problems. It has a profound application background [1-4].

Let X = (x1,…,xh), Y = (y1,…,yl), Λ = diag(λ1,…,λh), Γ = diag(γ1,…,γl), Z = XΛ, W = YΓ. Then the above problem can be described as follows: given matrices X, Y, Z, W and a special matrix set S, find A ∈ S such that

AX = Z,  Y^H A = W^H.

Actually, this is the problem of finding the solutions of linear matrix equations. In this paper, we extend it to the following problem: given matrices B, C, D, E and a special matrix set S, find X ∈ S such that

BX = C,  XD = E.

These equations form an important class of matrix equations, with profound applications in engineering and in the matrix inverse problem AX = B [5, 6]. In recent years, many authors have studied them, and a series of meaningful results have been achieved [7-10]. However, their generalized anti-reflexive solutions have not yet been considered. In this paper, we discuss this problem.

We now introduce the following notation. C^{n×m} denotes the set of n × m complex matrices, and OC^{n×n} the set of n × n unitary matrices. A^H, r(A), tr(A) and A^+ denote the conjugate transpose, rank, trace and Moore-Penrose generalized inverse of a matrix A, respectively. I_n is the identity matrix of order n. For A, B ∈ C^{n×m}, ⟨A, B⟩ = tr(B^H A) denotes the inner product of the matrices A and B. The induced matrix norm is the Frobenius norm, i.e. ||A|| = ⟨A, A⟩^{1/2} = (tr(A^H A))^{1/2}; with this inner product, C^{n×m} is a Hilbert space.
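For instance, this inner product and the induced norm can be checked numerically; the following NumPy sketch (with arbitrary random matrices, chosen only for illustration) confirms that ⟨A, A⟩^{1/2} coincides with the built-in Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# <A, B> = tr(B^H A)
inner = np.trace(B.conj().T @ A)

# The induced norm ||A|| = <A, A>^{1/2} is the Frobenius norm.
norm_from_inner = np.sqrt(np.trace(A.conj().T @ A).real)
assert np.isclose(norm_from_inner, np.linalg.norm(A, 'fro'))
```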

To extend reflexive (anti-reflexive) matrices and centrosymmetric matrices, Chen [11] introduced two new special classes of matrices: generalized reflexive matrices and generalized anti-reflexive matrices. He presented three examples obtained from the altitude estimation of a level network, an electric network, and the structural analysis of trusses. His investigation indicated that generalized reflexive matrices arise naturally in problems with reflexive symmetry, which accounts for a great number of real-world scientific and engineering applications.

Definition 1. An n × n complex matrix P is called a generalized reflection matrix if P = P^H and P^2 = I_n.

Definition 2. Let A ∈ C^{n×m}. A is called a generalized reflexive matrix (generalized anti-reflexive matrix) with respect to the matrix pair (P, Q) if A = PAQ (A = −PAQ), where P and Q are n × n and m × m generalized reflection matrices, respectively. Denote these classes of matrices by C_r^{n×m}(P, Q) and C_a^{n×m}(P, Q), respectively; then the following results can be easily deduced.

Definition 3. C_r^{n×m}(P) = {X ∈ C^{n×m} | PX = X}, C_a^{n×m}(P) = {X ∈ C^{n×m} | PX = −X}.

From Definitions 2 and 3, it is easy to see that if P and Q are two given n × n and m × m generalized reflection matrices, respectively, then C_r^{n×m}(P, Q) (C_a^{n×m}(P, Q)) is a closed linear subspace of C^{n×m}, and C_r^{n×m}(P) (C_a^{n×m}(P)) is also a closed linear subspace of C^{n×m}. Throughout, we always assume that P and Q are two given n × n and m × m generalized reflection matrices, respectively. From Definitions 2 and 3 and this assumption, it is easy to prove the following results.
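To make the definitions concrete, here is a minimal NumPy sketch (the diagonal signature choices of P and Q are illustrative assumptions; any Hermitian involutions would do) showing how an arbitrary A splits into a generalized reflexive part and a generalized anti-reflexive part, and that the two parts are orthogonal in the Frobenius inner product:

```python
import numpy as np

# Generalized reflection matrices: Hermitian with P^2 = I (here plain
# signature matrices; any unitary similarity of these would also work).
P = np.diag([1., 1., -1., -1.])   # 4 x 4
Q = np.diag([1., -1., -1.])       # 3 x 3
assert np.allclose(P @ P, np.eye(4)) and np.allclose(Q @ Q, np.eye(3))

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# Unique splitting A = A_r + A_a with P A_r Q = A_r and P A_a Q = -A_a.
A_r = (A + P @ A @ Q) / 2
A_a = (A - P @ A @ Q) / 2
assert np.allclose(P @ A_r @ Q, A_r)
assert np.allclose(P @ A_a @ Q, -A_a)
# The two parts are orthogonal in the Frobenius inner product.
assert np.isclose(np.trace(A_a.T @ A_r), 0)
```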

The notation V1 ⊕ V2 stands for the orthogonal direct sum of the linear subspaces V1 and V2. From this, for any

B ∈ C^{h×n}, C ∈ C^{h×m}, D ∈ C^{m×l}, E ∈ C^{n×l},

we have the unique decompositions

B = B1 + B2, C = C1 + C2, D = D1 + D2, E = E1 + E2,   (1.1)

where B1^H ∈ C_r^{n×h}(P), B2^H ∈ C_a^{n×h}(P), C1^H ∈ C_r^{m×h}(Q), C2^H ∈ C_a^{m×h}(Q), D1 ∈ C_r^{m×l}(Q), D2 ∈ C_a^{m×l}(Q), E1 ∈ C_r^{n×l}(P), E2 ∈ C_a^{n×l}(P).

In Definition 2, if P = Q (and m = n), then A is a reflexive matrix (anti-reflexive matrix) with respect to P [12]. We denote the set of all reflexive (anti-reflexive) matrices by C_r^{n×n}(P) (C_a^{n×n}(P)). So C_r^{n×n}(P) (C_a^{n×n}(P)) is a special case of C_r^{n×m}(P, Q) (C_a^{n×m}(P, Q)).

In this paper, we consider the following problems.

Problem 1. Given B ∈ C^{h×n}, C ∈ C^{h×m}, D ∈ C^{m×l}, E ∈ C^{n×l}, find X ∈ C_a^{n×m}(P, Q) such that

BX = C,  XD = E.

Problem 2. Given X* ∈ C^{n×m}, find X̂ ∈ SE such that

||X* − X̂|| = min_{X ∈ SE} ||X* − X||,

where SE is the solution set of Problem 1.

Problem 2 is the optimal approximation problem associated with Problem 1. It occurs frequently in experimental design [13]. Here the matrix X* may be a matrix obtained from experiments that does not satisfy the structural requirement (being generalized anti-reflexive with respect to the matrix pair (P, Q)) and/or the matrix equations (BX = C, XD = E). The optimal estimate X̂ is the matrix that satisfies both restrictions and is the best approximation of X*. See for instance [14, 15].

This paper is organized as follows. In Section 2, we first study the special properties of matrices in C_a^{n×m}(P, Q); then, using these properties and the results of [7], we obtain the solvability conditions and the general solution of Problem 1. Section 3 is devoted to deriving the unique approximation solution of Problem 2 by applying the method of space decomposition. Finally, an algorithm and an example for obtaining the unique approximation solution are given.

2 Solvability conditions of Problem 1

First, we discuss the properties of matrices in C_a^{n×m}(P, Q).

Lemma 1.

1) If X ∈ C_a^{n×m}(P, Q) and A^H ∈ C_r^{n×h}(P) (A^H ∈ C_a^{n×h}(P)), then (AX)^H ∈ C_a^{m×h}(Q) ((AX)^H ∈ C_r^{m×h}(Q)).

2) If X ∈ C_a^{n×m}(P, Q) and A ∈ C_r^{m×l}(Q) (A ∈ C_a^{m×l}(Q)), then XA ∈ C_a^{n×l}(P) (XA ∈ C_r^{n×l}(P)).

Proof.

1) If A^H ∈ C_r^{n×h}(P), then

Q(AX)^H = QX^H PPA^H = −(AX)^H.

Hence (AX)^H ∈ C_a^{m×h}(Q). If A^H ∈ C_a^{n×h}(P), then

Q(AX)^H = QX^H PPA^H = (AX)^H.

Hence (AX)^H ∈ C_r^{m×h}(Q). We can prove 2) by the same method.

Lemma 2 [16]. Let E ∈ C^{n×h} and F ∈ C^{n×l} with F^H E = 0. Then we have
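The classical identity of this type for column-partitioned matrices with F^H E = 0 (i.e. with mutually orthogonal column spaces) is the block pseudoinverse formula (E, F)^+ = [E^+; F^+], the pseudoinverse of the partitioned matrix being obtained by stacking E^+ over F^+; this reading of the lemma is an assumption on our part. A NumPy sketch verifying it:

```python
import numpy as np

rng = np.random.default_rng(2)
# Build E and F with mutually orthogonal column spaces, so F^H E = 0.
M = np.linalg.qr(rng.standard_normal((6, 5)))[0]   # orthonormal columns
E = M[:, :3] @ rng.standard_normal((3, 2))         # 6 x 2
F = M[:, 3:] @ rng.standard_normal((2, 2))         # 6 x 2
assert np.allclose(F.T @ E, 0)                     # real case: F^H = F^T

# Block identity: pinv([E F]) = [pinv(E); pinv(F)] (stacked row blocks).
lhs = np.linalg.pinv(np.hstack([E, F]))
rhs = np.vstack([np.linalg.pinv(E), np.linalg.pinv(F)])
assert np.allclose(lhs, rhs)
```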

Lemma 3.

1) If B^H ∈ C_r^{n×h}(P) and C^H ∈ C_a^{m×h}(Q) (or B^H ∈ C_a^{n×h}(P) and C^H ∈ C_r^{m×h}(Q)), then B^+B ∈ C_r^{n×n}(P) and B^+C ∈ C_a^{n×m}(P, Q).

2) If D ∈ C_r^{m×l}(Q) and E ∈ C_a^{n×l}(P) (or D ∈ C_a^{m×l}(Q) and E ∈ C_r^{n×l}(P)), then DD^+ ∈ C_r^{m×m}(Q) and ED^+ ∈ C_a^{n×m}(P, Q).

Proof. We only prove 1), and 2) can be proved by the same methods.

1) Since B^H ∈ C_r^{n×h}(P), it is easy to prove the following equations:

PB^H = B^H,  BP = B.   (2.1)

From this, we have PB^+BP = B^+B, i.e. B^+B ∈ C_r^{n×n}(P). The equations (2.1) imply the following equation:

PB^+ = B^+.   (2.2)

Since C^H ∈ C_a^{m×h}(Q), it is also easy to obtain the following equation:

CQ = −C.   (2.3)

Combining (2.2) and (2.3), we obtain

PB^+CQ = −B^+C, i.e. B^+C ∈ C_a^{n×m}(P, Q).

If B^H ∈ C_a^{n×h}(P) and C^H ∈ C_r^{m×h}(Q), we can prove the conclusion by the same method.

Lemma 4. Let K ∈ C_r^{n×n}(P), G ∈ C_r^{m×m}(Q), F ∈ C^{n×m}, and denote M = KFG. Then the following statements are true.

1) If F ∈ C_r^{n×m}(P, Q), then M ∈ C_r^{n×m}(P, Q).

2) If F ∈ C_a^{n×m}(P, Q), then M ∈ C_a^{n×m}(P, Q).

3) If F = F1 + F2, where F1 ∈ C_a^{n×m}(P, Q) and F2 ∈ C_r^{n×m}(P, Q), then M ∈ C_a^{n×m}(P, Q) if and only if KF2G = 0. In that case, M = KF1G.

Proof.

1) PMQ = PKFGQ = (PKP)(PFQ)(QGQ) = KFG = M. Hence M ∈ C_r^{n×m}(P, Q).

2) PMQ = PKFGQ = (PKP)(PFQ)(QGQ) = K(−F)G = −M. Hence M ∈ C_a^{n×m}(P, Q).

3) M = KFG = K(F1 + F2)G = KF1G + KF2G; from 1) and 2), KF1G ∈ C_a^{n×m}(P, Q) and KF2G ∈ C_r^{n×m}(P, Q). If M ∈ C_a^{n×m}(P, Q), then M − KF1G ∈ C_a^{n×m}(P, Q), but M − KF1G = KF2G ∈ C_r^{n×m}(P, Q). According to conclusion 2) of Definition 2, we have KF2G = 0, i.e. M = KF1G. Conversely, if KF2G = 0, it is clear that M = KF1G + KF2G = KF1G ∈ C_a^{n×m}(P, Q).

Lemma 5.

1) If A ∈ C_r^{n×n}(P) and B ∈ C_a^{n×m}(P, Q), then AB ∈ C_a^{n×m}(P, Q).

2) If A ∈ C_r^{n×m}(P) and B ∈ C_a^{n×m}(P), then A^H B = 0 and A^+ B = 0.

Proof.

1) PABQ = (PAP)(PBQ) = −AB. So AB ∈ C_a^{n×m}(P, Q).

2) From Definition 3, we have

A^HB = A^HP^HPB = (PA)^HPB = A^H(−B) = −A^HB.

So A^HB = 0. From Definition 3, we also have

A^+B = A^+PPB = (PA)^+PB = A^+(−B) = −A^+B.

So A^+B = 0.
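Conclusion 2) of Lemma 5 is easy to confirm numerically. In this NumPy sketch (with an assumed diagonal P, chosen only for illustration), A is supported on the +1 eigenspace of P and B on the −1 eigenspace:

```python
import numpy as np

rng = np.random.default_rng(5)
P = np.diag([1., 1., -1., -1.])

# A with PA = A and B with PB = -B, built by projecting random matrices.
Z1, Z2 = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
A = (Z1 + P @ Z1) / 2
B = (Z2 - P @ Z2) / 2
assert np.allclose(P @ A, A) and np.allclose(P @ B, -B)

# Lemma 5, conclusion 2): A^H B = 0 and A^+ B = 0.
assert np.allclose(A.T @ B, 0)
assert np.allclose(np.linalg.pinv(A) @ B, 0)
```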

Denote

B̃ = [B1; B2] ∈ C^{2h×n},  C̃ = [C2; C1] ∈ C^{2h×m},  D̃ = [D1, D2] ∈ C^{m×2l},  Ẽ = [E2, E1] ∈ C^{n×2l},   (2.4)

where B1, B2, C1, C2, D1, D2, E1, E2 are given by (1.1), [B1; B2] stacks its blocks as rows, and [D1, D2] places them side by side.

Lemma 6. If B, C, D, E are given by (1.1), B̃, C̃, D̃, Ẽ are denoted by (2.4), and X ∈ C_a^{n×m}(P, Q), then the matrix equations (BX = C, XD = E) are equivalent to (B̃X = C̃, XD̃ = Ẽ).

Proof. According to (1.1), BX = C is equivalent to

B1X + B2X = C1 + C2.

From Lemma 1, (B1X)^H ∈ C_a^{m×h}(Q) and (B2X)^H ∈ C_r^{m×h}(Q). According to conclusion 2) of Definition 2, we have

B1X = C2,  B2X = C1,

i.e. B̃X = C̃. So BX = C is equivalent to B̃X = C̃. By applying similar methods, we can prove that XD = E is equivalent to XD̃ = Ẽ.

Lemma 7 [7]. If B, C, D, E are given by (1.1), then the matrix equations (BX = C, XD = E) have a solution in C^{n×m} if and only if

BB^+C = C,  ED^+D = E,  CD = BE.

Moreover, the general solution can be expressed as

X = B^+C + ED^+ − B^+BED^+ + (I_n − B^+B)Z(I_m − DD^+),  ∀Z ∈ C^{n×m}.
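Lemma 7 is Mitra's classical result for the pair (BX = C, XD = E): a solution exists iff BB^+C = C, ED^+D = E and CD = BE, and then X = B^+C + ED^+ − B^+BED^+ is a particular solution. The NumPy sketch below (with randomly generated data made consistent by construction) checks the conditions and the particular solution:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, h, l = 6, 5, 3, 2
# Generate consistent data from a hidden solution X_true.
X_true = rng.standard_normal((n, m))
B = rng.standard_normal((h, n)); C = B @ X_true
D = rng.standard_normal((m, l)); E = X_true @ D

Bp, Dp = np.linalg.pinv(B), np.linalg.pinv(D)
# Solvability conditions of the lemma.
assert np.allclose(B @ Bp @ C, C)
assert np.allclose(E @ Dp @ D, E)
assert np.allclose(C @ D, B @ E)

# Particular solution; adding (I - B^+B) Z (I - D D^+) gives all solutions.
X0 = Bp @ C + E @ Dp - Bp @ B @ E @ Dp
assert np.allclose(B @ X0, C)
assert np.allclose(X0 @ D, E)
```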

Theorem 1. If B, C, D, E are given by (1.1), then Problem 1 has a solution in C_a^{n×m}(P, Q) if and only if

B1B1^+C2 = C2,  B2B2^+C1 = C1,  E1D2^+D2 = E1,  E2D1^+D1 = E2,   (2.7)

C1D1 = B2E2,  C2D2 = B1E1.   (2.8)

Moreover, the general solution can be expressed as

X = X0 + KFG,  ∀F ∈ C_a^{n×m}(P, Q),   (2.9)

where

X0 = B1^+C2 + B2^+C1 + (I_n − B1^+B1 − B2^+B2)(E2D1^+ + E1D2^+),  K = I_n − B1^+B1 − B2^+B2,  G = I_m − D1D1^+ − D2D2^+.   (2.10)

Proof. Necessity: Since Problem 1 has a solution in C_a^{n×m}(P, Q), from Lemma 6 the matrix equations (B̃X = C̃, XD̃ = Ẽ) have a solution in C_a^{n×m}(P, Q) ⊆ C^{n×m}. From Lemma 7, we have

B̃B̃^+C̃ = C̃,  ẼD̃^+D̃ = Ẽ,  C̃D̃ = B̃Ẽ.   (2.11)

Combining (1.1), (2.4), Lemma 2 and Lemma 5 with the first equality of (2.11), we have

B̃^+ = (B1^+, B2^+),

i.e.

B1B1^+C2 = C2,  B2B2^+C1 = C1.   (2.12)

Combining Lemma 2, Lemma 5, (1.1) and (2.4) with the second equality of (2.11), we have

D̃^+ = [D1^+; D2^+],

i.e.

E2D1^+D1 = E2,  E1D2^+D2 = E1.   (2.13)

Using similar methods, from the third equality of (2.11) (in which C2D1 = B1E2 = 0 and C1D2 = B2E1 = 0 hold automatically by Lemma 5), we also have

C1D1 = B2E2,  C2D2 = B1E1.   (2.14)

Combining (2.12)-(2.14) yields (2.7) and (2.8).

Sufficiency: From Lemma 2 and Lemma 5, (2.7) and (2.8) are equivalent to (2.11) when (1.1) and (2.4) hold. From Lemma 7, it is then easy to see that the matrix equations (B̃X = C̃, XD̃ = Ẽ) have a solution in C^{n×m}. Moreover, the general solution can be expressed as

X = X0 + KFG,  ∀F ∈ C^{n×m},   (2.15)

where

X0 = B̃^+C̃ + (I_n − B̃^+B̃)ẼD̃^+.   (2.16)

According to (1.1), (2.4), Lemma 2 and Lemma 5, we have

B̃^+C̃ = B1^+C2 + B2^+C1,  B̃^+B̃ = B1^+B1 + B2^+B2,  ẼD̃^+ = E2D1^+ + E1D2^+.

So X0 in (2.16) is equivalent to X0 in (2.10). From Lemma 3, Lemma 5 and (2.11), it is easy to prove that

B̃X0 = C̃,  X0D̃ = Ẽ,  X0 ∈ C_a^{n×m}(P, Q).

Hence X0 is a solution of the matrix equations (B̃X = C̃, XD̃ = Ẽ) in C_a^{n×m}(P, Q). From Lemma 6, X0 is also a solution of the matrix equations (BX = C, XD = E) in C_a^{n×m}(P, Q).

In the following, we show that the general solution of Problem 1 can be expressed as (2.9) when (1.1), (2.4), (2.7) and (2.8) hold. Denote by SE the solution set of Problem 1 and by S the set consisting of the X expressed by (2.15) (S is the solution set of the matrix equations (B̃X = C̃, XD̃ = Ẽ) in C^{n×m}). Denote

K = I_n − B̃^+B̃,  G = I_m − D̃D̃^+.   (2.17)

From (1.1), (2.4) and Lemma 2, we have

B̃^+B̃ = B1^+B1 + B2^+B2,  D̃D̃^+ = D1D1^+ + D2D2^+.

So K and G in (2.17) are equivalent to K and G in (2.10), respectively. Since C_a^{n×m}(P, Q) ⊆ C^{n×m}, it is clear that SE ⊆ S. According to Lemma 3, we have K ∈ C_r^{n×n}(P) and G ∈ C_r^{m×m}(Q). According to Lemma 4, X = X0 + KFG ∈ C_a^{n×m}(P, Q) if and only if F ∈ C_a^{n×m}(P, Q); i.e. (2.15) reduces to (2.9), and S = SE, if and only if F ∈ C_a^{n×m}(P, Q) in (2.15). From Lemma 6, the general solution of Problem 1 can therefore be expressed as (2.9).

3 The solution of Problem 2

According to (2.9), it is easy to prove that if Problem 1 has a solution in C_a^{n×m}(P, Q), then the solution set SE is a nonempty closed convex set. Therefore, for any given X* ∈ C^{n×m}, there exists a unique optimal approximation solution of Problem 2.

Theorem 2. Given X* ∈ C^{n×m}, if B, C, D, E are denoted by (1.1) and satisfy the conditions of Theorem 1, then Problem 2 has a unique solution X̂ ∈ SE. Moreover, X̂ can be expressed as

X̂ = X0 + KX*_aG,   (3.1)

where X0, K, G are given by (2.10), and X*_a is given by the following equation:

X*_a = (X* − PX*Q)/2.   (3.2)

Proof. Denote K1 = I_n − K; it is easy to prove that K and K1 are orthogonal projection matrices satisfying KK1 = 0. Denote G1 = I_m − G; it is also easy to prove that G and G1 are orthogonal projection matrices satisfying GG1 = 0.

According to conclusion 2) of Definition 2, for any X* ∈ C^{n×m} there exist a unique X*_r ∈ C_r^{n×m}(P, Q) and a unique X*_a ∈ C_a^{n×m}(P, Q) which satisfy

X* = X*_r + X*_a,

where

X*_r = (X* + PX*Q)/2,  X*_a = (X* − PX*Q)/2.

From this, combining the invariance of the Frobenius norm under unitary transformations with the method of space decomposition, and according to (2.9), for any X ∈ SE we have

||X* − X||^2 = ||X*_r||^2 + ||K1(X*_a − X0)||^2 + ||K(X*_a − X0)G1||^2 + ||K(X*_a − X0)G − KFG||^2.

Since ||K(X*_a − X0)G1||^2, ||K1(X*_a − X0)||^2 and ||X*_r||^2 are constants, minimizing ||X* − X|| is equivalent to minimizing

||K(X*_a − X0)G − KFG||.   (3.3)

According to the definitions of K, X0 and G, it is easy to prove that KX0G = 0. So (3.3) is equivalent to

min ||KX*_aG − KFG||.   (3.4)

It is clear that F = X*_a + K1F*G1, ∀F* ∈ C_a^{n×m}(P, Q), is a solution of (3.4). Substituting this result into (2.9) yields (3.1).

Algorithm

1. Input B, C, D, E, P, Q, X*.

2. According to (1.1) compute B1, B2, C1, C2, D1, D2, E1, E2.

3. Compute C1D1, C2D2, B1E1, B2E2, B1^+C2, B2^+C1, E1D2^+, E2D1^+; if (2.7) and (2.8) hold, then go to step 4; otherwise stop.

4. According to (2.10) compute X0, K, G.

5. According to (3.2) compute X*_a.

6. Calculate X̂ from (3.1).
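For illustration, the whole algorithm can be sketched in NumPy. The explicit component formulas used below for the splittings, the solvability tests and (X0, K, G) follow the structure of Lemmas 2-7 and are our assumed reading of (1.1), (2.7), (2.8) and (2.10); the data are generated to be consistent with a hidden generalized anti-reflexive solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, h, l = 6, 5, 2, 2
P = np.diag([1., 1., 1., -1., -1., -1.])   # generalized reflection, n x n
Q = np.diag([1., 1., -1., -1., -1.])       # generalized reflection, m x m

# Hidden generalized anti-reflexive solution and consistent data (step 1).
Z = rng.standard_normal((n, m))
X_true = (Z - P @ Z @ Q) / 2               # P X_true Q = -X_true
B = rng.standard_normal((h, n)); C = B @ X_true
D = rng.standard_normal((m, l)); E = X_true @ D

# Step 2: the splittings (1.1).
B1, B2 = (B + B @ P) / 2, (B - B @ P) / 2
C1, C2 = (C + C @ Q) / 2, (C - C @ Q) / 2
D1, D2 = (D + Q @ D) / 2, (D - Q @ D) / 2
E1, E2 = (E + P @ E) / 2, (E - P @ E) / 2

# Step 3: solvability checks (our reading of (2.7)-(2.8)).
pinv = np.linalg.pinv
B1p, B2p, D1p, D2p = pinv(B1), pinv(B2), pinv(D1), pinv(D2)
assert np.allclose(B1 @ B1p @ C2, C2) and np.allclose(B2 @ B2p @ C1, C1)
assert np.allclose(E1 @ D2p @ D2, E1) and np.allclose(E2 @ D1p @ D1, E2)
assert np.allclose(C1 @ D1, B2 @ E2) and np.allclose(C2 @ D2, B1 @ E1)

# Step 4: particular solution and projectors (our reading of (2.10)).
K = np.eye(n) - B1p @ B1 - B2p @ B2
G = np.eye(m) - D1 @ D1p - D2 @ D2p
X0 = B1p @ C2 + B2p @ C1 + K @ (E2 @ D1p + E1 @ D2p)

# Steps 5-6: optimal approximation to an arbitrary X*.
X_star = rng.standard_normal((n, m))
Xa = (X_star - P @ X_star @ Q) / 2
X_hat = X0 + K @ Xa @ G

# X_hat solves Problem 1 and is generalized anti-reflexive.
assert np.allclose(B @ X_hat, C) and np.allclose(X_hat @ D, E)
assert np.allclose(P @ X_hat @ Q, -X_hat)
```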

Numerical analysis

Theorem 2 leads naturally to the above numerical algorithm for the solution of Problem 2. The process is numerically stable, because the singular value decomposition used to form the Moore-Penrose inverses is numerically stable. We can also verify that as X* approaches a solution of Problem 1, it becomes closer to the unique solution X̂ of Problem 2.

Example (n = 10, m = 8, h = 5, l = 4). [The matrices B, C, D, E, P, Q and X* of this example are given in the original; the entries of C are scaled by 1.0e+002 and are listed column-block by column-block (columns 1-4, then columns 5-8).] It is easy to see that B, C, D, E, P, Q, X* satisfy the required properties. Using MATLAB, we obtain the unique solution X̂ of Problem 2.

4 Conclusions

In this paper, we considered the generalized anti-reflexive solutions of the matrix equations (BX = C, XD = E), i.e. Problem 1. We also considered the solution nearest to a given matrix in the Frobenius norm, i.e. Problem 2. The solvability conditions and an explicit formula for the solution are given. Based on Theorems 1 and 2, an algorithm is presented to compute the nearest solution, and a numerical example is given to illustrate the results obtained in this paper.

Acknowledgments. The authors are very grateful to the referee for their valuable comments. They also thank Professor Marcos Raydan for his helpful suggestions.

This research was supported by the National Natural Science Foundation of China (10571047), the Scientific Research Fund of the Hunan Provincial Education Department of China (06C235), Central South University of Forestry and Technology (06Y017), and the Specialized Research Fund for the Doctoral Program of Higher Education (20060532014).

Received: 24/X/06. Accepted: 16/VII/07.

#683/06.

  • [1] J.H. Wilkinson, The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, 1965.
  • [2] M. Arav, D. Hershkowitz, V. Mehrmann and H. Schneider, The recursive inverse eigenvalue problem. SIAM J. Matrix Anal. Appl., 22 (2000), 392-412.
  • [3] R. Loewy and V. Mehrmann, A note on the symmetric recursive inverse eigenvalue problem. SIAM J. Matrix Anal. Appl., 25 (2003), 180-187.
  • [4] F.L. Li, X.Y. Hu and L. Zhang, Left and right inverse eigenpairs problem of skew-centrosymmetric matrices. Appl. Math. Comput., 177 (2006), 105-110.
  • [5] S.K. Mitra and M.L. Puri, Shorted operators and generalized inverses of matrices. Linear Algebra Appl., 25 (1979), 45-56.
  • [6] Z.Y. Peng, The inverse eigenvalue problem for Hermitian anti-reflexive matrices and its approximation. Appl. Math. Comput., 162 (2005), 1377-1389.
  • [7] S.K. Mitra, The matrix equations AX = C, XB = D. Linear Algebra Appl., 59 (1984), 171-181.
  • [8] K-W.E. Chu, Singular value and generalized singular value decompositions and the solution of linear matrix equations. Linear Algebra Appl., 88/89 (1987), 83-98.
  • [9] S.K. Mitra, A pair of simultaneous linear matrix equations A1XB1 = C1, A2XB2 = C2 and a matrix programming problem. Linear Algebra Appl., 131 (1990), 107-123.
  • [10] Y.L. Cheng, The iterative methods of solving matrix equations AX = C, XB = D. J. Nanjing Norm. Univ. (Natural Science Edition), 22(1) (1999), 1-3.
  • [11] H.C. Chen, Generalized reflexive matrices: special properties and applications. SIAM J. Matrix Anal. Appl., 19 (1998), 140-153.
  • [12] J.L. Chen and X.H. Chen, Special Matrices. Qinghua University Press, Beijing, China, 2001 (in Chinese).
  • [13] T. Meng, Experimental design and decision support. In: C. Leondes (Ed.), Expert Systems: The Technology of Knowledge Management and Decision Making for the 21st Century, Vol. 1, Academic Press, 2001.
  • [14] N.J. Higham, Computing a nearest symmetric positive semidefinite matrix. Linear Algebra Appl., 103 (1988), 103-118.
  • [15] L. Zhang, The approximation on the closed convex cone and its numerical application. Hunan Ann. Math., 6 (1986), 43-48.
  • [16] G.H. Golub and C.F. Van Loan, Matrix Computations. Johns Hopkins University Press, Baltimore, Maryland, 1989.
Publication in this collection: 02 Apr 2008.

Sociedade Brasileira de Matemática Aplicada e Computacional (SBMAC), São Carlos, SP, Brazil.