
Numerical resolution of cone-constrained eigenvalue problems

A. Pinto da Costa (I); Alberto Seeger (II)

(I) Universidade Técnica de Lisboa, Instituto Superior Técnico, Departamento de Engenharia Civil e Arquitectura and ICIST, Avenida Rovisco Pais, 1049-001 Lisboa, Portugal. E-mail: apcosta@civil.ist.utl.pt

(II) University of Avignon, Department of Mathematics, 33 rue Louis Pasteur, 84000 Avignon, France. E-mail: alberto.seeger@univ-avignon.fr

ABSTRACT

Given a convex cone K and matrices A and B, one wishes to find a scalar λ and a nonzero vector x satisfying the complementarity system

K ∋ x ⊥ (Ax - λBx) ∈ K+.

This problem arises in mechanics and in other areas of applied mathematics. Two numerical techniques for solving such kind of cone-constrained eigenvalue problem are discussed, namely, the Power Iteration Method and the Scaling and Projection Algorithm.

Mathematical subject classification: 65F15, 90C33.

Keywords: complementarity condition, generalized eigenvalue problem, power iteration method, scaling and projection algorithm.

1 Introduction

The Euclidean space ℝⁿ is equipped with the inner product 〈y, x〉 = yT x and the associated norm ||·||. The dimension n is greater than or equal to 2. Orthogonality with respect to 〈·,·〉 is indicated by means of the symbol ⊥. Throughout this work, one assumes that

K is a closed convex cone in ℝⁿ, A and B are real matrices of order n, and 〈x, Bx〉 > 0 for all x ∈ K\{0}. (1)

We are interested in solving numerically the following cone-constrained generalized eigenvalue problem. As usual, K+ indicates the positive dual cone of K.

Problem 1.

Find λ ∈ ℝ and a nonzero vector x ∈ ℝⁿ such that

K ∋ x ⊥ (Ax - λBx) ∈ K+.

Problem 1 arises in mechanics [9, 10] and in various areas of applied mathematics [4, 5, 6, 12, 13]. The specific meanings of A and B depend on the context. It is important to underline that, in the present work, the matrices A and B are not necessarily symmetric.

An interesting particular case of Problem 1 is the so-called Pareto eigenvalue problem, whose precise formulation reads as follows.

Problem 2.

Find λ ∈ ℝ and a nonzero vector x ∈ ℝⁿ such that

ℝⁿ₊ ∋ x ⊥ (Ax - λx) ∈ ℝⁿ₊.

For the general theory behind Problems 1 and 2, the reader is referred to [7, 11, 15, 16]. This note focuses only on numerical issues. To be more precise, we discuss two specific algorithms for solving cone-constrained eigenvalue problems, namely, the Scaling and Projection Algorithm (SPA) and the Power Iteration Method (PIM). Both techniques have some features in common, but they are different in spirit. A third algorithm, based on optimization techniques, is formulated and left open for future examination.

2 Numerical experience with the SPA

All computational tests are carried out for the particular choice B = In, but the formulation of the SPA is given for a general matrix B as in (1). The SPA is an iterative method designed for solving Problem 1 written in the following equivalent form:

Problem 3.

Find λ ∈ ℝ and vectors x, y ∈ ℝⁿ such that

K ∋ x ⊥ y ∈ K+,  y = Ax - λBx,

together with the normalization condition

ø(x) = 1. (2)

The normalization condition (2) prevents x from being equal to 0. Adding such a condition does not change the solution set of the original cone-constrained generalized eigenvalue problem. As an example of function ø, one may think of

ø(x) = 〈e, x〉, with e taken from the interior of K+, (3)

but other options are also possible. Our favorite choice is the linear function (3) because in such a case the set

Kø = {x ∈ K : ø(x) = 1}

is not just compact, but also convex. In the parlance of the theory of convex cones, the set Kø corresponds to a "base" of K. Unless explicitly stated otherwise, we assume that the choice (3) is in force.

The SPA generates a sequence {xt}t ≥ 0 in Kø, a sequence {λt}t ≥ 0 of real numbers, and a sequence {yt}t ≥ 0 of vectors in ℝⁿ. The detailed description of the algorithm is as follows:

  • Initialization: Fix a positive scaling factor s. Pick u ∈ K\{0} and let x0 = u/ø(u).

  • Iteration: One has a current point xt in Kø. Compute the Rayleigh-Ritz ratio

    λt = 〈xt,Axt〉/〈xt,Bxt〉 ,

    the vector yt = Axt – λt Bxt, and the projection

    vt = ΠK[xt - syt].

    The projection vt is necessarily in K\{0}. Proceed now to a ø-normalization:

    xt+1 = vt/ø(vt) .
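
To make the above iteration concrete, here is a minimal numerical sketch of the SPA for the Paretian case treated below (K = ℝⁿ₊, B = In, ø(x) = x1 + ... + xn). The function name, the stopping test and the default tolerance are illustrative choices of ours, not part of the original formulation.

```python
import numpy as np

def spa_pareto(A, u, s=0.1, tol=1e-10, max_iter=2000):
    """Sketch of the SPA for K = R^n_+, B = I and phi(x) = <e, x> (sum)."""
    x = u / u.sum()                          # x0 = u / phi(u)
    for _ in range(max_iter):
        lam = (x @ A @ x) / (x @ x)          # Rayleigh-Ritz ratio (B = I)
        y = A @ x - lam * x                  # y_t = A x_t - lam_t B x_t
        v = np.maximum(x - s * y, 0.0)       # Pi_K[x_t - s y_t] for the Pareto cone
        x_new = v / v.sum()                  # phi-normalization
        if np.linalg.norm(x_new - x) < tol:  # the sequence {x_t} has settled
            return lam, x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```

By Theorem 1 below, if the generated sequence converges, then the returned pair corresponds to a solution of Problem 3.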

One could perfectly well consider a scaling factor that is updated at each iteration, but in this section we stick to the case of a fixed scaling factor. The case of a variable scaling factor will be commented on in Section 4. The next convergence result is taken from [11].

Theorem 1. If {xt}t ≥ 0 converges to a certain vector x̄, then

(a) {λt}t ≥ 0 converges to λ̄ = 〈x̄, Ax̄〉/〈x̄, Bx̄〉.

(b) {yt}t ≥ 0 converges to ȳ = Ax̄ - λ̄Bx̄.

(c) (x̄, λ̄, ȳ) solves Problem 3.

2.1 Applying the SPA to the Pareto eigenvalue problem

The Pareto eigenvalue problem already exhibits many of the mathematical difficulties arising in the context of a general convex cone K. While dealing with the Pareto cone one chooses

ø(x) = x1 + x2 + ... + xn

as normalizing function. This corresponds to the linear choice (3) with e = (1, 1, ..., 1)T belonging to the interior of ℝⁿ₊.

A few words on terminology are appropriate at this point. One usually refers to

σpareto(A) = {λ ∈ ℝ : (x, λ) solves Problem 2}

as the Pareto spectrum (or set of Pareto eigenvalues) of A. The map σpareto has some similarities with the classical spectral map, but the differences are even more numerous. It deserves to be stressed that the cardinality of σpareto(A) is finite, but its maximal value grows exponentially with respect to the dimension n of the underlying Euclidean space. Thus, a matrix of a relatively small size could have a very large number of Pareto eigenvalues. In view of this observation, it is simply not reasonable to try to compute all the elements of σpareto(A). A more realistic goal is finding just one or a bunch of them.

2.1.1 Testing on a 2 × 2 matrix with 3 Pareto eigenvalues

We start by considering the 2 × 2 matrix

whose Pareto spectrum is displayed in Table 1. For ease of visualization, the Pareto eigenvalues are arranged in increasing order. The corresponding solutions are named starting from S1 up to S3.

For this small matrix, the SPA was able to detect all the solutions. Table 2 shows which solution was detected as a function of the scaling parameter s, varied in steps of 0.1 units.

A first conclusion that can be drawn from this example is that the detected solution does not need to be in the same face of the Pareto cone as the initial vector u. Which solution is detected depends on the initial point u and, perhaps more significantly, on the size of the scaling factor. This and subsequent tests show that no changes in the detected solution or in the number of iterations occur once the parameter s is taken beyond a certain threshold value.

Observe that the solutions S1 and S3 were found for several of the values assumed by s, whereas the solution S2 was detected only with the specific choice s = 1. In short, some Pareto eigenvalues are more likely to be found than others.

Remark 1. For a given u, the detected solution and the number of iterations required by the SPA to achieve convergence may be very sensitive to the parameter s. In Table 2 one observes substantial changes while passing from s = 0.9 to s = 1.1.

2.1.2 Testing on a 3 × 3 matrix with 9 Pareto eigenvalues

We now test the SPA on the matrix

whose Pareto spectrum is displayed in Table 3. This specific matrix has 9 Pareto eigenvalues and, probably, this is the largest possible cardinality for the Pareto spectrum of a 3 × 3 matrix. We know for sure that a 3 × 3 matrix cannot have 11 or more Pareto eigenvalues. We have not yet found a 3 × 3 matrix with 10 Pareto eigenvalues and we seriously doubt that such a matrix exists.

The challenge that (6) offers to the SPA is not the "large" number of Pareto eigenvalues, but the fact that some of them are hard to detect numerically. In Table 4 and in the sequel, the notation Sj/m indicates that the solution Sj was found within m iterations and the symbol ∞ indicates that convergence did not occur within 2000 iterations. Some of the conclusions that can be drawn from Table 4 are:

i) Given an initial point u, it is possible to recover several solutions to the Pareto eigenvalue problem by letting the scaling factor s assume different values. One should not expect, however, to recover all the solutions (unless, of course, one also changes the initial point).

ii) Given an initial point u, there are values of s for which the SPA does not converge. It is very difficult to identify the convergence region

S(u) = {s > 0 : the SPA converges when initialized at u}

for the scaling factor, and to see what kind of structure such a region possesses. Table 4 shows that S(u) is not necessarily an interval.

2.1.3 Testing on random matrices and random initial points

Table 5 has been produced as follows. For each dimension n ∈ {2, 3, ..., 9}, one generates a collection of 1000 stochastically independent random matrices whose entries are uniformly distributed on [-1, 1]. For each one of these 1000 random matrices, one generates a collection of 100 stochastically independent random initial vectors with uniform distribution on the hypercube [0, 1]ⁿ. One applies the SPA to each one of the 10⁵ = 1000 × 100 random pairs (A, u) and counts the number of times in which convergence occurs within 2000 iterations.
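
For the record, here is a sketch of how one such random pair (A, u) may be drawn; the generator and the absence of an explicit seed are our choices.

```python
import numpy as np

rng = np.random.default_rng()               # one generator for the whole batch
n = 5                                       # dimension of the test problem
A = rng.uniform(-1.0, 1.0, size=(n, n))     # entries i.i.d. uniform on [-1, 1]
u = rng.uniform(0.0, 1.0, size=n)           # initial point uniform on [0, 1]^n
# lam, x = spa_pareto(A, u, s=0.2)          # one trial of the experiment
```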

As one can see from Table 5, the best performances of the SPA are obtained when the scaling factor s lies in the interval [0.1, 0.4]. When s is taken in that range, the percentages of convergence are all above 90 per cent. This good news speaks favorably of the SPA. The same experiment as in Table 5 is carried out for n ∈ {10, 20, 30, 100}, but with a different set of values for the scaling factor s. Table 6 displays the obtained results. One sees that the overall performance of the SPA diminishes as the dimension n gets larger. For n = 100 the best score of convergence is 78%. This is still acceptable.

Table 7 reports on the case n ∈ {200, 300, 400, 500, 1000}. In this range of dimensions, the performance of the SPA is a bit less satisfactory. For instance, when n = 1000 the best score of convergence is 62%.

2.2 Applying the SPA to a circular eigenvalue problem

A circular eigenvalue problem is a prototype of an eigenvalue problem with constraints described by a nonpolyhedral cone:

Problem 4.

Find λ ∈ ℝ and a nonzero vector x ∈ ℝⁿ such that

Lθ ∋ x ⊥ (Ax - λx) ∈ (Lθ)+.

Here, Lθ stands for the n-dimensional circular cone whose half-aperture angle is θ ∈ ]0, π/2[, and whose revolution axis is the ray generated by the canonical vector en = (0, ..., 0, 1)T. In short,

Lθ = {x ∈ ℝⁿ : ||x|| cos θ ≤ 〈en, x〉}.

A circular eigenvalue problem differs substantially from a Pareto eigenvalue problem, the main difference being that a circular spectrum

σθ(A) = {λ ∈ ℝ : (x, λ) solves Problem 4}

does not need to be finite. As shown in [16], the set σθ (A) could perfectly well contain one or more intervals of positive length.

As seen in Figure 1, the dual cone of Lθ is again a circular cone. In fact, as reported in [3], one has the general relation (Lθ)+ = L(π/2)-θ. The projection of z ∈ ℝⁿ onto Lθ can be evaluated by using an explicit formula: ΠLθ[z] equals z itself when z ∈ Lθ, it equals 0 when z lies in the polar cone -(Lθ)+, and otherwise it equals the vector v on the boundary of Lθ that is nearest to z. The expression of v can be found in [1] or obtained simply by working out the Karush-Kuhn-Tucker optimality conditions for the convex minimization problem of minimizing ||v - z|| over v ∈ Lθ.
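
For completeness, here is a sketch of the resulting three-case evaluation of ΠLθ, written for the equivalent description Lθ = {x ∈ ℝⁿ : ||x̃|| ≤ xn tan θ}, where x̃ gathers the first n - 1 components of x. The case analysis is our rendering of the formula from [1]; the helper name is illustrative.

```python
import numpy as np

def project_circular(z, theta):
    """Projection onto L_theta = {x : ||x_tilde|| <= x_n * tan(theta)}.

    theta is the half-aperture angle in radians, with 0 < theta < pi/2.
    """
    zt, zn = z[:-1], z[-1]               # split z = (z_tilde, z_n)
    s = np.linalg.norm(zt)
    t = np.tan(theta)
    if s <= zn * t:                      # z already belongs to the cone
        return z.copy()
    if t * s <= -zn:                     # z lies in the polar cone -(L_theta)+
        return np.zeros_like(z)
    vn = (zn + t * s) / (1.0 + t * t)    # height of the projected point
    v = np.empty_like(z)
    v[:-1] = (t * vn / s) * zt           # radial part sits on the boundary ray
    v[-1] = vn
    return v
```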

For dealing with the circular eigenvalue problem, one uses

ø(x) = 〈en, x〉 (9)

as normalization function. This corresponds to the linear choice (3) with vector e = en. Note that en belongs to the interior of (Lθ)+.

A convex cone whose half-aperture angle is 90 degrees is no longer pointed, so the SPA implemented with the linear normalizing function (9) could run into numerical troubles if θ increases beyond a certain threshold.

We are interested in examining the performance of the SPA as a function of the half-aperture angle θ of the circular cone. The numerical experiment reported in Table 8 concerns a sample of 10⁵ random pairs (A, u) generated in the same way as in Tables 5, 6, and 7. The SPA is tested with three different values of s and with values of θ incremented in steps of 10 degrees.

Some of the conclusions that can be drawn from Table 8 are:

i) For each tested scaling factor s and dimension n, one observes that the rate of success of the SPA decreases as the half-aperture angle increases. The best results are obtained for circular cones with moderate half-aperture angle. As predicted by common sense, dealing with circular cones with half-aperture angle near 90 degrees is problematic.

ii) As we already saw in previous numerical experiments, a right choice of scaling factor is crucial for obtaining a good performance of the SPA. As a general rule, when dealing with large dimensional problems one should reduce the size of the parameter s. As one can see in the middle table and in the right table, the choice s = 1 is appropriate only if the half-aperture angle does not exceed 25 degrees. Beyond this angular threshold, the performance of the SPA deteriorates very quickly. By contrast, if one uses a smaller scaling factor, say s = 0.1, then the SPA works perfectly well in a wider range of half-aperture angles.

3 Theory and numerical experience with the PIM

The power iteration method is a well-known algorithm for finding the largest eigenvalue in modulus of a given matrix. To fix the ideas, consider the case of an n × n real matrix M with real eigenvalues λ1, ..., λn such that

|λ1| > |λ2| > ... > |λn|.

The classical power iteration method consists in generating a sequence {xt}t ≥ 0 by selecting an initial point x0 and applying the recursive formula

xt+1 = Mxt/ø(Mxt), (10)

where ø is typically the Euclidean norm ||·|| or a certain linear function. One expects the sequence {xt}t ≥ 0 to converge to an eigenvector u1 associated with the eigenvalue λ1. See, for instance, Section 13.4 in Schatzman's book [14] for a precise formulation of the convergence result. The limit u1 necessarily satisfies the normalization condition ø(u1) = 1. The eigenvalue λ1 is obtained by using either the relation λ1 = ø(Mu1) or the Rayleigh-Ritz ratio λ1 = 〈u1, Mu1〉/||u1||².
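
As a reference point for what follows, here is a minimal sketch of the classical power iteration with ø = ||·||; the stopping test, which tolerates the sign oscillation occurring when λ1 is negative, is ours.

```python
import numpy as np

def power_iteration(M, x0, tol=1e-12, max_iter=10000):
    """Classical power iteration: x_{t+1} = M x_t / ||M x_t||."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(max_iter):
        w = M @ x
        x_new = w / np.linalg.norm(w)
        # iterates may flip sign at each step when lambda_1 < 0
        if min(np.linalg.norm(x_new - x), np.linalg.norm(x_new + x)) < tol:
            return x_new @ M @ x_new, x_new   # Rayleigh-Ritz ratio, eigenvector
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```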

Remark 2. If one wishes to find a different eigenvalue, then one chooses a convenient shifting parameter β and applies the iteration scheme (10) to the shifted matrix M - βIn. By the way, the so-called inverse power iteration method consists in applying (10) to the inverse of the matrix M - βIn.

It is not difficult to adjust the power iteration method to the case of a generalized eigenvalue problem involving a pair (A, B) of matrices. However, incorporating a cone K into the picture is a matter that requires more careful thinking. For the sake of clarity in the discussion, we leave aside the matrix B and concentrate on K. The problem at hand reads as follows.

Problem 5.

Find λ ∈ ℝ and a nonzero vector x ∈ ℝⁿ such that

K ∋ x ⊥ (Ax - λx) ∈ K+.

In principle, the usual spectrum of A has very little to do with

σK(A) = {λ ∈ ℝ : (x, λ) solves Problem 5},

the latter set being called the K-spectrum (or set of K-eigenvalues) of A. However, regardless of the choice of K, one always has the inclusion

σK(A) ⊆ [λmin(Asym), λmax(Asym)], (11)

with λmin(Asym) and λmax(Asym) denoting, respectively, the smallest and the largest eigenvalue of the symmetric part

Asym = (A + AT)/2

of the matrix A. The computation of the extremal eigenvalues of a symmetric matrix presents no difficulty.

Lemma 1. Consider any β such that

β > λmax(Asym). (12)

Then, the following conditions are equivalent:

(a) x is a K-eigenvector of A.

(b) x and ΠK[βx - Ax] are nonzero vectors pointing in the same direction.

Proof. For all λ ∈ ℝ and x ∈ ℝⁿ, one has the general identity

βx - Ax = (β - λ)x - (Ax - λx).

This relation is at the core of the proof. Let x be a K-eigenvector of A and let λ be the associated K-eigenvalue. In particular, x is a nonzero vector in K. Furthermore, the combination of (11) and (12) shows that γ = β - λ is a positive scalar. By applying Moreau's theorem [8] to the orthogonal decomposition

βx - Ax = γx + (λx - Ax), with γx ∈ K, λx - Ax ∈ -K+, 〈γx, λx - Ax〉 = 0,

one obtains

ΠK[βx - Ax] = γx. (13)

This confirms that x and ΠK[βx - Ax] are nonzero vectors pointing in the same direction. Conversely, suppose that x ≠ 0 and that (13) holds for some positive scalar γ. Then, one can draw the following conclusions. Firstly, x ∈ K because γx ∈ K and γ > 0. Secondly, in view of Moreau's theorem, the equality (13) yields

γx - (βx - Ax) ∈ K+,

〈γx, γx - (βx - Ax)〉 = 0.

We have proven in this way that x is a nonzero vector satisfying the complementarity system

K ∋ x ⊥ (Ax - (β - γ)x) ∈ K+.

In other words, x is a K-eigenvector of A and λ = β - γ is the associated K-eigenvalue.

Condition (b) in Lemma 1 provides the inspiration for the formulation of the PIM in the context of a cone-constrained eigenvalue problem. The detailed description of this algorithm is as follows:

  • Initialization: Select a shifting parameter β as in (12). Pick a nonzero vector u ∈ K and let x0 = u/ø(u).

  • Iteration: One has a current point xt in Kø. Compute first the projection

    vt = ΠK[βxt - Axt] (14)

    and then the ø-normalized vector

    xt+1 = vt/ø(vt). (15)

As was the case with the SPA, a sequence generated by the PIM remains in the compact set Kø. The emphasis of the PIM is on the computation of K-eigenvectors. If one wishes, at each iteration one can also evaluate the Rayleigh-Ritz ratio

λt = 〈xt, Axt〉/||xt||²,

but this is not really necessary.
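
In the Paretian case the whole method fits in a few lines; a sketch is given below. The default shift β = λmax(Asym) + 1 satisfies (12), but the margin of 1 is an arbitrary choice of ours (as observed in Section 3.1.1 below, taking β excessively large slows the method down).

```python
import numpy as np

def pim_pareto(A, u, beta=None, tol=1e-10, max_iter=2000):
    """Sketch of the PIM for K = R^n_+ with phi(x) = <e, x>."""
    if beta is None:
        a_sym = 0.5 * (A + A.T)                       # symmetric part of A
        beta = np.linalg.eigvalsh(a_sym).max() + 1.0  # beta > lam_max(A_sym), cf. (12)
    x = u / u.sum()                                   # x0 = u / phi(u)
    for _ in range(max_iter):
        v = np.maximum(beta * x - A @ x, 0.0)         # (14) with K = R^n_+
        x_new = v / v.sum()                           # phi-normalization (15)
        if np.linalg.norm(x_new - x) < tol:
            lam = (x_new @ A @ x_new) / (x_new @ x_new)   # cf. (16)
            return lam, x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```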

Theorem 2. Let {xt}t ≥ 0 be generated by the PIM.

(a) The general term xt is well defined, i.e., the projection vt is different from 0.

(b) If {xt}t ≥ 0 converges to a certain x̄, then x̄ is a K-eigenvector of A.

(c) The K-eigenvalue λ̄ of A associated to x̄ is given by

λ̄ = 〈x̄, Ax̄〉/||x̄||² = β - ø(v̄), (16)

where v̄ = ΠK[βx̄ - Ax̄].

Proof. Suppose, on the contrary, that vt = 0 at some iteration t. From the usual characterization of a projection, one has

〈βxt - Axt, w〉 ≤ 0 ∀ w ∈ K.

In particular, 〈βxt - Axt, xt〉 ≤ 0. Since xt ≠ 0, one obtains

β ≤ 〈xt, Axt〉/||xt||² ≤ λmax(Asym),

a contradiction with assumption (12). This takes care of (a). Suppose now that {xt}t ≥ 0 converges to x̄. By passing to the limit in (14) one sees that {vt}t ≥ 0 converges to

v̄ = ΠK[βx̄ - Ax̄].

As done before with each term vt, one can also prove that v̄ is different from 0. By passing now to the limit in (15), one gets

x̄ = v̄/ø(v̄).

Since ø(v̄) is a positive scalar, the nonzero vectors x̄ and v̄ are pointing in the same direction. In view of Lemma 1, one concludes that x̄ is a K-eigenvector of A. The first equality in (16) is obvious, while the second one is implicit in the proof of Lemma 1.

The inequality (12) has been helpful at several stages. Lemma 1 and Theorem 2 remain true if one works under the weaker assumption

β > sup{〈x, Ax〉 : x ∈ K, ||x|| = 1},

but the evaluation of the above supremum could be a bothersome task. From a purely theoretical point of view, one could even work under the assumption

β > sup σK(A). (17)

In practice, it makes no sense relying on (17) because σK(A) is unknown a priori.

3.1 Applying the PIM to the Pareto eigenvalue problem

3.1.1 Testing on a small size matrix

We start by testing the PIM on the matrix (6). Recall that (6) is a nonsymmetric 3 × 3 matrix with 9 Pareto eigenvalues. All of them are positive and the largest one is equal to 10. A straightforward computation yields the value of λmax(Asym) for this matrix.

The symbol ∞ in Table 9 means, as usual, that convergence does not occur within 2000 iterations. Observe that the number of iterations required by the PIM to achieve convergence increases as β moves further and further away from the threshold value in (17). So, it is not a good idea to take β excessively large.

On the other hand, the symbol E in Table 9 indicates that an error occurs while performing the first iteration of the PIM. To be more precise, the PIM must stop because projecting onto K produces the zero vector. This unfavorable situation does not contradict statement (a) of Theorem 2. What happens is simply that the value of β under consideration does not satisfy assumption (12), nor even the weaker condition (17). In connection with this issue, it is worthwhile mentioning that if one uses a parameter β smaller than the right-hand side of (17), then the PIM may still work, in the sense that each projection vt may be different from 0 and {xt} may converge to a K-eigenvector of A. Logically speaking, (17) is a sufficient condition for the PIM to work properly, but such a condition is not necessary.

In Table 10 one applies the PIM to the matrix (6), but one considers a sample of 10⁸ random initial points. The question that is being addressed is the following: to which Pareto eigenvalue of (6) is the PIM most likely to converge?

3.1.2 Testing on random matrices and random initial points

The same kind of numerical experiment as in Section 2.1.3 is now carried out with the PIM. To avoid excessive repetitions, we consider only the case of a dimension n ranging over {3,6,12}.

Since one works with a sample of random pairs (A, u), it is not entirely clear how to choose the parameter β in order to optimize the performance of the PIM. In Table 11 we explore what happens with the choices

Note that β1 is smaller than λmax (Asym). The heuristic selection rule (18) may look strange at first sight, but it is based on statistical information concerning the distribution of the random variable λmax (Asym) when n ∈ {3, 6, 12}.

The figures in Table 11 refer to percentages of success. The word "success" means, of course, that the PIM has worked properly, failure corresponding either to lack of convergence or to a projection equal to 0. As one can see, the percentages of success of the PIM do not change much when β ranges over (18).

In Figure 2 we enlarge the field of values taken by the parameter β. The idea is to identify the largest interval of values of β leading to a favorable and stable rate of success.


One observes a very interesting plateau phenomenon: for each dimension n, there is a corresponding range of values of β in which the PIM performs at its top level. The height of each plateau depends on the dimension: the height decreases as n increases. Note also that the plateau interval is almost identical for the three tested values of n.

4 SPA versus PIM

From a conceptual point of view, the SPA and the PIM are different methods. However, they share in common at least two important steps: a projection onto the cone K and a ø-normalization procedure.

Are the SPA and the PIM two different algorithms after all? The answer is yes if the scaling factor s and the shifting parameter β are not allowed to change from one iteration to the next. If one is more flexible and considers instead a sequence {st}t ≥ 0 of scaling factors and a sequence {βt}t ≥ 0 of shifting parameters, then one can reformulate the SPA as a PIM, and vice versa.

Proposition 1. Let B = In. The following statements are true:

(a) Implementing the t-th iteration of the PIM with βt > 〈xt, Axt〉/||xt||² produces the same outcome as implementing the t-th iteration of the SPA with

st = 1/(βt - λt). (19)

(b) Conversely, implementing the t-th iteration of the SPA with st > 0 produces the same outcome as implementing the t-th iteration of the PIM with

βt = λt + 1/st. (20)

Proof. The projection map ΠK and the normalizing function ø are both positively homogeneous. Hence, ø-normalizing the projected vector ΠK[xt - st yt] produces the same result as ø-normalizing the projected vector ΠK[(1/st)xt - yt]. It suffices then to observe that

(1/st)xt - yt = (λt + 1/st)xt - Axt, (21)

with λt = 〈xt, Axt〉/||xt||². Of course, one rewrites the equality (21) in the form (19) for proving part (a), and in the form (20) for proving part (b).

4.1 Numerical experience with a sequence {st}t ≥ 0

We have tested the SPA again on random matrices and random initial points, but this time we have used a variable scaling factor st. The attention is focused on dimensions not exceeding n = 30. Inspired by the relation (19) and the experimental data reported in Figure 2, we choose a scaling factor st obeying a feedback rule.

The word "feedback" is used here to indicate that st is updated by taking into account the current value xt of the state vector x.
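
One plausible instantiation of such a feedback rule, suggested by relation (19), is st = 1/(β - λt) with β a fixed shift taken from the plateau region of Figure 2. We stress that this specific form, sketched below, is our illustrative assumption and not necessarily the exact rule used for Table 12.

```python
# Hypothetical feedback rule suggested by (19): s_t = 1 / (beta - lam_t),
# with beta a fixed shift picked inside the plateau of Figure 2.
def feedback_scaling(beta, lam_t):
    assert beta > lam_t, "the shift must exceed the current Rayleigh-Ritz ratio"
    return 1.0 / (beta - lam_t)
```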

Table 12 reports the outcome of this experiment. The most important lesson is the following: the SPA implemented with the feedback rule performs as well as the SPA implemented with the constant scaling factor s that is optimal for the prescribed dimension. For instance, when n = 9, the feedback rule leads to a rate of success of 93%, a percentage that matches the rate of success of the SPA implemented with the optimal choice s = 0.1 (cf. Table 5).

5 By way of conclusion

Our main comments on the SPA and the PIM are as follows:

1. The SPA generates a bounded sequence {(xt, λt, yt)}t ≥ 0 that depends on a certain initial point u and a certain scaling parameter s > 0. If the sequence converges, then its limit (x̄, λ̄, ȳ) is a solution to Problem 3.

2. The SPA does not aim at finding all the solutions to Problem 3. This is not just because the number of solutions could be very large, but also because some solutions could be extremely hard to detect. It happens in practice that some solutions are hard to find even if the SPA is initialized at a convenient initial point. The existence of such "rare" solutions is somewhat intrinsic to the nature of the cone-constrained eigenvalue problem. An interesting open question is understanding why some solutions are so hard to detect while others are not. An appropriate theoretical analysis remains to be done.

3. The SPA was extensively tested in the Paretian case. One of the main conclusions of Section 2.1 is that the scaling factor s > 0 must be chosen with care if one wishes the SPA to produce a convergent sequence. Extensive numerical experimentation has been carried out for Pareto eigenvalue problems with random data. Specifically, the entries of the matrix A and the entries of the initial vector u were considered as independent random variables uniformly distributed over the interval [-1, 1]. For each dimension n, there is a convenient range of values for the scaling factor s. A good choice of s is one for which the SPA converges in a vast majority of cases, see Tables 5, 6, and 7.

4. The PIM has the advantage of relying on a parameter β that is not too sensitive with respect to dimensionality. Thanks to the plateau phenomenon described in Section 3.1.2, there is no need to put too much effort into selecting the shifting parameter β. One just needs to make sure that β lies in a certain range that is easily identifiable.

We end this work by mentioning an open alternative. Observe that Problem 3 is about finding a triplet (x, λ, y) that lies in the closed convex cone Q = K × ℝ × K+ and solves the system of nonlinear equations

Ax - λBx - y = 0,

〈x, y〉 = 0,

ø(x) - 1 = 0.

In principle, one could apply any method for solving systems of nonlinear equations on closed convex cones. Do such methods exist in the literature? Are they efficient? These questions open the way to a broad discussion that deserves a detailed treatment in a separate publication.

A natural idea that comes to mind is to find a global minimum over Q of the residual term

f(x, λ, y) = µ1||Ax - λBx - y||² + µ2〈x, y〉² + [ø(x) - 1]²,

where the µi are positive scalars interpreted as weight coefficients. For instance, Friedlander et al. [2] use a residual minimization technique for solving a so-called horizontal linear complementarity problem. However, in our particular context, one should not be too optimistic about residual minimization. Indeed, this technique may produce a local minimum that is not a global one.
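
For the Paretian case with B = In, the residual-minimization idea can be sketched as follows. The weights, the solver and the encoding of Q through bound constraints are illustrative choices of ours and, as noted above, the solver may stop at a local minimum that is not a global one.

```python
import numpy as np
from scipy.optimize import minimize

def residual(z, A, mu1=1.0, mu2=1.0):
    """f(x, lam, y) for K = R^n_+, B = I and phi(x) = <e, x>."""
    n = A.shape[0]
    x, lam, y = z[:n], z[n], z[n + 1:]
    r1 = A @ x - lam * x - y                # Ax - lam Bx - y = 0
    r2 = x @ y                              # <x, y> = 0
    r3 = x.sum() - 1.0                      # phi(x) - 1 = 0
    return mu1 * (r1 @ r1) + mu2 * r2 ** 2 + r3 ** 2

def solve_by_residual(A, z0):
    """Minimize the residual over Q = K x R x K+ (Pareto case)."""
    n = A.shape[0]
    bounds = [(0.0, None)] * n + [(None, None)] + [(0.0, None)] * n
    return minimize(residual, z0, args=(A,), method="L-BFGS-B", bounds=bounds)
```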

Last but not least, our original motivation was the solution of a very specific cone-constrained eigenvalue problem arising in mechanics (namely, detecting directional instabilities in elastic systems with frictional contact). The problem at hand is highly structured and will be treated in full detail in a forthcoming work. However, we want to mention here some features that could be of interest for non-specialists:

– Firstly, the cone defining the constraints is a Cartesian product K = K1 × ... × KN of a large number of closed convex cones living in Euclidean spaces of small dimension. As a consequence, the vector x = (x[1], ..., x[N]) breaks into N blocks and Problem 1 leads to a collection of subproblems (coupled, in general).

– Secondly, some of the blocks x[r] are unconstrained, i.e., Kr is a whole Euclidean space. This means that K is unpointed, a situation that one would prefer to avoid. One way to overcome this difficulty is to apply our algorithms to the constrained subproblems and to treat the unconstrained ones as in classical numerical linear algebra. Of course, all this requires some adjustments.

– Thirdly, the constrained blocks x[r] obey the unilateral contact law and the Coulomb friction law. Typically, Kr is a closed convex cone that depends on the coefficient of friction. It is also possible to accommodate the more general case of an anisotropic friction law.

The random data used in this paper do not reflect such specially structured matrices. Our primary concern has been to focus on problems for which the matrix A has no structure whatsoever.

Remark 3. The results presented in Tables 6 and 7 were produced with the IST Cluster, which is a Cluster 1350 computing system from IBM. It has a total of 70 computing nodes, each with 2 POWER dual-core processors at 2.3 GHz and 2 GB per core. All the nodes run the AIX 5.3 operating system. The parallel processes were connected using the MPI (Message Passing Interface) library. The programs developed during this study were written in FORTRAN.

Acknowledgments. We thank Prof. Moitinho de Almeida (Departamento de Engenharia Civil e Arquitectura, IST, Portugal) for the parallelization of one of the programs developed for this study.

Received: 26/V/08. Accepted: 23/IX/08.

#767/08.

  • [1] H.H. Bauschke and S.G. Kruk, Reflection-projection method for convex feasibility problems with an obtuse cone. J. Optim. Theory Appl., 120 (2004), 503-531.
  • [2] A. Friedlander, J.M. Martínez and S.A. Santos, Solution of linear complementarity problems using minimization with simple bounds. J. Global Optim., 6 (1995), 253-267.
  • [3] J.L. Goffin, The relaxation method for solving systems of linear inequalities. Math. Oper. Res., 5 (1980), 388-414.
  • [4] A. Iusem and A. Seeger, On pairs of vectors achieving the maximal angle of a convex cone. Math. Program., 104 (2005), 501-523.
  • [5] J.J. Júdice, M. Raydan, S.S. Rosa and S.A. Santos, On the solution of the symmetric eigenvalue complementarity problem by the spectral projected gradient algorithm. Numer. Algor., 47 (2008), 391-407.
  • [6] J.J. Júdice, I.M. Ribeiro and H.D. Sherali, The eigenvalue complementarity problem. Comput. Optim. Appl., 37 (2007), 139-156.
  • [7] P. Lavilledieu and A. Seeger, Existence de valeurs propres pour les systèmes multivoques: résultats anciens et nouveaux. Annal. Sci. Math. Québec, 25 (2000), 47-70.
  • [8] J.J. Moreau, Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires. C.R. Acad. Sci. Paris, 255 (1962), 238-240.
  • [9] A. Pinto da Costa, I.N. Figueiredo, J.J. Júdice and J.A.C. Martins, A complementarity eigenproblem in the stability of finite dimensional elastic systems with frictional contact. Complementarity: Applications, Algorithms and Extensions, 67-83. Applied Optimization Series, vol. 50. Eds. M. Ferris, O. Mangasarian and J.S. Pang, Kluwer Acad. Publ., Dordrecht. (1999).
  • [10] A. Pinto da Costa, J.A.C. Martins, I.N. Figueiredo and J.J. Júdice, The directional instability problem in systems with frictional contacts. Comput. Methods Appl. Mech. Engrg., 193 (2004), 357-384.
  • [11] A. Pinto da Costa and A. Seeger, Cone-constrained eigenvalue problems: theory and algorithms. Comput. Optim. Appl., 43/44 (2009), in press.
  • [12] M. Queiroz, J.J. Júdice and C. Humes, The symmetric eigenvalue complementarity problem. Math. Comp., 73 (2004), 1849-1863.
  • [13] P. Quittner, Spectral analysis of variational inequalities. Comm. Math. Univ. Carolinae, 27 (1986), 605-629.
  • [14] M. Schatzman, Analyse Numérique, Dunod, Paris. (2001).
  • [15] A. Seeger, Eigenvalue analysis of equilibrium processes defined by linear complementarity conditions. Linear Algebra Appl., 292 (1999), 1-14.
  • [16] A. Seeger and M. Torki, On eigenvalues induced by a cone constraint. Linear Algebra Appl., 372 (2003), 181-206.
