
Two modifications of the inertial Tseng extragradient method with self-adaptive step size for solving monotone variational inequality problems

  • Timilehin Opeyemi Alakoya, Lateef Olakunle Jolaoso, and Oluwatosin Temitope Mewomo
From the journal Demonstratio Mathematica

Abstract

In this work, we introduce two new inertial-type algorithms for solving variational inequality problems (VIPs) with monotone and Lipschitz continuous mappings in real Hilbert spaces. The first algorithm requires the computation of only one projection onto the feasible set per iteration, while the second requires only one projection onto a half-space; moreover, neither algorithm requires prior knowledge of the Lipschitz constant of the monotone mapping. Under some mild assumptions, we prove strong convergence results for the proposed algorithms to a solution of the VIP. Finally, we provide some numerical experiments to illustrate the efficiency and advantages of the proposed algorithms.

MSC 2010: 65K15; 47J25; 65J15; 90C33

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed, and convex subset of $H$. In this article, we consider the classical variational inequality problem (VIP), which is to find a point $x \in C$ such that

(1) $\langle Ax,\ y - x\rangle \ge 0 \quad \forall y \in C,$

where $A: H \to H$ is a given operator. The solution set of the VIP (1) is denoted by $\mathrm{VI}(C,A)$.

Variational inequality theory is an important tool in economics, engineering, mathematical programming, transportation, and in other fields (see, for example, [1,2,3,4,5,6,7,8]). Many numerical methods have been constructed for solving variational inequalities and related optimization problems, see [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27] and references therein.

One of the most popular methods for solving the problem (VIP) is the extragradient method (EGM). This method was introduced by Korpelevich [28] in 1976 as follows:

(2) $x_0 \in C, \quad y_n = P_C(x_n - \lambda Ax_n), \quad x_{n+1} = P_C(x_n - \lambda Ay_n), \quad n \ge 0,$

where $\lambda \in \left(0, \frac{1}{L}\right)$ and $P_C$ denotes the metric projection from $H$ onto $C$. The EGM was first introduced for solving saddle point problems and was later extended to VIPs in both Euclidean and Hilbert spaces. Its convergence requires only that the operator $A$ is monotone and $L$-Lipschitz continuous. If the solution set $\mathrm{VI}(C,A)$ is nonempty, then the sequence $\{x_n\}$ generated by algorithm (2) converges weakly to an element of $\mathrm{VI}(C,A)$.
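For concreteness, the following NumPy sketch runs the two-projection iteration (2). It is illustrative only: the feasible set, the skew-symmetric test operator, and all parameter values below are our own assumptions, and $\lambda$ must be supplied from $(0, \frac{1}{L})$.

```python
import numpy as np

def extragradient(A, x0, lam, project_C, tol=1e-8, max_iter=10_000):
    """Korpelevich's extragradient method (2): two projections per iteration.

    A         : monotone, L-Lipschitz operator (callable)
    lam       : fixed step size, assumed to lie in (0, 1/L)
    project_C : metric projection onto the feasible set C
    """
    x = project_C(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        y = project_C(x - lam * A(x))        # first projection (predictor)
        x_next = project_C(x - lam * A(y))   # second projection (corrector)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative data: A(x) = Mx with M skew-symmetric, hence monotone and
# 1-Lipschitz; C is the box [-1, 1]^2, whose projection is a simple clip.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
box = lambda v: np.clip(v, -1.0, 1.0)
x_sol = extragradient(lambda v: M @ v, x0=[0.7, -0.4], lam=0.5, project_C=box)
```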

In recent years, the EGM (2) has received great attention from many authors, who have improved it in various ways (see, for instance, [9,17,29,30,31,32] and the references therein). In order to obtain strong convergence of the EGM in real Hilbert spaces, Maingé [33] proposed a modified version of the algorithm as follows:

(3) $x_0 \in H, \quad y_n = P_C(x_n - \lambda_n Ax_n), \quad t_n = P_C(x_n - \lambda_n Ay_n), \quad x_{n+1} = t_n - \alpha_n F t_n,$

where $A: H \to H$ is monotone on $C$ and $L$-Lipschitz continuous on $H$, and $F: H \to H$ is Lipschitz continuous and strongly monotone on $C$ such that $\mathrm{VI}(C,A) \ne \emptyset$. Maingé proved that if the parameters satisfy the conditions $\lambda_n \in [a,b] \subset \left(0, \frac{1}{L}\right)$, $\alpha_n \in [0,1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=0}^{\infty}\alpha_n = \infty$, then the sequence $\{x_n\}$ converges strongly to $x^* \in \mathrm{VI}(C,A)$, where $x^*$ is the solution of the following VIP:

$\langle Fx^*,\ y - x^*\rangle \ge 0 \quad \forall y \in \mathrm{VI}(C,A).$

One of the drawbacks of the EGM and its modified version above is that two projections onto the closed convex set $C$ are made in each iteration. Projections onto a general closed and convex set are not easily executed, which might affect the efficiency and applicability of the method.

In order to overcome the aforementioned drawback, Censor et al. [9] presented the subgradient extragradient method, in which the second projection onto C is replaced by a projection onto a specific constructible half-space which can be easily calculated. Their algorithm is of the form:

(4) $y_n = P_C(x_n - \lambda Ax_n), \quad T_n = \{w \in H : \langle x_n - \lambda Ax_n - y_n,\ w - y_n\rangle \le 0\}, \quad x_{n+1} = P_{T_n}(x_n - \lambda Ay_n), \quad n \ge 0,$

where $\lambda \in \left(0, \frac{1}{L}\right)$.

Tseng in [32] proposed another method for solving the VIP (1), which uses only one projection in each iteration. This method is known as the Tseng extragradient method (TEGM) and is presented as follows:

(5) $x_0 \in H, \quad y_n = P_C(x_n - \lambda Ax_n), \quad x_{n+1} = y_n - \lambda(Ay_n - Ax_n), \quad n \ge 0,$

where $\lambda \in \left(0, \frac{1}{L}\right)$.
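The structural difference from (2) and (4) is easiest to see in code. The sketch below is our own illustration, with the projection supplied by the caller and $\lambda$ assumed to lie in $(0, \frac{1}{L})$: one projection, then a projection-free forward correction.

```python
import numpy as np

def tseng_step(A, x, lam, project_C):
    """One iteration of Tseng's method (5).

    Only one projection onto C is computed; the second evaluation of A
    replaces the second projection required by (2) and (4).
    """
    y = project_C(x - lam * A(x))    # the only projection in the iteration
    return y - lam * (A(y) - A(x))   # forward correction, no projection
```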

Another shortcoming of algorithms (2), (3), (4), and (5) is the choice of stepsize. The stepsize plays an essential role in the convergence properties of iterative methods. In the aforementioned algorithms, the stepsizes depend on the Lipschitz constant $L$ of the monotone operator, so prior knowledge or an estimate of this constant is required. However, in many cases this parameter is unknown or difficult to estimate. Moreover, the stepsize determined by the Lipschitz constant is often very small and slows down the convergence rate of iterative methods. In practice, a larger stepsize can often be used and yields better numerical results.

Yang and Liu [34], inspired by the TEGM and the viscosity method, proposed the following algorithm with a simple step size rule for solving the VIP (1):

Algorithm 1.1

Step 0. Take $\lambda_0 > 0$, $x_0 \in H$, $\mu \in (0,1)$.

Step 1. Given the current iterate $x_n$, compute

$y_n = P_C(x_n - \lambda_n F(x_n)).$

If $x_n = y_n$, then stop: $x_n$ is a solution. Otherwise, go to Step 2.

Step 2. Compute

$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)z_n$

and

$\lambda_{n+1} = \begin{cases}\min\left\{\dfrac{\mu\|x_n - y_n\|}{\|F(x_n) - F(y_n)\|},\ \lambda_n\right\}, & \text{if } F(x_n) \ne F(y_n),\\[2mm] \lambda_n, & \text{otherwise,}\end{cases}$

where $z_n = y_n + \lambda_n(F(x_n) - F(y_n))$. Set $n := n + 1$ and return to Step 1,

where $F: H \to H$ is monotone and Lipschitz continuous with constant $L > 0$, $f: H \to H$ is a strict contraction mapping with constant $\rho \in [0,1)$, and $\{\alpha_n\} \subset (0,1)$. They proved the strong convergence of the algorithm without any prior knowledge of the Lipschitz constant of the mapping.
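The step-size rule of Algorithm 1.1 is easy to implement; here is a minimal sketch (function name ours), which also makes explicit why no Lipschitz constant enters:

```python
import numpy as np

def next_lambda(lam, mu, x, y, Fx, Fy):
    # lambda_{n+1} = min{ mu*||x_n - y_n|| / ||F(x_n) - F(y_n)||, lambda_n }
    # whenever F(x_n) != F(y_n); otherwise lambda_n is kept unchanged.
    denom = np.linalg.norm(Fx - Fy)
    return min(mu * np.linalg.norm(x - y) / denom, lam) if denom > 0 else lam
```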

Very recently, Thong et al. [35] introduced a new algorithm combining the modified TEGM and the viscosity method with an inertial technique. The proposed algorithm is presented as follows.

Algorithm 1.2

Initialization: Let $x_0, x_1 \in H$ be arbitrary.

Iterative steps: Calculate $x_{n+1}$ as follows:

Step 1. Set $w_n = x_n + \alpha_n(x_n - x_{n-1})$ and compute

$y_n = P_C(w_n - \lambda Aw_n).$

If $y_n = w_n$, then stop: $y_n$ is a solution of the VIP. Otherwise, go to Step 2.

Step 2. Compute

$x_{n+1} = \beta_n f(x_n) + (1 - \beta_n)z_n,$

where $z_n = y_n - \lambda(Ay_n - Aw_n)$. Set $n := n + 1$ and go to Step 1,

where $A: H \to H$ is monotone and Lipschitz continuous with constant $L > 0$, $f: H \to H$ is a contraction mapping with contraction parameter in $[0,1)$, $\lambda \in \left(0, \frac{1}{L}\right)$, $\{\alpha_n\} \subset [0,\alpha)$ for some $\alpha > 0$, and $\{\beta_n\} \subset (0,1)$ satisfying $\lim_{n\to\infty}\beta_n = 0$ and $\sum_{n=1}^{\infty}\beta_n = \infty$. Under certain mild assumptions, they proved that the proposed algorithm converges strongly to a solution of the VIP (1).

In this work, we propose iterative schemes that remedy the drawbacks highlighted above. Motivated by the works of Yang and Liu [34] and Thong et al. [35] and the current research interest in this direction, we propose two new inertial-type algorithms for solving the VIP (1) based on the TEGM and Moudafi's viscosity scheme, which do not require prior knowledge of the Lipschitz constant of the monotone operator. The inertial term $\alpha_n(x_n - x_{n-1})$ can be regarded as a device for speeding up convergence (see, for example, [22,23,33,36,37,38,39]). The first algorithm requires the computation of only one projection onto the feasible set per iteration, while the second algorithm needs the computation of only one projection onto a half-space, which is easy to compute. Under some mild conditions, we prove strong convergence theorems for the algorithms. Finally, we provide some numerical experiments to show the efficiency and advantages of the proposed algorithms; the numerical illustrations show that our proposed algorithms with inertial effects converge faster than the original algorithms without inertial effects.

2 Preliminaries

Let $H$ be a real Hilbert space. For a nonempty, closed, and convex subset $C$ of $H$, the metric projection $P_C: H \to C$ is defined, for each $x \in H$, as the unique element $P_C x \in C$ such that

$\|x - P_C x\| = \inf\{\|x - z\| : z \in C\}.$

It is known that $P_C$ is nonexpansive. We denote the weak and strong convergence of a sequence $\{x_n\}$ to a point $x \in H$ by $x_n \rightharpoonup x$ and $x_n \to x$, respectively.

Definition 2.1

A function $f: H \to \mathbb{R}$ is said to be weakly lower semicontinuous (w-lsc) at $x \in H$ if

$f(x) \le \liminf_{n\to\infty} f(x_n)$

holds for every sequence $\{x_n\}_{n=0}^{\infty}$ in $H$ satisfying $x_n \rightharpoonup x$.

Lemma 2.2

[40,41] Let $\delta \in (0,1)$. For $x, y \in H$, the following statements hold:

  1. $|\langle x, y\rangle| \le \|x\|\,\|y\|$;

  2. $\|x + y\|^2 \le \|x\|^2 + 2\langle y,\ x + y\rangle$;

  3. $\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$;

  4. $\|\delta x + (1-\delta)y\|^2 = \delta\|x\|^2 + (1-\delta)\|y\|^2 - \delta(1-\delta)\|x - y\|^2$.

Lemma 2.3

[42] Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. For any $x \in H$ and $z \in C$, we have

$z = P_C x \iff \langle x - z,\ z - y\rangle \ge 0 \quad \text{for all } y \in C.$

Lemma 2.4

[42] Let $C$ be a closed convex subset in a real Hilbert space $H$, and $x \in H$. Then,

  1. $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y,\ x - y\rangle$ for all $y \in C$;

  2. $\|P_C x - y\|^2 \le \|x - y\|^2 - \|x - P_C x\|^2$ for all $y \in C$.

Lemma 2.5

[43] Let $\{a_n\}$ be a sequence of nonnegative real numbers, $\{\alpha_n\}$ a sequence in $(0,1)$ with $\sum_{n=1}^{\infty}\alpha_n = \infty$, and $\{b_n\}$ a sequence of real numbers. Assume that

$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n b_n \quad \text{for all } n \ge 1.$

If $\limsup_{k\to\infty} b_{n_k} \le 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k\to\infty}(a_{n_k+1} - a_{n_k}) \ge 0$, then $\lim_{n\to\infty} a_n = 0$.

Lemma 2.6

[44] If $A: C \to H$ is a continuous and monotone mapping, then $x^*$ is a solution of (1) if and only if $x^*$ is a solution of the following problem:

find $x \in C$ such that $\langle Ay,\ y - x\rangle \ge 0 \quad \forall y \in C.$

3 Main results

In this work, we consider the VIP (1) under the following assumptions:

  (A1) The solution set of (1), denoted by $\mathrm{VI}(C,A)$, is nonempty.

  (A2) The mapping $A$ is monotone, i.e.,

    (6) $\langle Ax - Ay,\ x - y\rangle \ge 0 \quad \forall x, y \in H.$

  (A3) The mapping $A$ is Lipschitz continuous, i.e., there exists a constant $L > 0$ such that

(7) $\|Ax - Ay\| \le L\|x - y\| \quad \forall x, y \in H.$

We take $f: H \to H$ to be a strict contraction mapping with contraction parameter $k \in [0,1)$. Let $\{\alpha_n\} \subset [0,\alpha)$ for some $\alpha > 0$ and let $\{\beta_n\} \subset (0,1)$ satisfy the following conditions:

(8) $\lim_{n\to\infty}\beta_n = 0, \quad \sum_{n=1}^{\infty}\beta_n = \infty, \quad \text{and} \quad \lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0.$

Now, the first algorithm is presented as follows.

Algorithm 3.1

Step 0. Take $x_0, x_1 \in H$ arbitrarily, $\lambda_0 > 0$, $\mu \in (0,1)$.

Step 1. Set $w_n = x_n + \alpha_n(x_n - x_{n-1})$ and compute

$y_n = P_C(w_n - \lambda_n Aw_n).$

If $y_n = w_n$, then stop: $y_n$ is a solution of the VIP (1). Otherwise, go to Step 2.

Step 2. Compute

$x_{n+1} = \beta_n f(x_n) + (1 - \beta_n)z_n$

and

$\lambda_{n+1} = \begin{cases}\min\left\{\dfrac{\mu\|w_n - y_n\|}{\|Aw_n - Ay_n\|},\ \lambda_n\right\}, & \text{if } Aw_n \ne Ay_n,\\[2mm] \lambda_n, & \text{otherwise,}\end{cases}$

where $z_n = y_n + \lambda_n(Aw_n - Ay_n)$. Set $n := n + 1$ and return to Step 1.
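The following NumPy sketch assembles the whole of Algorithm 3.1: inertial extrapolation, one projection, the Tseng correction, the viscosity step, and the self-adaptive update of $\lambda_n$. The default parameter values mirror those used in Section 4; the choice of $\alpha_n$ (given there) and the tolerance-based stopping test replacing the exact equality $y_n = w_n$ are our own practical assumptions.

```python
import numpy as np

def algorithm_3_1(A, f, project_C, x0, x1, lam0=0.9, mu=0.6, theta=0.6,
                  tol=1e-6, max_iter=5000):
    """Sketch of Algorithm 3.1. No Lipschitz constant of A is required.

    A         : monotone, Lipschitz-continuous operator
    f         : strict contraction, e.g. f = lambda v: v / 5
    project_C : metric projection onto the feasible set C
    """
    x_prev = np.asarray(x0, dtype=float)
    x = np.asarray(x1, dtype=float)
    lam = lam0
    for n in range(1, max_iter + 1):
        beta = 1.0 / (n + 1)                    # beta_n as in Section 4
        # Inertial parameter: alpha_n <= beta_n^2 / ||x_n - x_{n-1}||
        # guarantees (alpha_n / beta_n) * ||x_n - x_{n-1}|| -> 0, i.e. (8).
        diff = np.linalg.norm(x - x_prev)
        alpha = min(theta, beta ** 2 / diff) if diff > 0 else theta

        w = x + alpha * (x - x_prev)            # Step 1: inertial point
        y = project_C(w - lam * A(w))           # the only projection
        if np.linalg.norm(y - w) < tol:         # surrogate for y_n = w_n
            return y
        z = y + lam * (A(w) - A(y))             # Tseng correction with lambda_n
        x_prev, x = x, beta * f(x) + (1 - beta) * z   # Step 2: viscosity step

        # Self-adaptive step size lambda_{n+1}: nonincreasing and bounded
        # below by min{mu / L, lambda_0} (Lemma 3.2).
        denom = np.linalg.norm(A(w) - A(y))
        if denom > 0:
            lam = min(mu * np.linalg.norm(w - y) / denom, lam)
    return x
```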

Lemma 3.2

The sequence $\{\lambda_n\}$ generated by Algorithm 3.1 is monotonically decreasing with lower bound $\min\left\{\frac{\mu}{L},\ \lambda_0\right\}$.

Proof

Repeating the proof as in [34] and replacing {xn} by {wn}, we obtain the desired result.□

Remark 3.3

It is clear that the limit of $\{\lambda_n\}$ exists; we denote $\lambda = \lim_{n\to\infty}\lambda_n$. Since $\lambda_n \ge \min\left\{\frac{\mu}{L},\ \lambda_0\right\} > 0$ for all $n$ by Lemma 3.2, it follows that $\lambda > 0$.

Now, we prove the boundedness of the sequence $\{x_n\}$ generated by Algorithm 3.1.

Lemma 3.4

Let $\{x_n\}$ be a sequence generated by Algorithm 3.1. Then, $\{x_n\}$ is bounded.

Proof

Suppose $p \in \mathrm{VI}(C,A)$. Then, by the definition of $\{z_n\}$ and using Lemma 2.2, we obtain

(9) $$\begin{aligned}
\|z_n - p\|^2 &= \|y_n - \lambda_n(Ay_n - Aw_n) - p\|^2\\
&= \|y_n - p\|^2 + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle Ay_n - Aw_n,\ y_n - p\rangle\\
&= \|y_n - w_n\|^2 + \|w_n - p\|^2 + 2\langle y_n - w_n,\ w_n - p\rangle + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle Ay_n - Aw_n,\ y_n - p\rangle\\
&= \|w_n - p\|^2 + \|y_n - w_n\|^2 - 2\langle y_n - w_n,\ y_n - w_n\rangle + 2\langle y_n - w_n,\ y_n - p\rangle + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle Ay_n - Aw_n,\ y_n - p\rangle\\
&= \|w_n - p\|^2 - \|y_n - w_n\|^2 + 2\langle y_n - w_n,\ y_n - p\rangle + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle Ay_n - Aw_n,\ y_n - p\rangle.
\end{aligned}$$

Recalling that $y_n = P_C(w_n - \lambda_n Aw_n)$, by Lemma 2.3 we obtain

$\langle y_n - w_n + \lambda_n Aw_n,\ y_n - p\rangle \le 0,$

which is equivalent to

(10) $\langle y_n - w_n,\ y_n - p\rangle \le -\lambda_n\langle Aw_n,\ y_n - p\rangle.$

Combining (9) and (10), we have

(11) $$\begin{aligned}
\|z_n - p\|^2 &\le \|w_n - p\|^2 - \|y_n - w_n\|^2 - 2\lambda_n\langle Aw_n,\ y_n - p\rangle + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle Ay_n - Aw_n,\ y_n - p\rangle\\
&= \|w_n - p\|^2 - \|y_n - w_n\|^2 + \lambda_n^2\|Ay_n - Aw_n\|^2 - 2\lambda_n\langle y_n - p,\ Ay_n\rangle\\
&\le \|w_n - p\|^2 - \|y_n - w_n\|^2 + \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\|y_n - w_n\|^2 - 2\lambda_n\langle y_n - p,\ Ay_n\rangle\\
&= \|w_n - p\|^2 - \|y_n - w_n\|^2 + \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\|y_n - w_n\|^2 - 2\lambda_n\langle y_n - p,\ Ay_n - Ap\rangle - 2\lambda_n\langle y_n - p,\ Ap\rangle\\
&\le \|w_n - p\|^2 - \left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2,
\end{aligned}$$
where the second inequality follows from the definition of $\lambda_{n+1}$, and the last inequality follows from the monotonicity of $A$ together with the fact that $p \in \mathrm{VI}(C,A)$ and $y_n \in C$.

Now, consider the limit

(12) $\lim_{n\to\infty}\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right) = 1 - \mu^2 > 0.$

Hence, there exists $N \ge 0$ such that for all $n \ge N$ we have $1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2} > 0$. Thus, it follows that for all $n \ge N$ we have

(13) $\|z_n - p\| \le \|w_n - p\|.$

From the definition of $\{w_n\}$, we obtain

(14) $\|w_n - p\| = \|x_n + \alpha_n(x_n - x_{n-1}) - p\| \le \|x_n - p\| + \alpha_n\|x_n - x_{n-1}\| = \|x_n - p\| + \beta_n\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|.$

From the condition $\lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0$, it follows that there exists a constant $M_1 > 0$ such that

(15) $\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le M_1 \quad \forall n \ge 1.$

Hence, combining (13), (14) and (15), we have

(16) $\|z_n - p\| \le \|w_n - p\| \le \|x_n - p\| + \beta_n M_1.$

From the definition of $\{x_n\}$, we have

(17) $$\begin{aligned}
\|x_{n+1} - p\| &= \|\beta_n f(x_n) + (1 - \beta_n)z_n - p\|\\
&= \|\beta_n(f(x_n) - p) + (1 - \beta_n)(z_n - p)\|\\
&\le \beta_n\|f(x_n) - p\| + (1 - \beta_n)\|z_n - p\|\\
&\le \beta_n\|f(x_n) - f(p)\| + \beta_n\|f(p) - p\| + (1 - \beta_n)\|z_n - p\|\\
&\le \beta_n k\|x_n - p\| + \beta_n\|f(p) - p\| + (1 - \beta_n)\|z_n - p\|.
\end{aligned}$$

Substituting (16) into (17), we obtain

$$\begin{aligned}
\|x_{n+1} - p\| &\le (1 - (1-k)\beta_n)\|x_n - p\| + \beta_n M_1 + \beta_n\|f(p) - p\|\\
&= (1 - (1-k)\beta_n)\|x_n - p\| + (1-k)\beta_n\,\frac{M_1 + \|f(p) - p\|}{1-k}\\
&\le \max\left\{\|x_n - p\|,\ \frac{M_1 + \|f(p) - p\|}{1-k}\right\}\\
&\;\;\vdots\\
&\le \max\left\{\|x_N - p\|,\ \frac{M_1 + \|f(p) - p\|}{1-k}\right\}.
\end{aligned}$$

This implies that the sequence $\{x_n\}$ is bounded. It also follows that $\{z_n\}$, $\{f(x_n)\}$, $\{w_n\}$, and $\{y_n\}$ are bounded.□

Lemma 3.5

Assume that $\{w_n\}$ and $\{y_n\}$ are sequences generated by Algorithm 3.1 such that $\lim_{n\to\infty}\|w_n - y_n\| = 0$. If $\{w_{n_k}\}$ converges weakly to some $z_0 \in H$, then $z_0 \in \mathrm{VI}(C,A)$.

Proof

By the hypotheses of the lemma, we have that $y_{n_k} \rightharpoonup z_0$ and $z_0 \in C$. Since $A$ is monotone, then by the definition of $y_{n_k}$ and by applying Lemma 2.3, we obtain

$\langle y_{n_k} - w_{n_k} + \lambda_{n_k}Aw_{n_k},\ z - y_{n_k}\rangle \ge 0 \quad \forall z \in C.$

This implies that

$$\begin{aligned}
0 &\le \langle y_{n_k} - w_{n_k},\ z - y_{n_k}\rangle + \lambda_{n_k}\langle Aw_{n_k},\ z - y_{n_k}\rangle\\
&= \langle y_{n_k} - w_{n_k},\ z - y_{n_k}\rangle + \lambda_{n_k}\langle Aw_{n_k},\ z - w_{n_k}\rangle + \lambda_{n_k}\langle Aw_{n_k},\ w_{n_k} - y_{n_k}\rangle\\
&\le \langle y_{n_k} - w_{n_k},\ z - y_{n_k}\rangle + \lambda_{n_k}\langle Az,\ z - w_{n_k}\rangle + \lambda_{n_k}\langle Aw_{n_k},\ w_{n_k} - y_{n_k}\rangle.
\end{aligned}$$

Letting $k \to \infty$, applying the facts that $\lim_{k\to\infty}\|y_{n_k} - w_{n_k}\| = 0$, $\{y_{n_k}\}$ is bounded, and $\lim_{k\to\infty}\lambda_{n_k} = \lambda > 0$, we have

$\langle Az,\ z - z_0\rangle \ge 0 \quad \forall z \in C.$

Applying Lemma 2.6, we have that $z_0 \in \mathrm{VI}(C,A)$.□

Lemma 3.6

Let $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, and $\mu$ be as defined in Algorithm 3.1. Then, there exists a constant $M_4 > 0$ such that the following inequality holds:

(18) $(1 - \beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_4,$

where $x^* \in \mathrm{VI}(C,A)$.

Proof

Let $x^* \in \mathrm{VI}(C,A)$. Then, by the definition of $\{x_n\}$ and using Lemma 2.2, we have

(19) $$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|\beta_n f(x_n) + (1-\beta_n)z_n - x^*\|^2\\
&= \beta_n\|f(x_n) - x^*\|^2 + (1-\beta_n)\|z_n - x^*\|^2 - \beta_n(1-\beta_n)\|f(x_n) - z_n\|^2\\
&\le \beta_n\|f(x_n) - x^*\|^2 + (1-\beta_n)\|z_n - x^*\|^2\\
&\le \beta_n(\|f(x_n) - f(x^*)\| + \|f(x^*) - x^*\|)^2 + (1-\beta_n)\|z_n - x^*\|^2\\
&\le \beta_n(k\|x_n - x^*\| + \|f(x^*) - x^*\|)^2 + (1-\beta_n)\|z_n - x^*\|^2\\
&\le \beta_n(\|x_n - x^*\| + \|f(x^*) - x^*\|)^2 + (1-\beta_n)\|z_n - x^*\|^2\\
&= \beta_n\|x_n - x^*\|^2 + \beta_n(2\|x_n - x^*\|\,\|f(x^*) - x^*\| + \|f(x^*) - x^*\|^2) + (1-\beta_n)\|z_n - x^*\|^2\\
&\le \beta_n\|x_n - x^*\|^2 + (1-\beta_n)\|z_n - x^*\|^2 + \beta_n M_2,
\end{aligned}$$

for some $M_2 > 0$. Substituting (11) into (19), we get

(20) $\|x_{n+1} - x^*\|^2 \le \beta_n\|x_n - x^*\|^2 + (1-\beta_n)\|w_n - x^*\|^2 - (1-\beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 + \beta_n M_2.$

From (16), we obtain

(21) $\|w_n - x^*\|^2 \le (\|x_n - x^*\| + \beta_n M_1)^2 = \|x_n - x^*\|^2 + \beta_n(2M_1\|x_n - x^*\| + \beta_n M_1^2) \le \|x_n - x^*\|^2 + \beta_n M_3,$

for some $M_3 > 0$. Combining (20) and (21), we obtain

$$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le \beta_n\|x_n - x^*\|^2 + (1-\beta_n)\|x_n - x^*\|^2 + \beta_n M_3 - (1-\beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 + \beta_n M_2\\
&= \|x_n - x^*\|^2 + \beta_n M_3 - (1-\beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 + \beta_n M_2.
\end{aligned}$$

Hence, we have that

$(1-\beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_4,$

where $M_4 := M_2 + M_3$.□

Now, we prove the convergence of Algorithm 3.1.

Theorem 3.7

Assume that (A1), (A2), and (A3) hold and the sequence $\{\alpha_n\}$ is chosen such that it satisfies (8). Then, the sequence $\{x_n\}$ generated by Algorithm 3.1 converges strongly to an element $x^* \in \mathrm{VI}(C,A)$, where $x^* = P_{\mathrm{VI}(C,A)}f(x^*)$.

Proof

Let $x^* \in \mathrm{VI}(C,A)$. Then, using (13) and Lemma 2.2, we obtain

(22) $$\begin{aligned}
\|x_{n+1} - x^*\|^2 &= \|\beta_n f(x_n) + (1-\beta_n)z_n - x^*\|^2\\
&= \|\beta_n(f(x_n) - f(x^*)) + (1-\beta_n)(z_n - x^*) + \beta_n(f(x^*) - x^*)\|^2\\
&\le \|\beta_n(f(x_n) - f(x^*)) + (1-\beta_n)(z_n - x^*)\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle\\
&= \beta_n\|f(x_n) - f(x^*)\|^2 + (1-\beta_n)\|z_n - x^*\|^2 - \beta_n(1-\beta_n)\|(f(x_n) - f(x^*)) - (z_n - x^*)\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle\\
&\le \beta_n\|f(x_n) - f(x^*)\|^2 + (1-\beta_n)\|z_n - x^*\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle\\
&\le \beta_n k^2\|x_n - x^*\|^2 + (1-\beta_n)\|z_n - x^*\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle\\
&\le \beta_n k\|x_n - x^*\|^2 + (1-\beta_n)\|w_n - x^*\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle.
\end{aligned}$$

By the definition of $\{w_n\}$ and using Lemma 2.2, we have

(23) $$\begin{aligned}
\|w_n - x^*\|^2 &= \|x_n + \alpha_n(x_n - x_{n-1}) - x^*\|^2\\
&= \|x_n - x^*\|^2 + 2\alpha_n\langle x_n - x^*,\ x_n - x_{n-1}\rangle + \alpha_n^2\|x_n - x_{n-1}\|^2\\
&\le \|x_n - x^*\|^2 + 2\alpha_n\|x_n - x^*\|\,\|x_n - x_{n-1}\| + \alpha_n^2\|x_n - x_{n-1}\|^2.
\end{aligned}$$

Combining (22) and (23), we obtain

(24) $$\begin{aligned}
\|x_{n+1} - x^*\|^2 &\le (1 - (1-k)\beta_n)\|x_n - x^*\|^2 + 2\alpha_n\|x_n - x^*\|\,\|x_n - x_{n-1}\| + \alpha_n^2\|x_n - x_{n-1}\|^2 + 2\beta_n\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle\\
&= (1 - (1-k)\beta_n)\|x_n - x^*\|^2 + (1-k)\beta_n\,\frac{2}{1-k}\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle + \alpha_n\|x_n - x_{n-1}\|\,(2\|x_n - x^*\| + \alpha_n\|x_n - x_{n-1}\|)\\
&\le (1 - (1-k)\beta_n)\|x_n - x^*\|^2 + (1-k)\beta_n\,\frac{2}{1-k}\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle + 3M\alpha_n\|x_n - x_{n-1}\|\\
&= (1 - (1-k)\beta_n)\|x_n - x^*\|^2 + (1-k)\beta_n\left[\frac{2}{1-k}\langle f(x^*) - x^*,\ x_{n+1} - x^*\rangle + \frac{3M}{1-k}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\right],
\end{aligned}$$

where $M := \sup_{n\in\mathbb{N}}\{\|x_n - x^*\|,\ \alpha_n\|x_n - x_{n-1}\|\} > 0$.

Next, we claim that the sequence $\{\|x_n - x^*\|\}$ converges to zero. In order to establish this, by Lemma 2.5, it suffices to show that $\limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k+1} - x^*\rangle \le 0$ for every subsequence $\{\|x_{n_k} - x^*\|\}$ of $\{\|x_n - x^*\|\}$ satisfying

$\liminf_{k\to\infty}\left(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\right) \ge 0.$

Now, suppose that $\{\|x_{n_k} - x^*\|\}$ is a subsequence of $\{\|x_n - x^*\|\}$ such that

$\liminf_{k\to\infty}\left(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\right) \ge 0.$

Then, it follows that

$\liminf_{k\to\infty}\left(\|x_{n_k+1} - x^*\|^2 - \|x_{n_k} - x^*\|^2\right) = \liminf_{k\to\infty}\left\{\left(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\right)\left(\|x_{n_k+1} - x^*\| + \|x_{n_k} - x^*\|\right)\right\} \ge 0.$

Then, from Lemma 3.6 and using the facts that $\lim_{k\to\infty}\left(1 - \frac{\lambda_{n_k}^2\mu^2}{\lambda_{n_k+1}^2}\right) = 1 - \mu^2 > 0$ and $\lim_{k\to\infty}\beta_{n_k} = 0$, we obtain

(25) $\lim_{k\to\infty}\|y_{n_k} - w_{n_k}\| = 0.$

From (25), we get

(26) $$\begin{aligned}
\|z_{n_k} - w_{n_k}\| &= \|y_{n_k} + \lambda_{n_k}(Aw_{n_k} - Ay_{n_k}) - w_{n_k}\|\\
&\le \|y_{n_k} - w_{n_k}\| + \lambda_{n_k}\|Aw_{n_k} - Ay_{n_k}\|\\
&\le \|y_{n_k} - w_{n_k}\| + \lambda_{n_k}\,\frac{\mu}{\lambda_{n_k+1}}\|w_{n_k} - y_{n_k}\|\\
&= \left(1 + \frac{\lambda_{n_k}\mu}{\lambda_{n_k+1}}\right)\|y_{n_k} - w_{n_k}\| \to 0.
\end{aligned}$$

Also, we have that

(27) $\|x_{n_k+1} - z_{n_k}\| = \|\beta_{n_k}f(x_{n_k}) + (1-\beta_{n_k})z_{n_k} - z_{n_k}\| = \beta_{n_k}\|f(x_{n_k}) - z_{n_k}\| \to 0$

and

(28) $\|x_{n_k} - w_{n_k}\| = \|x_{n_k} - [x_{n_k} + \alpha_{n_k}(x_{n_k} - x_{n_k-1})]\| = \alpha_{n_k}\|x_{n_k} - x_{n_k-1}\| = \beta_{n_k}\cdot\frac{\alpha_{n_k}}{\beta_{n_k}}\|x_{n_k} - x_{n_k-1}\| \to 0.$

Applying (26), (27), and (28), we obtain

(29) $\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - z_{n_k}\| + \|z_{n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\| \to 0.$

Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ that converges weakly to some $z_0 \in H$ and satisfies

(30) $\limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k} - x^*\rangle = \lim_{j\to\infty}\langle f(x^*) - x^*,\ x_{n_{k_j}} - x^*\rangle = \langle f(x^*) - x^*,\ z_0 - x^*\rangle.$

By Lemma 3.5, together with (25) and (28), we have $z_0 \in \mathrm{VI}(C,A)$. Since the solution set $\mathrm{VI}(C,A)$ is a closed, convex subset and $f$ is a strict contraction, the mapping $P_{\mathrm{VI}(C,A)}f$ is a contraction mapping. Hence, by the Banach contraction mapping principle, there exists a unique element $x^* \in \mathrm{VI}(C,A)$ such that $x^* = P_{\mathrm{VI}(C,A)}f(x^*)$. By Lemma 2.3, we have

(31) $\langle f(x^*) - x^*,\ z - x^*\rangle \le 0 \quad \forall z \in \mathrm{VI}(C,A).$

Hence, it follows from (30) and (31) that

(32) $\limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k} - x^*\rangle = \langle f(x^*) - x^*,\ z_0 - x^*\rangle \le 0.$

Combining (29) and (32), we have

(33) $\limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k+1} - x^*\rangle \le \limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k+1} - x_{n_k}\rangle + \limsup_{k\to\infty}\langle f(x^*) - x^*,\ x_{n_k} - x^*\rangle = \langle f(x^*) - x^*,\ z_0 - x^*\rangle \le 0.$

Thus, by (33), $\lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0$, (24), and Lemma 2.5, we have $\lim_{n\to\infty}\|x_n - x^*\| = 0$, as required.□

We next propose our second algorithm. Suppose C is a nonempty convex set which satisfies the following conditions:

  (B1) The set $C$ is given by

    $C = \{x \in H : h(x) \le 0\},$

    where $h: H \to \mathbb{R}$ is a convex function which is subdifferentiable on $C$.

  (B2) $h$ is weakly lower semicontinuous.

  (B3) For any $x \in H$, at least one subgradient $\xi \in \partial h(x)$ can be calculated, where the subdifferential $\partial h(x)$ is defined as follows:

    $\partial h(x) = \{z \in H : h(u) \ge h(x) + \langle u - x,\ z\rangle \quad \forall u \in H\}.$

    In addition, $\partial h$ is bounded on bounded sets.

  (B4) Define the set $C_n$ by the following half-space:

$C_n = \{x \in H : h(w_n) + \langle \xi_n,\ x - w_n\rangle \le 0\},$

where $\xi_n \in \partial h(w_n)$. By the definition of the subgradient, it is clear that $C \subseteq C_n$.
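What makes the half-space $C_n$ attractive is that it admits an explicit projection, so no inner optimization routine is needed. Below is a minimal sketch of that closed-form projection; the function name is ours, and $\xi_n$ is assumed nonzero whenever the constraint is violated.

```python
import numpy as np

def project_halfspace(x, w, h_w, xi):
    """Projection onto C_n = {v in H : h(w_n) + <xi_n, v - w_n> <= 0}.

    x   : point to project
    w   : inertial point w_n
    h_w : value h(w_n)
    xi  : a subgradient xi_n in the subdifferential of h at w_n
    """
    violation = h_w + np.dot(xi, x - w)
    if violation <= 0:
        return x  # x already satisfies the half-space constraint
    # Shift x back along xi by exactly the amount of the violation.
    return x - (violation / np.dot(xi, xi)) * xi
```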

We now present the following algorithm using the half-space defined above.

Let $f: H \to H$ be a strict contraction mapping with contraction parameter $k \in [0,1)$. Let $\{\alpha_n\} \subset [0,\alpha)$ for some $\alpha > 0$ and $\{\beta_n\} \subset (0,1)$ satisfy the following conditions:

$\lim_{n\to\infty}\beta_n = 0, \quad \sum_{n=1}^{\infty}\beta_n = \infty, \quad \text{and} \quad \lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0.$

Let $\{x_n\}$ be a sequence generated by the following iterative process.

Algorithm 3.8

Step 0. Take $x_0, x_1 \in H$ arbitrarily, $\lambda_0 > 0$, $\mu \in (0,1)$.

Step 1. Set $w_n = x_n + \alpha_n(x_n - x_{n-1})$ and compute

$y_n = P_{C_n}(w_n - \lambda_n Aw_n).$

If $y_n = w_n$, then stop: $y_n$ is a solution of the VIP (1). Otherwise, go to Step 2.

Step 2. Compute

$x_{n+1} = \beta_n f(x_n) + (1 - \beta_n)z_n$

and

$\lambda_{n+1} = \begin{cases}\min\left\{\dfrac{\mu\|w_n - y_n\|}{\|Aw_n - Ay_n\|},\ \lambda_n\right\}, & \text{if } Aw_n \ne Ay_n,\\[2mm] \lambda_n, & \text{otherwise,}\end{cases}$

where $z_n = y_n + \lambda_n(Aw_n - Ay_n)$. Set $n := n + 1$ and return to Step 1.

Lemmas 3.2, 3.4, and 3.6 extend readily from Algorithm 3.1 to Algorithm 3.8, as follows.

Lemma 3.9

The sequence $\{\lambda_n\}$ generated by Algorithm 3.8 is monotonically decreasing with lower bound $\min\left\{\frac{\mu}{L},\ \lambda_0\right\}$.

Lemma 3.10

Let $\{x_n\}$ be a sequence generated by Algorithm 3.8. Then, the sequence $\{x_n\}$ is bounded.

Lemma 3.11

Let $\{x_n\}$, $\{w_n\}$, $\{y_n\}$, $\{\lambda_n\}$, $\{\beta_n\}$, and $\mu$ be as defined in Algorithm 3.8. Then, there exists a constant $M_5 > 0$ such that the following inequality holds:

(34) $(1 - \beta_n)\left(1 - \frac{\lambda_n^2\mu^2}{\lambda_{n+1}^2}\right)\|y_n - w_n\|^2 \le \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \beta_n M_5,$

where $x^* \in \mathrm{VI}(C,A)$.

Lemma 3.12

Assume that $\{w_n\}$ and $\{y_n\}$ are sequences generated by Algorithm 3.8 such that $\lim_{n\to\infty}\|w_n - y_n\| = 0$. If $\{w_{n_j}\}$ converges weakly to some $\hat{x} \in H$ as $j \to \infty$, then $\hat{x} \in \mathrm{VI}(C,A)$.

Proof

Since $w_{n_j} \rightharpoonup \hat{x}$, it follows that $y_{n_j} \rightharpoonup \hat{x}$ as $j \to \infty$. Since $y_{n_j} \in C_{n_j}$, by the definition of $C_n$, we get

$h(w_{n_j}) + \langle \xi_{n_j},\ y_{n_j} - w_{n_j}\rangle \le 0.$

Since $\{x_n\}$ is bounded by Lemma 3.10, the sequences $\{w_n\}$ and $\{y_n\}$ are also bounded, and by condition (B3) there exists a constant $M > 0$ such that $\|\xi_{n_j}\| \le M$ for all $j \ge 0$. So $h(w_{n_j}) \le \langle \xi_{n_j},\ w_{n_j} - y_{n_j}\rangle \le M\|w_{n_j} - y_{n_j}\| \to 0$ as $j \to \infty$, and this in turn implies that $\liminf_{j\to\infty} h(w_{n_j}) \le 0$. Using condition (B2), we have $h(\hat{x}) \le \liminf_{j\to\infty} h(w_{n_j}) \le 0$. This means that $\hat{x} \in C$. From Lemma 2.3, we obtain

$\langle y_{n_j} - w_{n_j} + \lambda_{n_j}Aw_{n_j},\ z - y_{n_j}\rangle \ge 0 \quad \forall z \in C \subseteq C_{n_j}.$

Since A is monotone, we have

$$\begin{aligned}
0 &\le \langle y_{n_j} - w_{n_j},\ z - y_{n_j}\rangle + \lambda_{n_j}\langle Aw_{n_j},\ z - y_{n_j}\rangle\\
&= \langle y_{n_j} - w_{n_j},\ z - y_{n_j}\rangle + \lambda_{n_j}\langle Aw_{n_j},\ z - w_{n_j}\rangle + \lambda_{n_j}\langle Aw_{n_j},\ w_{n_j} - y_{n_j}\rangle\\
&\le \langle y_{n_j} - w_{n_j},\ z - y_{n_j}\rangle + \lambda_{n_j}\langle Az,\ z - w_{n_j}\rangle + \lambda_{n_j}\langle Aw_{n_j},\ w_{n_j} - y_{n_j}\rangle.
\end{aligned}$$

Letting $j \to \infty$, and since $\lim_{j\to\infty}\|y_{n_j} - w_{n_j}\| = 0$, we have

$\langle Az,\ z - \hat{x}\rangle \ge 0 \quad \forall z \in C.$

Applying Lemma 2.6, we have that $\hat{x} \in \mathrm{VI}(C,A)$.□

Now, we prove the convergence theorem for Algorithm 3.8.

Theorem 3.13

Assume that (A1), (A2), (A3), (B1), and (B2) hold. Let the sequence $\{\alpha_n\}$ be chosen such that it satisfies (8). Then, the sequence $\{x_n\}$ generated by Algorithm 3.8 converges strongly to an element $\hat{x} \in \mathrm{VI}(C,A)$, where $\hat{x} = P_{\mathrm{VI}(C,A)}f(\hat{x})$.

Proof

From (24), applied with $\hat{x} \in \mathrm{VI}(C,A)$ in place of $x^*$, we have

(35) $\|x_{n+1} - \hat{x}\|^2 \le (1 - (1-k)\beta_n)\|x_n - \hat{x}\|^2 + (1-k)\beta_n\left[\frac{2}{1-k}\langle f(\hat{x}) - \hat{x},\ x_{n+1} - \hat{x}\rangle + \frac{3M}{1-k}\cdot\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\|\right],$

where $M := \sup_{n\in\mathbb{N}}\{\|x_n - \hat{x}\|,\ \alpha_n\|x_n - x_{n-1}\|\} > 0$.

We claim that the sequence $\{\|x_n - \hat{x}\|\}$ converges to zero. In order to establish this, by Lemma 2.5, it suffices to show that $\limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k+1} - \hat{x}\rangle \le 0$ for every subsequence $\{\|x_{n_k} - \hat{x}\|\}$ of $\{\|x_n - \hat{x}\|\}$ satisfying

$\liminf_{k\to\infty}\left(\|x_{n_k+1} - \hat{x}\| - \|x_{n_k} - \hat{x}\|\right) \ge 0.$

Suppose that $\{\|x_{n_k} - \hat{x}\|\}$ is a subsequence of $\{\|x_n - \hat{x}\|\}$ such that

$\liminf_{k\to\infty}\left(\|x_{n_k+1} - \hat{x}\| - \|x_{n_k} - \hat{x}\|\right) \ge 0.$

By applying Lemma 3.11 and following a similar argument as in the proof of Theorem 3.7, we have

(36) $\lim_{k\to\infty}\|y_{n_k} - w_{n_k}\| = 0, \quad \lim_{k\to\infty}\|z_{n_k} - w_{n_k}\| = 0, \quad \lim_{k\to\infty}\|x_{n_k} - w_{n_k}\| = 0, \quad \lim_{k\to\infty}\|x_{n_k+1} - x_{n_k}\| = 0.$

Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ that converges weakly to some $z_0 \in H$ and satisfies

(37) $\limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k} - \hat{x}\rangle = \lim_{j\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_{k_j}} - \hat{x}\rangle = \langle f(\hat{x}) - \hat{x},\ z_0 - \hat{x}\rangle.$

By Lemma 3.12 and (36), we have $z_0 \in \mathrm{VI}(C,A)$. Since the solution set $\mathrm{VI}(C,A)$ is a closed, convex subset and $f$ is a strict contraction, the mapping $P_{\mathrm{VI}(C,A)}f$ is a contraction mapping. By the Banach contraction mapping principle, there exists a unique element $\hat{x} \in \mathrm{VI}(C,A)$ such that $\hat{x} = P_{\mathrm{VI}(C,A)}f(\hat{x})$. Applying Lemma 2.3, we have

(38) $\langle f(\hat{x}) - \hat{x},\ z - \hat{x}\rangle \le 0 \quad \forall z \in \mathrm{VI}(C,A).$

Therefore, we have that

(39) $\limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k} - \hat{x}\rangle = \langle f(\hat{x}) - \hat{x},\ z_0 - \hat{x}\rangle \le 0.$

From (36) and (39), we have

(40) $\limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k+1} - \hat{x}\rangle \le \limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k+1} - x_{n_k}\rangle + \limsup_{k\to\infty}\langle f(\hat{x}) - \hat{x},\ x_{n_k} - \hat{x}\rangle = \langle f(\hat{x}) - \hat{x},\ z_0 - \hat{x}\rangle \le 0.$

Hence, by (40), $\lim_{n\to\infty}\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| = 0$, (35), and applying Lemma 2.5, we have $\lim_{n\to\infty}\|x_n - \hat{x}\| = 0$, which is the required result.□

4 Numerical experiments

In this section, we present some numerical examples to demonstrate the efficiency of our algorithms in comparison with Algorithms 1.1 and 1.2 from the literature. All numerical computations were carried out using Matlab 2016(b) on an HP personal computer with 8 GB of RAM.

Table 1

Comparison between Algorithms 3.1, 1.1, and 1.2 for Problem 1

             Algorithm 3.1          Algorithm 1.1          Algorithm 1.2
Dimension    Iter.   CPU time (s)   Iter.   CPU time (s)   Iter.   CPU time (s)
m = 4        23      6.0523         59      16.5546        29      7.6139
m = 20       22      7.4086         58      18.7391        29      8.7219
m = 100      24      15.9526        63      35.6063        31      18.4151
Figure 1: Problem 1, left: m = 4; middle: m = 20; and right: m = 100.

Table 2

Comparison between Algorithms 3.1, 1.1, and 1.2 for Problem 2

             Algorithm 3.1          Algorithm 1.1          Algorithm 1.2
             Iter.   CPU time (s)   Iter.   CPU time (s)   Iter.   CPU time (s)
Case I       7       1.7756         6       1.9701         11      5.6289
Case II      7       5.0223         7       5.2426         11      7.0342
Case III     8       3.1035         8       3.1066         11      8.0202
Figure 2: Problem 2, left: Case I; middle: Case II; and right: Case III.

We choose $\lambda_0 = 0.9$, $\beta_n = \frac{1}{n+1}$, $f(x) = \frac{x}{5}$, and $\mu = 0.6$, and use $\frac{\|x_{n+1} - x_n\|}{\|x_2 - x_1\|} < \varepsilon$ as the stopping criterion to terminate the algorithm in each example. The projection onto the feasible set $C$ is computed using the function fmincon in the Matlab Optimization Toolbox. We take $\theta = 0.6$ and choose the sequence $\{\alpha_n\}$ such that

$\alpha_n = \begin{cases}\min\left\{\theta,\ \dfrac{\beta_n^2}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \ne x_{n-1},\\[2mm] \theta, & \text{otherwise.}\end{cases}$
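This choice enforces condition (8) mechanically: whenever $x_n \ne x_{n-1}$ one has $\alpha_n\|x_n - x_{n-1}\| \le \beta_n^2$, so $\frac{\alpha_n}{\beta_n}\|x_n - x_{n-1}\| \le \beta_n \to 0$. A small sketch of the rule (helper name ours):

```python
import numpy as np

def inertial_alpha(n, x_n, x_prev, theta=0.6):
    # beta_n = 1/(n+1) as chosen above; alpha_n <= beta_n^2/||x_n - x_{n-1}||
    # yields (alpha_n/beta_n)*||x_n - x_{n-1}|| <= beta_n -> 0, which is (8).
    beta_n = 1.0 / (n + 1)
    diff = np.linalg.norm(np.asarray(x_n) - np.asarray(x_prev))
    return min(theta, beta_n ** 2 / diff) if diff > 0 else theta
```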

Problem 1

The first test (also considered in [34]) is a classical example for which the usual gradient method does not converge. The feasible set is $C = \mathbb{R}^m$ and $A(x) = Mx$, where $M$ is a square $m \times m$ matrix given by

$a_{i,j} = \begin{cases}1, & \text{if } j = m+1-i \text{ and } j > i,\\ -1, & \text{if } j = m+1-i \text{ and } j < i,\\ 0, & \text{otherwise.}\end{cases}$

For even $m$, the zero vector $x^* = (0, \ldots, 0)$ is the solution of Problem 1. In this example, we take (Case I) $m = 4$, (Case II) $m = 20$, and (Case III) $m = 100$, with $\varepsilon = 10^{-4}$. The initial points are generated randomly using x0 = rand(m,1) and x1 = 10*rand(m,1). The numerical results are summarized in Table 1 and Figure 1.
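A sketch of the Problem 1 setup in NumPy follows; the sign pattern of $M$ is as displayed above (and any such anti-diagonal skew-symmetric choice behaves the same way). Since $C = \mathbb{R}^m$, the projection is simply the identity.

```python
import numpy as np

def problem1_operator(m):
    # Anti-diagonal matrix with a_{i,j} = 1 if j = m+1-i and j > i,
    # a_{i,j} = -1 if j = m+1-i and j < i, and 0 otherwise.
    M = np.zeros((m, m))
    for i in range(1, m + 1):
        j = m + 1 - i
        if j != i:
            M[i - 1, j - 1] = 1.0 if j > i else -1.0
    assert np.allclose(M, -M.T)   # skew-symmetric, so A(x) = Mx is monotone
    return lambda x: M @ x

A = problem1_operator(4)
x0, x1 = np.random.rand(4), 10 * np.random.rand(4)
identity = lambda v: v            # P_C is the identity since C = R^m
```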

Problem 2

Suppose that $H = L^2([0,1])$ with the inner product

$\langle x, y\rangle = \int_0^1 x(t)y(t)\,\mathrm{d}t, \quad x, y \in H,$

and the induced norm

$\|x\| = \left(\int_0^1 |x(t)|^2\,\mathrm{d}t\right)^{1/2}, \quad x \in H.$

Let $C = \{x \in H : \|x\| \le 1\}$ be the unit ball and define the operator $A: C \to H$ by

$(Ax)(t) = \max\{0, x(t)\}.$

It can be easily verified that A is 1-Lipschitz continuous and monotone on C. With these given C and A, the solution set of the VIP (1) is given by

$\mathrm{VI}(C,A) = \{0\}.$

It is known that

$P_C(x) = \begin{cases}\dfrac{x}{\|x\|_{L^2}}, & \text{if } \|x\|_{L^2} > 1,\\[2mm] x, & \text{if } \|x\|_{L^2} \le 1.\end{cases}$

We test the algorithms for three different starting points and use $\varepsilon = 10^{-3}$ in the stopping criterion. The numerical results are summarized in Table 2 and Figure 2.

Case I: $x_0 = \frac{t^3}{5}$, $x_1 = t^3 + 5t^2 - 1$;

Case II: $x_0 = \exp(5t)$, $x_1 = \frac{2t^2 - 1}{5}$;

Case III: $x_0 = 5\sin(2\pi t)$, $x_1 = \frac{\cos(2t)}{4}$.
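A discretized stand-in for this infinite-dimensional test can be built as follows (grid size and helper names are our own choices): functions on $[0,1]$ become vectors, the $L^2$ norm becomes a scaled Euclidean norm, and $P_C$ reduces to the rescaling formula above.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)     # uniform grid on [0, 1]
dt = t[1] - t[0]

def l2_norm(x):
    # Riemann-sum approximation of the L^2([0,1]) norm.
    return np.sqrt(np.sum(x * x) * dt)

def project_unit_ball(x):
    # P_C(x) = x / ||x|| if ||x|| > 1, and x otherwise.
    nx = l2_norm(x)
    return x / nx if nx > 1.0 else x

A = lambda x: np.maximum(0.0, x)    # (Ax)(t) = max{0, x(t)}
x0 = 5.0 * np.sin(2.0 * np.pi * t)  # Case III starting point, discretized
```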

Problem 3

Next, we consider the Kojima-Shindo nonlinear complementarity problem, where $n = 4$ and the mapping $A$ is defined by

$A(x_1, x_2, x_3, x_4) = \begin{pmatrix}3x_1^2 + 2x_1x_2 + 2x_2^2 + x_3 + 3x_4 - 6\\ 2x_1^2 + x_1 + x_2^2 + 10x_3 + 2x_4 - 2\\ 3x_1^2 + x_1x_2 + 2x_2^2 + 2x_3 + 9x_4 - 9\\ x_1^2 + 3x_2^2 + 2x_3 + 3x_4 - 3\end{pmatrix}.$

It is known that $A$ is Lipschitz continuous [45]. The feasible set is $C = \{x \in \mathbb{R}^4_+ : x_1 + x_2 + x_3 + x_4 = 4\}$. We choose the starting points Case I: $x_0 = (1,2,0,1)$, $x_1 = (1,1,1,1)$ and Case II: $x_0 = (2,0,0,2)$, $x_1 = (1,0,1,2)$. For each pair of starting points, we run two tests: with $\varepsilon = 10^{-3}$ and with $\varepsilon = 10^{-6}$. The results are summarized in Tables 3 and 4 and Figures 3 and 4.
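For completeness, here is a sketch of the Problem 3 ingredients: the Kojima-Shindo mapping as displayed above, and the projection onto the scaled simplex $C$ via the standard sort-based algorithm (our implementation choice; the experiments themselves used fmincon).

```python
import numpy as np

def A_ks(x):
    # Kojima-Shindo mapping on R^4 (as displayed above).
    x1, x2, x3, x4 = x
    return np.array([
        3 * x1**2 + 2 * x1 * x2 + 2 * x2**2 + x3 + 3 * x4 - 6,
        2 * x1**2 + x1 + x2**2 + 10 * x3 + 2 * x4 - 2,
        3 * x1**2 + x1 * x2 + 2 * x2**2 + 2 * x3 + 9 * x4 - 9,
        x1**2 + 3 * x2**2 + 2 * x3 + 3 * x4 - 3,
    ])

def project_simplex(x, s=4.0):
    # Euclidean projection onto {x >= 0 : sum(x) = s}, O(n log n) sort method.
    u = np.sort(x)[::-1]                # sort coordinates in descending order
    css = np.cumsum(u) - s              # partial sums minus the target sum
    ks = np.arange(1, x.size + 1)
    rho = ks[u - css / ks > 0][-1]      # largest feasible support size
    return np.maximum(x - css[rho - 1] / rho, 0.0)

x0 = project_simplex(np.array([1.0, 2.0, 0.0, 1.0]))   # Case I starting point
```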

5 Conclusion

In this article, we introduced two modifications of the inertial TEGM with self-adaptive step size for solving monotone VIPs. The algorithms are constructed in such a way that only one projection, onto the feasible set $C$ or onto a half-space containing it, is made in each iteration. The results obtained improve many known results in this direction in the literature.

Table 3

Comparison between Algorithms 3.1, 1.1, and 1.2 for Problem 3, $\varepsilon = 10^{-3}$

             Algorithm 3.1          Algorithm 1.1          Algorithm 1.2
             Iter.   CPU time (s)   Iter.   CPU time (s)   Iter.   CPU time (s)
Case I       15      4.4316         35      9.5863         18      4.8669
Case II      18      5.5598         36      10.1659        23      6.3572
Table 4

Comparison between Algorithms 3.1, 1.1, and 1.2 for Problem 3, $\varepsilon = 10^{-6}$

             Algorithm 3.1          Algorithm 1.1          Algorithm 1.2
             Iter.   CPU time (s)   Iter.   CPU time (s)   Iter.   CPU time (s)
Case I       31      9.5930         75      22.0722        41      10.6315
Case II      31      9.5526         75      21.7023        41      11.5232
Figure 3: Problem 3, left: Case I; right: Case II.

Figure 4: Problem 3, left: Case I; right: Case II.

Acknowledgments

The authors sincerely thank the anonymous reviewers for their careful reading, constructive comments, and fruitful suggestions that substantially improved the manuscript. Oluwatosin Temitope Mewomo is supported in part by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

Conflict of interest: The authors declare that they have no competing interests.

References

[1] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research and Financial Engineering, Springer-Verlag, New York, 2003, 10.1007/b97544.

[2] L. O. Jolaoso, K. O. Oyewole, C. C. Okeke, and O. T. Mewomo, A unified algorithm for solving split generalized mixed equilibrium problem and fixed point of nonspreading mapping in Hilbert space, Demonstr. Math. 51 (2018), 211–232, 10.1515/dema-2018-0015.

[3] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.

[4] I. V. Konnov, Combined Relaxation Methods for Variational Inequalities, Springer, Berlin, 2001, 10.1007/978-3-642-56886-2.

[5] O. T. Mewomo and F. U. Ogbuisi, Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces, Quaest. Math. 41 (2018), no. 1, 129–148, 10.2989/16073606.2017.1375569.

[6] F. U. Ogbuisi and O. T. Mewomo, Convergence analysis of common solution of certain nonlinear problems, Fixed Point Theory 19 (2018), no. 1, 335–358, 10.24193/fpt-ro.2018.1.26.

[7] F. U. Ogbuisi and O. T. Mewomo, Iterative solution of split variational inclusion problem in real Banach space, Afr. Mat. 28 (2017), no. 1–2, 295–309, 10.1007/s13370-016-0450-z.

[8] L. O. Jolaoso, A. Taiwo, T. O. Alakoya, and O. T. Mewomo, Strong convergence theorem for solving pseudo-monotone variational inequality problem using projection method in a reflexive Banach space, J. Optim. Theory Appl. 185 (2020), no. 3, 744–766, 10.1007/s10957-020-01672-3.

[9] Y. Censor, A. Gibali, and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl. 148 (2011), 318–335, 10.1007/s10957-010-9757-3.

[10] Y. Censor, A. Gibali, and S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optim. Methods Softw. 26 (2011), 827–845, 10.1080/10556788.2010.551536.

[11] Y. Censor, A. Gibali, and S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms 59 (2012), 301–323, 10.1007/s11075-011-9490-5.

[12] P. Cholamjiak and S. Suantai, Iterative methods for solving equilibrium problems, variational inequalities and fixed points of nonexpansive semigroups, J. Glob. Optim. 57 (2013), 1277–1297, 10.1007/s10898-012-0029-7.

[13] C. Izuchukwu, G. C. Ugwunnadi, O. T. Mewomo, A. R. Khan, and M. Abbas, Proximal-type algorithms for split minimization problem in p-uniformly convex metric space, Numer. Algorithms 82 (2019), no. 3, 909–935, 10.1007/s11075-018-0633-9.

[14] L. O. Jolaoso, T. O. Alakoya, A. Taiwo, and O. T. Mewomo, A parallel combination extragradient method with Armijo line searching for finding common solution of finite families of equilibrium and fixed point problems, Rend. Circ. Mat. Palermo II (2019), 10.1007/s12215-019-00431-2.

[15] L. O. Jolaoso, T. O. Alakoya, A. Taiwo, and O. T. Mewomo, Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space, Optimization (2020), 10.1080/02331934.2020.1716752.

[16] L. O. Jolaoso, F. U. Ogbuisi, and O. T. Mewomo, An iterative method for solving minimization, variational inequality and fixed point problems in reflexive Banach spaces, Adv. Pure Appl. Math. 9 (2018), no. 3, 167–184, 10.1515/apam-2017-0037.

[17] L. O. Jolaoso, A. Taiwo, T. O. Alakoya, and O. T. Mewomo, A self-adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces, Demonstr. Math. 52 (2019), 183–203, 10.1515/dema-2019-0013.

[18] G. Kassay, S. Reich, and S. Sabach, Iterative methods for solving systems of variational inequalities in reflexive Banach spaces, SIAM J. Optim. 21 (2011), 1319–1344, 10.1137/110820002.

[19] S. A. Khan and W. Cholamjiak, Mixed variational inequalities with various error bounds for random fuzzy mappings, J. Intell. Fuzzy Systems 34 (2018), no. 4, 2313–2324, 10.3233/JIFS-171370.

[20] C. C. Okeke and O. T. Mewomo, On split equilibrium problem, variational inequality problem and fixed point problem for multi-valued mappings, Ann. Acad. Rom. Sci. Ser. Math. Appl. 9 (2017), no. 2, 255–280.

[21] Y. Shehu and O. T. Mewomo, Further investigation into split common fixed point problem for demicontractive operators, Acta Math. Sin. (Engl. Ser.) 32 (2016), no. 11, 1357–1376, 10.1007/s10114-016-5548-6.

[22] Y. Shehu and P. Cholamjiak, Iterative method with inertial for variational inequalities in Hilbert spaces, Calcolo (2019), 10.1007/s10092-018-0300-5.

[23] S. Suantai, N. Pholasa, and P. Cholamjiak, The modified inertial relaxed CQ algorithm for solving the split feasibility problems, J. Ind. Manag. Optim. 14 (2018), 1595–1615, 10.3934/jimo.2018023.

[24] A. Taiwo, L. O. Jolaoso, and O. T. Mewomo, A modified Halpern algorithm for approximating a common solution of split equality convex minimization problem and fixed point problem in uniformly convex Banach spaces, Comput. Appl. Math. 38 (2019), no. 2, 77, 10.1007/s40314-019-0841-5.

[25] A. Taiwo, L. O. Jolaoso, and O. T. Mewomo, Parallel hybrid algorithm for solving pseudomonotone equilibrium and split common fixed point problems, Bull. Malays. Math. Sci. Soc. 43 (2020), 1893–1918, 10.1007/s40840-019-00781-1.

[26] J. Yang and H. Liu, A modified projected gradient method for monotone variational inequalities, J. Optim. Theory Appl. 179 (2018), 197–211, 10.1007/s10957-018-1351-0.

[27] A. Taiwo, T. O. Alakoya, and O. T. Mewomo, Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces, Numer. Algorithms (2020), 10.1007/s11075-020-00937-2.

[28] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekon. Mat. Metody 12 (1976), 747–756.

[29] Y. Censor, A. Gibali, and S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization 61 (2012), 1119–1132, 10.1080/02331934.2010.539689.

[30] N. Nadezhkina and W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings, J. Optim. Theory Appl. 128 (2006), 191–201, 10.1007/s10957-005-7564-z.

[31] M. V. Solodov and B. F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim. 37 (1999), 765–776, 10.1137/S0363012997317475.

[32] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim. 38 (2000), 431–446, 10.1137/S0363012998338806.

[33] P. E. Maingé, A hybrid extragradient-viscosity method for monotone operators and fixed point problems, SIAM J. Control Optim. 47 (2008), 1499–1515, 10.1137/060675319.

[34] J. Yang and H. Liu, Strong convergence result for solving monotone variational inequalities in Hilbert space, Numer. Algorithms 80 (2019), no. 3, 741–752, 10.1007/s11075-018-0504-4.

[35] D. V. Thong, N. T. Vinh, and Y. J. Cho, A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems, Optim. Lett. (2019), 10.1007/s11590-019-01391-3.

[36] F. Alvarez and H. Attouch, An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping, Set-Valued Anal. 9 (2001), 3–11, 10.1023/A:1011253113155.

[37] W. Cholamjiak, P. Cholamjiak, and S. Suantai, An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces, J. Fixed Point Theory Appl. 20 (2018), 42, 10.1007/s11784-018-0526-5.

[38] W. Cholamjiak, N. Pholasa, and S. Suantai, A modified inertial shrinking projection method for solving inclusion problems and quasi-nonexpansive multivalued mappings, Comput. Appl. Math. 37 (2018), no. 4, 5750–5774, 10.1007/s40314-018-0661-z.

[39] B. T. Polyak, Some methods of speeding up the convergence of iterative methods, Zh. Vychisl. Mat. Mat. Fiz. 4 (1964), 1–17.

[40] L. O. Jolaoso, A. Taiwo, T. O. Alakoya, and O. T. Mewomo, A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem, Comput. Appl. Math. 39 (2020), 38, 10.1007/s40314-019-1014-2.

[41] O. K. Oyewole, H. A. Abass, and O. T. Mewomo, A strong convergence algorithm for a fixed point constrained split null point problem, Rend. Circ. Mat. Palermo II (2020), 10.1007/s12215-020-00505-6.

[42] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, 1984.

[43] T. O. Alakoya, L. O. Jolaoso, and O. T. Mewomo, Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems, Optimization (2020), 10.1080/02331934.2020.1723586.

[44] L. J. Lin, M. F. Yang, Q. H. Ansari, and G. Kassay, Existence results for Stampacchia and Minty type implicit variational inequalities with multivalued maps, Nonlinear Anal. 61 (2005), 1–19, 10.1016/j.na.2004.07.038.

[45] M. Kojima and S. Shindo, Extension of Newton and quasi-Newton methods to systems of PC¹ equations, J. Oper. Res. Soc. Japan 29 (1986), no. 4, 352–375.

Received: 2019-07-12
Revised: 2020-05-07
Accepted: 2020-05-30
Published Online: 2020-09-15

© 2020 Timilehin Opeyemi Alakoya et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
