
Algorithms for Solving Resolvent of the Sum of Two Maximal Monotone Operators with Finite Family of Nonexpansive Operators


Submitted: 07 October 2025. Posted: 08 October 2025.


Abstract
In this research paper, we address a variational problem using maximal monotone operators in conjunction with a finite family of nonexpansive operators. We propose a single-valued mapping whose fixed point allows us to find the solution to the main problem. Subsequently, we propose two algorithms. In the first, we introduce a system of sequences whose limit helps in solving the problem. In the second, we apply the Ishikawa algorithm to our setting using fixed point theory, which enables us to achieve strong convergence. Finally, we provide an illustrative example to demonstrate the applicability of our results.

1. Introduction

Convex analysis, monotone operator theory, and the theory of nonexpansive mappings are foundational pillars of nonlinear analysis, intricately linked through their shared mathematical structures. Notably, a wide range of minimization problems encountered in practice can be elegantly reformulated as monotone inclusion problems, highlighting the unifying role these theories play in addressing complex variational challenges (see [2,9,13,23]).
To begin, let us explore one of the most well-known problems involving maximal monotone operators, which can be stated as follows:
find $x\in\mathcal{H}$ such that $0\in A(x)$,
where $\mathcal{H}$ is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and $A$ is a maximal monotone operator. A large number of authors have been interested in this problem, among them Rockafellar in 1976 (see [15]), and they developed different algorithms to find its solution set, which can be written as $S=\{x\in\mathcal{H}\mid J_\lambda^A(x)=x\}$, where $J_\lambda^A$ is the resolvent of $A$. One of the most famous of these algorithms was proposed by Mann (see [12]) and is defined as follows:
$$x_0\in\mathcal{H},\qquad x_{k+1}=(1-a_k)x_k+a_kJ_\lambda^A(x_k),\quad k\ge 0;$$
under suitable conditions on $a_k$, the iterative sequence is proved to converge to a fixed point of $J_\lambda^A$, which is equivalent to a solution of the original problem. In this research, we focus on characterizing the set of solutions to the following problem:
Find $x$ in $\mathcal{H}$ such that
$$0\in A(x)+B(x)+\sum_{i=1}^n C_i(x),\tag{1.1}$$
where $A:\mathcal{H}\to\mathcal{H}$ is an $\alpha$-inverse strongly monotone operator, $B:\mathcal{H}\to\mathcal{H}$ is a $\beta$-strongly monotone operator, and $C_i:\mathcal{H}\to\mathcal{H}$, $1\le i\le n$, are nonexpansive operators.
If $C_i=0$ for all $1\le i\le n$, and $A$, $B$ are two maximal monotone operators defined on a real Hilbert space, this case has been studied by many authors (see [2,10,16]) using the Douglas-Rachford iterative algorithm (DRIA), which is defined by the iterative sequence $\{Z_n\}$ as follows:
$$Z_{n+1}=J_\lambda^B\big(2J_\lambda^A-I\big)(Z_n)+\big(I-J_\lambda^A\big)(Z_n).$$
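For concreteness, the following short numerical sketch (not part of the original text) runs the DRIA on two scalar operators; the choices $A(x)=2x$, $B(x)=3x-6$, $\lambda=1$ and the starting point are illustrative only, and for them the unique zero of $A+B$ is $1.2$.

```python
# Minimal illustration of the DRIA for the scalar operators A(x) = 2x and B(x) = 3x - 6
# (toy choices); the unique zero of A + B is x = 1.2.
lam = 1.0
J_A = lambda t: t / (1.0 + 2.0 * lam)                  # resolvent of A
J_B = lambda t: (t + 6.0 * lam) / (1.0 + 3.0 * lam)    # resolvent of B

z = 0.0
for _ in range(60):
    z = J_B(2.0 * J_A(z) - z) + (z - J_A(z))           # one DRIA step

print(J_A(z))   # the "shadow" iterates J_lambda^A(Z_n) approach 1.2
```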
A. Beddani also proposed a new algorithm to find the set $(A+B)^{-1}(0)$ (see [3,5]). The algorithm is defined via the following function:
$$f_\lambda:\mathbb{R}^2\to\mathbb{R}^2,\qquad
f_\lambda(x,y)=\begin{pmatrix} J_\lambda^A(x)-\dfrac{x+y}{2}\\[6pt] J_\lambda^B(y)-\dfrac{x+y}{2}\end{pmatrix};$$
if there exists a pair $(x,y)\in\mathbb{R}^2$ such that $f_\lambda(x,y)=0$, then $0\in A\big(J_\lambda^Ax\big)+B\big(J_\lambda^Ax\big)$.
In our work, we focus on one of the most well-known approaches: the Douglas-Rachford Splitting Algorithm (DRSA) in the case of two maximal monotone operators (see [7]). In the subsequent sections, we propose an operator defined as:
$$\Psi_\lambda(x)=J_\lambda^B\!\left(-\lambda\sum_{i=1}^n C_i\big(J_\lambda^Ax\big)-x+2J_\lambda^Ax\right)+\lambda A_\lambda(x),$$
and we propose several algorithms, one of which is the Ishikawa iterative sequence (see [11]), which guarantees strong convergence under appropriate conditions.
In order to build these algorithms, we need some preliminary concepts from convex analysis and monotone operator theory.

2. Preliminaries

2.1. Operators and Monotonicity

Let $\mathcal{H}$ be a real Hilbert space, and let $A:\mathcal{H}\to 2^{\mathcal{H}}$ be a set-valued operator. We denote the domain of $A$ by $\operatorname{dom}(A)$, i.e.,
$$\operatorname{dom}(A)=\{x\in\mathcal{H}: A(x)\neq\emptyset\}.$$
We say that $A$ has full domain if $\operatorname{dom}(A)=\mathcal{H}$. The range of $A$ is defined as
$$\operatorname{Im}(A)=\{y\in\mathcal{H}: \exists\, x\in\mathcal{H},\ y\in A(x)\}.$$
The graph of $A$ is given by:
$$\operatorname{gph}(A):=\{(x,y)\in\mathcal{H}\times\mathcal{H}: x\in\operatorname{dom}(A),\ y\in A(x)\}.$$
Let $\{A_i\}_{i=1}^n$ be a finite family of operators. Then the sum operator is defined as:
$$\sum_{i=1}^n A_i(x):=\left\{\sum_{i=1}^n y_i\,:\, y_i\in A_i(x),\ i=1,\dots,n\right\}.$$
Definition 1
([6]). The operator $A$ is said to be monotone if:
$$\langle y_1-y_2,x_1-x_2\rangle\ge 0\quad\text{for all }(x_i,y_i)\in\operatorname{gph}(A),\ i=1,2.$$
Definition 2
([6]). The operator $A$ is said to be $\alpha$-strongly monotone with $\alpha>0$ if:
$$\langle y_1-y_2,x_1-x_2\rangle\ge\alpha\|x_1-x_2\|^2\quad\text{for all }(x_i,y_i)\in\operatorname{gph}(A),\ i=1,2.$$
For any operator $A$ and $\lambda>0$, the resolvent of $A$ is defined as
$$J_\lambda^A:=(I+\lambda A)^{-1},$$
and the associated Yosida approximation is $A_\lambda:=\frac{1}{\lambda}\big(I-J_\lambda^A\big)$.
An operator $A$ is said to be nonexpansive if:
$$\|y_1-y_2\|\le\|x_1-x_2\|\quad\text{for all }(x_i,y_i)\in\operatorname{gph}(A),\ i=1,2.$$
 Proposition 1.
For all $\alpha>0$ and $\lambda>0$:
1) If $A$ is $\alpha$-strongly monotone, then $J_\lambda^A$ is $\frac{1}{\alpha\lambda+1}$-Lipschitz continuous.
2) If $A$ is $\alpha$-inverse strongly monotone, then the Yosida approximation $A_\lambda$ is $\frac{1}{\alpha+\lambda}$-Lipschitz continuous.
Proposition 2
([1]). $A_\lambda(y)\in A\big(J_\lambda^A(y)\big)$ for all $y\in\mathcal{H}$.
Proposition 3
([1]). For any $\lambda>0$, $\delta>0$, we have
$$J_\lambda^A(x)=J_\delta^A\!\left(\frac{\delta}{\lambda}x+\Big(1-\frac{\delta}{\lambda}\Big)J_\lambda^A(x)\right).$$
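As a quick numerical check (an addition to the original text), the scalar operator $A(x)=cx$ has the closed forms $J_\lambda^A(x)=\frac{x}{1+\lambda c}$ and $A_\lambda(x)=\frac{cx}{1+\lambda c}$, so Propositions 2 and 3 can be verified directly; the constants below are arbitrary choices.

```python
# Numerical check of Propositions 2 and 3 for the scalar operator A(x) = c*x.
c = 0.2                      # A(x) = c*x is maximal monotone on R for c >= 0
lam, delta = 2.0, 0.5
x = 1.7

A      = lambda t: c * t
J      = lambda t, mu: t / (1.0 + mu * c)      # resolvent J_mu^A(t) = (I + mu*A)^{-1}(t)
yosida = lambda t, mu: (t - J(t, mu)) / mu     # Yosida approximation A_mu(t)

# Proposition 2: A_lambda(x) = A(J_lambda^A(x)) (equality here, A being single valued)
print(abs(yosida(x, lam) - A(J(x, lam))) < 1e-12)

# Proposition 3: J_lambda^A(x) = J_delta^A((delta/lam)*x + (1 - delta/lam)*J_lambda^A(x))
lhs = J(x, lam)
rhs = J((delta / lam) * x + (1.0 - delta / lam) * J(x, lam), delta)
print(abs(lhs - rhs) < 1e-12)
```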

2.2. Maximal Monotone Operators and Convex Functions

An operator A is said to be maximal monotone if it satisfies the following:
  • A is monotone.
  • If $B$ is another monotone operator such that the graph of $A$ (i.e., the set of all pairs $(x,y)$ with $y\in A(x)$) is contained in the graph of $B$, then $B=A$.
Let $X,Y\subseteq\mathcal{H}$ be convex subsets of a Hilbert space $\mathcal{H}$, and let $f:X\to\mathbb{R}$ be a function.
Definition 3
([7]). A function $f$ is said to be convex if for all $x,y\in X$ and all $\lambda\in[0,1]$,
$$f((1-\lambda)x+\lambda y)\le(1-\lambda)f(x)+\lambda f(y).$$
If the inequality is strict for all $x\neq y$, then $f$ is called strictly convex. Moreover, $f$ is said to be $\alpha$-strongly convex if:
$$f((1-\lambda)x+\lambda y)\le(1-\lambda)f(x)+\lambda f(y)-\frac{\alpha\lambda(1-\lambda)}{2}\|x-y\|^2.$$
Definition 4
([7]). Let $f:\mathcal{H}\to\mathbb{R}$ be a convex function. The set
$$\partial f(x):=\big\{g\in\mathcal{H}\,:\,\langle g,y-x\rangle\le f(y)-f(x)\ \text{for all } y\in\mathcal{H}\big\}$$
is called the subdifferential of $f$ at $x$. The function $f$ is said to be subdifferentiable at $x$ if $\partial f(x)\neq\emptyset$. An element of the subdifferential is called a subgradient.
A famous example of a maximal monotone operator is the subdifferential of the function $f(x)=|x|$ on $\mathbb{R}$:
$$\partial f(x)=\begin{cases}\{-1\}, & x<0,\\ [-1,1], & x=0,\\ \{1\}, & x>0.\end{cases}$$
We observe that no monotone operator on $\mathbb{R}$ can properly contain $\partial f$, hence $\partial f$ is a maximal monotone operator.
Lemma 1
([8]). Given any maximal monotone operator $A$, a real number $\lambda>0$, and $x\in\mathcal{H}$, we have $0\in A(x)$ if and only if $J_\lambda^A(x)=x$.
Lemma 2
([11]). Let a real sequence $\{x_k\}_{k=1}^{\infty}$ satisfy the following condition:
$$x_{k+1}\le\sigma x_k+\rho_k,$$
where $x_k\ge 0$, $\rho_k\ge 0$, $\lim_{k\to\infty}\rho_k=0$, and $0\le\sigma<1$. Then $\lim_{k\to\infty}x_k=0$.

3. Main Result

In this section, we present our main results related to the problem under consideration. Our objective is to address and solve various cases of the following monotone inclusion problem.
$$0\in A(x)+B(x)+\sum_{i=1}^n C_i(x),\tag{3.1}$$
where $A:\mathcal{H}\to 2^{\mathcal{H}}$ is an $\alpha$-inverse strongly monotone operator, $B:\mathcal{H}\to 2^{\mathcal{H}}$ is $\beta$-strongly monotone, and $\{C_i\}_{1\le i\le n}$ is a finite family of nonexpansive operators $C_i:\mathcal{H}\to\mathcal{H}$.
Let us define the fixed point set $F(\Psi_\lambda)=\{x^*\in\mathcal{H}\,:\,\Psi_\lambda(x^*)=x^*\}$ and the solution set
$$S=\Big\{x\in\mathcal{H}\,:\,0\in A(x)+B(x)+\sum_{i=1}^n C_i(x)\Big\}.$$
First, we study the problem defined by the sum of two maximal monotone operators $A$ and $B$, both defined on a Hilbert space $\mathcal{H}$. This problem is formulated as: Find an element $x\in\mathcal{H}$ such that
$$0\in A(x)+B(x).\tag{3.2}$$
We propose a simple algorithm, based on the Yosida approximation, which can solve (3.2).
Proposition 4.
For any $\delta>0$, $\lambda>0$, we have $A_\lambda(x)=A_\delta\big(x+(\delta-\lambda)A_\lambda(x)\big)$.
Proof. 
Let $\delta>0$, $\lambda>0$. By Proposition 3 we have
$$J_\lambda^A(x)=J_\delta^A\!\left(\frac{\delta}{\lambda}x+\Big(1-\frac{\delta}{\lambda}\Big)J_\lambda^A(x)\right),$$
which is equivalent to
$$x-\lambda A_\lambda(x)=\frac{\delta}{\lambda}x+\Big(1-\frac{\delta}{\lambda}\Big)\big(x-\lambda A_\lambda(x)\big)-\delta A_\delta\!\left(\frac{\delta}{\lambda}x+\Big(1-\frac{\delta}{\lambda}\Big)\big(x-\lambda A_\lambda(x)\big)\right).$$
Since $\frac{\delta}{\lambda}x+\big(1-\frac{\delta}{\lambda}\big)\big(x-\lambda A_\lambda(x)\big)=x+(\delta-\lambda)A_\lambda(x)$, this simplifies to
$$\delta A_\lambda(x)=\delta A_\delta\big(x+(\delta-\lambda)A_\lambda(x)\big),$$
so we conclude
$$A_\lambda(x)=A_\delta\big(x+(\delta-\lambda)A_\lambda(x)\big).$$
This completes the proof. □
Proposition 5.
Let $\lambda>0$ and define the operator $\theta_\lambda:\mathcal{H}\to\mathcal{H}$ by
$$\theta_\lambda(x)=x+A_\lambda(x)+B_\lambda\big(x-2\lambda A_\lambda(x)\big).$$
If $x$ is a fixed point of $\theta_\lambda$, then $x-\lambda A_\lambda(x)$ is a solution of the monotone inclusion problem (3.2).
Proof. 
Assume that $x^*$ is a fixed point of $\theta_\lambda$. By definition, this implies:
$$x^*+A_\lambda(x^*)+B_\lambda\big(x^*-2\lambda A_\lambda(x^*)\big)=x^*.$$
Subtracting $x^*$ from both sides, we obtain:
$$A_\lambda(x^*)+B_\lambda\big(x^*-2\lambda A_\lambda(x^*)\big)=0.$$
Let us define $z=x^*-\lambda A_\lambda(x^*)$. Then:
$$x^*=z+\lambda A_\lambda(x^*),\qquad x^*-2\lambda A_\lambda(x^*)=z-\lambda A_\lambda(x^*).$$
Substituting these into the previous equation gives:
$$A_\lambda(x^*)+B_\lambda\big(z-\lambda A_\lambda(x^*)\big)=0.$$
By Proposition 2, $A_\lambda(x^*)\in A\big(J_\lambda^A(x^*)\big)=A(z)$. Moreover, since $B_\lambda\big(z-\lambda A_\lambda(x^*)\big)=-A_\lambda(x^*)$, we get $J_\lambda^B\big(z-\lambda A_\lambda(x^*)\big)=z-\lambda A_\lambda(x^*)+\lambda A_\lambda(x^*)=z$, and hence $-A_\lambda(x^*)\in B(z)$. Thus, we have:
$$0\in A(z)+B(z),$$
which means that $z$ is a solution of equation (3.2), as claimed. □
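As a small sanity check (again an addition, with illustrative scalar data), the sketch below locates a fixed point $x^*$ of $\theta_\lambda$ for $A(x)=2x$ and $B(x)=3x-6$ and confirms that $x^*-\lambda A_\lambda(x^*)$ is indeed the zero of $A+B$, as Proposition 5 asserts.

```python
# Numerical check of Proposition 5 for A(x) = 2x and B(x) = 3x - 6 (toy choices);
# the unique zero of A + B is z = 1.2.
lam = 1.0
J_A   = lambda t: t / (1.0 + 2.0 * lam)                 # resolvent of A
J_B   = lambda t: (t + 6.0 * lam) / (1.0 + 3.0 * lam)   # resolvent of B
A_lam = lambda t: (t - J_A(t)) / lam                    # Yosida approximation of A
B_lam = lambda t: (t - J_B(t)) / lam                    # Yosida approximation of B

theta = lambda t: t + A_lam(t) + B_lam(t - 2.0 * lam * A_lam(t))

# Locate the fixed point of theta_lambda by bisection on theta(t) - t over [-10, 10].
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if (theta(lo) - lo) * (theta(mid) - mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
z = x_star - lam * A_lam(x_star)

print(x_star, z, 2.0 * z + (3.0 * z - 6.0))   # expected: 3.6, 1.2, approximately 0
```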
Theorem 1.
For all $\lambda>0$, consider the sequence $\{x_k\}$ defined by:
$$x_0,x_1\in\mathcal{H},\qquad x_{k+1}=(1-\alpha)x_k+\alpha\theta_\lambda(x_k)+\epsilon_k(x_k-x_{k-1}),\quad k\ge 1,$$
where $0<\alpha<1$, $\epsilon_k\in\,]0,1[$ and $\sum_{k=1}^{\infty}\epsilon_k<\infty$. If $\{x_k\}$ converges to $l$, then $l-\lambda A_\lambda(l)$ solves (3.2).
We now turn to the study of the principal problem (3.1).
Theorem 2.
For all $\lambda>0$, let
$$\Psi_\lambda(x)=J_\lambda^B\!\left(-\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)-x+2J_\lambda^A(x)\right)+\lambda A_\lambda(x).$$
If $S\neq\emptyset$, then $J_\lambda^A\big(F(\Psi_\lambda)\big)\subseteq S$.
Proof. 
Let x * be a fixed point of Ψ λ , so:
x * F ( Ψ λ ) Ψ λ ( x * ) = x * J λ B ( λ i = 1 n C i ( J λ A ( x * ) ) x * + 2 J λ A ( x * ) ) + λ A λ ( x * ) = x * x * λ A λ ( x * ) = J λ B ( λ i = 1 n C i ( J λ A ( x * ) ) x * + 2 J λ A ( x * ) ) J λ A ( x * ) = J λ B ( λ i = 1 n C i ( J λ A ( x * ) ) x * + 2 J λ A ( x * ) ) λ i = 1 n C i ( J λ A ( x * ) ) x * + 2 J λ A ( x * ) λ B ( J λ A ( x * ) ) + J λ A ( x * ) x * + J λ A ( x * ) λ i = 1 n C i ( J λ A ( x * ) ) + λ B ( J λ A ( x * ) ) and x * J λ A ( x * ) λ A ( J λ A ( x * ) ) 0 λ i = 1 n C i ( J λ A ( x * ) ) + λ B ( J λ A ( x * ) ) + λ A ( J λ A ( x * ) ) J λ A ( x * ) zer ( i = 1 n C i + A + B ) .
This complete the proof. □

3.1. Algorithm 1

In this algorithm, we impose an additional condition on the family $\{C_i\}_{1\le i\le n}$, namely that the operators $I+\lambda C_i$ must be bijective.
Proposition 6.
For all $\lambda>0$, if $F(\Psi_\lambda)\neq\emptyset$, then the system
$$J_\lambda^A(x)=J_\lambda^B(y)=J_\lambda^{C_1}(z_1)=\cdots=J_\lambda^{C_n}(z_n)=\frac{x+y+\sum_{i=1}^n z_i}{n+2}\tag{3.3}$$
has a solution $(x,y,z_1,\dots,z_n)\in\mathcal{H}^{n+2}$.
Proof. 
Let $F(\Psi_\lambda)\neq\emptyset$. Then there exists $x\in\mathcal{H}$ such that $\Psi_\lambda(x)=x$, which implies
$$J_\lambda^A(x)=J_\lambda^B\!\left(-\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)-x+2J_\lambda^A(x)\right).$$
Let us set
$$y=-\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)-x+2J_\lambda^A(x),\qquad z_i=J_\lambda^A(x)+\lambda C_i\big(J_\lambda^A(x)\big),\quad i=1,\dots,n.$$
Therefore $J_\lambda^A(x)=J_\lambda^B(y)$ and, since $z_i=(I+\lambda C_i)\big(J_\lambda^A(x)\big)$ with $I+\lambda C_i$ bijective, $J_\lambda^A(x)=J_\lambda^{C_i}(z_i)$; moreover,
$$x+y+\sum_{i=1}^n z_i=(n+2)J_\lambda^A(x).$$
Consequently,
$$J_\lambda^A(x)=J_\lambda^B(y)=J_\lambda^{C_1}(z_1)=\cdots=J_\lambda^{C_n}(z_n)=\frac{x+y+\sum_{i=1}^n z_i}{n+2}.$$
This implies that $(x,y,z_1,\dots,z_n)$ is a solution of (3.3). □
Theorem 3.
For all $\lambda>0$, consider the system of sequences defined by
$$(x^0,y^0,z_1^0,\dots,z_n^0)\in\mathcal{H}^{n+2},\qquad
\begin{cases}
x^{k+1}=(n+2)J_\lambda^A(x^k)-y^k-\sum_{i=1}^n z_i^k,\\[2pt]
y^{k+1}=(n+2)J_\lambda^B(y^k)-x^k-\sum_{i=1}^n z_i^k,\\[2pt]
z_i^{k+1}=J_\lambda^A(x^k)+\lambda C_i\big(J_\lambda^A(x^k)\big),
\end{cases}\qquad k\ge 0.$$
If it converges to $(x,y,z_1,\dots,z_n)$, then $J_\lambda^A(x)$ is a solution of (1.1).
Proof. 
Assume that the above system converges to $(x,y,z_1,\dots,z_n)\in\mathcal{H}^{n+2}$. Passing to the limit, we have
$$x=(n+2)J_\lambda^A(x)-y-\sum_{i=1}^n z_i,\qquad y=(n+2)J_\lambda^B(y)-x-\sum_{i=1}^n z_i,\qquad z_i=J_\lambda^A(x)+\lambda C_i\big(J_\lambda^A(x)\big).$$
Therefore,
$$x+y+\sum_{i=1}^n z_i=(n+2)J_\lambda^A(x),\qquad x+y+\sum_{i=1}^n z_i=(n+2)J_\lambda^B(y),\qquad z_i=J_\lambda^A(x)+\lambda C_i\big(J_\lambda^A(x)\big).$$
Then,
$$x+y+nJ_\lambda^A(x)+\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)=(n+2)J_\lambda^A(x),$$
and after simplification,
$$y=2J_\lambda^A(x)-\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)-x.$$
We also have $J_\lambda^A(x)=J_\lambda^B(y)$. Consequently,
$$J_\lambda^A(x)=J_\lambda^B\!\left(-\lambda\sum_{i=1}^n C_i\big(J_\lambda^A(x)\big)-x+2J_\lambda^A(x)\right).$$
So we conclude that $x$ is a fixed point of $\Psi_\lambda$, which, by Theorem 2, proves that $J_\lambda^A(x)$ is a solution of (1.1). □
Below we examine the case $n=1$, in which problem (1.1) reads: Find an element $x$ in the Hilbert space $\mathcal{H}$ such that
$$0\in A(x)+B(x)+C(x),$$
where $A$ and $B$ are two maximal monotone operators defined on the Hilbert space $\mathcal{H}$ and $C$ is a nonexpansive single-valued mapping, also defined on $\mathcal{H}$. The algorithm then takes the form:
$$(x^0,y^0,z^0)\in\mathcal{H}^3,\qquad
\begin{cases}
x^{k+1}=3J_\lambda^A(x^k)-y^k-z^k,\\[2pt]
y^{k+1}=3J_\lambda^B(y^k)-x^k-z^k,\\[2pt]
z^{k+1}=J_\lambda^A(x^k)+\lambda C\big(J_\lambda^A(x^k)\big),
\end{cases}\qquad k\ge 0.$$
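The scheme above translates directly into code; the sketch below (an illustrative addition) only transcribes one update step, since Theorem 3 is conditional and convergence is not guaranteed for arbitrary data.

```python
# Direct transcription of one step of the n = 1 scheme; the resolvents J_A, J_B,
# the nonexpansive map C and the parameter lam are supplied by the caller.
def algorithm1_step(x, y, z, J_A, J_B, C, lam):
    """One update (x^k, y^k, z^k) -> (x^{k+1}, y^{k+1}, z^{k+1})."""
    x_new = 3.0 * J_A(x) - y - z
    y_new = 3.0 * J_B(y) - x - z
    z_new = J_A(x) + lam * C(J_A(x))
    return x_new, y_new, z_new
```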

3.2. Algorithm 2

Proposition 7.
Let $\{C_i\}_{1\le i\le n}$ be a finite family of nonexpansive operators defined on $\mathcal{H}$, let $A$ be an $\alpha$-inverse strongly monotone operator, and let $B$ be a $\beta$-strongly monotone operator defined on the real Hilbert space. For all $\alpha>0$, $\beta>0$ and $\lambda>0$, $\Psi_\lambda$ is an $L$-Lipschitzian operator, where
$$L=\frac{n\lambda^2+(\alpha n+2)\lambda+\alpha}{(\beta\lambda+1)(\lambda+\alpha)}+\frac{\lambda}{\lambda+\alpha}.$$
Proof. 
$$\begin{aligned}
\|\Psi_\lambda(x)-\Psi_\lambda(y)\|&=\left\|J_\lambda^B\!\Big(-\lambda\sum_{i=1}^n C_i(J_\lambda^Ax)-x+2J_\lambda^Ax\Big)-J_\lambda^B\!\Big(-\lambda\sum_{i=1}^n C_i(J_\lambda^Ay)-y+2J_\lambda^Ay\Big)+\lambda A_\lambda(x)-\lambda A_\lambda(y)\right\|\\
&\le\left\|J_\lambda^B\!\Big(-\lambda\sum_{i=1}^n C_i(J_\lambda^Ax)-x+2J_\lambda^Ax\Big)-J_\lambda^B\!\Big(-\lambda\sum_{i=1}^n C_i(J_\lambda^Ay)-y+2J_\lambda^Ay\Big)\right\|+\frac{\lambda}{\lambda+\alpha}\|x-y\|\\
&\le\frac{1}{\beta\lambda+1}\left\|\lambda\sum_{i=1}^n\big(C_i(J_\lambda^Ax)-C_i(J_\lambda^Ay)\big)-\lambda A_\lambda(x)+\lambda A_\lambda(y)+J_\lambda^Ax-J_\lambda^Ay\right\|+\frac{\lambda}{\lambda+\alpha}\|x-y\|\\
&\le\frac{1}{\beta\lambda+1}\left(\lambda\sum_{i=1}^n\big\|C_i(J_\lambda^Ax)-C_i(J_\lambda^Ay)\big\|+\Big(\frac{\lambda}{\lambda+\alpha}+1\Big)\|x-y\|\right)+\frac{\lambda}{\lambda+\alpha}\|x-y\|\\
&\le\frac{1}{\beta\lambda+1}\left(\lambda\sum_{i=1}^n\big\|J_\lambda^Ax-J_\lambda^Ay\big\|+\Big(\frac{\lambda}{\lambda+\alpha}+1\Big)\|x-y\|\right)+\frac{\lambda}{\lambda+\alpha}\|x-y\|\\
&\le\frac{1}{\beta\lambda+1}\left(\lambda n+\frac{2\lambda+\alpha}{\lambda+\alpha}\right)\|x-y\|+\frac{\lambda}{\lambda+\alpha}\|x-y\|\\
&\le\left(\frac{n\lambda^2+(\alpha n+2)\lambda+\alpha}{(\beta\lambda+1)(\lambda+\alpha)}+\frac{\lambda}{\lambda+\alpha}\right)\|x-y\|.
\end{aligned}$$
□
Theorem 4.
For all positive real numbers $\lambda$, $\alpha$ and $\beta$: if $\beta>n$, $\alpha>\frac{2}{\beta-n}$ and $0<\lambda<\frac{(\beta-n)\alpha-2}{n}$, where $n\in\mathbb{N}^*$, then $\Psi_\lambda$ is a contractive mapping.
Proof. 
$\Psi_\lambda$ is contractive provided that $L<1$, that is:
$$\frac{n\lambda^2+(\alpha n+2)\lambda+\alpha}{(\beta\lambda+1)(\lambda+\alpha)}+\frac{\lambda}{\lambda+\alpha}<1
\iff\frac{(n+\beta)\lambda^2+(3+\alpha n)\lambda+\alpha}{(\beta\lambda+1)(\alpha+\lambda)}-1<0
\iff\frac{n\lambda^2+(2+\alpha n-\alpha\beta)\lambda}{(\beta\lambda+1)(\alpha+\lambda)}<0.$$
After simplification, since $(\beta\lambda+1)(\alpha+\lambda)>0$, the next step is to solve the resulting polynomial inequality involving the parameters $\alpha$, $n$, and $\beta$:
$$n\lambda^2+(2+\alpha n-\alpha\beta)\lambda<0,$$
that is,
$$n\lambda^2+(2+\alpha n-\alpha\beta)\lambda<0\iff\lambda\big(n\lambda+2+n\alpha-\alpha\beta\big)<0.$$
Since $\lambda>0$ and $n>0$, this implies
$$n\lambda+2+n\alpha-\alpha\beta<0.$$
Hence, we deduce the following conditions:
$$\beta>n,\qquad\alpha>\frac{2}{\beta-n},\qquad 0<\lambda<\frac{\alpha\beta-2-n\alpha}{n},\qquad n>0.$$
This completes the proof. □
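As a sanity check (not in the original text), the parameter conditions of Theorem 4 and the constant of Proposition 7 can be evaluated for the data of the example in Section 3.5, where $n=1$, $A(x)=\frac{1}{5}x$ (for which one can take $\alpha=5$) and $B(x)=4x$ (so $\beta=4$).

```python
# Check of Proposition 7 and Theorem 4 with the data of the example in Section 3.5.
n, alpha, beta, lam = 1, 5.0, 4.0, 2.0

L = (n * lam**2 + (alpha * n + 2) * lam + alpha) / ((beta * lam + 1) * (lam + alpha)) \
    + lam / (lam + alpha)
lam_max = ((beta - n) * alpha - 2) / n        # admissible upper bound on lambda

print(L, L < 1)             # L = 41/63 < 1, so the Lipschitz bound certifies a contraction
print(lam < lam_max)        # lambda = 2 is below the bound 13
```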
In this part, we modify the Ishikawa algorithm to achieve faster convergence of our sequence $\{\Psi_\lambda x_k\}$. Accordingly, we present the following theorem that defines the modified algorithm:
Theorem 5.
Let $\mathcal{H}$ be a real Hilbert space and $C$ a closed convex subset of $\mathcal{H}$. Let $\Psi_\lambda:C\to C$ be a contractive mapping. Let $\{x_k\}$ be a sequence defined iteratively for each integer $k\ge 0$ by
$$x_0\in\mathcal{H},\qquad x_{k+1}=a_kx_k+b_k\Psi_\lambda y_k,\qquad y_k=c_kx_k+d_k\Psi_\lambda x_k,\qquad k\ge 0,$$
where $\{a_k\}$, $\{b_k\}$, $\{c_k\}$, $\{d_k\}$ are sequences of positive numbers satisfying the following conditions:
(1) $0\le d_k\le b_k<1$,
(2) $b_k+a_k=1$,
(3) $c_k+d_k=1$.
If $\{x_k\}$ converges, then it converges to the unique fixed point of $\Psi_\lambda$.
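The scheme of Theorem 5 is straightforward to implement; the following sketch (an addition, with caller-supplied data) applies it to a generic contraction $\Psi_\lambda$ represented by a Python callable.

```python
# Sketch of the modified Ishikawa scheme of Theorem 5 for a generic contraction psi.
# The callables b and d return b_k and d_k and are expected to satisfy
# 0 <= d_k <= b_k < 1; a_k and c_k are then 1 - b_k and 1 - d_k.
def ishikawa(psi, x0, b, d, num_iters):
    x = x0
    for k in range(num_iters):
        b_k, d_k = b(k), d(k)
        a_k, c_k = 1.0 - b_k, 1.0 - d_k
        y = c_k * x + d_k * psi(x)        # inner (Ishikawa) step
        x = a_k * x + b_k * psi(y)        # outer step
    return x
```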

3.3. Convergence Analysis

Ishikawa has shown that for any points $x$, $y$, $z$ in a Hilbert space and any real number $\lambda$:
$$\|\lambda x+(1-\lambda)y-z\|^2=\lambda\|x-z\|^2+(1-\lambda)\|y-z\|^2-\lambda(1-\lambda)\|x-y\|^2.$$
Let $x^*$ be a fixed point of $\Psi_\lambda$; then we have
$$\|x_{k+1}-x^*\|^2=\|a_kx_k+b_k\Psi_\lambda y_k-x^*\|^2=b_k\|\Psi_\lambda y_k-x^*\|^2+a_k\|x_k-x^*\|^2-b_ka_k\|x_k-\Psi_\lambda y_k\|^2.\tag{7}$$
From the contraction condition we have:
$$\|\Psi_\lambda y_k-x^*\|^2=\|\Psi_\lambda y_k-\Psi_\lambda x^*\|^2\le\|y_k-x^*\|^2+h\|y_k-\Psi_\lambda y_k\|^2,\quad\text{where }h=L^2.\tag{8}$$
On the other hand,
$$\|y_k-x^*\|^2=\|c_kx_k+d_k\Psi_\lambda x_k-x^*\|^2,$$
which expands to:
$$\|y_k-x^*\|^2=d_k\|\Psi_\lambda x_k-x^*\|^2+c_k\|x_k-x^*\|^2-d_kc_k\|x_k-\Psi_\lambda x_k\|^2.\tag{9}$$
Similarly, we can express:
$$\|y_k-\Psi_\lambda y_k\|^2=d_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+c_k\|x_k-\Psi_\lambda y_k\|^2-d_kc_k\|x_k-\Psi_\lambda x_k\|^2.\tag{10}$$
Moreover, we have the following inequality:
$$\|\Psi_\lambda x_k-x^*\|^2\le\|x_k-x^*\|^2+h\|x_k-\Psi_\lambda x_k\|^2.\tag{11}$$
By introducing equations (11), (10), and (9) into (8), we obtain:
$$\|\Psi_\lambda y_k-x^*\|^2\le c_k\|x_k-x^*\|^2+hc_k\|x_k-\Psi_\lambda x_k\|^2+d_k\|x_k-x^*\|^2-d_kc_k\|x_k-\Psi_\lambda x_k\|^2+hd_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+hc_k\|x_k-\Psi_\lambda y_k\|^2.$$
Thus,
$$\|\Psi_\lambda y_k-x^*\|^2\le\|x_k-x^*\|^2-d_k(c_k-hd_k)\|x_k-\Psi_\lambda x_k\|^2+hd_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+hc_k\|x_k-\Psi_\lambda y_k\|^2.\tag{12}$$
Substituting equation (12) into equation (7), we get:
$$\|x_{k+1}-x^*\|^2\le\|x_k-x^*\|^2+hb_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2-b_kd_k(c_k-hd_k)\|x_k-\Psi_\lambda x_k\|^2-b_k(a_k-h+hd_k)\|x_k-\Psi_\lambda y_k\|^2.$$
This shows that $\{\|x_k-x^*\|^2\}$ is decreasing for all sufficiently large $k$. Since conditions (2) and (3) are satisfied, there exists a subsequence $\{x_{k_m}\}$ of $\{x_k\}$ such that:
$$\lim_{m\to\infty}\|x_{k_m}-\Psi_\lambda x_{k_m}\|=0.$$
Now, we show that $\{\Psi_\lambda x_k\}$ is a Cauchy sequence. Indeed,
$$\|\Psi_\lambda x_{k_m}-\Psi_\lambda x_k\|\le\|x_k-\Psi_\lambda x_{k_m}\|+\|x_k-\Psi_\lambda x_k\|.$$
Taking the limit as $m\to\infty$, we have:
$$\|\Psi_\lambda x_{k_m}-\Psi_\lambda x_k\|\to 0.$$
Thus, $\{\Psi_\lambda x_k\}$ is a Cauchy sequence, hence convergent. Call the limit $x^*$. Then:
$$\lim_{m\to\infty}\Psi_\lambda x_{k_m}=\lim_{m\to\infty}x_{k_m}=x^*.$$
Using the contraction property of $\Psi_\lambda$, we have:
$$\|\Psi_\lambda x^*-\Psi_\lambda x_{k_m}\|\le L\|x^*-x_{k_m}\|.$$
Taking the limit as $m\to\infty$, we obtain:
$$\lim_{m\to\infty}\|\Psi_\lambda x^*-\Psi_\lambda x_{k_m}\|=0.$$
Hence, we conclude that:
$$\|x^*-\Psi_\lambda x^*\|\le\|x^*-x_{k_m}\|+\|x_{k_m}-\Psi_\lambda x_{k_m}\|+\|\Psi_\lambda x_{k_m}-\Psi_\lambda x^*\|.$$
Taking the limit as $m\to\infty$, we deduce that $\|x^*-\Psi_\lambda x^*\|=0$, i.e., $x^*=\Psi_\lambda x^*$. Now, we aim to prove that the whole sequence $\{x_k\}$ converges to the unique fixed point of $\Psi_\lambda$.
$$\|x_{k+1}-x^*\|^2=\|a_kx_k+b_k\Psi_\lambda y_k-x^*\|^2=b_k\|\Psi_\lambda y_k-x^*\|^2+a_k\|x_k-x^*\|^2-b_ka_k\|x_k-\Psi_\lambda y_k\|^2.\tag{13}$$
We know that:
$$\|\Psi_\lambda y_k-x^*\|^2\le L^2\|y_k-x^*\|^2+L^2\|y_k-\Psi_\lambda y_k\|^2.\tag{14}$$
Writing $h=L^2$, this becomes:
$$\|\Psi_\lambda y_k-x^*\|^2\le h\|y_k-x^*\|^2+h\|y_k-\Psi_\lambda y_k\|^2.\tag{15}$$
On the other hand:
$$\|y_k-x^*\|^2=\|d_k\Psi_\lambda x_k+c_kx_k-x^*\|^2=d_k\|\Psi_\lambda x_k-x^*\|^2+c_k\|x_k-x^*\|^2-d_kc_k\|\Psi_\lambda x_k-x_k\|^2.$$
And similarly:
$$\|y_k-\Psi_\lambda y_k\|^2=\|d_k\Psi_\lambda x_k+c_kx_k-\Psi_\lambda y_k\|^2=d_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+c_k\|x_k-\Psi_\lambda y_k\|^2-d_kc_k\|\Psi_\lambda x_k-x_k\|^2.$$
Hence (15) can be rewritten as follows:
$$\|\Psi_\lambda y_k-x^*\|^2\le hd_k\|\Psi_\lambda x_k-x^*\|^2+hc_k\|x_k-x^*\|^2-hd_kc_k\|\Psi_\lambda x_k-x_k\|^2+hd_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+hc_k\|x_k-\Psi_\lambda y_k\|^2-hd_kc_k\|\Psi_\lambda x_k-x_k\|^2.\tag{16}$$
However, we also have
$$\|\Psi_\lambda x_k-x^*\|^2\le h\|x_k-x^*\|^2+h\|\Psi_\lambda x_k-x_k\|^2.\tag{17}$$
By substituting (17) into (16), we obtain:
$$\begin{aligned}
\|\Psi_\lambda y_k-x^*\|^2&\le h^2d_k\|x_k-x^*\|^2+h^2d_k\|\Psi_\lambda x_k-x_k\|^2+hc_k\|x_k-x^*\|^2-hd_kc_k\|\Psi_\lambda x_k-x_k\|^2\\
&\quad+hd_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+hc_k\|x_k-\Psi_\lambda y_k\|^2-hd_kc_k\|\Psi_\lambda x_k-x_k\|^2\\
&\le(hc_k+hd_k)\|x_k-x^*\|^2-hd_k(2-h-2d_k)\|\Psi_\lambda x_k-x_k\|^2+hc_k\|x_k-\Psi_\lambda y_k\|^2+hd_k\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2.
\end{aligned}\tag{18}$$
Incorporating (18) into (13) yields:
$$\begin{aligned}
\|x_{k+1}-x^*\|^2&\le b_kh\|x_k-x^*\|^2-b_kd_kh(2-h-2d_k)\|\Psi_\lambda x_k-x_k\|^2+b_kc_kh\|x_k-\Psi_\lambda y_k\|^2\\
&\quad+b_kd_kh\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2+a_k\|x_k-x^*\|^2-b_ka_k\|x_k-\Psi_\lambda y_k\|^2\\
&\le\big(1-b_k(1-h)\big)\|x_k-x^*\|^2-b_kd_kh(2-h-2d_k)\|\Psi_\lambda x_k-x_k\|^2\\
&\quad+b_kd_kh\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2-b_k(a_k-h+hd_k)\|x_k-\Psi_\lambda y_k\|^2.
\end{aligned}$$
Given that $\frac{1-h}{2}\le b_k\le 1-h$, $0<h<1$, $d_k\ge 0$, and $\lim_{k\to\infty}d_k=0$, there exists a natural number $N$ such that for $k>N$:
$$2-h-2d_k\ge 0\quad\text{and}\quad a_k-h+hd_k\ge 0.$$
Thus, for $k\ge N$, we have
$$\|x_{k+1}-x^*\|^2\le\tilde h\|x_k-x^*\|^2+b_kd_kh\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2,$$
where $0<\tilde h=1-\frac{(1-h)^2}{2}<1$.
From the boundedness of $C$, it follows that $\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2$ is bounded. Therefore, we conclude that
$$\lim_{k\to\infty}b_kd_kh\|\Psi_\lambda x_k-\Psi_\lambda y_k\|^2=0.$$
From Lemma 2, we conclude that $\lim_{k\to\infty}x_k=x^*$. This completes the proof.

3.4. Maximal Monotone Operators and Minimization Problem

We consider the following composite convex optimization problem:
$$\min_{x\in\mathbb{R}^n} f(x)+G(x)+H(x),\tag{20}$$
where:
  • $f:\mathbb{R}^n\to\mathbb{R}$ is a continuously differentiable function with a Lipschitz continuous gradient, i.e., $\nabla f$ is 1-Lipschitz,
  • G and H are convex, closed, and proper functions.
Proposition 8.
Let $A=\partial G$, $B=\partial H$ and $C=\nabla f$. Then, the minimization problem (20) is equivalent to finding a zero of the sum of maximal monotone operators, that is:
Find $x\in\mathbb{R}^n$ such that $0\in A(x)+B(x)+C(x)$.

3.5. Example

Let $f$, $G$, and $H$ be three real-valued functions defined on $\mathbb{R}$ as follows:
$$f(x)=\frac{1}{2}x^2+2,\qquad G(x)=\frac{1}{10}x^2,\qquad H(x)=2x^2.$$
We consider the following minimization problem:
$$\min_{x\in\mathbb{R}} f(x)+G(x)+H(x).$$
Let us define the following monotone operators, corresponding to the derivatives of $G$, $H$, and $f$:
$$A(x)=G'(x)=\frac{1}{5}x,\qquad B(x)=H'(x)=4x,\qquad C(x)=\nabla f(x)=x.$$
Then, the minimization problem above is equivalent to the inclusion problem:
Find $x\in\mathbb{R}$ such that $0\in A(x)+B(x)+C(x)$.
We know the resolvents of the operators are given by:
$$J_\lambda^A(x)=\frac{5}{5+\lambda}x,\qquad J_\lambda^B(x)=\frac{1}{1+4\lambda}x,\qquad\text{so that}\qquad A_\lambda(x)=\frac{x}{5+\lambda},$$
and hence,
$$\Psi_\lambda(x)=\frac{4\lambda^2-5\lambda+5}{(4\lambda+1)(\lambda+5)}\,x.$$
According to Theorem 5, we proceed by choosing the parameters:
$$\lambda=2,\qquad b_n=\frac{1}{n+1},\qquad d_n=\frac{1}{(n+1)^2}.$$

3.5.1. Application of the Algorithm 2

We have the single-valued mapping:
$$\Psi_\lambda(x)=\frac{4\lambda^2-5\lambda+5}{(4\lambda+1)(\lambda+5)}\,x.$$
For $\lambda=2$, this becomes:
$$\Psi_2(x)=\frac{11}{63}\,x.$$
We initialize the process as $x_0=y_0=1$, and define:
$$d_n=\frac{1}{(n+1)^2},\qquad b_n=\frac{1}{n+1},\qquad a_n=1-b_n,\qquad c_n=1-d_n.$$

Iteration steps

At each iteration $n\ge 0$, we update:
$$y_n=c_nx_n+d_n\Psi_2(x_n)=\Big(1-\frac{1}{(n+1)^2}\Big)x_n+\frac{11}{63(n+1)^2}x_n,\qquad
x_{n+1}=a_nx_n+b_n\Psi_2(y_n)=\Big(1-\frac{1}{n+1}\Big)x_n+\frac{11}{63(n+1)}y_n.$$
These choices satisfy the assumptions of Theorem 5 and ensure that the sequence $\{x_n\}$ converges to the unique fixed point of $\Psi_2$, namely $x^*=\Psi_2(x^*)\iff x^*=0$.
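The run below (an illustrative addition) assembles $\Psi_2$ directly from the resolvents of the example and applies the Ishikawa scheme with the parameters chosen above; the iterates decay towards the fixed point $0$ (slowly, since $b_n\sim 1/n$).

```python
# Numerical run of Algorithm 2 on the example of Section 3.5.
lam = 2.0
J_A   = lambda t: 5.0 * t / (5.0 + lam)      # resolvent of A(x) = x/5
J_B   = lambda t: t / (1.0 + 4.0 * lam)      # resolvent of B(x) = 4x
C     = lambda t: t                          # C = grad f
A_lam = lambda t: (t - J_A(t)) / lam         # Yosida approximation of A

# Psi_2 assembled from its definition rather than from the closed-form coefficient.
psi = lambda t: J_B(-lam * C(J_A(t)) - t + 2.0 * J_A(t)) + lam * A_lam(t)

x = 1.0                                      # x_0 = 1
for n in range(200):
    d_n = 1.0 / (n + 1) ** 2
    b_n = 1.0 / (n + 1)
    y = (1.0 - d_n) * x + d_n * psi(x)
    x = (1.0 - b_n) * x + b_n * psi(y)

print(x)   # small and still decreasing towards 0, the unique minimizer of f + G + H
```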

4. Conclusion

In conclusion, we have proposed several algorithms for solving the principal problem. We also supported this work with a simple illustrative example and observed the convergence of the sequences proposed in this paper to the solution of the problem. Nevertheless, the development of alternative algorithms under appropriate conditions that effectively address this class of problems remains an open area of research, offering valuable opportunities for further investigation and advancement.

References

  1. Attouch, H.; Baillon, J.; Théra, M. Variational Sum of Monotone Operators. Journal of Convex Analysis 1994, 1, 1–29.
  2. Bauschke, H. H.; Combettes, P. L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, 2011.
  3. Beddani, A. Finding a Zero of the Sum of Two Maximal Monotone Operators with Minimization Problem. Nonlinear Functional Analysis and Applications 2022, 27, 895–902.
  4. Beddani, A. Finding a Zero of the Sum of Three Maximal Monotone Operators. Journal of Science and Arts 2022, 22, 795–802.
  5. Beddani, A.; Berrailes, A. Zeros of the Sum of a Finite Family of Maximal Monotone Operators. Journal of Optimization Theory and Applications 2025, 205, 59.
  6. Brézis, H. Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, 1973.
  7. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Berlin, Heidelberg, 2012; Vol. 2057.
  8. Eckstein, J.; Bertsekas, D. P. On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators. Mathematical Programming 1992, 55, 293–318.
  9. Martínez-Legaz, J. E. Monotone Operators Representable by l.s.c. Convex Functions. 2022, 27, 895–902.
  10. Ibaraki, T. Approximation of a Zero Point of Monotone Operators with Nonsummable Errors. Fixed Point Theory and Applications 2016, 53.
  11. Liu, Q. A Convergence Theorem of the Sequence of Ishikawa Iterates for Quasi-Contractive Mappings. Journal of Mathematical Analysis and Applications 1990, 146, 301–305.
  12. Mann, W. R. Mean Value Methods in Iteration. Proceedings of the American Mathematical Society 1953, 4, 506–510.
  13. Moudafi, A.; Théra, M. Finding a Zero of the Sum of Two Maximal Monotone Operators. Journal of Optimization Theory and Applications 1997, 94, 425–448.
  14. Nammanee, K.; Suantai, S.; Cholamjiak, P. Convergence Theorems for Maximal Monotone Operators, Weak Relatively Nonexpansive Mappings and Equilibrium Problems. Journal of Applied Mathematics 2012, 16.
  15. Rockafellar, R. T. Monotone Operators and the Proximal Point Algorithm. SIAM Journal on Control and Optimization 1976, 14, 877–898.
  16. Shehu, Y. Convergence Results of Forward-Backward Algorithms for Sum of Monotone Operators in Banach Spaces. Results in Mathematics 2019, 74, 138.