1. Introduction
Convex analysis, monotone operator theory, and the theory of nonexpansive mappings are foundational pillars of nonlinear analysis, intricately linked through their shared mathematical structures. Notably, a wide range of minimization problems encountered in practice can be elegantly reformulated as monotone inclusion problems, highlighting the unifying role these theories play in addressing complex variational challenges (see [2,9,13,23]).
To begin, let us recall one of the most well-known problems involving maximal monotone operators, which can be stated as follows: find $x \in \mathcal{H}$ such that
$$0 \in Ax,$$
where $\mathcal{H}$ is a real Hilbert space with inner product $\langle \cdot,\cdot\rangle$ and induced norm $\|\cdot\|$, and $A$ is a maximal monotone operator. A large number of authors have been interested in this problem, among them Rockafellar in 1976 (see [15]), and they developed different algorithms to find its solution set. The problem can be rewritten as the fixed-point equation $x = J_{\lambda A}x$, where $J_{\lambda A} = (I + \lambda A)^{-1}$ is the resolvent of $A$. One of the most famous of these algorithms was proposed by Mann (see [12]) and is defined as follows:
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n J_{\lambda A}x_n.$$
Under suitable conditions on the sequence $(\alpha_n)$, it is proved that the iterative sequence converges strongly to a fixed point of $J_{\lambda A}$, which is equivalent to a solution of the original problem.
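For illustration, here is a minimal numerical sketch of this Mann scheme on $\mathbb{R}$; the toy operator $A(x) = x - 1$, the parameter $\lambda = 1$, and the constant relaxation weight are illustrative choices only.

```python
# Minimal sketch: Mann iteration x_{n+1} = (1 - a_n) x_n + a_n J_{lam A}(x_n)
# for the toy maximal monotone operator A(x) = x - 1 on R (unique zero x* = 1).
lam = 1.0                                  # illustrative resolvent parameter
J = lambda x: (x + lam) / (1.0 + lam)      # resolvent (I + lam*A)^{-1}, computed by hand

x = 5.0                                    # arbitrary starting point
for n in range(100):
    a_n = 0.5                              # constant relaxation weight (its series diverges)
    x = (1.0 - a_n) * x + a_n * J(x)

print(x)  # approaches 1.0, the unique zero of A, i.e. the fixed point of J
```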
In this research, we focus on characterizing the set of solutions to the following problem: find $x \in \mathcal{H}$ such that
where $A$ is an $\alpha$-inverse strongly monotone operator, $B$ is a $\beta$-strongly monotone operator, and $(C_i)_{i=1}^{n}$ is a finite family of nonexpansive operators.
When $A$ and $B$ are two maximal monotone operators defined on a real Hilbert space and the problem reduces to finding a zero of their sum, this case has been studied by many authors (see [2,10,16]) using the Douglas-Rachford iterative algorithm (DRIA), which, for a parameter $\lambda > 0$, is defined by the iterative sequence $(x_k)$ as follows:
$$x_{k+1} = J_{\lambda A}\bigl(2J_{\lambda B} - I\bigr)x_k + \bigl(I - J_{\lambda B}\bigr)x_k.$$
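For illustration, here is a minimal numerical sketch of this update on a toy instance; the operators $A = \partial|\cdot|$ and $B(x) = x - 2$, the parameter $\lambda = 1$, and the starting point are illustrative choices. In this classical scheme the shadow sequence $J_{\lambda B}x_k$ approaches a zero of $A + B$.

```python
lam = 1.0

def J_A(x):                          # resolvent of A = subdifferential of |.| : soft-thresholding
    if x > lam:  return x - lam
    if x < -lam: return x + lam
    return 0.0

def J_B(x):                          # resolvent of B(x) = x - 2 : solve p + lam*(p - 2) = x
    return (x + 2.0 * lam) / (1.0 + lam)

z = -4.0                             # arbitrary starting point
for k in range(200):
    y = J_B(z)                       # "shadow" point J_{lam B}(z_k)
    z = J_A(2.0 * y - z) + (z - y)   # Douglas-Rachford update

print(J_B(z))  # approaches 1.0, the unique zero of A + B (0 in d|x| + x - 2 holds at x = 1)
```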
A. Beddani also proposed a new algorithm to find this solution set (see [3,5]). The algorithm is defined by a function for which, if there exists a pair satisfying the corresponding condition, then the associated point solves the problem.
In our work, we focus on one of the most well-known approaches: the Douglas-Rachford Splitting Algorithm (DRSA) in the case of two maximal monotone operators (see [7]). In the subsequent sections, we propose an operator defined as:
Based on this operator, we propose several algorithms, one of which uses the Ishikawa iterative sequence (see [11]) and guarantees strong convergence under appropriate conditions.
In order to build these algorithms, we need some preliminary concepts from convex analysis and monotone operator theory.
2. Preliminaries
2.1. Operators and Monotonicity
Let $\mathcal{H}$ be a real Hilbert space, and let $A : \mathcal{H} \rightarrow 2^{\mathcal{H}}$ be a set-valued operator. We denote the domain of $A$ by $\operatorname{dom}A$, i.e.,
$$\operatorname{dom}A = \{x \in \mathcal{H} : Ax \neq \emptyset\}.$$
We say that $A$ has full domain if $\operatorname{dom}A = \mathcal{H}$. The range of $A$ is defined as
$$\operatorname{ran}A = \bigcup_{x \in \mathcal{H}} Ax.$$
The graph of $A$ is given by:
$$\operatorname{gra}A = \{(x,u) \in \mathcal{H} \times \mathcal{H} : u \in Ax\}.$$
Let $A_1, \dots, A_m$ be a finite family of operators. Then the sum operator is defined as:
$$\Bigl(\sum_{i=1}^{m} A_i\Bigr)x = \Bigl\{\sum_{i=1}^{m} u_i : u_i \in A_i x,\ i = 1,\dots,m\Bigr\}.$$
Definition 1 ([6]). The operator $A$ is said to be monotone if:
$$\langle x - y,\, u - v\rangle \geq 0 \quad \text{for all } (x,u), (y,v) \in \operatorname{gra}A.$$
Definition 2 ([6]). The operator $A$ is said to be $\alpha$-strongly monotone with $\alpha > 0$ if:
$$\langle x - y,\, u - v\rangle \geq \alpha\|x - y\|^2 \quad \text{for all } (x,u), (y,v) \in \operatorname{gra}A.$$
For any operator $A$ and $\lambda > 0$, the resolvent of $A$ is defined as:
$$J_{\lambda A} = (I + \lambda A)^{-1}.$$
An operator $A$ is said to be nonexpansive if:
$$\|Ax - Ay\| \leq \|x - y\| \quad \text{for all } x, y \in \mathcal{H}.$$
Proposition 1. For all $\lambda > 0$ and $\alpha > 0$:
If $A$ is $\alpha$-strongly monotone, then $J_{\lambda A}$ is $\frac{1}{1+\lambda\alpha}$-Lipschitz continuous.
If $A$ is $\alpha$-inverse strongly monotone, then the Yosida approximation $A_\lambda = \frac{1}{\lambda}(I - J_{\lambda A})$ is $\frac{1}{\alpha+\lambda}$-Lipschitz continuous.
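As a quick numerical illustration of these Lipschitz constants, consider the toy linear operator $A(x) = a x$ on $\mathbb{R}$, which is $a$-strongly monotone and $(1/a)$-inverse strongly monotone; the operator and the parameter values below are illustrative choices.

```python
# Numerical check of the Lipschitz constants above for the toy operator A(x) = a*x on R.
a, lam = 2.0, 0.5
J      = lambda x: x / (1.0 + lam * a)      # resolvent J_{lam A} = (I + lam*A)^{-1}
yosida = lambda x: (x - J(x)) / lam         # Yosida approximation A_lam

x, y = 3.0, -1.5                            # two arbitrary test points
print(abs(J(x) - J(y)) / abs(x - y), 1.0 / (1.0 + lam * a))          # both 0.5 : 1/(1+lam*alpha)
print(abs(yosida(x) - yosida(y)) / abs(x - y), 1.0 / (lam + 1.0/a))  # both 1.0 : 1/(lam+alpha), alpha = 1/a
```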
Proposition 2 ([1]).
Proposition 3 ([1]). For any , , we have
2.2. Maximal Monotone Operators and Convex Functions
An operator $A$ is said to be maximal monotone if it satisfies the following:
$A$ is monotone.
If $B$ is another monotone operator such that the graph of $A$ (i.e., the set of all pairs $(x,u)$ with $u \in Ax$) is contained in the graph of $B$, then $A = B$.
Let $C$ be a convex subset of a Hilbert space $\mathcal{H}$, and let $f : \mathcal{H} \rightarrow \mathbb{R}\cup\{+\infty\}$ be a function.
Definition 3 ([7]). A function $f$ is said to be convex if
$$f(tx + (1-t)y) \leq t f(x) + (1-t) f(y) \quad \text{for all } x, y \text{ and all } t \in [0,1].$$
If the inequality is strict for all $x \neq y$ and $t \in (0,1)$, then $f$ is called strictly convex. Moreover, $f$ is said to be $\alpha$-strongly convex if:
$$f(tx + (1-t)y) \leq t f(x) + (1-t) f(y) - \frac{\alpha}{2}\,t(1-t)\|x - y\|^2 \quad \text{for all } x, y \text{ and all } t \in [0,1].$$
Definition 4 ([7]). Let $f$ be a convex function. The set
$$\partial f(x) = \{u \in \mathcal{H} : f(y) \geq f(x) + \langle u,\, y - x\rangle \ \text{for all } y \in \mathcal{H}\}$$
is called the subdifferential of $f$ at $x$. The function $f$ is said to be subdifferentiable at $x$ if $\partial f(x) \neq \emptyset$. An element of the subdifferential is called a subgradient.
A famous example of a maximal monotone operator is the subdifferential of the absolute value function $f(x) = |x|$ on $\mathbb{R}$:
$$\partial f(x) = \begin{cases} \{-1\}, & x < 0,\\ [-1,1], & x = 0,\\ \{1\}, & x > 0. \end{cases}$$
We observe that no monotone operator on $\mathbb{R}$ can properly contain its graph, hence it is a maximal monotone operator.
Lemma 1 ([8]). Given any maximal monotone operator $A$, a real number $\lambda > 0$, and $x \in \mathcal{H}$, we have $p = J_{\lambda A}x$ if and only if $\frac{1}{\lambda}(x - p) \in Ap$.
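To illustrate this characterization, here is a minimal sketch for the operator $A = \partial|\cdot|$ on $\mathbb{R}$, whose resolvent is the soft-thresholding map; the operator and the parameter $\lambda$ are illustrative choices.

```python
lam = 0.7

def soft(x):                           # soft-thresholding = J_{lam A}, A = subdifferential of |.|
    if x > lam:  return x - lam
    if x < -lam: return x + lam
    return 0.0

def in_subdiff_abs(p, g, tol=1e-9):    # is g an element of the subdifferential of |.| at p ?
    if p > 0:  return abs(g - 1.0) <= tol
    if p < 0:  return abs(g + 1.0) <= tol
    return -1.0 - tol <= g <= 1.0 + tol

for x in (-3.0, -0.4, 0.0, 2.5):
    p = soft(x)
    # Lemma 1: p = J_{lam A}(x)  iff  (x - p)/lam belongs to A(p)
    print(x, p, in_subdiff_abs(p, (x - p) / lam))   # prints True in each case
```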
Lemma 2 ([11]). Let a nonnegative real sequence $(a_n)$ satisfy the following condition:
$$a_{n+1} \leq (1 - t_n)a_n + b_n + c_n,$$
where $t_n \in [0,1]$, $\sum_{n} t_n = \infty$, $b_n = o(t_n)$, and $\sum_{n} c_n < \infty$. Then, $\lim_{n\to\infty} a_n = 0$.
3. Main Result
In this section, we present our main results related to the problem under consideration. Our objective is to address and solve various cases of the following monotone inclusion problem.
where
is
-inverse strongly monotone operator,
is
-strongly monotone, and
are finite family of nonexpansive operators
.
Let us define the operator $T$ and the operator $S$ as follows:
First, we aim to study the problem defined by the sum of two maximal monotone operators $A$ and $B$, both defined on a Hilbert space $\mathcal{H}$. This problem is formulated as: find an element $x \in \mathcal{H}$ such that
$$0 \in Ax + Bx. \tag{3.2}$$
Let us propose a simple algorithm, based on the Yosida approximation, that can solve (3.2).
Proposition 4. For any $\lambda > 0$ and $x \in \mathcal{H}$, we have
Proof. Let , ; we have
which is equivalent to
Therefore,
and we conclude
This completes the proof. □
Proposition 5. Let $\lambda > 0$ and define the operator by
If $z$ is a fixed point of this operator, then $z$ is a solution of the monotone inclusion problem (3.2).
Proof. Assume that $z$ is a fixed point of the operator. By definition, this implies:
Subtracting
from both sides, we obtain:
Let us define
. Then:
Substituting these into the previous equation gives:
Thus, we have:
which means that
z is a solution of equation (3.2), as claimed. □
Theorem 1. For all $\lambda > 0$, let the sequence be defined as:
where the parameter sequences satisfy the stated conditions. If the sequence converges to $l$, then $l$ solves (3.2).
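As an illustration of how the Yosida approximation can be used to attack (3.2), here is a minimal numerical sketch of one natural explicit recursion, $x_{n+1} = x_n - \gamma\,(A_\lambda x_n + B_\lambda x_n)$, on a toy instance; the operators, the parameters $\lambda$ and $\gamma$, and this particular recursion are illustrative assumptions and are not claimed to coincide with the exact scheme of Theorem 1.

```python
# Toy instance of (3.2):  A = subdifferential of |.|,  B(x) = x - 1.
# The unique zero of A + B on R is x* = 0.
lam, gamma = 0.1, 0.01

A_yosida = lambda x: min(max(x / lam, -1.0), 1.0)   # A_lam = (I - J_{lam A}) / lam
B_yosida = lambda x: (x - 1.0) / (1.0 + lam)        # B_lam for B(x) = x - 1

x = 2.0                                             # arbitrary starting point
for n in range(5000):
    x = x - gamma * (A_yosida(x) + B_yosida(x))     # explicit step on the regularized inclusion

print(x)  # ~ 0.083 = lam/(1 + 2*lam): the zero of A_lam + B_lam, close to x* = 0 for small lam
```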
We now turn to the study of the principal problem (3.1).
Theorem 2. For all $\lambda > 0$, if , then .
Proof. Let be a fixed point of ; then:
This completes the proof. □
3.1. Algorithm 1
In this algorithm, we impose an additional condition on the family of operators $(C_i)$: they must be bijective.
Proposition 6. For all $\lambda > 0$, if , then the system defined as follows:
has a solution.
Proof. Let ; then there exists such that , which implies
Let us set
Therefore,
Consequently,
This implies that is a solution of . □
Theorem 3. For all $\lambda > 0$, if the system of sequences defined as:
converges to , then is a solution of .
Proof. Assume that the above system converges to in . Then we have
Therefore,
Then,
After simplification,
We also have
So we conclude that is a fixed point of , which proves that is a solution of . □
Below, we examine a special case in which problem (1.1) takes the following form: find an element $x$ in the Hilbert space $\mathcal{H}$ such that
where $A$ and $B$ are two maximal monotone operators defined on the Hilbert space $\mathcal{H}$ and $C$ is a nonexpansive single-valued mapping, also defined on $\mathcal{H}$. The algorithm is then defined as:
3.2. Algorithm 2
Proposition 7. Let $(C_i)$ be a finite family of nonexpansive operators defined on $\mathcal{H}$, let $A$ be an $\alpha$-inverse strongly monotone operator, and let $B$ be a $\beta$-strongly monotone operator, all defined on a real Hilbert space. For all and , the operator is $L$-Lipschitzian, where
Theorem 4. For all positive real numbers $\lambda$, $\alpha$, and $\beta$: if and , where , then is a contractive mapping.
Proof. For to be contractive, the following inequality must be satisfied:
After simplification, the next step is to solve the resulting polynomial inequality involving the parameters $\lambda$, $n$, $\alpha$, and $\beta$:
We have:
This implies:
Hence, we deduce the following conditions:
This completes the proof. □
In this part, we modify the Ishikawa algorithm to achieve faster convergence of our sequence. Accordingly, we present the following theorem that defines the modified algorithm:
Theorem 5. Let $\mathcal{H}$ be a real Hilbert space and let $K$ be a closed convex subset of $\mathcal{H}$. Let $T$ be a contractive mapping. Let $(x_k)$ be a sequence defined iteratively for each integer $k \geq 0$ by
where $(\alpha_k)$ and $(\beta_k)$ are sequences of positive numbers satisfying the following conditions:
,
,
.
If the sequence $(x_k)$ converges, then it converges to the unique fixed point of $T$.
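For illustration, here is a minimal numerical sketch of the classical Ishikawa scheme $y_k = (1-\beta_k)x_k + \beta_k T x_k$, $x_{k+1} = (1-\alpha_k)x_k + \alpha_k T y_k$ applied to a toy contraction on $\mathbb{R}$; the mapping $T$ and the constant parameter choices are illustrative assumptions and need not match the exact recursion and conditions of Theorem 5.

```python
def T(x):                 # toy contraction on R with Lipschitz constant 1/2; fixed point x* = 2
    return 0.5 * x + 1.0

x = 10.0                  # arbitrary starting point
for k in range(200):
    alpha, beta = 0.5, 0.5                  # simple constant choices (illustrative only)
    y = (1.0 - beta) * x + beta * T(x)      # inner Ishikawa step
    x = (1.0 - alpha) * x + alpha * T(y)    # outer Ishikawa step

print(x)  # approaches 2.0, the unique fixed point of T
```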
3.3. Convergence Analysis
Ishikawa has shown that for any points $x$, $y$, $z$ in a Hilbert space and any real number $\lambda \in [0,1]$:
$$\|\lambda x + (1-\lambda)y - z\|^2 = \lambda\|x - z\|^2 + (1-\lambda)\|y - z\|^2 - \lambda(1-\lambda)\|x - y\|^2.$$
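This identity can be checked numerically; the following minimal sketch verifies it in $\mathbb{R}^3$ (a particular Hilbert space) for randomly chosen points, where the points and the value of $\lambda$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lam = rng.uniform()                                   # any lambda in [0, 1]

lhs = np.linalg.norm(lam * x + (1 - lam) * y - z) ** 2
rhs = (lam * np.linalg.norm(x - z) ** 2
       + (1 - lam) * np.linalg.norm(y - z) ** 2
       - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)

print(abs(lhs - rhs) < 1e-12)   # True up to floating-point error
```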
Let be a fixed point of ; then we have
From the contraction condition, we have:
On the other hand,
which expands to:
Similarly, we can express:
Moreover, we have the following inequality:
By introducing equations (11), (10), and (9) into (8), we obtain:
Substituting equation (12) into equation (7), we get:
This shows that
is decreasing for all sufficiently large
k. Since conditions
and
are satisfied, there exists a subsequence
of
such that:
Now, we show that
is a Cauchy sequence. Indeed,
Taking the limit as
, we have:
Thus, is a Cauchy sequence, hence convergent.
Call the limit
. Then:
Using the contraction of
, we have:
Taking the limit as
, we obtain:
Taking the limit as
, we deduce that
, i.e.,
. Now, we aim to prove that the sequence
converges to the unique fixed point of
.
We know that:
Suppose that
, then:
Hence (15) can be rewritten as follows:
By substituting (17) into (16), we obtain:
Incorporating (18) into (13) yields:
Given that
,
,
, and
, there exists a natural number
N such that for
:
Thus, for
, we have
where
.
From the boundedness of
C, it follows that
is bounded. Therefore, we conclude that
From Lemma 2, we conclude the desired result. This completes the proof. □
3.4. Maximal Monotone Operators and Minimization Problem
We consider the following composite convex optimization problem:
$$\min_{x \in \mathcal{H}} \; f(x) + G(x) + H(x), \tag{20}$$
where:
$f$ is a continuously differentiable function with a Lipschitz continuous gradient, i.e., $\nabla f$ is 1-Lipschitz;
$G$ and $H$ are convex, closed, and proper functions.
Proposition 8. Define the maximal monotone operators associated with $\nabla f$, $\partial G$, and $\partial H$. Then the minimization problem (20) is equivalent to finding a zero of the sum of these maximal monotone operators, that is:
$$0 \in \nabla f(x) + \partial G(x) + \partial H(x).$$
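To make Proposition 8 concrete, here is a minimal sketch on an assumed one-dimensional instance; the choices $f(x) = x^2/2$, $G(x) = |x|$, and $H(x) = (x-1)^2/2$ are illustrative and are not the data of the example in the next subsection.

```python
import numpy as np

# illustrative instance: f(x) = x^2/2, G(x) = |x|, H(x) = (x - 1)^2/2
f = lambda x: 0.5 * x**2
G = lambda x: np.abs(x)
H = lambda x: 0.5 * (x - 1.0)**2

# locate the minimizer of f + G + H on a fine grid
xs = np.linspace(-2.0, 2.0, 400001)
x_star = xs[np.argmin(f(xs) + G(xs) + H(xs))]        # ~ 0.0 for this instance

# check the inclusion 0 in grad f(x*) + dG(x*) + grad H(x*):
# since x* = 0 here, this means -(grad f(x*) + grad H(x*)) must lie in dG(0) = [-1, 1]
g = -(x_star + (x_star - 1.0))                       # = -grad f(x*) - grad H(x*)
tol = 1e-6                                           # tolerance accounting for the grid resolution
print(x_star, -1.0 - tol <= g <= 1.0 + tol)          # prints ~0.0 and True
```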
3.5. Example
Let $f$, $G$, and $H$ be three real-valued functions defined on $\mathbb{R}$ as follows:
We consider the following minimization problem:
Let us define the following monotone operators corresponding to the gradients of $G$, $H$, and $f$:
Then, the minimization problem above is equivalent to the inclusion problem:
We know the resolvents of the operators are given by:
and hence,
According to Theorem 5, we proceed by choosing the parameters:
3.5.1. Application of the Algorithm 2
We have the single-valued mapping:
For
, this becomes:
We initialize the process as:
and define:
Iteration steps
At each iteration $k$, we update:
These choices still satisfy the assumptions of Theorem 5 and ensure that the sequence converges to the unique fixed point of the mapping, which is:
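For a fully runnable illustration in the same spirit, the following sketch applies the Ishikawa scheme of Theorem 5 to a forward-backward-type map built from the illustrative instance used after Proposition 8 ($f(x) = x^2/2$, $G(x) = |x|$, $H(x) = (x-1)^2/2$); the map $T$, the parameter $\lambda$, and the constant sequences $\alpha_k$, $\beta_k$ are all illustrative assumptions rather than the data of the example above.

```python
lam = 0.3

def soft(u, t):                      # prox of t*|.|, i.e. the resolvent of t * dG
    if u > t:  return u - t
    if u < -t: return u + t
    return 0.0

def T(x):
    # forward-backward-type map: its fixed points satisfy 0 in grad f(x) + dG(x) + grad H(x)
    grad_smooth = x + (x - 1.0)              # grad f(x) + grad H(x) for the instance above
    return soft(x - lam * grad_smooth, lam)  # backward (proximal) step on G

x = 1.5                                      # arbitrary starting point
for k in range(300):
    alpha, beta = 0.5, 0.5                   # illustrative constant parameter choices
    y = (1.0 - beta) * x + beta * T(x)       # inner Ishikawa step
    x = (1.0 - alpha) * x + alpha * T(y)     # outer Ishikawa step

print(x)  # ~ 0.0, the minimizer of f + G + H found in the previous sketch
```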
4. Conclusion
In conclusion, we proposed several algorithms for solving the principal problem. We also supported this work with a set of simple examples and observed that the sequences proposed in this paper converge to the same solution of the problem. Nevertheless, the development of alternative algorithms under appropriate conditions that effectively address this class of problems remains an open area of research, offering valuable opportunities for further investigation and advancement.
References
- Attouch, H.; Baillon, J.; Théra, M. Variational Sum of Monotone Operators. Journal of Convex Analysis 1994, 1, 1–29.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, 2011.
- Beddani, A. Finding a Zero of the Sum of Two Maximal Monotone Operators with Minimization Problem. Nonlinear Functional Analysis and Applications 2022, 27, 895–902.
- Beddani, A. Finding a Zero of the Sum of Three Maximal Monotone Operators. Journal of Science and Arts 2022, 22, 795–802.
- Beddani, A.; Berrailes, A. Zeros of the Sum of a Finite Family of Maximal Monotone Operators. Journal of Optimization Theory and Applications 2025, 205, 59.
- Brézis, H. Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert; 1973.
- Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Berlin, Heidelberg, 2012; Vol. 2057.
- Eckstein, J.; Bertsekas, D.P. On the Douglas-Rachford Splitting Method and the Proximal Point Algorithm for Maximal Monotone Operators. Mathematical Programming 1992, 55, 293–318.
- Martinez-Legaz, J.E. Monotone Operators Representable by l.s.c. Convex Functions. 2022, 27, 895–902.
- Ibaraki, T. Approximation of a Zero Point of Monotone Operators with Nonsummable Errors. Fixed Point Theory and Applications 2016, 53.
- Liu, Q. A Convergence Theorem of the Sequence of Ishikawa Iterates for Quasi-Contractive Mappings. Journal of Mathematical Analysis and Applications 1990, 146, 301–305.
- Mann, W.R. Mean Value Methods in Iteration. Proceedings of the American Mathematical Society 1953, 4, 506–510.
- Moudafi, A.; Théra, M. Finding a Zero of the Sum of Two Maximal Monotone Operators. Journal of Optimization Theory and Applications 1997, 94, 425–448.
- Nammanee, K.; Suantai, S.; Cholamjiak, P. Convergence Theorems for Maximal Monotone Operators, Weak Relatively Nonexpansive Mappings and Equilibrium Problems. Journal of Applied Mathematics 2012, 16.
- Rockafellar, R.T. Monotone Operators and the Proximal Point Algorithm. SIAM Journal on Control and Optimization 1976, 14, 877–898.
- Shehu, Y. Convergence Results of Forward-Backward Algorithms for Sum of Monotone Operators in Banach Spaces. Results in Mathematics 2019, 74, 138.