
PAPER CODE: MT-1

FIXED POINT THEORY: ALGORITHMS AND THEIR APPLICATIONS TO REAL WORLD PROBLEMS

Abstract

The purpose of this work is to investigate algorithms based on fixed point theory and their applications to real-world problems. We present case studies and examine the practical implementation of fixed point algorithms. We explore a number of algorithms, including the Newton-Raphson method, the Picard iteration method and the Banach fixed point theorem. We then look at how these algorithms can be used in a variety of disciplines, including physics, computer science, and economics. The analysis and use of fixed point theory algorithms offer valuable insights and solutions to challenging problems encountered in real-world situations.

Key words: Banach contraction principle, the Picard iteration method, the Newton-Raphson
method, contraction mapping, metric spaces

1.1 Introduction and preliminaries


Fixed-point theorems give adequate conditions under which a fixed point exists for a given function, and they enable us to ensure the existence of a solution of the original problem. The existence of fixed points of continuous mappings was established by Brouwer's fixed point theorem (Brouwer, 1911); this breakthrough became a major advance in topology and laid a foundation for subsequent discoveries in fixed point theory. In 1922, Banach (Banach, 1922) introduced the Banach fixed point theorem (Banach contraction principle), elaborating on Brouwer's work by extending the idea to complete metric spaces. In 1941, Kakutani (Kakutani, 1941) built on Brouwer's result by proposing a fixed point theorem for set-valued mappings. These revolutionary contributions opened the door for in-depth study and practical applications of fixed point theory in a number of disciplines, such as functional analysis, optimisation, and dynamical systems.
A fixed point of a function is a point that does not change under the application of the function. Fixed point theory provides a framework for studying the behaviour and characteristics of fixed points, which have numerous applications across a wide range of disciplines. In recent years, extensive research has been conducted (Gautam et al., 2020) to enrich the field of fixed point theory, giving us a deeper insight into the topic and paving the way for further research by emerging mathematicians.
Theorem 1 (Banach Contraction Principle). Let (X, d) be a complete metric space, and let f : X → X be a contraction mapping on X. Then there exists a unique fixed point x* ∈ X such that f(x*) = x*.

Definition 2. (Fréchet, 1994) A metric space is a pair (X, d), where X is a set and d is a distance function on X, often called a metric. The distance function d : X × X → R satisfies the following properties for all x, y, and z in X:

1) Non-negativity: d(x, y) ≥ 0, and d(x, y) = 0 if and only if x = y.

2) Symmetry: d(x, y) = d(y, x) for all x and y in X.

3) Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z) for all x, y, and z in X.

Definition 3. Let (X, d) be a metric space, and let f : X → X be a function defined on X. We say that f is a contraction mapping if there exists a constant k ∈ (0, 1) such that for all x, y ∈ X:

d(f(x), f(y)) ≤ k · d(x, y),

where d(x, y) represents the distance between x and y in the metric space.

In the next section we go through the various fixed point theory algorithms and state the standard equations which are later used in solving various real-world problems.

1.2 Conquering the complexity of fixed point


By offering techniques to quickly locate fixed points, fixed point theory algorithms play a significant role in resolving practical issues. These algorithms provide strong instruments for analysing intricate systems and identifying reliable solutions. Exploring the algorithms employed in fixed point theory helps us better understand their fundamental ideas and put those ideas to work in solving real-world problems.

1.2.1 Banach Fixed Point Theorem and Contraction Mapping: The Banach fixed point theorem (Banach, 1922) establishes a unique fixed point for a contraction mapping on a complete metric space (Theorem 1 above), thereby guaranteeing both the existence and the uniqueness of fixed points for contraction mappings.
Fig. 1: A function with three fixed points
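As a minimal illustrative sketch (in Python, with an arbitrarily chosen starting value and tolerance), the iteration below repeatedly applies the contraction f(x) = cos(x) on [0, 1]; by the Banach fixed point theorem the iterates converge to the unique fixed point x* ≈ 0.739 satisfying cos(x*) = x*.

# Illustrative sketch: repeated application of a contraction mapping
# converges to its unique fixed point (Banach fixed point theorem).
# Here f(x) = cos(x) is a contraction on [0, 1]; the iterates approach
# the fixed point x* ≈ 0.739 regardless of the starting value chosen.
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:   # successive iterates are close enough
            return x_next
        x = x_next
    return x

x_star = fixed_point_iteration(math.cos, 0.5)
print(x_star)            # ≈ 0.7390851332, satisfying cos(x*) = x*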

1.2.2 Picard Iteration Method: The Picard iteration method (Picard, 1890) is an iterative approach for approximating solutions to equations by repeatedly applying a fixed point iteration scheme. Given an initial value x_0, the Picard iteration for finding a fixed point of a function f(x) is given by the recursive equation:

x_{n+1} = f(x_n)                                                          (1)
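A small sketch of how the recursion (1) might be used in practice: the equation x = e^(-x) is solved by iterating f(x) = e^(-x) from an illustrative starting value; the tolerance and iteration limit are arbitrary choices.

# Sketch of the Picard iteration x_{n+1} = f(x_n) used to solve the
# equation x = exp(-x): the root is obtained as the fixed point of
# f(x) = exp(-x).  Starting point and tolerance are illustrative.
import math

def picard(f, x0, tol=1e-10, max_iter=500):
    x = x0
    for n in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1       # fixed point and iterations used
        x = x_new
    return x, max_iter

root, steps = picard(lambda x: math.exp(-x), x0=1.0)
print(root, steps)   # root ≈ 0.567143, the solution of x = exp(-x)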

1.2.3 Newton-Raphson Method: The Newton-Raphson method (Ypma, 1995) is a numerical technique for locating progressively more accurate approximations to a function's roots. The Newton-Raphson method is used to find the roots of a function f(x). Given an initial approximation x_0, the iteration equation is given by:

x_{n+1} = x_n − f(x_n) / f'(x_n)                                          (2)
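The following hedged sketch applies the update (2) to the illustrative function f(x) = x² − 2, whose positive root is √2; the starting value and tolerance are arbitrary.

# Sketch of the Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n),
# applied here to f(x) = x**2 - 2, whose positive root is sqrt(2).
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)      # assumes f'(x) != 0 near the root
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)   # ≈ 1.41421356..., i.e. sqrt(2)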
We continue our study of fixed point algorithms and look into various problems in economics, where fixed point theory is used extensively to solve real-world problems in equilibrium analysis, optimisation and game theory.

1.3 Navigating market dynamics using fixed point


Fixed point theory has proved extremely useful in economics for modelling and studying equilibrium problems. The concept of a fixed point, a point that remains unchanged under a transformation, can be used to express equilibrium conditions in economic models. By expressing the equilibrium condition as a fixed point equation, economists may investigate and find stable solutions for a wide range of economic phenomena.
1.3.1 Equilibrium Problems
Fixed point theory is crucial in modelling and analysing equilibrium problems in economics, such as determining market equilibrium prices or the best distribution of resources. See (Ansari et al., 2012). Equilibrium refers to a state in which the relevant economic variables remain stable, with minimal fluctuations. Fixed points are significant in establishing economic stability, and the various fixed point algorithms help in designing and solving these equilibrium problems.
The equilibrium state in a general equilibrium model with m agents and n products can be represented by the system of equations f(x) = 0, or equivalently by the fixed point equation g(x) = x with g(x) = x − f(x). In this instance, x stands for a vector of prices and allocations, and f is a system of equations encoding both the market clearing conditions and the agents' optimisation problems. See (Shubik, 1991).
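As one possible illustration (with made-up linear demand and supply curves), the market-clearing price can be computed as the fixed point of a tâtonnement-style price adjustment p_{k+1} = p_k + α(D(p_k) − S(p_k)); the step size α and the curves below are illustrative assumptions, not data from any model in this paper.

# Illustrative sketch (hypothetical demand/supply curves): a tatonnement-style
# fixed point iteration that adjusts the price until excess demand vanishes,
# i.e. until the market clears.
def demand(p):          # assumed linear demand curve
    return 100.0 - 2.0 * p

def supply(p):          # assumed linear supply curve
    return 10.0 + 1.0 * p

def market_clearing_price(p0=5.0, alpha=0.1, tol=1e-8, max_iter=1000):
    p = p0
    for _ in range(max_iter):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:       # equilibrium: demand equals supply
            break
        p = p + alpha * excess      # price rises when demand exceeds supply
    return p

print(market_clearing_price())      # ≈ 30.0 for the curves assumed above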
1.3.2 Optimization in Economics
Finding the best answer to an optimisation problem that maximises or minimises a particular objective function is a common task in economics. The Newton-Raphson method and other fixed point theory methods can be used to tackle optimisation problems by locating the zeros of the objective function's derivative (equivalently, the fixed points of the associated update map). See (Boyd et al., 2004). Optimisation techniques generally use fixed point algorithms such as the Newton-Raphson method as a key tool, performing iterative updates of the variables until a fixed point, i.e. the best possible outcome, is attained. This outcome is the optimised solution. Fixed point theory algorithms are thus used for decision-making and economic model optimisation.
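A minimal sketch under an assumed quadratic profit function: the Newton-Raphson update is applied to the derivative of the objective, so the iteration converges to the stationary point where the derivative vanishes. The profit function and starting guess are made up for illustration.

# Sketch under assumed data: finding the profit-maximising output by applying
# the Newton-Raphson update to the derivative of the objective, i.e. locating
# the point where profit'(q) = 0.  The profit function is hypothetical.
def profit_derivative(q):        # d/dq of profit(q) = 100*q - 2*q**2 - 20
    return 100.0 - 4.0 * q

def profit_second_derivative(q): # constant second derivative (concave profit)
    return -4.0

q = 1.0                          # initial guess for the output level
for _ in range(20):
    q = q - profit_derivative(q) / profit_second_derivative(q)

print(q)                         # 25.0, the output that maximises profit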
1.3.3 Game Theory
Game theory studies strategic interactions between multiple agents or players. Fixed point theory provides a robust framework for analysing and resolving equilibrium problems in game theory. See (Basar et al., 1999). Nash equilibria (Nash, 1950), a crucial idea in game theory, correspond to the fixed points of specific functions such as best-response maps. Algorithms based on fixed point theory aid in the analysis of game-theoretic models by making it easier to identify equilibria and forecast strategic behaviour.
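As a hedged illustration (a textbook-style Cournot duopoly with assumed demand and cost parameters), a Nash equilibrium can be computed as the fixed point of the best-response map: each firm repeatedly plays its best response to the other's current strategy.

# Sketch: a Nash equilibrium as the fixed point of the best-response map.
# Two Cournot duopolists with assumed inverse demand P = a - (q1 + q2) and unit
# cost c; each firm's best response is q_i = (a - c - q_j) / 2.  Iterating the
# best responses converges to the Nash equilibrium q1 = q2 = (a - c) / 3.
a, c = 120.0, 30.0                     # hypothetical demand intercept and cost

def best_response(q_other):
    return max(0.0, (a - c - q_other) / 2.0)

q1, q2 = 10.0, 50.0                    # arbitrary starting strategies
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2)                          # both ≈ 30.0 = (a - c) / 3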
In the next section, we look into the applications of fixed point algorithms in the field of computer science. We investigate various problems of computer science, such as graph theory and network analysis, machine learning and image processing, and how fixed point methods are used to solve them.

1.4 Computational harmony: the role of fixed point in computer science


The fixed point theorem is used to prove the existence of a solution to a recursive equation that describes the behaviour of a program; the program's result is represented by the fixed point of that recursive equation.
1.4.1 Graph Theory and Network Analysis
Graph theory is essential in computer science, and methods based on fixed point theory are useful in this field. For instance, fixed point theory-based algorithms can be used to examine network architecture, identify stable states in network dynamics, and resolve graph optimisation problems. See (Alfuraidan et al., 2016). The applications of fixed point theory lie in social networks, transportation networks, and communication networks, as well as in graph theory and network analysis more broadly.
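One well-known instance of such a stable-state computation is a PageRank-style ranking of nodes: the importance vector is the fixed point of a damped random-walk update. The sketch below uses a made-up four-node network and a standard damping value; it is only an illustration of the idea, not a method proposed in this paper.

# Illustrative sketch: the stationary ranking of nodes in a small directed
# network computed as the fixed point of a PageRank-style iteration.
# The adjacency data are made up.
import numpy as np

# node j maps to the list of nodes it links to (hypothetical 4-node network)
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = 4, 0.85

M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)      # column-stochastic transition matrix

r = np.full(n, 1.0 / n)                # start from the uniform distribution
for _ in range(100):
    r = (1 - damping) / n + damping * M @ r   # fixed point iteration

print(r)                               # stationary importance scores of the nodes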
1.4.2 Image Processing and Computer Vision
Image processing and computer vision tasks that use fixed point theory methods include image denoising, image segmentation, and image registration. See (Gonzalez et al., 2008). By locating stable solutions that capture significant features, these algorithms enable the extraction of meaningful information from images. We investigate how fixed point theory methods can be used to improve image processing approaches such as image denoising, hence boosting image quality and interpretability.
1.4.3 Machine Learning and Neural Networks
In order to discover patterns and make predictions, machine learning algorithms frequently require the optimisation of objective functions. The Picard iteration method, a fixed point theory algorithm, can be used to train neural networks, handle optimisation problems in machine learning, and converge to desirable solutions. See (Bishop, 2006). Fixed point theory methods are used in machine learning and affect the effectiveness and performance of neural network models.
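As an illustrative sketch, gradient descent on a tiny least-squares problem can itself be read as a fixed point iteration w_{n+1} = w_n − η∇L(w_n): the minimiser is exactly the fixed point of the update map. The data and learning rate below are made up.

# Sketch: gradient descent on a tiny least-squares problem, viewed as the fixed
# point iteration w_{n+1} = w_n - eta * grad(L)(w_n); the minimiser is exactly
# the fixed point of this update map.  Data and learning rate are illustrative.
import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])   # inputs with a bias column
y = np.array([1.0, 3.0, 5.0])                         # targets from y = 1 + 2x

def gradient(w):
    return X.T @ (X @ w - y) / len(y)                 # gradient of the mean squared error / 2

w = np.zeros(2)
eta = 0.1
for _ in range(5000):
    w = w - eta * gradient(w)                         # a contraction for small enough eta

print(w)                                              # ≈ [1.0, 2.0]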
Fixed point theory also has applications in physics, in areas such as quantum mechanics, chaos theory, nonlinear dynamics and fluid dynamics. In the following section we discuss the use of fixed point methods in these areas of physics.

1.5 Unleashing the power of fixed point in Physics


In physics, the fixed point theorem is beneficial, particularly when analysing nonlinear dynamics. Nonlinear differential equations are frequently used to describe the behaviour of many physical systems. The existence of a steady-state solution to these equations can be shown using the fixed point theorem; the steady-state solution represents a stable configuration of the system.
1.5.1 Quantum Mechanics and Schrödinger's Equation
In quantum mechanics, fixed point theory is useful for solving Schrödinger's equation (Schrödinger, 1926), which captures the behaviour of quantum systems. Fixed point techniques can be used to find stable solutions of Schrödinger's equation, which makes it possible to compute the energy levels and wave functions of quantum systems. See (Sakurai et al., 2010). Fixed point theory algorithms are thus used in quantum mechanics, affecting our comprehension of the microscopic universe.
The time-independent Schrödinger equation, a fundamental equation in quantum mechanics, can be written as Hψ = Eψ, where H is the Hamiltonian operator, ψ is the wave function, and E is the energy eigenvalue. This equation describes the stationary states of a quantum system.
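A hedged, highly simplified sketch: for a particle in a box, discretised by finite differences, the ground state can be approximated as the fixed point of the normalised power-iteration-style update ψ ← (I − τH)ψ / ‖(I − τH)ψ‖. The chosen system, grid size, step τ and iteration count are ad hoc assumptions made only for illustration.

# Sketch (simplified, discretised model): the ground state of a particle in a
# box obtained as the fixed point of a normalised power-iteration-style scheme
# for the stationary Schrodinger equation H psi = E psi.
import numpy as np

N, L = 50, 1.0
dx = L / (N + 1)
# finite-difference Hamiltonian H = -(1/2) d^2/dx^2 with zero boundary conditions
H = (np.diag(np.full(N, 1.0))
     - 0.5 * np.diag(np.full(N - 1, 1.0), 1)
     - 0.5 * np.diag(np.full(N - 1, 1.0), -1)) / dx**2

tau = 3e-4
psi = np.random.default_rng(0).random(N)       # arbitrary starting wave function
for _ in range(20000):                         # deliberately generous iteration count
    psi = psi - tau * (H @ psi)                # one step of (I - tau*H)
    psi = psi / np.linalg.norm(psi)            # renormalise

energy = psi @ H @ psi                         # Rayleigh quotient estimate of E_0
print(energy)                                  # close to pi**2 / 2 ≈ 4.93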
1.5.2 Chaos Theory and Nonlinear Dynamics
Chaos theory focuses on complex dynamical systems with sensitive dependence on initial conditions. By locating stable fixed points, periodic orbits, and attracting sets, fixed point theory techniques shed light on the behaviour of chaotic systems. See (Huerta-Cuellar et al., 2023). Fixed point theory can be used to understand and predict the long-term behaviour of nonlinear systems in chaos theory.
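As a small illustration, the logistic map x_{n+1} = r x_n(1 − x_n) has the nonzero fixed point x* = 1 − 1/r, which is attracting whenever |f'(x*)| < 1; for the illustrative parameter value r = 2.8 chosen below, the iterates settle onto this fixed point.

# Illustrative sketch: fixed points of the logistic map x_{n+1} = r*x*(1-x) and
# a simple stability check |f'(x*)| < 1.  For r = 2.8 the nonzero fixed point is
# attracting; iterating the map from an arbitrary start converges to it.
r = 2.8
f = lambda x: r * x * (1.0 - x)
f_prime = lambda x: r * (1.0 - 2.0 * x)

x_star = 1.0 - 1.0 / r                    # nonzero fixed point of the logistic map
print(x_star, abs(f_prime(x_star)) < 1)   # ≈ 0.642857, True (attracting)

x = 0.1                                   # arbitrary initial condition
for _ in range(200):
    x = f(x)
print(x)                                  # ≈ 0.642857, the attracting fixed point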
1.5.3 Fluid Dynamics and Navier-Stokes Equations
Fluid dynamics is the study of fluid flow and its behaviour. Fixed point theory techniques make it possible to analyse and resolve fluid dynamics problems, especially when the Navier-Stokes equations (Navier, 1822) are involved. These methods help in identifying stable solutions that explain fluid characteristics and flow patterns. See (Amrouche et al., 2011). Fixed point theory methods are used in fluid dynamics, emphasising their role in explaining and forecasting fluid behaviour.
For a better understanding of how fixed point methods are used in the fields of economics and computer science, we look into two case studies which explain in depth the implementation of fixed point algorithms.

Case studies
In the following section, we look into case studies in the fields of economics and computer science to further explain the implementation of fixed point methods, using portfolio optimisation and image denoising.
Case Study 1. Solving the Investment Puzzle: Unveiling the Potential of Fixed Point
Algorithms in Portfolio Optimization.
In this case study, we show how fixed point theory algorithms can be used for portfolio
optimisation. The objective of portfolio optimisation is to identify the best investment mix to
maximise returns while taking risk into account. The optimal portfolio weights that provide
the desired balance between risk and return can be found by expressing the problem as an
optimisation task and using fixed point theory techniques.
Expected Return = w_1 μ_1 + w_2 μ_2 + ... + w_n μ_n                          (3)

where:
1) μ = (μ_1, ..., μ_n) is the vector of expected returns for each asset,
2) w = (w_1, ..., w_n) is the vector of asset weights.
Constraints:
1) Portfolio optimization usually involves several constraints, such as the requirement that the
weights sum up to 1 (fully invested portfolio) and bounds on the weights of individual assets.
These constraints can be represented mathematically as:
w_1 + w_2 + ... + w_n = 1 (fully invested portfolio)
w_i ≥ 0 for all i (non-negative weights)
2) Newton-Raphson Iteration:
To find the optimal weights that maximize the portfolio's expected return, the Newton-
Raphson method can be used iteratively. The algorithm iterates until a convergence criterion
is met. Here's the update equation for the Newton-Raphson iteration:
w_{n+1} = w_n − (Hessian Matrix)^{-1} · (Gradient Vector)                          (4)

where:
1) w_{n+1} represents the updated weights in the (n+1)th iteration,
2) w_n represents the current weights in the nth iteration,
3) the Hessian matrix is the matrix of second derivatives of the objective function with respect to the weights,
4) the gradient vector is the vector of first derivatives of the objective function with respect to the weights.
The Newton-Raphson iteration continues until the weights converge to an optimal solution that maximizes the expected return, subject to the given constraints. See (Agarwal et al., 2005).
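The sketch below is only a simplified illustration of this scheme: it assumes a mean-variance objective U(w) = μᵀw − (γ/2)wᵀΣw (so that the Hessian is non-trivial), uses made-up return and covariance data, and enforces the constraints with a crude clip-and-renormalise projection rather than a full constrained optimiser.

# Hedged sketch only: a Newton-style update for an assumed mean-variance
# objective U(w) = mu.w - (gamma/2) w.Sigma.w, followed by a crude projection
# back onto the constraints (non-negative weights summing to one).  The return
# and covariance data are made up.
import numpy as np

mu = np.array([0.08, 0.12, 0.10])              # assumed expected returns
Sigma = np.array([[0.10, 0.02, 0.04],          # assumed covariance matrix
                  [0.02, 0.08, 0.02],
                  [0.04, 0.02, 0.09]])
gamma = 4.0                                    # risk-aversion parameter

def project(w):                                # clip negatives, renormalise to sum 1
    w = np.clip(w, 0.0, None)
    return w / w.sum()

w = np.full(3, 1.0 / 3.0)                      # start from equal weights
for _ in range(50):
    grad = mu - gamma * (Sigma @ w)            # gradient of U at w
    hess = -gamma * Sigma                      # Hessian of U (constant here)
    w = project(w - np.linalg.solve(hess, grad))   # Newton step, then project

print(w.round(4), float(w @ mu))               # converged weights and expected return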

Fig. 2: Efficient frontier of two stocks A and B


The X-axis shows the stock standard deviation, while the Y-axis shows the stock returns.
Investors seeking a high rate of return would migrate to the right towards the high risk-return
trade-off, while risk-averse investors would move to the left of the optimal portfolio
tangential line.

Case Study 2. Revealing the hidden beauty: Image Denoising using Fixed Point
Iteration
In this case study, we emphasise the use of methods based on fixed point theory for image denoising. An essential step in image processing is image denoising, which aims to eliminate undesirable noise while maintaining significant image properties. Pixel values can be updated iteratively, depending on local image characteristics and statistical models, by using fixed point iteration techniques such as the Picard iteration. The objective function is typically defined as follows:
E(u) = λ ∬ |∇u(x, y)| dx dy + (1/2) ∬ (u(x, y) − f(x, y))² dx dy                          (5)
where:

● E(u) is the objective function representing the energy of the denoised image u,

● λ is a regularization parameter that balances the trade-off between the total variation and fidelity terms,
● ∇u(x, y) is the gradient of the image u,
● f(x, y) is the noisy input image,
● (x, y) represents the spatial coordinates.


Fig. 3: Original, noisy and denoised image
In the given figure, we see the denoised image obtained using the total variation denoising method.

The Rudin-Osher-Fatemi (ROF) model, a popular image denoising model, is represented by this equation. It promotes smoothness through the total variation term (TV(u)), weighted by a regularisation parameter (λ), while reducing the difference between the denoised image (u) and the noisy image (f).
The ROF model produces the denoised image given by

argmin { (1/2) ‖u − x‖₂² + μ ‖u‖_TV : u ∈ R^d }                          (6)

where x ∈ R^d denotes the noisy image to be denoised and ‖u‖_TV is the total variation of u.
The distinctive feature of total variation regularization and its various variants is that edges of images are preserved in the denoised image. See (Chambolle, 2004).
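As a hedged one-dimensional sketch (not Chambolle's projection algorithm), a smoothed version of the ROF energy can be minimised by a lagged-diffusivity-style fixed point iteration, in which each step solves a linear system whose weights are recomputed from the current iterate. The synthetic signal, noise level, λ and ε below are illustrative assumptions.

# Sketch: lagged-diffusivity fixed point iteration for a smoothed 1-D total
# variation denoising energy 0.5*||u - f||^2 + lam * sum(sqrt((D u)^2 + eps)).
# The signal, noise level, and parameters are all illustrative.
import numpy as np

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])    # synthetic step signal
f = clean + 0.2 * rng.standard_normal(100)             # noisy observation
n = f.size

# forward-difference matrix D (shape (n-1, n)), so (D u)_i = u_{i+1} - u_i
D = np.zeros((n - 1, n))
idx = np.arange(n - 1)
D[idx, idx] = -1.0
D[idx, idx + 1] = 1.0

lam, eps = 0.5, 1e-4
u = f.copy()
for _ in range(50):
    w = 1.0 / np.sqrt((D @ u) ** 2 + eps)              # lagged diffusivity weights
    A = np.eye(n) + lam * D.T @ (w[:, None] * D)       # linearised optimality system
    u = np.linalg.solve(A, f)                          # next fixed point iterate

# the denoised error is typically smaller than the error of the noisy input
print(np.abs(u - clean).mean(), np.abs(f - clean).mean())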

Fig. 4: Image denoising using the ROF model


In the given figure we see an image denoised using the ROF model and then a modified model.
The case study illustrates how image denoising problems can be resolved by using fixed point iteration approaches, such as the Picard iteration and the ROF model. It creates a clear link between fixed point theory techniques and practical image denoising applications.
We now study the limitations and challenges that the fixed point theorem faces when applied in different domains, and discuss how to overcome them.

1.6 Beyond the ideal: addressing the limitations in fixed point approaches
Studying the challenges and limitations associated with the fixed point theorem encourages a more deliberate and nuanced use of the theory. It facilitates the creation of more efficient algorithms, the selection of appropriate problem domains, the investigation of alternative tactics, and a better understanding of the theory's benefits as well as its drawbacks.
1.6.1 Convergence Issues
Although fixed point theory algorithms are effective tools, convergence problems can occur. Convergence may be affected by the choice of the initial value, the characteristics of the function, or the stability of the method. Further discussion of these convergence difficulties, and of ways to improve the convergence behaviour of fixed point algorithms, is needed.

1.6.2 Computational Complexity

The computational cost of fixed point theory techniques can be significant, especially when working with high-dimensional problems or sizable datasets. In real-world applications, these algorithms' effectiveness and scalability become vital. Future work should explore methods to increase the computational effectiveness of fixed point algorithms, along with the difficulties associated with computational complexity.
1.6.3 Applicability to Nonlinear Problems
Algorithms based on fixed point theory are generally designed to address problems involving nonlinear functions. Their applicability to highly nonlinear problems, however, can be limited. It is worth examining the drawbacks of fixed point theory methods for dealing with intricate nonlinear systems and discussing possible alternatives.
There are many open problems and emerging concepts in the field of fixed point theory which remain unexplored; some of them are discussed in the next section.

1.7 Uncharted territory: unlocking new frontiers in fixed point theory


Open problems and emerging concepts in the field of fixed point theory continually motivate researchers to pursue novel avenues of inquiry and expand the frontiers of knowledge. Despite its wide range of applications, there are still fascinating issues that warrant more research. A few such concepts are mentioned in this section.
1.7.1 Hybrid Methods and Optimization Techniques
In order to address drawbacks and enhance convergence features, future research in fixed
point theory can concentrate on creating hybrid approaches that integrate fixed point
algorithms with other optimisation techniques. Convex optimisation, evolutionary algorithms,
and gradient descent are some examples of optimisation approaches that can be integrated to
provide more effective and reliable algorithms for solving practical issues.
1.7.2 Generalisations and Extensions of Fixed Point Theory
Research can be expanded by investigating generalisations and extensions of fixed point theory. One such direction is to investigate fixed point theory in relation to functional analysis, topological techniques, and convex analysis. Researchers can take on more challenging problems and learn more about how fixed points behave in other mathematical settings by generalising the theory.
1.7.3 Integration with Other Mathematical Models
Interdisciplinary progress can result from the integration of fixed point theory with other
mathematical frameworks and concepts. Fixed point algorithms can improve the capabilities
of machine learning, deep learning, or probabilistic models and open up new avenues for
problem-solving in the real world.

1.8 Conclusion
Algorithms built around fixed point theory offer beneficial solutions for problems
encountered in a wide range of fields. By exploring algorithms like the Newton-Raphson
technique, Picard iteration method, and Banach fixed point theorem, we have demonstrated
the way they could potentially be applied to physics, computer science, and economics.
Convergence problems, however, pose complications and obstacles, particularly in high-dimensional settings. Future avenues for investigation include enhancing convergence
and researching hybrid approaches that combine fixed point algorithms with other
optimisation techniques. There are fresh opportunities in generalising fixed point theory to
include topological methods, convex analysis, and functional analysis. Furthermore, the
capabilities of deep learning, machine learning, and probabilistic models can be strengthened
by incorporating fixed point approaches. Fixed point theory continues to offer helpful
insights and answers to impending challenges across multiple fields as scientific knowledge
develops.
References:
Browder, F. E. (1965). Fixed-point theorems for noncompact mappings in Hilbert space.
Proceedings of the National Academy of Sciences of the United States of America, 53(6),
1272–1276.
Browder, F. E. (1968). The fixed point theory of multi-valued mappings in topological vector
spaces. Mathematische Annalen, 177(4), 283–301.
Kakutani, S. (1941). A generalization of Brouwer’s fixed point theorem. Duke Mathematical
Journal, 8(3), 457–459.
Banach, S. (1922). Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae, 3, 133–181. https://doi.org/10.4064/fm-3-1-133-181
Kirk, W. A. (2003). Fixed point theory in metric spaces. In Lecture Notes in Mathematics.
Springer, 128.
Fréchet, M. (1994). Fréchet space. In Encyclopedia of Mathematics. EMS Press.
Ypma, T. J. (1995). Historical development of the Newton–Raphson method. SIAM Review,
37(4), 531–551.
Chambolle, A. (2004). An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20(1/2), 89–97.
Munkres, J. R. (2000). Topology (2nd ed). Prentice Hall.
Agarwal, A., & Hazan, E. (2005). New algorithms for repeated play and universal portfolio
management. Princeton University Technical Report TR-740, 05.
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Shubik, M. (1991). Game theory in economics. In the Handbook of mathematical economics,
1. Elsevier.
Ansari, Q. H., Al-Homidan, S., & Yao, J. C. (2012). Equilibrium problems and fixed point
theory. Fixed Point Theory and Applications, 2012(1), 25
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2007). Numerical
recipes: The art of scientific computing (3rd ed). Cambridge University Press
Gonzalez, R. C., & Woods, R. E. (2008). Digital image processing (3rd ed). Pearson.
Alfuraidan, M., & Ansari, Q. (2016). Fixed point theory and graph theory: Foundations and integrative approaches (1st ed.).
Sakurai, J. J., & Napolitano, J. (2010). Modern quantum mechanics (2nd ed). Pearson
Huerta-Cuellar, G., & Muhammad Zeeshan, H. (2023). Introductory chapter: Fixed points
theory and chaos. Fixed point theory and chaos.
Amrouche, C., & Rodríguez-Bellido, M. Á. (2011). Stationary Stokes, Oseen and Navier–
Stokes equations with singular data. Archive for Rational Mechanics and Analysis, 199(2),
597–651.
Ciepliński, K. (2012). Open problems in fixed point theory. Topological Methods in
Nonlinear Analysis, 40(1), 141–176.
Fitzpatrick, S. (1993). Open problems in fixed point theory. Indian Journal of Pure and
Applied Mathematics, 24(10), 531–558.

Gautam, P., Ruiz, L. M. S., & Verma, S. (2020). Fixed point of interpolative Rus-Reich-Ćirić contraction mapping on rectangular quasi-partial b-metric space. Symmetry, 13(1), 32.
Gautam, P., & Verma, S. (2021). Fixed point via implicit contraction mapping on quasi-partial b-metric space. The Journal of Analysis. https://doi.org/10.1007/s41478-021-00309-6
Gautam, P., Verma, S., Sen, M., & Sundriyal, S. (2021). Fixed point results for ω-interpolative Chatterjea type contraction in quasi-partial b-metric space. International Journal of Analysis and Applications, 19(2), 280–287.