
IOP Conference Series: Materials Science and Engineering

PAPER • OPEN ACCESS

An extension of golden section algorithm for n-variable functions with MATLAB code
To cite this article: G Sandhya Rani et al 2019 IOP Conf. Ser.: Mater. Sci. Eng. 577 012175




An extension of golden section algorithm for n-variable functions with MATLAB code

G Sandhya Rani, Sarada Jayan and K V Nagaraja


Department of Mathematics, Amrita School of Engineering, Bengaluru, Amrita
Vishwa Vidyapeetham, India

[email protected], [email protected], [email protected]

Abstract: The golden section search method is one of the fastest direct search algorithms for solving single-variable optimization problems, in which the length of the search interval is reduced to 0.618 times its previous length in every iteration. This paper describes an extended golden section search method for finding the minimum of an n-variable function by transforming its n-dimensional cubic search space to the zero-one n-dimensional cube. The paper also provides MATLAB code for the two-dimensional and three-dimensional golden section search algorithms on the zero-one square/cube. Numerical results for some benchmark functions up to five dimensions and a comparison of the proposed algorithm with the Nelder-Mead Simplex Algorithm are also provided.

1. Introduction:
Optimization is imperative in almost all fields of engineering, science and finance. Researchers in
each of these areas have to rely on numerical techniques to solve optimization problems, as analytical
solution methods are impractical for most real-world application problems. Numerical
optimization methods are mainly classified as gradient-based (derivative-based) methods and
gradient-free methods. A wide variety of derivative-based optimization methods are available to find
the minimum of a real-valued multivariable function f(x), provided f(x) is differentiable and its gradient,
∇f(x), can be estimated precisely using finite differences. But in most engineering application
problems in optimization, the objective function is discontinuous, non-linear or non-convex, so
the derivative either cannot be found directly or the computationally obtained derivative may be
unreliable. Such problems are often solved using direct search methods. Direct search methods are
unconstrained optimization techniques which do not require information about the
derivatives of the objective function. Many direct search methods have been established and used since the
1960s [1, 2, 3], but in the last two decades extensive research has been happening in this field [4, 5, 6, 7]
due to the development of, and interest in, parallel and distributed computing.
Among direct search methods for single-variable optimization problems, the Golden Section Search (GSS)
is one of the most renowned algorithms. Kiefer [8] introduced the golden number, and according to [9]
the golden section search algorithm is one of the best direct search algorithms for finding the
minimum of a single-variable optimization problem. It is a region-elimination method in which the
search interval is first transformed into the unit interval [0, 1] using a linear transformation. After
that, two points at a distance τ(b − a) from the two ends of the search space [a, b] are chosen in such a manner that at
every iteration the eliminated region is τ times that in the previous iteration. Such a τ is obtained by

solving the equation τ^2 = 1 − τ, and the solution of this equation, τ = (√5 − 1)/2 ≈ 0.618, is termed the golden number. In
GSS, the search interval is reduced to (0.618)^(N−1) of its original length after N function evaluations.
According to [9], GSS needs only one function evaluation in each iteration, and the effective
region elimination per function evaluation is exactly 38.2%, which is higher than that of many other methods;
this makes it ideal for many application problems.
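For reference, the one-variable GSS can be written in a few lines of MATLAB; the following is a minimal illustrative sketch of our own (the example function and interval are assumptions, not taken from the paper), with the point reuse that gives one new function evaluation per iteration:

% One-variable golden section search (illustrative sketch).
f = @(x) (x-2).^2;            % example unimodal function (assumed)
a = 0; b = 5;                 % initial search interval (assumed)
tau = (sqrt(5)-1)/2;          % golden number, the root of tau^2 = 1 - tau
epsilon = 1e-8;               % termination tolerance
x1 = a + (1-tau)*(b-a); x2 = a + tau*(b-a);
f1 = f(x1); f2 = f(x2);
while (b - a) > epsilon
    if f1 < f2                % minimum cannot lie in (x2, b]
        b = x2; x2 = x1; f2 = f1;
        x1 = a + (1-tau)*(b-a); f1 = f(x1);   % one new evaluation
    else                      % minimum cannot lie in [a, x1)
        a = x1; x1 = x2; f1 = f2;
        x2 = a + tau*(b-a); f2 = f(x2);       % one new evaluation
    end
end
fprintf('minimum near x = %f\n', (a+b)/2);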
The authors of [10, 11] developed and used a two-dimensional GSS for object tracking
using the Gabor Wavelet Transform, but they did not present the 2D-GSS algorithm in
detail. Later, Yen-Ching Chang [12] proposed an n-dimensional GSS algorithm and its variants.
Recently, Chakraborty and Panda [13] proposed a GSS algorithm over a hyper-rectangle.
In this paper, as in [13], we choose an n-dimensional cubic search space and provide an
extension of GSS to find the minimum of n-variable functions, by initially transforming the n-
dimensional cubic search space to an n-dimensional zero-one search space using the transformation in
[16]. The region elimination rules for n-variable functions over an n-dimensional cubic search space
are defined in Section 2, and the proposed algorithm is explained in detail in Section 3. The
percentage of the search space eliminated in each iteration is also derived. Numerical results for some
benchmark functions given in [14, 15] up to 5 dimensions and a comparison of the proposed algorithm
with the Nelder-Mead Simplex Algorithm are provided in Section 4. Section 5 displays MATLAB
code for the proposed method, and the conclusions are presented in Section 6.

2. Region elimination rules defined over a mesh for multivariable functions:


In this section we define region elimination rules for a unimodal n-variable function defined on an n-
dimensional cubic search space, using the function values at 2^n mesh points inside the search space.

2.1. Region elimination rules for two-variable functions over a rectangle
Consider the 2^2 = 4 mesh points A(x1, y1), B(x2, y1), C(x1, y2), D(x2, y2) in the rectangular search space
[a1, b1] × [a2, b2], with a1 < x1 < x2 < b1 and a2 < y1 < y2 < b2, as in Fig. 1. For a unimodal function f
to be minimized, the following can be concluded:
 If f(A) = min{f(A), f(B), f(C), f(D)}, then the minimum will not lie in x > x2 or in y > y2
(shaded region in Fig. 1(a)); set b1 = x2 and b2 = y2.
 If f(B) = min{f(A), f(B), f(C), f(D)}, then the minimum will not lie in x < x1 or in y > y2
(shaded region in Fig. 1(b)); set a1 = x1 and b2 = y2.
 If f(C) = min{f(A), f(B), f(C), f(D)}, then the minimum will not lie in x > x2 or in y < y1
(shaded region in Fig. 1(c)); set b1 = x2 and a2 = y1.
 If f(D) = min{f(A), f(B), f(C), f(D)}, then the minimum will not lie in x < x1 or in y < y1
(shaded region in Fig. 1(d)); set a1 = x1 and a2 = y1.
If the minimum function value occurs at two or three different points, then the region to be
eliminated is the intersection of the corresponding regions in Fig. 1. For example, if f(A) =
f(B) = min{f(A), f(B), f(C), f(D)}, then the minimum will not lie in the region y > y2,
which is the intersection of the shaded regions in Fig. 1(a) and Fig. 1(b), and thus that region can be
eliminated. If the function value is equal at all four points, which is a very rare situation, especially
when computations are performed numerically, then we can conclude that the minimum lies inside the
rectangle [x1, x2] × [y1, y2], i.e., we can eliminate x < x1, x > x2, y < y1 and y > y2.
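As a concrete illustration of these rules (a worked example of our own), consider the unit square [0, 1] × [0, 1] with τ ≈ 0.618: the mesh points use the coordinates x1 = y1 = 1 − τ ≈ 0.382 and x2 = y2 = τ ≈ 0.618. If the smallest of the four function values occurs at A = (0.382, 0.382), the first rule shrinks the search space to [0, 0.618] × [0, 0.618], so a single step eliminates 1 − τ^2 ≈ 61.8% of the area.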

2.2. Region elimination rules for three-variable functions over a cuboid
Let us assume the search space of a three-variable function to be a cuboid, given by a1 ≤ x ≤ b1,
a2 ≤ y ≤ b2 and a3 ≤ z ≤ b3. The region elimination rules are defined using the function
evaluations at 8 (= 2^3) mesh points inside the search space. Let these points be
A(x1, y1, z1), B(x1, y2, z1), E(x2, y1, z1), F(x2, y2, z1) in the plane z = z1 and
C(x1, y1, z2), D(x1, y2, z2), G(x2, y1, z2), H(x2, y2, z2) in the plane z = z2, where x1 < x2, y1 < y2
and z1 < z2 are chosen as in the two-variable case.
For a unimodal function f to be minimized, the following can be concluded:


 If f(A) is the least of the eight function values, then eliminate x > x2, y > y2 and z > z2
(set b1 = x2, b2 = y2, b3 = z2).
 If f(B) is the least, then eliminate x > x2, y < y1 and z > z2 (set b1 = x2, a2 = y1, b3 = z2).
 If f(C) is the least, then eliminate x > x2, y > y2 and z < z1 (set b1 = x2, b2 = y2, a3 = z1).
 If f(D) is the least, then eliminate x > x2, y < y1 and z < z1 (set b1 = x2, a2 = y1, a3 = z1).
 If f(E) is the least, then eliminate x < x1, y > y2 and z > z2 (set a1 = x1, b2 = y2, b3 = z2).
 If f(F) is the least, then eliminate x < x1, y < y1 and z > z2 (set a1 = x1, a2 = y1, b3 = z2).
 If f(G) is the least, then eliminate x < x1, y > y2 and z < z1 (set a1 = x1, b2 = y2, a3 = z1).
 If f(H) is the least, then eliminate x < x1, y < y1 and z < z1 (set a1 = x1, a2 = y1, a3 = z1).
As in the two-variable case, if the minimum function value occurs at more than one point, then the
region corresponding to the intersection of the associated regions can be eliminated, and if all the function
values are equal, it can be concluded that the minimum lies in the cuboid with A, B, ..., H as
corners, i.e., [x1, x2] × [y1, y2] × [z1, z2].
In a similar manner, region elimination rules can be described for n-variable functions over an
n-dimensional cubic search space using the function values at 2^n mesh points; a MATLAB sketch of
this generalization is given after Fig. 1.
Fig. 1. Region elimination pattern for a two-dimensional problem: panels (a)-(d) show the shaded eliminated region when the minimum function value occurs at A, B, C and D respectively, in the rectangle with corners (a1, a2), (b1, a2), (a1, b2) and (b1, b2).
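The generalization can be made concrete in MATLAB. The following sketch of one elimination step for an n-variable function is our own illustration (saved, say, as eliminate_step.m), assuming the same per-dimension bound updates as the two- and three-variable listings in Section 5; F takes a row vector, and a and b are 1-by-n vectors of the current bounds.

function [a, b] = eliminate_step(F, a, b, tau)
% One region elimination step of the extended GSS (illustrative sketch).
n = numel(a);
lo = a + (1-tau)*(b - a);             % lower interior coordinate, per dimension
hi = a + tau*(b - a);                 % upper interior coordinate, per dimension
corners = dec2bin(0:2^n-1, n) - '0';  % 2^n mesh points as 0/1 choices per dimension
vals = zeros(2^n, 1);
for k = 1:2^n
    p = lo;
    pick = corners(k, :) == 1;        % choose hi in the dimensions marked 1
    p(pick) = hi(pick);
    vals(k) = F(p);
end
[~, kmin] = min(vals);                % mesh point with the smallest value
for i = 1:n
    if corners(kmin, i) == 0          % minimum at the lower point:
        b(i) = hi(i);                 %   eliminate the region beyond hi(i)
    else                              % minimum at the upper point:
        a(i) = lo(i);                 %   eliminate the region below lo(i)
    end
end
end

Repeating this step until sqrt(sum((b - a).^2)) drops below a tolerance reproduces, for n = 2 and n = 3, the loops listed in Section 5.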


3. Golden section search algorithm for n-variable functions:


Step 1
Choose a lower bound a_i and an upper bound b_i for each variable x_i, i = 1, 2, ..., n, in the minimization problem.
Also choose a termination parameter ε.
Step 2
Transform the n-dimensional cubic search space,
S = {(x_1, x_2, ..., x_n) : a_i ≤ x_i ≤ b_i, i = 1, 2, ..., n},
to the n-dimensional zero-one cube
U = {(u_1, u_2, ..., u_n) : 0 ≤ u_i ≤ 1, i = 1, 2, ..., n},
using the linear transformation
u_i = (x_i − a_i)/(b_i − a_i), i = 1, 2, ..., n.
This transforms the function from f(x_1, ..., x_n) to F(u_1, ..., u_n), where each variable u_i
has a lower bound a_i = 0 and an upper bound b_i = 1 for i = 1, 2, ..., n. (A MATLAB sketch of
this transformation as a function wrapper is given after the algorithm.)
Step 3
Evaluate the function at the 2^n mesh points whose i-th coordinate is either
a_i + (1 − τ)(b_i − a_i) or a_i + τ(b_i − a_i), i = 1, 2, ..., n,
where τ = (√5 − 1)/2 ≈ 0.618 is the golden number.

For two-variable problems, the four initial points would be:
(x1, y1), (x1, y2), (x2, y1), (x2, y2),
where x1 = a_1 + (1 − τ)(b_1 − a_1), x2 = a_1 + τ(b_1 − a_1), y1 = a_2 + (1 − τ)(b_2 − a_2)
and y2 = a_2 + τ(b_2 − a_2).

For three-variable problems there will be eight points, which are:
(x1, y1, z1), (x1, y2, z1), (x1, y1, z2), (x1, y2, z2),
(x2, y1, z1), (x2, y2, z1), (x2, y1, z2), (x2, y2, z2),
with z1 and z2 defined analogously from a_3, b_3 and τ.
Step 4
Compute the function value at each of the 2^n points generated in Step 3. Depending on the point at
which the minimum function value occurs, apply the region elimination rules defined in Section 2
to eliminate a region, and set the new bounds a_i and b_i, i = 1, 2, ..., n.
Step 5

Evaluate L = sqrt((b_1 − a_1)^2 + (b_2 − a_2)^2 + ... + (b_n − a_n)^2). Is L < ε? If no, go to Step 3; else terminate.

The point at which the function value is the least in the final iteration is taken as the minimum
point (u_1*, u_2*, ..., u_n*) of the function F(u_1, ..., u_n), and substituting this point into the
transformation given in Step 2 gives the minimum point (x_1*, x_2*, ..., x_n*) of the given function
f(x_1, ..., x_n).
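As mentioned in Step 2, the transformation to the unit cube can be realized in MATLAB with a simple function wrapper; the following is a minimal sketch, where the objective and bounds are our own assumed illustration:

% Search on [0,1]^n while evaluating the objective on the original box.
f = @(x) sum(x.^2);            % example objective on the original domain (assumed)
a = [-5 -5]; b = [5 5];        % original bounds a_i, b_i (assumed)
F = @(u) f(a + (b - a).*u);    % transformed objective F on the zero-one cube
% If the search over [0,1]^n returns a minimizer ustar, the minimizer of f
% is recovered as xstar = a + (b - a).*ustar, as described after Step 5.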
The region eliminated in one-dimensional GSS in each iteration is 38.2%, which is (1 − τ) × 100%.
While applying two-dimensional GSS, both sides of the search rectangle shrink by the factor τ, so the
region eliminated is (1 − τ^2) × 100% ≈ 61.8%. For three-dimensional GSS it is (1 − τ^3) × 100% ≈ 76.4%,
and in general the region eliminated in each iteration of the proposed algorithm is

(1 − τ^n) × 100%.

This makes the proposed algorithm fast.
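A back-of-the-envelope consequence of this rate (our own calculation, offered as a consistency check): each iteration multiplies every side of the search box by τ, so for a unit n-cube the diagonal after k iterations is √n · τ^k, and the termination condition of Step 5 is met once k ≈ ln(ε/√n)/ln(τ). For n = 2 and ε = 10^-8 this gives k ≈ 39, which agrees with the iteration counts reported for the two-variable runs in Table 3.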

4. Numerical outcomes:
To validate the performance of the algorithm, we applied it to some
benchmark functions given in [14, 15], a few of which are presented in this section. All the results were
obtained on a 6 GB RAM Intel i5 laptop. Table 1 gives the minimum obtained for some standard
functions using the proposed algorithm up to five dimensions. Table 2 provides numerical results for
some fixed-dimensional benchmark functions.
The proposed algorithm is also compared with the Nelder-Mead Simplex Algorithm (NMSA),
which is incorporated in MATLAB as the command fminsearch. The comparison of the number of
iterations and the time taken to evaluate the minimum for four functions using both methods,
tabulated in Table 3, attests to the performance of the extended GSS proposed in this paper.

Table 1: Minimum of a few benchmark functions obtained using the proposed algorithm for up to 5 dimensions

Function         Dimensions          Search Space    Minimum obtained
Sphere           2D, 3D, 4D, 5D
Ridge            2D, 3D, 4D, 5D
Sum squares      2D, 3D, 4D, 5D
Schwefel 2.22    2D, 3D, 4D, 5D
Brent            2D, 3D, 4D, 5D


Table 2: Minimum of a few benchmark functions obtained using the proposed algorithm

Function          Dimension    Search Space    Minimum obtained
Hyper-ellipsoid   2D
Ackley 2          2D
Matyas            2D
Powell            4D
Rotated Ellipse   2D

Table 3: Comparison of the proposed algorithm with the Nelder-Mead Simplex Algorithm

          Results using proposed algorithm                Using Nelder-Mead Simplex Algorithm
Function  Initial  Iter-   Min Point         Time (s)     Initial guess   Iter-   Min Point         Time (s)
          domain   ations                                                 ations
                   39      (1.0000, 1.0000)  0.001        (-3, 2)         47      (1.0000, 1.0000)  0.01
                                                          (-3, -5)        49                        0.006
                   39      (0, 0)            0.002        (0, 4)          200     (0, 0)            0.009
                                                          (3, 0)          200                       0.008
                   39      (0, 0)            0.001        (-5.12, -5.12)  46      (0, 0)            0.01
                                                          (-5.12, 5.12)   46                        0.006
                   42      (1.8122, 2.5000)  0.001        (1, 1)          41      (1.8122, 2.5000)  0.006
                                                          (5, 1)          50                        0.008

5. MATLAB code for the proposed algorithm:


5.1. MATLAB code for finding the minimum of a two-variable function

f=@(x,y) sqrt(3*x*4*y); % enter the function


a1 = 0; % lower bound for variable x
b1 = 1; % upper bound for variable x
a2 = 0; % lower bound for variable y
b2 = 1; % upper bound for variable y
epsilon =0.00000001; % termination criteria
tau=double((sqrt(5)-1)/2); % golden number


k=0; % number of iterations


x1= a1+(1-tau)*( b1- a1); x2= a1+(tau)*( b1- a1);
y1= a2+(1-tau)*( b2- a2); y2= a2+(tau)*( b2- a2);
ek=[x1,y1]; % Point A
fk=[x1,y2]; % Point C
hk=[x2,y1]; % Point B
gk=[x2,y2]; % Point D
fek=f(x1,y1);ffk=f(x1,y2);fhk=f(x2,y1);fgk=f(x2,y2); % function values at points A, C, B and D
while sqrt((b1-a1)^2+(b2-a2)^2) > epsilon % termination condition
k=k+1;
min1=min([fek,fhk,ffk,fgk]);
if min1==fek
b1=x2; b2=y2;
elseif min1==ffk
b1=x2; a2=y1;
elseif min1==fgk
a1 =x1; a2 =y1;
elseif min1==fhk
a1=x1; b2 =y2;
end
x1= a1+(1-tau)*( b1- a1); x2= a1+(tau)*( b1- a1);
y1= a2+(1-tau)*( b2- a2); y2= a2+(tau)*( b2- a2);
fek=f(x1,y1);ffk=f(x1,y2);fhk=f(x2,y1);fgk=f(x2,y2);
min1=min([fek,fhk,ffk,fgk]);
end
if min1==fek
fprintf('minimum at the point (%f, %f)\n', x1, y1);
elseif min1==ffk
fprintf('minimum at the point (%f, %f)\n', x1, y2);
elseif min1==fhk
fprintf('minimum at the point (%f, %f)\n', x2, y1);
elseif min1==fgk
fprintf('minimum at the point (%f, %f)\n', x2, y2);
end
fprintf('minimum value %f\n', min1);
fprintf('number of iterations %d\n', k);
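To use this listing on a general rectangle rather than the zero-one square, one can either edit a1, b1, a2, b2 directly or keep the unit square and wrap the objective as in Step 2 of Section 3. A minimal sketch of the latter, where the objective and bounds are our own assumed illustration:

g = @(x,y) x.^2 + y.^2;             % objective on the original domain (assumed)
f = @(u,v) g(-5 + 10*u, -5 + 10*v); % transformed objective on [0,1]^2
% Run the listing above with this f and a1 = a2 = 0, b1 = b2 = 1; the minimizer
% of g is recovered as x = -5 + 10*x_min, y = -5 + 10*y_min.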

5.2. MATLAB code for finding the minimum of a three-variable function

g=@(x,y,z) x^2+y^2+z^2; % enter the function


a1=0; % lower bound for variable x
b1=1; % upper bound for variable x
a2=0; % lower bound for variable y
b2=1; % upper bound for variable y
a3=0; % lower bound for variable z
b3=1; % upper bound for variable z
epsilon=0.00000001; % termination criteria
tau=double((sqrt(5)-1)/2); % golden number
k=0; % number of iterations
x1= a1+(1-tau)*( b1- a1); x2= a1+(tau)*( b1- a1);


y1= a2+(1-tau)*( b2- a2); y2= a2+(tau)*( b2- a2);


z1= a3+(1-tau)*( b3- a3); z2= a3+(tau)*( b3- a3);
ek=[x1,y1,z1]; % Point A
fk=[x1,y2,z1]; % Point B
gk=[x1,y1,z2]; % Point C
hk=[x1,y2,z2]; % Point D
ik=[x2,y1,z1]; % Point E
jk=[x2,y2,z1]; % Point F
kk=[x2,y1,z2]; % Point G
lk=[x2,y2,z2]; % Point H
gek=g(x1,y1,z1); gfk=g(x1,y2,z1);ggk=g(x1,y1,z2);ghk=g(x1,y2,z2);
gik=g(x2,y1,z1);gjk=g(x2,y2,z1);gkk=g(x2,y1,z2);glk=g(x2,y2,z2);
% function values at points A, B, C, D, E, F, G, and H.
while sqrt((b1- a1)^2+( b2- a2)^2+( b3- a3)^2)>epsilon % termination condition
k=k+1;
min1=min([gek,ghk,gfk,ggk,gik,gjk,gkk,glk]);
if min1==gek
b1=x2; b2=y2; b3=z2;
elseif min1==gfk
b1=x2; a2=y1; b3=z2;
elseif min1==ggk
b1=x2; b2=y2; a3=z1;
elseif min1==ghk
b1=x2; a2=y1; a3=z1;
elseif min1==gik
a1=x1; b2=y2; b3=z2;
elseif min1==gjk
a1=x1; a2=y1; b3=z2;
elseif min1==gkk
a1=x1; b2=y2; a3=z1;
elseif min1==glk
a1=x1; a2=y1; a3=z1;
end
x1= a1+(1-tau)*( b1- a1); x2= a1+(tau)*( b1- a1);
y1= a2+(1-tau)*( b2- a2); y2= a2+(tau)*( b2- a2);
z1= a3+(1-tau)*( b3- a3); z2= a3+(tau)*( b3- a3);
gek=g(x1,y1,z1); gfk=g(x1,y2,z1); ggk=g(x1,y1,z2); ghk=g(x1,y2,z2);
gik=g(x2,y1,z1); gjk=g(x2,y2,z1); gkk=g(x2,y1,z2); glk=g(x2,y2,z2);
min1=min([gek,ghk,gfk,ggk,gik,gjk,gkk,glk]);
end
if min1==gek
fprintf('minimum at the point (%f, %f, %f)\n', x1, y1, z1);
elseif min1==gfk
fprintf('minimum at the point (%f, %f, %f)\n', x1, y2, z1);
elseif min1==ggk
fprintf('minimum at the point (%f, %f, %f)\n', x1, y1, z2);
elseif min1==ghk
fprintf('minimum at the point (%f, %f, %f)\n', x1, y2, z2);
elseif min1==gik
fprintf('minimum at the point (%f, %f, %f)\n', x2, y1, z1);
elseif min1==gjk
fprintf('minimum at the point (%f, %f, %f)\n', x2, y2, z1);
elseif min1==gkk
fprintf('minimum at the point (%f, %f, %f)\n', x2, y1, z2);
elseif min1==glk
fprintf('minimum at the point (%f, %f, %f)\n', x2, y2, z2);
end
fprintf('minimum value %f\n', min1);
fprintf('number of iterations %d\n', k);

6. Conclusions
The current work describes an extended golden section search algorithm for finding the minimum
of multidimensional unconstrained optimization problems and establishes the benefit of using it.
No derivative information about the function is needed in order to apply this algorithm.
Also, a global minimum can be obtained using this algorithm for functions that are convex or
quasi-convex. As the size of the region eliminated in each iteration is larger in the proposed
method than in other search techniques, the number of iterations required to reach the
minimum is smaller, and so the time taken to obtain the minimum is also less. This rapidity
makes the proposed algorithm practical for many engineers and researchers working in
optimization. The transformation of the search space to a zero-one cube and the MATLAB code
over the zero-one square/cube enable users to easily apply the algorithm in their own
work.

7. References
[1] Hooke R and Jeeves T A 1961 Direct search solution of numerical and statistical problems
Journal of the ACM (JACM) 8 212–229.
[2] Spendley W, Hext G R and Himsworth F 1962 Sequential application of simplex designs in
optimisation and evolutionary operation Technometrics 4 441–461.
[3] Nelder J A and Mead R 1965 A simplex method for function minimization The Computer
Journal 7 308–313.
[4] Kolda T G, Lewis R M and Torczon V 2003 Optimization by direct search: new perspectives on
some classical and modern methods SIAM Review 45 385–482.
[5] Conn A R, Scheinberg K and Vicente L N 2009 Introduction to Derivative-Free Optimization
(Philadelphia: SIAM).
[6] Rios L M and Sahinidis N V 2013 Derivative-free optimization: a review of algorithms and
comparison of software implementations Journal of Global Optimization 56 1247–1293.
[7] Sanket M Bhat, Sarraf Nikhil, Vineetha K V, Sarada Jayan and Dhanesh G Kurup A parallelized
method for global single variable optimization (accepted for publication in IEEE Xplore).
[8] Kiefer J 1953 Sequential minimax search for a maximum Proceedings of the American
Mathematical Society 4 502–506.
[9] Deb K 1995 Optimization for Engineering Design: Algorithms and Examples (New Delhi:
Prentice Hall).
[10] He C, Dong J, Zheng Y F and Ahalt S C 2001 Object Tracking Using the Gabor Wavelet
Transform and the Golden Section Algorithm Proc. 2001 IEEE Int. Conf. Robot. Autom. 2
1671-1676.
[11] He C, Dong J, Zheng Y F and Ahalt S C 2002 Object Tracking Using the Gabor Wavelet
Transform and the Golden Section Algorithm IEEE Multimedia 4 528-538.
[12] Chang Y C 2009 N-dimension golden section search: its variants and limitations Proceedings
of the IEEE 2nd International Conference on Biomedical Engineering and Informatics
BMEI'09 1–6.
[13] Chakraborty S K and Panda G 2016 Golden section search over a hyper-rectangle: a direct
search method International J. Mathematics in Operational Research 8 279-292.


[14] Koupaei J A, Hosseini S M M and Ghaini F M 2016 A new optimization algorithm based on
chaotic maps and golden section search method Engineering Applications of Artificial
Intelligence 50 201-214.
[15] Jamil M and Yang X S 2013 A literature survey of benchmark functions for global optimisation
problems International Journal of Mathematical Modelling and Numerical Optimisation 4
150-194.
[16] Sarada Jayan and Nagaraja K V 2014 Numerical integration over n-dimensional cubes using
generalized Gaussian quadrature Proceedings of the Jangjeon Mathematical Society 17 63-69.
