
SSRG International Journal of Electrical and Electronics Engineering Volume 11 Issue 2, 1-10, February 2024

ISSN: 2348-8379 / https://doi.org/10.14445/23488379/IJEEE-V11I2P101 © 2024 Seventh Sense Research Group®

Original Article

Optimized Fuzzy Model in Piecewise Interval for Function Approximation
Anup Kumar Mallick1, Sumantra Chakraborty2, Kabita Purkait3, Angsuman Sarkar4
1,3,4Department of Electronics & Communication Engineering, Kalyani Government Engineering College, West Bengal, India.
2Department of Electronics & Telecommunication Engineering, Gaighata Government Polytechnic, West Bengal, India.
1Corresponding Author: [email protected]

Received: 10 November 2023 Revised: 29 November 2023 Accepted: 11 January 2024 Published: 16 February 2024

Abstract - Function approximation is a technique for estimating an unknown underlying function from input-output instances
or examples. Researchers have proposed different methods of function approximation, such as the neural network method, the
support vector regression method, the reinforcement learning method, the clustering method, the neuro-fuzzy method, etc. This
paper introduces a novel data-driven function approximation scheme where the input-output data set is first segmented into
multiple pieces. A Mamdani-type fuzzy submodel is constructed for each piece or portion, and the membership functions’
parameters for antecedent and consequent are optimally selected through the differential evolution algorithm. The efficacy of
the suggested model is verified on three nonlinear functions, viz., a piecewise polynomial function, an exponentially decreasing
sinusoidal function, and an exponentially increasing sinusoidal function. A comparative analysis is done based on the simulation
results from the proposed model and the results obtained through the two state-of-the-art function approximation techniques,
viz., the support vector regression model and the radial basis function network. The simulation results show that the proposed
function approximator has satisfactorily approximated the three functions examined here, surpasses the two state-of-the-art
techniques in approximating the two sinusoidal functions, and achieves near-best performance for the piecewise polynomial
function. The proposed function approximator is expected to be applied as a new state-of-the-art method for function
approximation.

Keywords - Differential evolution, Function approximation techniques, Membership function generation, Optimal fuzzy model,
Piecewise function.

1. Introduction

Function approximation reveals the underlying relationship between input and output variables in a given data set [1]. Function approximation may also be considered a mapping from the examples of input to the examples of output. The intention here is to find a relationship between the input and output. When the relation is approximated using some function, it is called a function approximation. The fitness of a function approximation technique for a given data set (X, Y) is estimated by the error function. The most frequently employed error function is given by Equation (1).

E = \frac{1}{2} \sum_{i=1}^{n} (y(i) - f(i))^2    (1)

Where E denotes the cost function, n represents the data point size, and f is the approximated function.

The function approximation techniques target to minimize this error function to enhance accuracy in estimation. There are multiple approaches to function approximation, such as the polynomial approximation approach [2-4], the artificial neural network-based approach [5], the support vector regression approach [6, 7], the reinforcement learning approach [8], the clustering approach [9, 10], etc.

The polynomial function approximation is one of the most direct and straightforward models for function approximation. In polynomial function approximation, a polynomial of a certain degree is considered, and the polynomial coefficients are selected to minimize the error in the approximation. The higher the degree of the polynomial, the better the approximation accuracy.

However, complexity and computing performance are sacrificed to achieve accuracy. Therefore, efforts are made to obtain roughly the same performance with a polynomial of a lesser degree. Different polynomials, such as Chebyshev polynomials [11, 12], Weierstrass polynomials [13], and Bernstein polynomials [14], have been used for function approximation.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)



A chronology of function approximation using polynomial methods has been outlined by Trefethen [15]. Selecting the polynomial, determining its order, and choosing its coefficients are crucial tasks in function approximation, as they greatly influence the performance of the approximators.

Another widely applied method in machine learning for function approximation is Artificial Neural Network-based approximation. Cybenko [16] and Hornik et al. [17] showed that a 3-layer neural network can accurately estimate nonlinear functions. Researchers have used various artificial neural networks for function approximation.

Ferrari and Stengel [18] presented an algebraic approach for smooth function representation using a feed-forward neural network. Yang et al. [5] investigated the performance of the Radial Basis Function Network (RBFN), backpropagation, and regression neural networks for approximating the Sphere, Rastrigin, and Griewank functions. Zainuddin and Pauline employed RBFN and wavelet neural networks to approximate continuous functions. DeVore et al. have presented a detailed survey of different neural networks for approximation [19].

The significant drawback of the Artificial Neural Network-based method is its requirement for a considerable number of neurons in the hidden layer. A neural network requires many hidden neurons to approximate a function properly. As hidden neurons increase, memory and computational time requirements increase.

Another method of function approximation is Support Vector Regression (SVR). SVR is a Support Vector Machine (SVM) extension for regression. The data is transferred into a higher-dimensional space called the kernel space to obtain greater accuracy with nonlinear functions. Different kernels are used in SVR, such as linear, Gaussian, polynomial, etc. Although not as popular as SVM, SVR has also been proven effective in function approximation [6].

Pérez-Cruz et al. [20] proposed a Multi-dimensional Support Vector Regression (MSVR). It employs a cost function with a hyper-spherical insensitive zone and can perform better than an SVM used separately for each feature; an iterative procedure based on the Karush-Kuhn-Tucker conditions is used to solve the MSVR. Chuang et al. [7] have recommended a robust SVR network for function approximation in the presence of outliers. Another robust SVR model is suggested in [21], where rough sets are used to tackle imprecise information in the support vector regression model.

Lin et al. [22] introduced a hybrid model, the Support Vector Regression-based Fuzzy Neural Network (SVRFNN), to integrate the reasoning efficiency of the Fuzzy Neural Network (FNN) with the high accuracy and robustness of SVR for function approximation. One of the significant problems in SVR lies in selecting an appropriate kernel, as no single kernel is best suited for all types of nonlinear functions.

Researchers have also used the clustering technique for function approximation. Clustering is an unsupervised learning tool that segregates data elements into different categories. Hence, modifications to conventional clustering techniques, namely the Alternating Cluster Estimation (ACE) algorithm, have been proposed in [9, 10].

The clustering technique has also been coupled with other function approximation techniques, such as the enhanced clustering function approximation for RBFN proposed in [23]. However, no fixed or standard rule exists to select the number of clusters that best approximates the function. Some other notable methods employed for function approximation are the gradient boosting method [24], the reinforcement learning method [8], the neuro-fuzzy method [25], etc.

From the previous discussion, it appears that the performance of different function approximation methods is influenced by the parameter selection, the architecture, or the type of approximator used, such as the order of the polynomial in polynomial function approximation, the kind of architecture and the number of hidden layers for an artificial neural network, the type of kernel function in support vector regression, and the number of clusters in alternating cluster estimation.

Thus, expert knowledge is required while selecting the parameters, architecture, or types of existing methods; otherwise, the approximators may fail to approximate the given function properly. To address those flaws, this paper proposes a novel function approximation model, discussed in the next section.

The proposed work aims to present a new technique of function approximation in which little or no prior or expert knowledge is required. In the proposed model, the envelope of the given data set is divided into multiple segments. For each segment, a Mamdani-type fuzzy model is designed.

The membership functions' parameters of the fuzzy models are optimally selected using the differential evolution algorithm. To check the efficacy of the proposed model, the proposed algorithm is applied to approximate three nonlinear functions. It is compared with two well-known function approximation methods: the radial basis function network and support vector regression. The remainder of this paper is arranged as follows.

Section 2 illustrates the proposed method. The simulation results are reported and discussed in Sections 3 and 4. Finally, the paper concludes with the scope of future work in Section 5.

2. Proposed Method

The proposed fuzzy model for function approximation is developed broadly in two stages: dividing the data set or function envelope into multiple pieces and generating a fuzzy submodel for each piece. Figure 1 presents the proposed framework. A detailed description of the proposed method is illustrated in the following subsections.

[Fig. 1 Proposed framework: the function under approximation is uniformly sampled into N data points; the extreme (maxima and minima) points of the data are found; based on the extreme points, the whole data set is divided into p pieces; for each piece a fuzzy system is designed with its membership functions optimized by differential evolution; finally, a database containing the input range, optimized membership functions' parameters, and rule base for each piece is constructed.]

2.1. Division of Data Set Envelope into Pieces

At first, the function considered for approximation is uniformly sampled into N discrete points to generate a data set. Then, the data set is divided into multiple pieces. To find the ranges of the different pieces or portions, the extreme points, i.e., the maxima and the minima points, are first estimated. The following two conditions are used to find the extreme points of the data set.

Condition 1: A point of x, say x_i, is one of the maxima points, if

y_i > y_{i-1} and y_i > y_{i+1}, for i \in [in, f]    (2)

Condition 2: Similarly, x_i is one of the minima points, if

y_i < y_{i-1} and y_i < y_{i+1}, for i \in [in, f]    (3)

Here, y_i is the corresponding output of x_i. For illustration of this step, one example data set (X, Y) is considered in Figure 2.

For the data set shown in Figure 2, assume that y is defined at the discrete points in, in+1, in+2, …, f. Using the two conditions given in Equation (2) and Equation (3), the extreme points of the data set envelope are first found.

Let us assume that the maxima points of the data set given in Figure 2 are denoted by (xa, ya), (xc, yc), and (xe, ye), and the minima points are represented by (xb, yb) and (xd, yd). Then, the given data set is segmented into six piecewise intervals denoted by P1, P2, P3, P4, P5, and P6. The input and output ranges of each piece are given in Table 1.

[Fig. 2 An example data set: output y against input x, running from (xin, yin) to (xf, yf), with maxima at (xa, ya), (xc, yc), (xe, ye) and minima at (xb, yb), (xd, yd), dividing the curve into the pieces P1-P6.]

2.2. Generation of Fuzzy Submodel for Each Piece

Next, a fuzzy submodel is constructed for each piece of the data set envelope. In this paper, the fuzzy model is of the Mamdani type. The shapes of the membership functions are considered to be Gaussian.

Both the antecedent and consequent fuzzy sets consist of three fuzzy subsets: Low, Medium, and High. The Low subset is regarded as a right-sided Gaussian, and the High subset is a left-sided Gaussian. The membership function parameters are optimally generated using the differential evolution algorithm.

Differential Evolution (DE) is a metaheuristic optimization tool that helps find the optimal parameter value in the search space [26-28].
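As a concrete illustration of the division step in Section 2.1, the short sketch below (Python with NumPy; the authors' implementation was in MATLAB and is not reproduced here) samples a function uniformly, locates the interior maxima and minima using the conditions of Equations (2) and (3), and cuts the data set into monotone pieces in the manner of Table 1. The target function g and the sample count are placeholders chosen only for the example.

import numpy as np

def find_extreme_indices(y):
    # Interior extrema per Equations (2) and (3):
    # y[i] > y[i-1] and y[i] > y[i+1]  -> maximum
    # y[i] < y[i-1] and y[i] < y[i+1]  -> minimum
    extremes = []
    for i in range(1, len(y) - 1):
        if (y[i] > y[i - 1] and y[i] > y[i + 1]) or (y[i] < y[i - 1] and y[i] < y[i + 1]):
            extremes.append(i)
    return extremes

def split_into_pieces(x, y):
    # Non-overlapping pieces as in Table 1: the first piece runs up to the
    # first extreme point; each following piece starts one sample after it.
    cuts = find_extreme_indices(y)
    starts = [0] + [c + 1 for c in cuts]
    ends = cuts + [len(y) - 1]
    return [(x[s:e + 1], y[s:e + 1]) for s, e in zip(starts, ends)]

# Example with a placeholder target (Equation 8) and N = 200 uniform samples.
g = lambda x: np.sin(4 * np.pi * x) * np.exp(-np.abs(5 * x))
x = np.linspace(-1.0, 1.0, 200)
pieces = split_into_pieces(x, g(x))
print([len(px) for px, _ in pieces])   # sizes k_j of the pieces P_j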

In this paper, DE is used to select the parameters of the membership functions for the antecedent and the consequent. Each individual in the optimization technique is represented by the structure shown in Figure 3.

Table 1. Input and output ranges of each piece
Piece   Input Range    Output Range
P1      [xin, xa]      [yin, ya]
P2      [xa+1, xb]     [ya+1, yb]
P3      [xb+1, xc]     [yb+1, yc]
P4      [xc+1, xd]     [yc+1, yd]
P5      [xd+1, xe]     [yd+1, ye]
P6      [xe+1, xf]     [ye+1, yf]

σLA  cMA  σMA  σHA  σLC  cMC  σMC  σHC
Fig. 3 Individual representing the membership function parameters

In Figure 3, the following notations are used.

σLA, σMA, σHA : Standard deviations of the antecedent subsets Low, Medium, and High, respectively.
cMA, cMC : Means of the subset Medium for the antecedent and the consequent, respectively.
σLC, σMC, σHC : Standard deviations of the consequent subsets Low, Medium, and High, respectively.

In the optimization technique, the cost function for the jth piece, say f(j), is calculated as the sum of squared errors over all kj data points belonging to the jth piece, as given in Equation (4).

f(j) = \sum_{m=1}^{k_j} (act(m) - fs(m))^2 ;   j \in (1, p)    (4)

Here, act(m) is the actual or given output value of the mth input data point, fs(m) is the output obtained through the proposed model for the mth point, and p denotes the total number of pieces of the given data set.

The optimization technique at each stage aims to reduce the above cost/objective function. The best individual found at the final iteration provides the membership functions' parameters for the antecedent and the consequent of the fuzzy submodel constructed for the jth piece of the data.

An algorithm for selecting the parameters of the membership functions using differential evolution is given in Algorithm 1.

Algorithm 1: Membership function generation of the fuzzy submodel for the jth piece
  get no. of data points (kj) in the jth piece, maxitr
  generate the initial population of target vectors
  calculate the cost functions of the target vectors (Equation 4)
  set itr = 0
  while itr < maxitr
    set itr = itr + 1
    for each individual
      perform mutation to generate a donor vector
      perform recombination and create a trial vector
      calculate the cost function of the trial vector (Equation 4)
      select the target vector for the next iteration
    end for
  end while
  return the best individual of the last iteration

In general, nonlinear functions contain maxima and minima points. As a result, a single rule base for designing fuzzy function approximators will not apply to the whole data set. This is the primary motive for dividing the data set into multiple pieces. Because the extreme points create the different portions, each piece is either monotonically increasing or monotonically decreasing. Then, two sets of rule bases (Rule Base 1 for the monotonically increasing pieces and Rule Base 2 for the monotonically decreasing pieces) are used.

Rule Base 1:
Rule 1 : If the Antecedent is Low, Then the Consequent is Low.
Rule 2 : If the Antecedent is Medium, Then the Consequent is Medium.
Rule 3 : If the Antecedent is High, Then the Consequent is High.

Rule Base 2:
Rule 1 : If the Antecedent is Low, Then the Consequent is High.
Rule 2 : If the Antecedent is Medium, Then the Consequent is Medium.
Rule 3 : If the Antecedent is High, Then the Consequent is Low.

For each input training data point, the three rules of the corresponding Rule Base (either Rule Base 1 or Rule Base 2) are inferred with varying firing strengths, and the corresponding consequents are estimated. The outputs of all three rules are aggregated using the fuzzy MAX aggregation method [29].

The outcome of the inference engine is fuzzy, as the system is of the Mamdani type. Hence, the output needs to be defuzzified to get a crisp output.
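To make the inference step concrete, the following minimal sketch (Python/NumPy, not the authors' code) evaluates one submodel: Gaussian Low/Medium/High subsets parameterised as in Figure 3, Rule Base 1 or 2 with min implication, fuzzy MAX aggregation, and the centre-of-gravity defuzzification adopted in the next paragraph. Anchoring the one-sided Low and High subsets at the ends of the piece's input and output ranges, the min implication, and the output discretisation are assumptions not spelled out in the paper.

import numpy as np

def right_sided_gauss(v, c, s):
    # "Low" subset: full membership for v <= c, Gaussian shoulder above c.
    return np.where(v <= c, 1.0, np.exp(-0.5 * ((v - c) / s) ** 2))

def left_sided_gauss(v, c, s):
    # "High" subset: Gaussian shoulder below c, full membership for v >= c.
    return np.where(v >= c, 1.0, np.exp(-0.5 * ((v - c) / s) ** 2))

def gauss(v, c, s):
    return np.exp(-0.5 * ((v - c) / s) ** 2)

def submodel_output(x, params, x_range, y_range, increasing, n_out=201):
    # Mamdani inference for one piece: three rules, MAX aggregation,
    # centre-of-gravity defuzzification. params follows Figure 3:
    # (sLA, cMA, sMA, sHA, sLC, cMC, sMC, sHC).
    sLA, cMA, sMA, sHA, sLC, cMC, sMC, sHC = params
    x_lo, x_hi = x_range
    y_lo, y_hi = y_range

    # Firing strengths of the antecedent subsets (Low anchored at x_lo,
    # High at x_hi -- an assumption consistent with Figure 3).
    w_low  = right_sided_gauss(x, x_lo, sLA)
    w_med  = gauss(x, cMA, sMA)
    w_high = left_sided_gauss(x, x_hi, sHA)

    # Consequent membership functions over a discretised output range.
    y_grid  = np.linspace(y_lo, y_hi, n_out)
    mu_low  = right_sided_gauss(y_grid, y_lo, sLC)
    mu_med  = gauss(y_grid, cMC, sMC)
    mu_high = left_sided_gauss(y_grid, y_hi, sHC)

    # Rule Base 1 (increasing piece): Low->Low, Medium->Medium, High->High.
    # Rule Base 2 (decreasing piece): Low->High, Medium->Medium, High->Low.
    if increasing:
        clipped = [np.minimum(w_low, mu_low),
                   np.minimum(w_med, mu_med),
                   np.minimum(w_high, mu_high)]
    else:
        clipped = [np.minimum(w_low, mu_high),
                   np.minimum(w_med, mu_med),
                   np.minimum(w_high, mu_low)]

    agg = np.maximum.reduce(clipped)               # fuzzy MAX aggregation
    return np.sum(y_grid * agg) / np.sum(agg)      # COG defuzzification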

This paper uses one popular defuzzification technique, i.e., the Center of Gravity (COG) [30], for defuzzification. The steps involved in generating the suggested fuzzy model are depicted in Algorithm 2.

Algorithm 2: Design of the proposed model
  get the function to be approximated
  sample the function uniformly into n data points
  for each data point
    check if it is an extreme point (maximum or minimum point)
  end for
  divide the data points into p pieces according to the extreme points
  for each piece
    design a fuzzy subsystem
  end for
  return the database of the proposed model (input range, rule base, optimized membership functions' parameters for each piece)

  function: design of the fuzzy subsystem for each piece
    if the piece is an increasing piece
      set Rule Base 1 for the fuzzy subsystem
    else
      set Rule Base 2 for the fuzzy subsystem
    end if
    perform optimization of the membership functions' parameters using the differential evolution algorithm
    return the fuzzy subsystem parameters for the piece

An example database generated after constructing the proposed model for the function depicted in Figure 2 is given in Table 2.

After constructing the proposed fuzzy model, the uniformly sampled input data points are again used to check the performance of the designed model in function approximation. The steps for approximating the function using the designed model are depicted in Algorithm 3.

Algorithm 3: Function approximation using the proposed model
  get the database generated after training
  sample the function uniformly into n data points
  for each input data point
    check which piece the data point belongs to
    fetch the rule base and membership functions' parameters of that piece from the database
    use the Mamdani fuzzy model with MAX aggregation and COG defuzzification to estimate the output
  end for
  return the predicted output for each input data point

By joining the test input-output data points, the function is approximated. The proposed model is named the piecewise optimum fuzzy model, or, in short, POFM.

3. Simulation Results

This section presents and analyzes the experimental results in approximating three nonlinear functions (a piecewise polynomial function, an exponentially decreasing sinusoidal function, and an exponentially increasing sinusoidal function).

The performance of the Proposed Model (POFM) is evaluated against the results obtained through the Radial Basis Function Network (RBFN) and Support Vector Regression (SVR).

In RBFN, the mean squared error goal and the spread were set at their default values of 0 and 1, respectively, and the number of epochs was 500. For SVR, the kernel function is of the Gaussian type.

For DE in POFM, the coefficient F was generated using a Cauchy distribution. The crossover probability in DE was kept at 0.8, and the number of iterations was fixed at 500.

Table 2. Generated database after training
(For every piece, the membership parameters are the values of the best individual (Figure 3) obtained from the optimization technique for that piece.)
Piece   Input Range    Rule Base
P1      [xin, xa]      Rule Base 1
P2      [xa+1, xb]     Rule Base 2
P3      [xb+1, xc]     Rule Base 1
P4      [xc+1, xd]     Rule Base 2
P5      [xd+1, xe]     Rule Base 1
P6      [xe+1, xf]     Rule Base 2
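A minimal sketch of the DE loop of Algorithm 1 (Python/NumPy, DE/rand/1/bin) with the settings reported above: F drawn from a Cauchy distribution (here redrawn each generation, which is an assumption), crossover probability 0.8, and a fixed iteration budget. The population size, bounds, and the commented piece-wise cost (Equation 4) are placeholders; submodel_output refers to the earlier inference sketch.

import numpy as np

def de_optimise(cost, bounds, pop_size=30, max_itr=500, cr=0.8, seed=0):
    # DE/rand/1/bin over the 8 membership-function parameters of Figure 3.
    # bounds is an (8, 2) array of [low, high] limits per parameter.
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)    # target vectors
    cost_vals = np.array([cost(ind) for ind in pop])

    for _ in range(max_itr):
        F = abs(rng.standard_cauchy())                    # Cauchy-drawn scale factor
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            donor = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)   # mutation
            mask = rng.random(dim) < cr                   # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, donor, pop[i])         # trial vector
            f_trial = cost(trial)
            if f_trial <= cost_vals[i]:                   # greedy selection
                pop[i], cost_vals[i] = trial, f_trial
    return pop[np.argmin(cost_vals)]                      # best individual

# Cost of the j-th piece, Equation (4), as a placeholder (x_j, y_j assumed given):
# cost_j = lambda p: np.sum((y_j - np.array(
#     [submodel_output(v, p, x_range_j, y_range_j, increasing_j) for v in x_j])) ** 2)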

The simulation results of the three models (RBFN, SVR, and POFM) were compared based on two error measures: the average sum squared error and the average error, as given in Equation (5) and Equation (6), respectively.

Average Sum Squared Error (ASSE) = \frac{1}{m} \sum_{i=1}^{m} (exact(i) - predicted(i))^2    (5)

Average Error (AE) = \frac{1}{m} \sum_{i=1}^{m} |exact(i) - predicted(i)|    (6)

In Equation (5) and Equation (6), m denotes the test data set size. The simulations were done in MATLAB 2016a, and the results are given below in Examples 1-3.

Example 1: Piecewise Polynomial Function
The underlying function is a piecewise polynomial [9], as given in Equation (7), with x being the input and y being the output.

y = \begin{cases} \exp(0.5(x - 4)), & 0 \le x \le 4 \\ \exp(-0.5(x - 4)), & 4 < x \le 8 \end{cases}    (7)

The functions approximated by the different models (RBFN, SVR, and POFM) are shown in Figures 4(b) - 4(d). A comparison of performance based on the errors of the models for the piecewise polynomial function is given in Table 3.

Example 2: Exponentially Decreasing Sinusoidal Function
The function given by Equation (8) comprises one sinusoidal part and another exponentially decreasing component [1]. The combined effect is shown in Figure 5(a).

y = \sin(4\pi x) \exp(-|5x|), \quad -1 \le x \le 1    (8)

The simulation results for approximating the function in Example 2 with the three different models are depicted in Figures 5(b) - 5(d). The approximation errors of the models for the exponentially decreasing function are reported in Table 4.

Example 3: Exponentially Increasing Sinusoidal Function
In contrast to the function considered in Example 2, the function in Example 3 is an exponentially increasing sinusoidal function, as given in Equation (9).

y = \sin(4\pi x) \exp(|5x|), \quad -1 \le x \le 1    (9)

The functions approximated by RBFN, SVR, and POFM for the example given in Equation (9) are shown in Figures 6(b) - 6(d). The errors in the approximation of the models are compared in Table 5.

Table 3. Errors of different approximators for the function in example 1

Approximator Used Average Sum Squared Error Average Error


RBFN 0.000025 0.002966
SVR 0.000467 0.020371
POFM 0.000051 0.005034
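For reference, the three benchmark functions as given in Equations (7)-(9) above and the two error measures of Equations (5) and (6) can be expressed as a small Python/NumPy sketch (the authors' simulations used MATLAB 2016a; this is only an illustrative transcription of the formulas as reconstructed here).

import numpy as np

def f_piecewise_poly(x):
    # Equation (7): rises to 1 at x = 4, then decays, on [0, 8].
    return np.where(x <= 4, np.exp(0.5 * (x - 4)), np.exp(-0.5 * (x - 4)))

def f_decaying_sinusoid(x):
    # Equation (8): sin(4*pi*x) * exp(-|5x|) on [-1, 1].
    return np.sin(4 * np.pi * x) * np.exp(-np.abs(5 * x))

def f_growing_sinusoid(x):
    # Equation (9): sin(4*pi*x) * exp(|5x|) on [-1, 1].
    return np.sin(4 * np.pi * x) * np.exp(np.abs(5 * x))

def asse(exact, predicted):
    # Average Sum Squared Error, Equation (5).
    exact, predicted = np.asarray(exact), np.asarray(predicted)
    return np.mean((exact - predicted) ** 2)

def avg_error(exact, predicted):
    # Average Error, Equation (6).
    exact, predicted = np.asarray(exact), np.asarray(predicted)
    return np.mean(np.abs(exact - predicted))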

[Fig. 4 (a) Exact function and functions approximated by (b) RBFN, (c) SVR, and (d) POFM for the function in Example 1; output plotted against input over the range 0 to 8.]

[Fig. 5 (a) Exact function and functions approximated by (b) RBFN, (c) SVR, and (d) POFM for the function in Example 2; output plotted against input over the range -1 to 1.]

Table 4. Errors of different approximators for the function in example 2


Approximator Used Average Sum Squared Error Average Error
RBFN 0.028153 0.132018
SVR 0.043062 0.122560
POFM 0.000016 0.002276

[Fig. 6 (a) Exact function and functions approximated by (b) RBFN, (c) SVR, and (d) POFM for the function in Example 3; output plotted against input over the range -1 to 1.]

Table 5. Errors of different approximators for the function in example 3

Approximator Used Average Sum Squared Error Average Error


RBFN 6.790763 2.185588
SVR 417.123242 14.040023
POFM 0.355411 0.324149
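Tying the sketches above together, the following hypothetical end-to-end run applies the same pipeline to the Example 2 benchmark. All helper names (split_into_pieces, submodel_output, de_optimise, f_decaying_sinusoid, asse, avg_error) refer to the illustrative code given earlier, not to the authors' MATLAB implementation; the search bounds, population size, and shortened iteration budget are assumptions made only for this demonstration.

import numpy as np

# Sample the target function (Equation 8) and split it into monotone pieces.
x = np.linspace(-1.0, 1.0, 200)
y = f_decaying_sinusoid(x)
pieces = split_into_pieces(x, y)

database = []                                 # plays the role of Table 2
for xs, ys in pieces:
    x_range, y_range = (xs.min(), xs.max()), (ys.min(), ys.max())
    increasing = ys[-1] >= ys[0]              # Rule Base 1 vs Rule Base 2
    # Bounds for the 8-parameter individual of Figure 3 (assumed ranges).
    bounds = np.array([[1e-3, x_range[1] - x_range[0] + 1e-3]] * 4 +
                      [[1e-3, y_range[1] - y_range[0] + 1e-3]] * 4)
    bounds[1] = x_range                       # cMA searched within the input range
    bounds[5] = y_range                       # cMC searched within the output range
    cost = lambda p: np.sum((ys - np.array(
        [submodel_output(v, p, x_range, y_range, increasing) for v in xs])) ** 2)
    best = de_optimise(cost, bounds, pop_size=20, max_itr=100)   # shortened run
    database.append((x_range, y_range, increasing, best))

def pofm_predict(v):
    # Route each test point to its piece and run that piece's submodel.
    for x_range, y_range, increasing, best in database:
        if x_range[0] <= v <= x_range[1]:
            return submodel_output(v, best, x_range, y_range, increasing)
    return np.nan

pred = np.array([pofm_predict(v) for v in x])
print("ASSE:", asse(y, pred), "AE:", avg_error(y, pred))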

4. Discussion

Figure 4 shows that all three models (POFM, RBFN, and SVR) can approximate the piecewise polynomial function, with certain deviations from the exact function at some points. Table 3 indicates that the Proposed Model (POFM) does not show the best performance for the piecewise polynomial function, but its performance is very close to that of the best approximator (RBFN). For the piecewise polynomial function, the Average Sum Squared Error and Average Error are 0.000025 and 0.002966, respectively, for RBFN; 0.000467 and 0.020371, respectively, for SVR; and 0.000051 and 0.005034, respectively, for POFM.

From Table 4, it is found that for the exponentially decreasing sinusoidal function, the Proposed Model (POFM) yields the least Average Sum Squared Error (0.000016) and the least Average Error (0.002276), compared to the Average Sum Squared Error (0.028153) and Average Error (0.132018) of the RBFN model and the Average Sum Squared Error (0.043062) and Average Error (0.122560) of the SVR model.

Figure 5 shows that RBFN and SVR fail to approximate the exponentially decreasing sinusoidal function properly, whereas the Proposed Model (POFM) has satisfactorily approximated the function.

Regarding the exponentially increasing sinusoidal function, Figure 6(b) shows that RBFN performs reasonably well. Still, it yields a sizeable Average Sum Squared Error (6.790763) and a significant Average Error (2.185588), as reported in Table 5.

Figure 6(c) and Table 5 indicate the failure of the SVR in approximating the exponentially increasing sinusoidal function. Figure 6(d) shows that the Proposed Model (POFM) has satisfactorily approximated the exponentially increasing sinusoidal function and has attained the lowest average sum squared error and the lowest average error compared to RBFN and SVR.

5. Conclusion

In this paper, a new function approximation model is proposed. The proposed model consists of multiple fuzzy submodels, where each submodel is employed for an individual interval of the given data. The only parameters required to be designed in the proposed model are the membership functions' parameters, which are selected using the optimization technique. Therefore, not much expert knowledge is required to choose the parameters of the suggested approximator.

From the investigation results, it can be seen that the suggested framework has satisfactorily approximated the three nonlinear functions considered here. Compared to two widely used function approximation models (Support Vector Regression and the Radial Basis Function Network), the suggested model performs best for two of the nonlinear functions and second best for one nonlinear function in terms of approximation errors. In future work, a similar piecewise function approximation technique may be developed, with each piece formulated by an optimized T-S-type fuzzy model.

References
[1] Zarita Zainuddin, and Ong Pauline, “Function Approximation Using Artificial Neural Networks,” International Journal of Systems
Applications, Engineering & Development, vol. 1, no. 4, pp. 173-178, 2007. [Google Scholar] [Publisher Link]
[2] Ivy Kidron, “Polynomial Approximation of Functions: Historical Perspective and New Tools,” International Journal of Computers for
Mathematical Learning, vol. 8, pp. 299-331, 2003. [CrossRef] [Google Scholar] [Publisher Link]
[3] Victor Zalizniak, Essentials of Scientific Computing: Numerical Methods for Science and Engineering, Horwood Publishing, England,
2008. [Google Scholar] [Publisher Link]
[4] Michael A. Cohen, and Can Ozan Tan, “A Polynomial Approximation for Arbitrary Functions,” Applied Mathematics Letters, vol. 25,
no. 11, pp. 1947-1952, 2012. [CrossRef] [Google Scholar] [Publisher Link]
[5] Sibo Yang et al., “Investigation of Neural Networks for Function Approximation,” Procedia Computer Science, vol. 17, pp. 586-594,
2013. [CrossRef] [Google Scholar] [Publisher Link]
[6] Mariette Awad, and Rahul Khanna, Efficient Learning Machines - Theories, Concepts, and Applications for Engineers and System
Designers, Apress Open, 2015. [Google Scholar] [Publisher Link]
[7] Chen-Chia Chuang et al., “Robust Support Vector Regression Networks for Function Approximation with Outliers,” IEEE Transactions
on Neural Networks, vol. 13, no. 6, pp. 1322-1330, 2002. [CrossRef] [Google Scholar] [Publisher Link]
[8] Xin Xu, Lei Zuo, and Zhenhua Huang, “Reinforcement Learning Algorithms with Function Approximation: Recent Advances and
Applications,” Information Sciences, vol. 261, pp. 1-31, 2014. [CrossRef] [Google Scholar] [Publisher Link]
[9] T.A. Runkler, and J.C. Bezdek, “Alternating Cluster Estimation: A New Tool for Clustering and Function Approximation,” IEEE
Transactions on Fuzzy Systems, vol. 7, no. 4, pp. 377-393, 1999. [CrossRef] [Google Scholar] [Publisher Link]
[10] J. Gonzalez et al., “A New Clustering Technique for Function Approximation,” IEEE Transactions on Neural Networks, vol. 13, no. 1,
pp. 132-142, 2002. [CrossRef] [Google Scholar] [Publisher Link]
[11] S.Y. Reutskiy, and C.S. Chen, “Approximation of Multivariate Functions and Evaluation of Particular Solutions Using Chebyshev
Polynomial and Trigonometric Basis Functions,” International Journal of Numerical Methods in Engineering, vol. 67, no. 13, pp. 1811-
1829, 2006. [CrossRef] [Google Scholar] [Publisher Link]
[12] Theodore J. Rivlin, Chebyshev Polynomials, 2nd ed., Courier Dover Publications, 2020. [Google Scholar] [Publisher Link]
[13] Dilcia Perez, and Yamilet Quintana, “A Survey on the Weierstrass Approximation Theorem,” Arxiv, 2006. [CrossRef] [Google Scholar]
[Publisher Link]
[14] Rida T. Farouki, “The Bernstein Polynomial Basis: A Centennial Retrospective,” Computer Aided Geometric Design, vol. 29, no. 6, pp.
379-419, 2012. [CrossRef] [Google Scholar] [Publisher Link]
[15] Lloyd N. Trefethen, Approximation Theory and Approximation Practice, Extended ed., Society for Industrial and Applied Mathematics
(SIAM) Publications, 2019. [Google Scholar] [Publisher Link]
[16] G. Cybenko, “Approximation by Superposition of a Sigmoidal Function,” Mathematics Control, Signals and Systems, vol. 2, pp. 303-314,
1989. [CrossRef] [Google Scholar] [Publisher Link]
[17] Kurt Hornik, Maxwell Stinchcombe, and Halbert White, “Multilayer Feedforward Networks are Universal Approximators,” Neural
Networks, vol. 2, no. 5, pp. 359-366, 1989. [CrossRef] [Google Scholar] [Publisher Link]
[18] S. Ferrari, and R.F. Stengel, “Smooth Function Approximation Using Neural Networks,” IEEE Transactions on Neural Networks, vol.
16, no. 1, pp. 24-38, 2005. [CrossRef] [Google Scholar] [Publisher Link]

[19] Ronald DeVore, Boris Hanin, and Guergana Petrova, “Neural Network Approximation,” Acta Numerica, vol. 30, pp. 327-444, 2021.
[CrossRef] [Google Scholar] [Publisher Link]
[20] Fernando Pérez-Cruz et al., “Multi-Dimensional Function Approximation and Regression Estimation,” International Conference on
Artificial Neural Networks, vol. 2415, pp. 757-762, 2002. [CrossRef] [Google Scholar] [Publisher Link]
[21] Chih-Ching Hsiao, Shun-Feng Su, and Chen-Chia Chuang, “A Rough-Based Robust Support Vector Regression Network for Function
Approximation,” 2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), Taipei, Taiwan, pp. 2814-2818, 2011.
[CrossRef] [Google Scholar] [Publisher Link]
[22] Chin-Teng Lin et al., “Support-Vector-Based Fuzzy Neural Network for Pattern Classification,” IEEE Transactions on Fuzzy Systems,
vol. 14, no. 1, pp. 31-41, 2006. [CrossRef] [Google Scholar] [Publisher Link]
[23] H. Pomares et al., “An Enhanced Clustering Function Approximation Technique for A Radial Basis Function Neural Network,”
Mathematical and Computer Modelling, vol. 55, no. 3-4, pp. 286-302, 2012. [CrossRef] [Google Scholar] [Publisher Link]
[24] Jerome H. Friedman, “Greedy Function Approximation: A Gradient Boosting Machine,” The Annals of Statistics, vol. 29, no. 5, pp. 1189-
1232, 2001. [Google Scholar] [Publisher Link]
[25] Paulo Vitor de Campos Souza, “Fuzzy Neural Networks and Neuro-Fuzzy Networks: A Review the Main Techniques and Applications
Used in the Literature,” Applied Soft Computing, vol. 92, 2020. [CrossRef] [Google Scholar] [Publisher Link]
[26] Rainer Storn, and Kenneth Price, “Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous
Spaces,” Journal of Global Optimization, vol. 11, pp. 341-359, 1997. [CrossRef] [Google Scholar] [Publisher Link]
[27] Rainer Storn, “Differential Evolution Research -Trends and Open Questions,” Advances in Differential Evolution, vol. 143, pp. 1-31,
2008. [CrossRef] [Google Scholar] [Publisher Link]
[28] Swagatam Das, and Ponnuthurai Nagaratnam Suganthan, “Differential Evolution: A Survey of the State-of-the-Art,” IEEE Transactions
on Evolutionary Computation, vol. 15, no. 1, pp. 4-31, 2011. [CrossRef] [Google Scholar] [Publisher Link]
[29] Samir Roy, and Udit Chakraborty, Introduction to Soft Computing, Neuro-Fuzzy and Genetic Algorithms, Pearson, India, 2013. [Google
Scholar] [Publisher Link]
[30] Snehashish Chakraverty, Deepti Moyi Sahoo, and Nisha Rani Mahato, Concepts of Soft Computing, Fuzzy and ANN with Programming,
Springer, Singapore, 2019. [CrossRef] [Google Scholar] [Publisher Link]
