IEEE Final Paper
Abstract—This paper analyzes the potential of various Machine Learning and Deep Learning algorithms for replacement of the traditional Proportional Integral (PI) controller in a single-phase two-stage on-board Electric Vehicle (EV) charging system to improve its adaptability and efficiency. Traditional PI controllers, though reliable, often do not perform optimally under the dynamic demands of EV charging, leading to inefficiencies. Our investigation aims to understand how these ML/DL methodologies can better handle the variable nature of EV charging through adaptability, accuracy, and efficiency in managing charging operations. Experimental results from both simulations and real-world data reveal the effectiveness of these ML/DL models in enhancing charging system performance over standard PI controllers. Decision Trees provide clear, rule-based control paths; Neural Networks excel in modeling complex, non-linear relationships for real-time predictions; and Linear Regression offers a straightforward approach for linear charging dynamics.

Index Terms—PI Controllers, Decision Trees, Artificial Neural Networks, Regression

I. INTRODUCTION

Electric vehicles (EVs) rely on chargers to replenish their batteries, with single-phase and three-phase chargers being the primary options. Single-phase chargers, utilizing a single alternating current (AC) waveform, offer moderate charging speeds suitable for residential settings and light-duty commercial use, boasting ease of installation and compatibility with most EVs. They are commonly found in home charging setups, providing convenient overnight charging for EV owners. Single-phase chargers typically operate at lower power outputs, ranging from 3 kW to 7.4 kW, making them cost-effective solutions. While they may not offer the rapid charging speeds of three-phase chargers, single-phase chargers are sufficient for meeting the daily charging needs of many EV drivers, especially in residential environments. In contrast, three-phase chargers leverage three AC waveforms, providing significantly higher power output and faster charging rates, making them ideal for commercial and public charging stations. They require specialized electrical infrastructure and cater to the demand for rapid charging, playing a crucial role in reducing charging downtime for EVs and improving the efficiency of charging networks. These chargers are integral components of EV charging systems, which can utilize either single-phase or three-phase power supplies depending on application needs. Furthermore, chargers can be categorized as onboard or offboard, with onboard chargers integrated into the EV and responsible for AC-to-DC conversion, while offboard chargers, located externally, supply power to the vehicle's onboard charger. Fast-charging technologies, such as DC fast charging, are becoming increasingly prevalent, offering EV owners the convenience of rapid charging during long journeys.

A Proportional-Integral (PI) controller is a fundamental feedback control system utilized in engineering and automation. It functions by continuously computing an error signal, representing the disparity between the desired setpoint and the actual value of the process variable. The controller then adjusts the system's output to minimize this error, combining proportional and integral control actions. Traditional PI controllers face several challenges in addressing the complexities of modern systems. These challenges include difficulties in accurately modeling nonlinear dynamics, handling disturbances and uncertainties, and adapting to changing operating conditions. Linear Regression (LR) models offer a potential solution by providing a simple yet effective means of capturing linear relationships within the system, allowing for better prediction of system behavior. Artificial Neural Networks (ANNs) offer a more flexible approach by leveraging their ability to learn complex nonlinear mappings from data, enabling them to handle nonlinear dynamics and adapt to changing conditions. Decision Trees (DTs) provide interpretable models that can capture complex decision-making processes, aiding in understanding system behavior and identifying optimal control strategies.

By incorporating these advanced modeling techniques, we
can enhance the performance and robustness of control systems, ultimately overcoming the limitations associated with traditional PI controllers and better addressing the challenges faced in modern applications.

II. MATERIAL AND METHOD

A. Linear Regression

Linear regression is a fundamental statistical method used to model the relationship between a dependent variable Y and one or more independent variables X_1, X_2, ..., X_n. It aims to find the best-fitting linear equation that describes how the independent variables X predict the dependent variable Y. The equation for a simple linear regression model with one predictor variable X is represented as

Y = \beta_0 + \beta_1 X + \epsilon    (1)

where
• Y is the dependent variable.
• X is the independent variable.
• \beta_0 is the intercept and \beta_1 is the slope coefficient, representing the change in Y for a one-unit change in X.
• \epsilon is the error term, capturing the difference between the observed and predicted values.

In multiple linear regression, the equation extends to include multiple predictor variables:

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \epsilon    (2)

where X_1, X_2, ..., X_p are the predictor variables and \beta_0, \beta_1, \beta_2, ..., \beta_p are their respective coefficients.

One of the key strengths of linear regression lies in its simplicity and interpretability. The linear relationship between the variables can be easily visualized using plots and understood through the coefficients of the regression equation. This transparency makes linear regression particularly useful for exploratory data analysis and for communicating findings to stakeholders.

B. Decision Trees

Decision trees are a kind of analytical model used in both machine learning and statistical analysis to aid in decision-making and to forecast results from given data. They outline the potential outcomes of decisions, incorporating the probabilities of different events, the costs associated with resources, and their respective utilities. Structured similarly to a flowchart, each internal node in a decision tree corresponds to an "evaluation" of a feature (for example, the result of flipping a coin), each branch signifies the result of that evaluation, and every leaf node denotes a category label (the resolution reached after considering all features). The routes from the root to each leaf delineate the rules for classification or the sequences of decisions. A decision tree is a hierarchical model with internal nodes that segment instances based on input variables and leaves representing outcomes. In its construction, an algorithm seeks the optimal tree by minimizing a fitness function that assesses split quality. For regression, it identifies split points that reduce the prediction error for the target variable.

In decision tree analysis, a fitness function evaluates the effectiveness of divisions within the tree, generally employing criteria like Gini impurity or information gain. Consider a training set L, where L = {(x_1, c_1), (x_2, c_2), ..., (x_i, c_j)}, with (x_1, x_2, ..., x_i) being the set of input feature vectors and (c_1, c_2, ..., c_j) representing the corresponding class labels. In this context, x_i is a vector comprising input attributes, and the conditions for splitting are determined based on one of these attributes. Let p_i denote the likelihood that a random sample falls into class c_i; then p_i can be calculated as

p_i = \frac{|c_i|}{|L|}    (3)

where |c_i| is the number of samples in L belonging to class c_i and |L| is the total number of samples.

1) Entropy: Entropy is a metric used to quantify the disorder or variability within a dataset, and it is foundational for computing information gain. In situations where subsets of data are consistent, or uniform, the dataset is considered to have no disorder or unpredictability. A dataset whose subsets all pertain to a single classification will exhibit an entropy value of zero. The formula for entropy is the aggregate, over all classes, of the product of the probability of each class label and the logarithm of this probability:

Entropy(L) = -\sum_{i=1}^{j} p_i \log_2(p_i)    (4)

2) Information Gain: Information gain measures how much a feature f reduces uncertainty, or entropy, in a dataset before and after it is divided based on f. It indicates the effectiveness of f in separating the dataset L into subsets L_v, where each subset represents data with a distinct value of f. A higher information gain suggests a feature is better at improving classification clarity. The best feature for splitting is the one with the highest information gain, implying it offers the most significant reduction in uncertainty:

IG(L, f) = Entropy(L) - \sum_{v=1}^{V} \frac{|L_v|}{|L|} Entropy(L_v)    (5)

3) GINI Index: The GINI index determines the purity of a specific class after splitting along a particular attribute. The best split increases the purity of the sets resulting from the split. If L is a dataset with j different class labels, GINI is defined as

GINI(L) = 1 - \sum_{i=1}^{j} p_i^2    (6)

where p_i is the relative frequency of class i in L. If the dataset is split on attribute A into two subsets L_1 and L_2 with sizes N_1 and N_2 respectively, GINI is calculated as

GINI_A(L) = \frac{N_1}{N} GINI(L_1) + \frac{N_2}{N} GINI(L_2)    (7)

The reduction in impurity is calculated as

\Delta GINI(A) = GINI(L) - GINI_A(L)    (8)
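The split criteria of equations (4)-(8) can be sketched in a few lines of Python; a minimal sketch, in which the class labels and the example split below are illustrative values rather than data from our charging system:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy(L) = -sum(p_i * log2(p_i)), as in eq. (4)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """GINI(L) = 1 - sum(p_i^2), as in eq. (6)."""
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, subsets):
    """IG(L, f) = Entropy(L) - sum(|L_v|/|L| * Entropy(L_v)), as in eq. (5)."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

def gini_reduction(parent, subsets):
    """Delta GINI(A) = GINI(L) - GINI_A(L), combining eqs. (7) and (8)."""
    n = len(parent)
    return gini(parent) - sum(len(s) / n * gini(s) for s in subsets)

# Illustrative labels: a pure split drives entropy and Gini of each subset to zero.
parent = ["fast", "fast", "slow", "slow"]
split = [["fast", "fast"], ["slow", "slow"]]
print(information_gain(parent, split))  # 1.0, the maximum for two balanced classes
print(gini_reduction(parent, split))    # 0.5
```

A tree-growing algorithm evaluates every candidate split this way and keeps the one with the largest gain (or impurity reduction), which is why a perfectly separating split scores highest here.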
The fitness function is evaluated in terms of the mean squared error (MSE), the mean absolute error (MAE), and the coefficient of determination R^2:

MSE = \frac{1}{n} \sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2    (9)

MAE = \frac{1}{n} \sum_{i=1}^{n} |Y_i - \hat{Y}_i|    (10)

R^2 = 1 - \frac{\sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2}{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}    (11)

where \hat{Y}_i is the predicted value and \bar{Y} is the mean of the observed values.

Each model is fitted with the input features X and the target Y as arguments. In our case, the independent variables are the input variables V_DC and P_in, and the dependent variable is the output variable P_out.

Before training the model, additional steps include data preprocessing, such as handling missing values, normalizing features, and splitting the dataset into training and testing sets. It is essential to perform thorough data preprocessing to ensure the accuracy and reliability of the model. After fitting, or training, the linear regression models, we analyze their variation and behavior with different constant functions. Initially, we experimented with constant values of 400 and 500. However, these models did not exhibit satisfactory performance compared to other models such as decision trees and neural networks. The linear regression models showed limitations in their saturation behavior, failing to accurately predict outcomes beyond certain ranges. This comparison highlights the importance of exploring alternative modeling techniques and considering the limitations of linear regression in certain scenarios.

C. Neural Networks

Neural networks, inspired by the structure and function of the human brain, are a class of machine learning algorithms capable of learning complex patterns and relationships from data. Comprising interconnected nodes organized in layers, ANNs process input data through a series of mathematical operations, known as activation functions, to generate output predictions. These models have gained prominence due to their
ability to tackle a wide range of tasks, including classification,
regression, and pattern recognition, across various domains
such as computer vision, natural language processing, and
predictive analytics.
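The fitness metrics of equations (9)-(11) apply to all three model families compared in this work. A minimal sketch, in which the observed and predicted values are made-up numbers for illustration only:

```python
def mse(y, y_hat):
    """Mean squared error, eq. (9)."""
    return sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat)) / len(y)

def mae(y, y_hat):
    """Mean absolute error, eq. (10)."""
    return sum(abs(yi - yh) for yi, yh in zip(y, y_hat)) / len(y)

def r2(y, y_hat):
    """Coefficient of determination, eq. (11)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Illustrative observed vs. predicted output power values (watts).
y_true = [400.0, 450.0, 500.0, 550.0]
y_pred = [410.0, 445.0, 495.0, 560.0]
print(mse(y_true, y_pred), mae(y_true, y_pred), r2(y_true, y_pred))
```

MSE penalizes large deviations quadratically while MAE weights all errors equally, and R^2 normalizes the residual error against the variance of the observations, which is why it is the natural metric for comparing models across different operating ranges.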
Neural networks have found extensive application beyond
power grid control. In fields such as computer vision, natural
language processing, finance, and healthcare, neural networks
are utilized for various tasks such as image classification, sentiment analysis, financial forecasting, and medical diagnosis. The
versatility and adaptability of neural networks make them
a valuable tool in solving complex problems across diverse
domains.
The fundamental operation of a neural network involves
propagating input data forward through the network to produce
output predictions. Mathematically, this process can be represented using equations (12)-(15), which describe the computations performed at each layer of the network.
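A generic layer-by-layer forward pass (a weighted sum plus bias at each node, followed by a nonlinear activation) can be sketched as below; the two-layer shape, the sigmoid activation, and the weights are illustrative assumptions, not the trained charger model:

```python
from math import exp

def sigmoid(x):
    # Logistic activation squashing each pre-activation into (0, 1).
    return 1.0 / (1.0 + exp(-x))

def dense(inputs, weights, biases, activation):
    """One layer: y_j = activation(sum_i w_ji * x_i + b_j)."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Propagate the input through each (weights, biases, activation) layer in turn.
    for weights, biases, activation in layers:
        x = dense(x, weights, biases, activation)
    return x

# Illustrative 2-input -> 2-hidden -> 1-output network with made-up weights.
net = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1], sigmoid),  # hidden layer
    ([[1.0, -1.0]], [0.0], sigmoid),                    # output layer
]
print(forward([1.0, 2.0], net))  # a single prediction in (0, 1)
```

Training adjusts the weight matrices and bias vectors so that this forward computation maps measured inputs to the desired control output; only the forward pass is sketched here.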
TABLE I
TRAINING PERFORMANCE

TABLE II
TESTING PERFORMANCE