ADENZY
ON
MADE BY
COMPUTER ENGINEERING
LEVEL ND2
CERTIFICATION
This is to certify that this seminar report was carried out by Adekanye Daniel Adeniyi with the matriculation number 21010111089 in the Department of Computer Engineering, Gateway (ICT) Polytechnic, Saapade, Ogun State, under appropriate supervision.
Supervisor Date
DEDICATION
This seminar report is dedicated to Almighty God for His mercy that saw me through the completion of my seminar report and my National Diploma.
It is also dedicated to my lovely parents, who thought it wise to educate me; they are wonderful and special.
ACKNOWLEDGMENT
My profound gratitude goes to Almighty God, who helped me through my time in this institution. I thank Him for His grace and mercy in my life.
Furthermore, I appreciate the efforts of my parents, Mr. and Mrs. ADEKANYE, for their financial assistance and prayers throughout my stay in school and towards my seminar work. I am indeed indebted to them forever.
I am also indebted to my supervisor, Mrs. Kayode, who gave full support, technical advice, corrections and enlightenment during the course of this project.
I would like to express my gratitude to my H.O.D. for his good work in my department.
May God reward them all for their good deeds.
ABSTRACT
TABLE OF CONTENTS
Certification
Dedication
Acknowledgement
CHAPTER ONE
1.0. INTRODUCTION
CHAPTER TWO
2.1 DEFINITION
2.3 BRANCHES
CHAPTER THREE
3.2 OBJECTIVE
CHAPTER FOUR
CHAPTER FIVE
5.1 CONCLUSION
5.2 REFERENCES
CHAPTER ONE
INTRODUCTION
An artificial neural network used for character recognition, for example, has input neurons which may be activated by the pixels of an input image. After being weighted and transformed by a function (determined by the network's designer), the activations of these neurons are passed on to other neurons. This process is repeated until, finally, an output neuron is activated; this determines which character was read.
The artificial neural network (ANN), a soft computing technique, has been successfully applied in different fields of science, such as pattern recognition, fault diagnosis, forecasting and prediction. However, as far as we are aware, not much research on predicting student academic performance takes advantage of artificial neural networks. Kanakana and Olanrewaju (2011), for example, used an artificial neural network to predict the performance of engineering students at Tshwane University of Technology.
Predicting student academic performance has long been an important
research topic. Among the issues of education system, questions
concerning admissions into academic institutions (secondary and tertiary
level) remain important (Ting, 2008). The main objective of the admission
system is to determine the candidates who would likely perform well after
being accepted into the school. The quality of admitted students has a
great influence on the level of academic performance, research and
training within the institution. The failure to perform an accurate
admission
decision may result in an unsuitable student being admitted to the
program. Hence, admission officers want to know more about the
academic potential of each student. Accurate predictions help admission
officers to distinguish between suitable and unsuitable candidates for an
academic program, and identify candidates who would likely do well in the
school (Ayan and Garcia, 2013). The results obtained from the prediction
of academic performance may be used for classifying students, which
enables educational managers to offer them additional support, such as
customized assistance and tutoring resources.
The results of this prediction can also be used by instructors to specify the
most suitable teaching actions for each group of students, and provide
them with further assistance tailored to their needs. In addition, the
prediction results may help students develop a good understanding of
how
well or how poorly they would perform, and then develop a suitable
learning strategy. Accurate prediction of student achievement is one way
to
enhance the quality of education and provide better educational services
(Romero and Ventura, 2007). Different approaches have been applied to predicting student academic performance, including statistical models such as linear and logistic regression, data mining techniques, and machine learning methods such as the artificial neural network considered in this report.
CHAPTER TWO
DEFINITION
Artificial neural networks are a type of machine learning model inspired by the structure and function of the human brain. They consist of a network of interconnected nodes, or "neurons," that are organized into layers. Each neuron receives input from other neurons and produces an output, which is then passed on to other neurons in the network. The connections between neurons are weighted, meaning that some inputs are more important than others in determining the neuron's output. By adjusting the weights of the connections between neurons, artificial neural networks can learn to recognize patterns in data and make predictions based on that data. Artificial neural networks have been used for a wide range of applications, including image and speech recognition, natural language processing, and predictive analytics.
In simpler terms, artificial neural networks are computer programs that can learn to recognize patterns in data, much as the human brain does. They are made up of interconnected "neurons" that receive input and produce output, and the connections between neurons are weighted to determine how important each input is. By adjusting these weights, the neural network can learn to make accurate predictions based on the patterns it recognizes in the data.
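As a rough illustration only (the values below are invented for this report, not taken from any real network), the following Python snippet shows how a single artificial neuron combines weighted inputs, a bias and an activation function to produce its output:

import numpy as np

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the activation function
    z = np.dot(inputs, weights) + bias
    return sigmoid(z)

inputs = np.array([0.5, 0.8, 0.2])     # three illustrative input signals
weights = np.array([0.9, 0.3, -0.5])   # a larger weight means a more important input
bias = 0.1

print(neuron_output(inputs, weights, bias))   # a single output signal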
An artificial neural network is made up of three main kinds of layers:
1. The input layer
2. The hidden layer
3. The output layer
The input layer: The input layer of an artificial neural network receives the data that the network will process. The data can take many forms, such as an image, a sound recording, or a set of numerical values. The input layer passes the data on to the hidden layers, which perform the bulk of the processing. The input layer is responsible for encoding the data in a way that the neural network can understand and use to make predictions.
The hidden layer: The hidden layers of an artificial neural network perform the bulk of the processing, transforming the input data into a format that can be used to make predictions. Each neuron in a hidden layer receives input from the neurons in the previous layer and produces an output that is passed on to the neurons in the next layer. The connections between neurons in the hidden layers are weighted, which determines how important each input is. By adjusting these weights, the neural network can learn to recognize patterns in the input data and make accurate predictions. The number of hidden layers and the number of neurons in each hidden layer can vary depending on the complexity of the task the neural network is trying to perform.
The output layer: The output layer of an artificial neural network produces the final result of the network's processing. The output can take many forms, depending on the type of task the neural network is performing. For example, in a classification task, the output layer might produce a set of probabilities that represent the likelihood that the input belongs to each of the possible classes. In a regression task, the output layer might produce a single numerical value that represents the predicted output. The output layer receives input from the neurons in the final hidden layer and uses this information to produce the final output of the neural network.
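A minimal sketch of these three kinds of layers, written with the Keras library discussed in Chapter Three, is shown below. The number of input features (5) and the layer sizes are illustrative assumptions, not values prescribed by this report:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(5,)),               # input layer: 5 numeric features
    layers.Dense(8, activation="relu"),     # hidden layer: weighted connections between neurons
    layers.Dense(1, activation="sigmoid"),  # output layer: e.g. probability of good performance
])
model.summary()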
Artificial Neural Networks (ANNs) are made up of interconnected nodes, or "neurons," that are
organized into layers. Each neuron in an ANN receives input from other neurons in the previous
layer, processes that input, and then produces an output that is sent to the next layer of neurons.
The output of the final layer of neurons is the model's prediction.
When training an ANN model to predict student performance, the model is first presented with a set
of training data that includes both the input data (such as student demographics, prior academic
achievement, and socioeconomic status) and the corresponding output data (such as the student's
final grade or GPA). The model then adjusts the strength of the connections between the neurons in
order to minimize the difference between the predicted output and the actual output.
This process of adjusting the connections between the neurons is known as "backpropagation," and
it involves calculating the error between the predicted output and the actual output and then
propagating that error backwards through the network, adjusting the strength of the connections as
it goes.
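To make the idea concrete, the short sketch below works through backpropagation by hand for a tiny network with one hidden neuron and one output neuron. All numbers are invented for illustration; real networks have many more weights but apply the same chain rule:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = 1.5, 1.0        # one training example (invented values)
w1, w2 = 0.4, -0.6          # weights: input-to-hidden and hidden-to-output
lr = 0.5                    # learning rate

for step in range(3):
    # Forward pass
    h = sigmoid(w1 * x)             # hidden neuron activation
    y = sigmoid(w2 * h)             # output neuron activation (the prediction)
    error = y - target
    loss = 0.5 * error ** 2

    # Backward pass: propagate the error from the output back to each weight
    dL_dy = error
    dy_dz2 = y * (1 - y)            # derivative of the sigmoid at the output
    dL_dw2 = dL_dy * dy_dz2 * h     # gradient for the output weight
    dL_dh = dL_dy * dy_dz2 * w2     # error passed back to the hidden neuron
    dh_dz1 = h * (1 - h)
    dL_dw1 = dL_dh * dh_dz1 * x     # gradient for the hidden weight

    # Gradient-descent weight updates
    w2 -= lr * dL_dw2
    w1 -= lr * dL_dw1
    print(f"step {step}: loss={loss:.4f}")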
Once the ANN model has been trained on the training data, it can be used to make predictions on
new data that it has not seen before. The input data for the new data is fed into the input layer of
the ANN, and the output of the final layer is the model's prediction for the student's performance.
Overall, ANNs are a powerful tool for predicting student performance, but they require a significant
amount of data to be trained effectively, and the quality of the predictions depends on the quality of
the input data and the design of the ANN model.
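As an illustration of this workflow, the hedged sketch below trains a small Keras network on invented placeholder data and then predicts the performance of one unseen student. The feature list and labels are assumptions made for the example only:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data: 100 students, 5 numeric features each
# (e.g. prior grades, attendance, study hours), already scaled to [0, 1]
X_train = np.random.rand(100, 5)
y_train = np.random.randint(0, 2, size=(100, 1))   # 1 = good performance, 0 = poor

model = keras.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
# Backpropagation and the weight adjustments described above happen inside fit()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, verbose=0)

# Prediction for one new, unseen student (features scaled the same way)
new_student = np.random.rand(1, 5)
print(model.predict(new_student))   # predicted probability of good performance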
The processing methods used in artificial neural networks to predict student academic performance
include data collection, data preprocessing, model training, and model evaluation. Data collection
involves gathering historical student data, which can include demographic information, previous
academic performance, attendance records, and other relevant variables. Data preprocessing
involves cleaning and transforming the data to make it suitable for use in the neural network. Model
training involves feeding the preprocessed data into the neural network and adjusting the weights
and biases of the network to minimize the prediction error. Model evaluation involves testing the
trained neural network on a separate dataset to determine its accuracy, precision, recall, and F1
score. These processing methods allow the neural network to learn from the data and make
accurate predictions of student academic performance.
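The following sketch walks through those four steps with scikit-learn utilities. The dataset is invented placeholder data; a real study would load genuine student records instead:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Data collection (placeholder): 200 students, 5 features such as demographics,
# previous academic performance and attendance
X = np.random.rand(200, 5)
y = np.random.randint(0, 2, size=200)          # 1 = good performance, 0 = poor

# Data preprocessing: split into training and test sets, then scale the features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Model training: a small feed-forward neural network
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Model evaluation on the held-out test set
y_pred = clf.predict(X_test)
print("accuracy ", accuracy_score(y_test, y_pred))
print("precision", precision_score(y_test, y_pred))
print("recall   ", recall_score(y_test, y_pred))
print("F1       ", f1_score(y_test, y_pred))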
Advantages:
1. Neural networks are capable of learning and adapting to new situations, which makes
them highly flexible and useful in a wide range of applications.
2. They can handle large amounts of complex data and can recognize patterns that are
difficult or impossible for humans to detect.
3. Neural networks can be trained to perform tasks that are too difficult or time-consuming
for humans to do, such as image or speech recognition.
4. Neural networks can be used to solve a wide range of problems, from image and speech
recognition to natural language processing and even game playing.
Disadvantages:
1. Neural networks require a large amount of data to be trained effectively, which can be
time-consuming and expensive to acquire.
2. Overfitting can be a problem with neural networks, where the network becomes too
specialized to the training data and performs poorly on new data.
3. Neural networks can be difficult to interpret, which can make it hard to understand how
they arrived at a particular decision or prediction.
4. Neural networks can be computationally expensive to train and run, requiring a lot of
processing power and memory. This can make them difficult to use on low-power devices or
in real-time applications.
CHAPTER THREE
TOOLS THAT ARE USED WHEN ARTIFICIAL NEURAL NETWORK PREDICTION IS PERFORMED
1. TensorFlow: A popular open-source software library for building and training neural networks.
TensorFlow provides a high-level API for building and training models, as well as low-level operations
for customizing models.
2. Keras: A high-level neural networks API, written in Python and capable of running on top of
TensorFlow, Theano, and CNTK. Keras is designed to be user-friendly, modular, and extensible.
3. PyTorch: An open-source machine learning library based on the Torch library. PyTorch provides a
dynamic computational graph, which allows for more flexible and efficient training of neural
networks.
4. Caffe: A deep learning framework developed by Berkeley AI Research (BAIR). Caffe is optimized for
image recognition tasks and is widely used in computer vision research.
5. Torch: An open-source machine learning library based on the Lua programming language. Torch
provides efficient implementations of common neural network operations and is designed to be easy
to use and extend.
6. MXNet: A flexible and efficient deep learning library developed by Amazon. MXNet supports a
wide range of neural network architectures and is optimized for distributed training.
7. Theano: A Python library for efficient numerical computation, including support for building and
training neural networks. Theano is designed to be fast and efficient, and can run on both CPUs and
GPUs.
8. ONNX: An open format for representing deep learning models that allows models to be
transferred between frameworks. ONNX enables interoperability between deep learning
frameworks, making it easier to use multiple frameworks in a single project.
9. CNTK: The Microsoft Cognitive Toolkit is a deep learning framework developed by Microsoft. CNTK
supports a wide range of neural network architectures and is designed to be scalable and efficient.
10. OpenCV: An open-source computer vision library that includes support for deep learning.
OpenCV can be used for a wide range of computer vision tasks, including object detection, face
recognition, and image segmentation.
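To illustrate how one of the tools listed above is used in practice, the short PyTorch sketch below defines the same kind of three-layer network described earlier; the layer sizes are illustrative assumptions:

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(5, 8),    # input layer (5 features) connected to a hidden layer of 8 neurons
    nn.ReLU(),          # activation function for the hidden layer
    nn.Linear(8, 1),    # hidden layer connected to a single output neuron
    nn.Sigmoid(),       # output expressed as a probability
)

x = torch.rand(1, 5)    # one hypothetical, already-scaled student record
print(model(x))         # predicted probability of good performance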
OBJECTIVE
1. Neurons: The basic building blocks of an artificial neural network. Neurons receive input signals,
process them, and generate output signals.
2. Layers: A group of neurons that are connected to each other. Layers can be used to learn different
features of the input data.
3. Weights: The strength of the connections between neurons. Weights are adjusted during training
to improve the accuracy of the network.
4. Bias: An additional input to a neuron that is used to adjust the output. Bias can help to improve
the accuracy of the network.
5. Activation function: A function that determines the output of a neuron based on its input.
Activation functions can be used to introduce non-linearity into the network.
6. Loss function: A function that measures the difference between the predicted output of the
network and the actual output. Loss functions are used to optimize the weights of the network
during training.
7. Backpropagation: A method for adjusting the weights of the network based on the error between
the predicted output and the actual output.
8. Gradient descent: An optimization algorithm that is used to minimize the loss function during
training.
9. Dropout: A regularization technique that randomly drops out some neurons during training to
prevent overfitting.
10. Convolution: A mathematical operation that is used to extract features from images and other
types of data. Convolutional neural networks (CNNs) use convolution to learn features from images.
These concepts are all fundamental to the design and implementation of artificial neural networks; the brief sketch after this list ties several of them together.
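The sketch below is a rough illustration only; the layer sizes, dropout rate and learning rate are assumptions chosen for the example:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(5,)),
    layers.Dense(16, activation="relu"),    # neurons, weights, bias and an activation function
    layers.Dropout(0.2),                    # dropout: randomly disables neurons during training
    layers.Dense(1, activation="sigmoid"),  # output neuron
])
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.01),  # gradient descent
    loss="binary_crossentropy",                          # loss function to be minimized
)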
This study will educate readers on the design and implementation of artificial neural networks. It will also educate them on how an artificial neural network can be used to predict students' academic performance.
This research will also serve as a resource base for other scholars and researchers interested in carrying out further research in this field; if applied, it will go some way towards providing new explanations of the topic.
This study will cover the mode of operation of Artificial Neural Network and how it can be
used to predict student academic performance.
LIMITATION OF STUDY
Financial constraint: Insufficient funds tend to impede the efficiency of the researcher in sourcing for the relevant materials, literature or information and in the process of data collection (internet, questionnaire and interview).
Time constraint: The researcher will simultaneously engage in this study with other academic work. This will consequently cut down on the time devoted to the research work.
Artificial Neural Networks (ANN) have some limitations in Nigeria, such as:
1. Data quality: ANN requires large amounts of high-quality data to be trained effectively. In
Nigeria, there may be challenges with data quality, completeness, and accuracy, which can
affect the performance of ANN.
2. Infrastructure: ANN requires significant computing resources, such as processing power
and memory, to train and operate effectively. In Nigeria, there may be challenges with
access to high-performance computing resources, which can limit the use of ANN.
3. Expertise: ANN is a complex technology that requires specialized knowledge and skills to
develop and operate effectively. In Nigeria, there may be a shortage of qualified experts
with the necessary skills and knowledge to develop and operate ANN.
4. Cost: ANN can be expensive to develop and operate, particularly if large amounts of data
are required. In Nigeria, there may be constraints on the availability of funding and
resources to support the development and operation of ANN.
These are some of the limitations of using ANN in Nigeria. However, with appropriate
investments in infrastructure and expertise, these limitations can be overcome, and ANN
can be an effective tool for solving complex problems in Nigeria.
CONCLUSION
REFERENCES
Ayan, M.N.R.; Garcia, M.T.C. 2013. Prediction of university students' academic achievement by linear and logistic models. Span. J. Psychol. 11, 275-288.
Kanakana, G.M.; Olanrewaju, A.O. 2011. Predicting student performance in engineering education using an artificial neural network at Tshwane University of Technology. In Proceedings of the International Conference on Industrial Engineering, Systems Engineering and Engineering Management for Sustainable Global Development, Stellenbosch, South Africa, 21-23 September 2011; pp. 1-7.
Romero, C.; Ventura, S. 2007. Educational data mining: A survey from 1995 to 2005. Expert Syst. Appl. 33, 135-146.
Ting, S.R. 2008. Predicting academic success of first-year engineering students from standardized test scores and psychosocial variables. Int. J. Eng. Educ. 17, 75-80.