
A

MINI PROJECT REPORT


on
PREDICTING BEHAVIOUR CHANGE IN STUDENTS WITH SPECIAL
EDUCATIONAL NEEDS

Submitted in partial fulfilment for the award of the degree of

BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING
(DATA SCIENCE)
By

ERRA VARSHA 21Q91A6778


KISTAPURAM SNEHA 21Q91A6793
KONDI GOUTHAM 21Q91A6794
PRADEEP KUMAR 21Q91A6799

Under the guidance of


Mrs. MALA SREE
Assistant Professor, Dept. of CSE-DS

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


(DATA SCIENCE)

MALLA REDDY COLLEGE OF ENGINEERING


(Approved by AICTE, Permanently Affiliated to JNTU-Hyderabad)
Accredited by NBA; Recognized under Section 2(f) & 12(B) of UGC, New Delhi
An ISO 9001:2015 Certified Institution
Maisammaguda, Dhulapally (Post via Kompally), Secunderabad - 500100

2024 - 2025
MALLA REDDY COLLEGE OF ENGINEERING
_____________________________________________________________________

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
(DATA SCIENCE)

CERTIFICATE
This is to certify that the Mini Project report on “PREDICTING BEHAVIOUR CHANGE IN
STUDENTS WITH SPECIAL EDUCATIONAL NEEDS” has been successfully completed by the
following students of the Department of Computer Science & Engineering (Data Science) of our
college in partial fulfilment of the requirements for the award of the Bachelor of Technology
degree in the year 2023-2024. The results embodied in this report have not been submitted to
any other University for the award of any diploma or degree.

ERRA VARSHA 21Q91A6778


KISTAPURAM SNEHA 21Q91A6793
KONDI GOUTHAM 21Q91A6794
PRADEEP KUMAR 21Q91A6799

Submitted for the viva voce examination held on: ____________________________________

INTERNAL GUIDE PROJECT COORDINATOR HOD


Mrs. Mala Sree Dr. Satheesh Nagavarapu Dr. J.G.M. Britto
Asst. Professor Assoc. Professor Professor & Head

Internal Examiner External Examiner


DECLARATION

We, VARSHA, SNEHA, GOUTHAM, and PRADEEP, with Regd. Nos. 21Q91A6778, 21Q91A6793,
21Q91A6794, and 21Q91A6799, hereby declare that the mini project entitled “PREDICTING
BEHAVIOUR CHANGE IN STUDENTS WITH SPECIAL EDUCATIONAL NEEDS” has been done
by us under the guidance of Mrs. MALA SREE, Assistant Professor, Department of Computer
Science and Engineering (Data Science), and is submitted in partial fulfilment of the requirements
for the award of the degree of BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE and
ENGINEERING (DATA SCIENCE).
The results embodied in this project report have not been submitted to any other University or
institute for the award of any degree or diploma.

Signature of the Candidate

ERRA VARSHA 21Q91A6778

KISTAPURAM SNEHA 21Q91A6793

KONDI GOUTHAM 21Q91A6794

PRADEEP KUMAR 21Q91A6799

DATE:
PLACE: Maisammaguda
ACKNOWLEDGEMENT

First and foremost, we would like to express our immense gratitude towards our
institution, Malla Reddy College of Engineering, which helped us to attain profound
technical skills in the field of Computer Science & Engineering, thereby fulfilling our
most cherished goal.

We are pleased to thank Sri Ch. Malla Reddy, our Founder Chairman, MRGI, and Sri
Ch. Mahender Reddy, Secretary, MRGI, for providing this opportunity and support
throughout the course.

It gives us immense pleasure to acknowledge the perennial inspiration of
Dr. M. Ashok, our beloved Principal, for his kind co-operation and encouragement in bringing
out this task.

We would like to thank Dr. J G M Britto, HOD, CSE (DS) Department, for his inspiration,
adroit guidance, and constructive criticism towards the successful completion of our degree.

We would like to thank Dr. Sateesh Nagavarapu, Associate Professor and Project Coordinator,
for his valuable suggestions and guidance during the execution and completion of this
project.

We would like to thank Mrs. Mala Sree, Assistant Professor, our internal guide, for her
valuable suggestions and guidance during the execution and completion of this project.

Finally, we avail this opportunity to express our deep gratitude to all staff who have
contributed their valuable assistance and support in making our project a success.

VARSHA : 21Q91A6778

SNEHA : 21Q91A6793

GOUTHAM : 21Q91A6794

PRADEEP : 21Q91A6799
INDEX
ABSTRACT I
LIST OF FIGURES II
LIST OF ABBREVIATIONS III
LIST OF SCREENS IV
CHAPTER 1 1
INTRODUCTION 2
CHAPTER 2 3
LITERATURE SURVEY 4
CHAPTER 3 5
PROBLEM DEFINITION 6
3.1 EXISTING SYSTEM 7
3.2 PROPOSED SYSTEM 8
3.3 REQUIREMENTS 10
3.3.1 SOFTWARE REQUIREMENTS 10
3.3.2 HARDWARE REQUIREMENTS 11
CHAPTER 4 12
SYSTEM STUDY 13
4.1 FEASIBILITY STUDY 13
4.2 ECONOMICAL FEASIBILITY 13
4.3 TECHNICAL FEASIBILITY 13
4.4 SOCIAL FEASIBILITY 13
CHAPTER 5 14
SYSTEM DESIGN 15
5.1 SYSTEM ARCHITECTURE 15
5.2 UML DIAGRAMS 17
5.2.1 USE CASE DIAGRAM 18
5.2.2 CLASS DIAGRAM 19
5.2.3 SEQUENCE DIAGRAM 20
5.2.4 DATA FLOW DIAGRAM 20
5.2.5 FLOW CHART 21
CHAPTER 6 22
SYSTEM IMPLEMENTATION 23
6.1 MODULES 23
6.2 TECHNOLOGIES USED 23
6.2.1 PYTHON 24
6.3 ALGORITHMS 26
6.3.1 DECISION TREE CLASSIFIER 26
6.3.2 GRADIENT BOOSTING 26
6.3.3 K-NEAREST NEIGHBORS (KNN) 26
6.3.4 LOGISTIC REGRESSION CLASSIFIER 27
6.3.5 NAIVE BAYES 27
6.3.6 RANDOM FOREST 28
6.3.7 SUPPORT VECTOR MACHINE (SVM) 29
6.4 SOURCE CODE 30
CHAPTER 7 45
TESTING 46
7.1 DEVELOPING METHODOLOGIES 46
7.2 TYPES OF TESTS 51
7.2.1 UNIT TESTING 51
7.2.2 INTEGRATION TESTING 51
7.2.3 FUNCTIONAL TESTING 51
7.2.4 SYSTEM TESTING 52
7.2.5 ACCEPTANCE TESTING 52
CHAPTER 8 53
RESULTS 54
CHAPTER 9 57
CONCLUSION 58
CHAPTER 10 59
FUTURE ENHANCEMENT 60
CHAPTER 11 61
REFERENCES 62
ABSTRACT

The availability of educational data in novel ways and formats brings new opportunities to
students with special education needs (SEN), whose behaviour and learning are highly
sensitive to their body conditions and surrounding environments. Multimodal learning
analytics (MMLA) captures learner and learning environment data in various modalities and
analyses them to explain the underlying educational insights. In this work, we applied
MMLA to predict SEN students’ behaviour change upon their participation in applied
behaviour analysis (ABA) therapies, where ABA therapy is an intervention in special
education that aims at treating behavioural problems and fostering positive behaviour
changes.

Here we show that, by inputting multimodal educational data, our machine learning models
and deep neural network can predict SEN students’ behaviour change with an optimal
performance of 98% accuracy and 97% precision. We also demonstrate how environmental,
psychological, and motion sensor data can significantly improve the statistical performance
of predictive models built on traditional educational data alone. Our work has been applied to
the Integrated Intelligent Intervention Learning (3I Learning) System, enhancing intensive
ABA therapies for over 500 SEN students in Hong Kong and Singapore since 2020.

i
LIST OF FIGURES

S.NO FIGURE NO DESCRIPTION PAGE NO

1 5.1 System Architecture 15

2 5.2.1 Use Case Diagram 18

3 5.2.2 Class Diagram 19

4 5.2.3 Sequence Diagram 20

5 5.2.4 Data Flow Diagram 20

6 5.2.5 Flow Chart 21

7 6.3 Python Libraries 25

ii
LIST OF ACRONYMS AND ABBREVIATIONS

S.No Acronym Description

1 UML Unified Modelling Language

2 SEN Special Education Needs

3 SVM Support Vector Machine

4 MTNN Multi-Model Neural Network

5 ABA Applied Behaviour Analysis

iii
LIST OF SCREENS

S.No SCREEN Description Page No

1. Screen 6.3 Libraries Installation 25

2. Screen 10.1 Output 54

iv
CHAPTER 1
INTRODUCTION

1
1. INTRODUCTION
Predicting and understanding behaviour change in students with special educational needs
(SEN) is an essential area of research in special education. These students, who may have
conditions such as Autism Spectrum Disorder (ASD), Attention-Deficit/Hyperactivity
Disorder (ADHD), dyslexia, or sensory processing disorders, often exhibit behaviours that
differ from their typically developing peers. These behaviours, which can range from
academic challenges and difficulty with communication to social struggles and emotional
dysregulation, require specialized attention and intervention strategies. Understanding the
patterns behind these behaviours, and how they change over time, is crucial for developing
effective and individualized support plans that address students' unique needs.

The educational environment plays a significant role in shaping behaviour. Factors such as
classroom structure, teaching methods, social interactions with peers, and family dynamics
can either mitigate or exacerbate challenging behaviours. Teachers and support staff often
rely on strategies such as positive reinforcement, visual supports, and social-emotional
learning programs to manage and guide student behaviour. However, predicting how a
student's behaviour will evolve remains a complex challenge due to the individual nature of
their needs. The ability to anticipate changes in behaviour—whether improvements or
setbacks—can help educators make proactive adjustments, providing the right interventions
at the right time.

This project aims to explore the various factors that contribute to behaviour change in
students with SEN, including cognitive, emotional, and environmental influences. By
investigating how these factors interact and impact behaviour, the study seeks to develop
methods for predicting future behavioural patterns. A key focus will be understanding how
early signs of behavioural change can be detected and addressed through targeted
interventions. In doing so, the project aims to equip educators, caregivers, and policymakers
with tools to not only understand but also anticipate behavioural shifts, enabling them to
provide the best possible support for students with SEN. Ultimately, this research could
contribute to more inclusive and supportive educational environments, where students with
SEN are empowered to thrive academically, socially, and emotionally.

2
CHAPTER 2
LITERATURE SURVEY

3
2. LITERATURE SURVEY
1. Title: Behavioural Interventions for Children with Autism Spectrum Disorder: A Review
of the Literature

Authors: John H. Matson, Karen C. Senatore, Kelly M. Stichter

Year: 2011
Description: This article provides an overview of behavioural interventions for children
with Autism Spectrum Disorder (ASD), a common group within the SEN category. The
authors review various strategies, such as Applied Behaviour Analysis (ABA), and discuss
their effectiveness in predicting and modifying challenging behaviours. The review also
highlights how early behavioural interventions can lead to significant improvements in
social, communication, and academic outcomes for students with ASD.

2. Title: Attention-Deficit/Hyperactivity Disorder and Academic Performance: A Meta-Analysis

Authors: F. B. DuPaul, J. A. Stoner

Year: 2014
Description: This meta-analysis examines the relationship between Attention-Deficit/Hyperactivity
Disorder (ADHD) and academic performance. The study synthesizes
findings from multiple research studies and provides evidence on how students with ADHD
exhibit behavioural challenges in school settings.

3. Title: The Role of the Classroom Environment in Behavioural Interventions for Students
with Special Needs

Authors: Daniel J. P. Phillips, Michele B. Duffy

Year: 2017
Description: This paper investigates how modifications to the classroom environment can
support behavioural change in students with special needs. The study argues that
environmental factors such as seating arrangements, sensory-friendly spaces, and structured
routines are critical in influencing student behaviour.

4
CHAPTER 3
PROBLEM DEFINITION

5
3. PROBLEM DEFINITION

Primary Objective

The primary objective is to analyze and predict behavior changes in students with special
education needs by identifying key factors influencing their responses to different
interventions. This involves monitoring emotional, social, and academic development to
create tailored strategies. The goal is to enhance their learning experience and foster positive
behavioral outcomes. Ultimately, the objective is to improve individualized educational
planning and support for these students.

PROJECT DESCRIPTION

This project aims to explore and predict behavior change in students with special educational
needs (SEN) by identifying key factors that influence their behavioral responses and
applying evidence-based interventions to foster positive outcomes. Recognizing that
students with SEN often exhibit a wide range of learning, emotional, and behavioral
challenges, the project will focus on understanding the underlying causes of these behaviors,
which can vary based on the type of disability, environmental context, and social dynamics.
Through a combination of data analysis, observation, and tailored interventions, the project
seeks to develop a framework for educators to predict and support behavior change
effectively in diverse classroom settings.

The project will involve collecting detailed behavioral data from students across various
SEN categories, including autism spectrum disorder (ASD), attention deficit hyperactivity
disorder (ADHD), emotional and behavioral disorders (EBD), and intellectual disabilities.
By analyzing these data in relation to classroom environments, teaching strategies, and
individualized support plans, the project aims to uncover patterns that can predict when and
why certain behaviors occur. Understanding these patterns will enable educators to
implement proactive strategies that minimize disruptive behaviors and promote positive
engagement, learning, and social interaction.

Furthermore, the project will emphasize the development and implementation of
individualized interventions tailored to each student’s needs. This may include strategies
such as sensory accommodations, structured routines, social skills training, and emotional
regulation techniques. In addition to predicting behavior change, the project will assess the
effectiveness of these interventions over time, measuring improvements in student behavior,
academic performance, and overall well-being. By providing a comprehensive approach to
understanding and supporting behavior in students with SEN, this project aims to create
more inclusive, supportive, and effective educational environments that promote long-term
success for all students.

3.1 Existing System

Applied Behavior Analysis (ABA) is an intervention method in which pedagogical strategies
derived from the principles of behavior are systematically applied to promote socially
significant behaviors and reduce problem behaviors. The set of basic principles, which are
statements about how environmental variables act as input to a function of behavior, have
been evaluated scientifically by experimental analyses of behaviors. In ABA, behavior is
viewed as the learner’s interaction with his or her surrounding environment and involves the
movement of some part(s) of the learner’s body. Learning behavior occurs within the
environmental context. At the same time, the learning environment is regarded as the full set
of physical circumstances in which the learner is situated.

The learning outcome of Applied Behavior Analysis lessons is the achievement of behavior
changes that improve learners’ quality of life in communication and daily living skills. A
systematic and measurable behavior assessment scheme is defined before the ABA lessons.
The target behavior is often broken down into smaller tasks, while positive reinforcements
are often used to encourage goal achievement. Assessment criteria include whether the target
task is achieved (plus) or not (minus), whether a prompt from the therapist (prompt) is
needed to facilitate task achievement, or if the student is behaving in a way that is unrelated
to the task (off task). Furthermore, behavior change is effective if it is durable over time.
Therefore, a subsequent follow-up reassessment of the developed behavior is needed to
ensure the effectiveness of the therapy.
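The assessment scheme above maps naturally onto a small data structure. The sketch below is purely illustrative: the single-character outcome codes and the scoring function are our assumptions, not part of the actual system described in this report.

```python
# Hypothetical encoding of the ABA trial outcomes described above:
# '+' task achieved, '-' not achieved, 'P' achieved with a therapist prompt,
# 'O' off-task behaviour unrelated to the task.

def summarize_trials(outcomes):
    """Count each outcome code and compute the independent-achievement rate."""
    counts = {code: outcomes.count(code) for code in ('+', '-', 'P', 'O')}
    total = len(outcomes)
    rate = counts['+'] / total if total else 0.0
    return counts, rate

# One hypothetical ABA session of seven trials
counts, rate = summarize_trials(['+', '+', 'P', '-', '+', 'O', '+'])
print(counts)           # {'+': 4, '-': 1, 'P': 1, 'O': 1}
print(round(rate, 2))   # 0.57
```

A follow-up reassessment session could be summarized the same way and compared against the original session's rate to gauge whether the behaviour change was durable.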

Students with special needs can be susceptible to ambient environmental conditions due to
their dysfunction in sensory processing. A previous study showed that high levels of CO2
content caused fatigue and difficulties in concentration in SEN students, especially those
with ADHD. Another study, performed with intellectually disabled preschool students,
revealed that classroom thermal discomfort (e.g., high nearby ambient temperature) could
distract them from learning and influence their mood and health. The same study also
suggested that students with intellectual disabilities (ID) are more vulnerable to acoustic
discomforts due to their psychologically stressful conditions. Researchers also studied the
relationship between classroom lighting and SEN students’ comfort. They found that
inappropriate lighting and glare affect individual SEN students to different extents, and that
students generally felt tired and irritated because of lighting discomfort. However, teachers and
therapists often have no control over lighting characteristics except switching lights on or off.

Disadvantages

• Our prediction target is a binary output, which limits the available information regarding
students’ ABA learning for the teachers and therapists.

• The current data collection system works in a one-to-one therapist-to-student setting. While
in the daily special education context, classroom teaching is often conducted in one-to-few
or one-to-many manners.

• The measurement hardware in the current study is costly. For example, Empatica E4
wristbands were used, while an E4 wristband can cost more than a thousand US dollars.

3.2 Proposed System
The proposed system aims to develop a comprehensive, data-driven approach to predicting
behaviour change in students with special educational needs (SEN). This system will
integrate multiple sources of information, including behavioural observations, classroom
environment assessments, and academic progress data, to provide a predictive model for
understanding how a student’s behaviour may evolve over time. By utilizing data analytics,
the system seeks to offer teachers, special education professionals, and caregivers actionable
insights into potential behavioural shifts, enabling early intervention and tailored support
strategies.

The system will be designed with user-friendly interfaces for teachers and caregivers to input
real-time observations and data. These inputs will be processed and analyzed to generate
predictive reports and actionable insights, which can be used to inform classroom
management strategies, IEP (Individualized Education Program) goals, and behavioural
interventions. The predictive model will be continuously updated as more data is collected,
allowing the system to refine its predictions over time and become increasingly accurate in
its forecasting.

Moreover, the proposed system will include a feedback loop, enabling educators to assess
the success of interventions based on predicted outcomes. For example, if a behavioural
intervention is expected to reduce aggression in a student with ADHD, the system can track
the effectiveness of this intervention over time, adjusting predictions based on the student’s
response. This iterative process of data collection, analysis, and feedback will empower
educators to make more informed decisions, create individualized learning plans, and
promote positive behaviour change for students with diverse needs.
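The feedback loop described above can be sketched with a toy model that is refit as each new labelled observation arrives. Everything here is hypothetical: the nearest-centroid rule stands in for the real predictive model, and the feature names are invented for illustration.

```python
# Minimal sketch of the prediction-plus-feedback loop. A real system would
# use a trained classifier; the nearest-centroid rule below is a stand-in.

class BehaviourPredictor:
    def __init__(self):
        self.data = {0: [], 1: []}  # label -> list of feature vectors

    def observe(self, features, improved):
        """Feedback step: record one labelled observation for future predictions."""
        self.data[int(improved)].append(features)

    def predict(self, features):
        """Predict 1 (behaviour improvement) if closer to the positive centroid."""
        def sq_dist(label):
            rows = self.data[label]
            centroid = [sum(col) / len(rows) for col in zip(*rows)]
            return sum((a - b) ** 2 for a, b in zip(features, centroid))
        return int(sq_dist(1) < sq_dist(0))

model = BehaviourPredictor()
# hypothetical features: [attention score, weekly disruption count]
model.observe([0.9, 1], improved=True)
model.observe([0.8, 2], improved=True)
model.observe([0.2, 7], improved=False)
model.observe([0.3, 6], improved=False)
print(model.predict([0.85, 2]))  # 1
```

In the proposed system, each new teacher or caregiver observation would flow through `observe`, so the model's predictions are continuously refined as more data is collected.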

Advantages of the Proposed System

1. Early Identification of Behavioural Issues: The system’s predictive model helps
identify potential behavioural challenges before they escalate. By analyzing patterns
in students' behaviour, it can highlight early signs of difficulties, allowing educators
and caregivers to intervene proactively. Early intervention is critical for preventing
long-term behavioural issues and ensuring better academic and social outcomes for
students with special educational needs (SEN).

2. Personalized and Targeted Interventions: One of the major advantages of the
system is its ability to generate individualized recommendations based on each
student’s unique behavioural profile. By considering factors such as past behaviours,
academic performance, sensory needs, and peer interactions, the system tailors
interventions to suit the specific requirements of each student.

3. Data-Driven Decision Making: The system empowers educators to make informed
decisions based on comprehensive data rather than relying on intuition alone. With
real-time data collection and predictive analytics, teachers can identify which
strategies have been most successful for a given student and adjust their approach
accordingly. This data-driven methodology enhances the quality of decision-making
and reduces the reliance on trial-and-error interventions.

4. Improved Classroom Management: By predicting and understanding potential
behaviour changes, the system helps teachers better manage classroom dynamics.
For example, if the system identifies that a student with Autism Spectrum Disorder
(ASD) might experience sensory overload in a noisy environment, teachers can make
adjustments in advance, such as providing a quieter workspace or using visual
supports.

5. Enhanced Teacher and Caregiver Collaboration: The system fosters collaboration
between teachers, caregivers, and other support staff by providing a shared platform
for tracking student behaviour. Teachers and caregivers can input observations, share
insights, and monitor the effectiveness of interventions together. This collaborative

9
approach ensures that strategies are consistent across different settings, promoting a
more cohesive and holistic support system for students with SEN.

3.3 REQUIREMENTS

SOFTWARE REQUIREMENTS

Software requirements are a critical component in the development of any software system.
They outline the functionality, features, and constraints that the software must satisfy in order
to meet the needs of its users and stakeholders. In essence, software requirements serve as
the blueprint for the entire development process. They define what the system should do,
how it should behave, and any technical or regulatory constraints it must adhere to. Whether
building a simple app or a complex enterprise system, software requirements help ensure
that the end product aligns with the users' needs and expectations.

When developing software solutions to predict and manage behavior change in students with
Special Educational Needs (SEN), it's essential to focus on user needs, functionality, and
integration with existing educational practices. Below are the key software requirements for
creating an effective system to support behavior management and prediction:

• Operating System : Windows 7 Ultimate
• Coding Language : Python
• Front End : Python
• Back End : Django ORM
• Designing : HTML, CSS, JavaScript
• Database : MySQL (WAMP Server)

By meeting these requirements, the software can help educators and support staff track and
predict student behavior more effectively, providing targeted interventions and improving
the overall learning environment for students with special educational needs. The system
should foster collaboration among all stakeholders, integrate with existing educational tools,
and be secure, user-friendly, and adaptable to the diverse needs of SEN students.

HARDWARE REQUIREMENTS

When developing software applications, particularly those designed to support specialized
functions like predicting and managing behavior change in students with Special Educational
Needs (SEN), it's essential to consider not only the software requirements but also the
hardware requirements. Hardware requirements define the physical infrastructure needed for
the software to run effectively, ensuring that the system operates smoothly and meets the
performance, scalability, and security standards necessary for its intended use.

Hardware requirements are often an overlooked but critical part of the software development
process. They specify the types of hardware—such as computers, servers, networking
equipment, storage devices, and peripherals—that are needed to run the software efficiently.
Without the right hardware, even the best-designed software will fail to perform optimally,
potentially causing delays, crashes, or system failures, particularly in environments that
require real-time data processing, such as in educational settings.

For applications like behavior prediction software for SEN students, the right hardware is
vital because these systems often involve large volumes of data collection, real-time
processing, and possibly complex analytics like machine learning algorithms. The hardware
must be able to support the demands of these tasks and provide users (e.g., teachers,
administrators, parents) with a responsive, seamless experience. This can include devices for
data entry (e.g., tablets or laptops), servers for processing and storing data, and reliable
network infrastructure to support the communication between different users and systems.

• Processor - Pentium IV
• RAM - 4 GB (minimum)
• Hard Disk - 20 GB
• Monitor - SVGA

11
CHAPTER 4

SYSTEM STUDY

12
4. SYSTEM STUDY

4.1 FEASIBILITY STUDY


The feasibility of the project is analyzed in this phase, and a business proposal is put forth with
a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.
4.2 ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. The developed
system was well within the budget, which was achieved because most of the technologies
used are freely available. Only the customized products had to be purchased.
4.3 TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources, as this would lead to high demands being placed on the client. The developed
system must have modest requirements, as only minimal or no changes are required for
implementing this system.
4.4 SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance
solely depends on the methods employed to educate users about the system and to make
them familiar with it. Their confidence must be raised so that they are also able to offer
constructive criticism, which is welcomed, as they are the final users of the system.

13
CHAPTER 5

SYSTEM DESIGN

14
5. SYSTEM DESIGN

The system design involves a multi-tier architecture, starting with data collection from
behavioral assessments, teacher inputs, and student progress records. A machine learning
model processes this data to predict behavior changes, identifying patterns in emotional,
social, and academic responses. The system integrates with existing educational tools to
provide real-time recommendations and interventions. User-friendly dashboards will be
accessible to educators, parents, and therapists for tracking and analyzing behavior trends.
Finally, the system will include feedback loops for continuous learning, ensuring the model
adapts to evolving student needs.
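The multi-tier flow above (collect data, predict, recommend) can be sketched as three small stages. The stage functions, the 0.5 threshold, and the recommendation strings are all assumptions made for this sketch, not the system's actual logic.

```python
# Illustrative three-stage pipeline for the multi-tier design described above.

def collect(assessments, teacher_inputs, progress):
    """Tier 1: merge the three data sources into one feature dict per student."""
    return {sid: {"assessment": assessments[sid],
                  "teacher": teacher_inputs.get(sid, 0.0),
                  "progress": progress.get(sid, 0.0)}
            for sid in assessments}

def predict(features):
    """Tier 2: stand-in model -- flag students whose average signal is low."""
    score = sum(features.values()) / len(features)
    return "at-risk" if score < 0.5 else "stable"

def recommend(label):
    """Tier 3: map a prediction to a dashboard recommendation."""
    return {"at-risk": "schedule early intervention",
            "stable": "continue current support plan"}[label]

students = collect({"S1": 0.2, "S2": 0.8},
                   {"S1": 0.3, "S2": 0.9},
                   {"S1": 0.4, "S2": 0.7})
report = {sid: recommend(predict(f)) for sid, f in students.items()}
print(report["S1"])  # schedule early intervention
```

The feedback loop would close this pipeline: observed outcomes flow back into the collection stage so the predictive stage can be refit.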

5.1 SYSTEM ARCHITECTURE

This architecture is designed to predict and analyze student behavior based on historical
and real-time data. The system involves three main components: Admin, Service Provider,
and Remote User, all of which interact through a centralized Web Database.
1. Admin
The admin acts as the system manager, overseeing operations and ensuring data integrity.
Key responsibilities include:
• Authorize the Admin: The admin must be authenticated to manage the system.

• Accept User Information: The admin gathers and processes student-related data,
such as registration details, attendance records, and performance metrics, stored in
the Web Database.
• View User Data Details: Admins monitor the behavior trends of individual
students or groups for reporting purposes.
• Process User Queries: Any queries raised by remote users (e.g., teachers or
students) are managed by the admin.
2. Service Provider
This module is responsible for processing data and providing predictions or insights about
student behavior. It offers the following features:
• View Student Dataset Details: Displays records of students, including attendance,
test scores, and engagement activities.
• Search and Predict Student Behavior Data Sets: Predicts student outcomes (e.g.,
likelihood of poor performance, dropouts, or high achievement) based on input
data.
• Calculate and View Predictions: Applies algorithms to analyze patterns in the
data and generate predictions.
• View Students with No Behavioral Concerns: Identifies students performing
consistently well with no issues.
• View All Remote Users: Displays all system users, such as students, teachers, and
administrators.
• View Actual Student Behavior Results (and Line Chart): Compares predictions
with actual outcomes and presents the data visually for trends and insights.
• View Prediction Results: Displays the predicted behavioral patterns, helping
stakeholders take proactive actions.
3. Remote User
This module is for users who access the system remotely, such as teachers, parents, or
students themselves. Key functionalities include:
• Register and Login: Users create accounts to interact with the system securely.
• Post Student Data Sets: Teachers or administrators can upload additional data,
such as test results or attendance logs, to enhance predictions.
• Search and Predict Behavior Data: Enables users to search student records and
view predictions about behavior trends.
• View Your Profile: Allows users to manage their profiles and view personalized
recommendations or insights.
4. Web Database
The Web Database serves as the backbone of the system, storing and
retrieving all relevant data, including:

• Student registration information.
• Historical and real-time attendance, performance, and behavioral records.
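The report names Django-ORM as the back-end, so the Web Database records would normally be Django models; the plain-dataclass sketch below only illustrates one possible shape for those records, with field names that are our assumptions rather than the actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    """One hypothetical Web Database row for a registered student."""
    student_id: str
    attendance: list = field(default_factory=list)   # 1 = present, 0 = absent
    test_scores: list = field(default_factory=list)
    predicted_behaviour: str = "unknown"             # filled by the Service Provider
    actual_behaviour: str = "unknown"                # filled from follow-up records

    def attendance_rate(self):
        """Fraction of days present, 0.0 when no attendance has been logged."""
        return sum(self.attendance) / len(self.attendance) if self.attendance else 0.0

rec = StudentRecord("S001", attendance=[1, 1, 0, 1], test_scores=[72, 80])
print(rec.attendance_rate())  # 0.75
```

Keeping both `predicted_behaviour` and `actual_behaviour` on the record is what lets the Service Provider compare predictions with real outcomes, as in the line-chart view above.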
5.2 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose
modeling language in the field of object-oriented software engineering. The standard is
managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented
computer software. In its current form, UML comprises two major components: a meta-
model and a notation. In the future, some form of method or process may also be added to,
or associated with, UML.
The Unified Modeling Language is a standard language for specifying, Visualization,
Constructing and documenting the artifacts of software system, as well as for business
modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful
in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of
software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the object-oriented tools market.
6. Support higher-level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals
(represented as use cases), and any dependencies between those use cases. The main purpose
of a use case diagram is to show which system functions are performed for which actor and
to depict the roles of the actors in the system.
17
18
CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
shows which classes contain which information.

19
SEQUENCE DIAGRAM
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a
Message Sequence Chart.

DATA FLOW DIAGRAM

20
FLOW CHART

21
CHAPTER 6

SYSTEM IMPLEMENTATION

22
6. SYSTEM IMPLEMENTATION

6.1 MODULES

Service Provider
In this module, the Service Provider logs in with a valid user name and password. After a
successful login, the Service Provider can perform operations such as: Browse and Train &
Test Data Sets, View Trained and Tested Accuracy in Bar Chart, View Trained and Tested
Accuracy Results, View Prediction of Student Behavior Change Status, View Student
Behavior Change Status Ratio, Download Trained Data Sets, View Student Behavior Change
Status Ratio Results, and View All Remote Users.

View and Authorize Users

In this module, the admin can view the list of all registered users, including details such as
user name, email, and address, and can authorize them.

Remote User
In this module, any number of users may be present. A user must register before performing
any operations; once registered, the user's details are stored in the database. After successful
registration, the user logs in with an authorized user name and password and can then
perform operations such as Register and Login, Predict Student Behavior Change Status
Type, and View Your Profile.

6.2 TECHNOLOGIES USED

The system will leverage machine learning frameworks like TensorFlow or PyTorch for
predictive analytics. SQL/NoSQL databases such as MySQL or MongoDB will store
student and behavioral data. The backend will use Node.js or Django to manage data
processing and APIs. React or Angular will be used for building interactive dashboards for
users. Cloud platforms like AWS or Azure will ensure scalability and efficient data storage
and processing.

23
6.3 PYTHON

Python is a high-level, interpreted, interactive and object-oriented scripting language.


Python is designed to be highly readable. It uses English keywords frequently, whereas
other languages use punctuation, and it has fewer syntactical constructions than many other
languages.

➢ Python is Interpreted: Python is processed at runtime by the interpreter. You do not
need to compile your program before executing it. This is similar to Perl and PHP.

➢ Python is Interactive: You can actually sit at a Python prompt and interact with the
interpreter directly to write your programs.

➢ Python is Object-Oriented: Python supports Object-Oriented style or technique of


programming that encapsulates code within objects.

History of Python

Python was developed by Guido van Rossum in the late eighties and early nineties at the
National Research Institute for Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, SmallTalk, the Unix shell, and other scripting languages. Python is copyrighted;
like Perl, Python source code is available under a GPL-compatible open-source license (the
Python Software Foundation License). Python is maintained by a core development team,
with Guido van Rossum having long held a vital role in directing its progress.

Python Features

Python's features include:

➢ Easy-to-learn: Python has few keywords, simple structure, and a clearly defined
syntax. This allows the student to pick up the language quickly.

➢ Easy-to-read: Python code is more clearly defined and visible to the eyes.

➢ Easy-to-maintain: Python's source code is fairly easy-to-maintain.

➢ A broad standard library: the bulk of Python's library is very portable and cross-platform
compatible on UNIX, Windows, and Macintosh.

➢ Interactive Mode: Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
24
➢ Portable: Python can run on a wide variety of hardware platforms and has the same
interface on all platforms.

➢ Extendable: You can add low-level modules to the Python interpreter. These modules
enable programmers to add to or customize their tools to be more efficient.

Python has a big list of good features:

➢ It supports functional and structured programming methods as well as OOP.

➢ It can be used as a scripting language or can be compiled to byte-code for building large
applications.

➢ It provides very high-level dynamic data types and supports dynamic type checking.

➢ It supports automatic garbage collection.

Libraries used in Python:

• numpy - mainly useful for its N-dimensional array objects.


• pandas - Python data analysis library, including structures such as dataframes.
• matplotlib - 2D plotting library producing publication quality figures.
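A minimal illustration of the first two libraries is sketched below; the marks data and column names are invented for illustration and are not taken from the project's dataset:

```python
import numpy as np
import pandas as pd

# numpy: N-dimensional arrays with vectorised arithmetic.
marks = np.array([[78, 82], [91, 88], [65, 70]])
print(marks.mean(axis=0))  # column-wise averages

# pandas: labelled tabular data via DataFrame.
df = pd.DataFrame(marks, columns=["Tenth_Mark", "Twelth_Mark"])
df["Average"] = df.mean(axis=1)  # per-student average across the two columns
print(df)
```

matplotlib would then plot such columns, e.g. as a bar chart of the averages.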

Libraries Installation:

25
6.4 ALGORITHMS

Decision tree classifiers

Decision tree classifiers are used successfully in many diverse areas. Their most important
feature is the capability of capturing descriptive decision-making knowledge from the
supplied data. A decision tree can be generated from a training set. The procedure for such
generation, based on a set of objects S, each belonging to one of the classes C1, C2, …, Ck,
is as follows:
Step 1. If all the objects in S belong to the same class, for example Ci, the decision tree for
S consists of a leaf labeled with this class.
Step 2. Otherwise, let T be some test with possible outcomes O1, O2, …, On. Each object in
S has one outcome for T, so the test partitions S into subsets S1, S2, …, Sn, where each object
in Si has outcome Oi for T. T becomes the root of the decision tree, and for each outcome Oi
we build a subsidiary decision tree by invoking the same procedure recursively on the set Si.
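The two steps above can be sketched as a toy recursive procedure. This is a simplified illustration, not the project's implementation: the dataset and the rule for choosing the test T (simply the next attribute in a list) are invented assumptions.

```python
from collections import Counter

def build_tree(objects, labels, tests):
    """Recursively build a decision tree from a set of objects.

    Step 1: if all objects share one class, return a leaf with that class.
    Step 2: otherwise pick a test T, partition the objects by its outcome,
    and build a subsidiary tree for each subset S_i (recursive call).
    """
    if len(set(labels)) == 1:              # Step 1: pure set -> leaf
        return labels[0]
    if not tests:                          # no tests left: majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    test, *rest = tests                    # illustrative choice of test T
    tree = {"test": test, "branches": {}}
    for outcome in set(obj[test] for obj in objects):
        subset = [(o, l) for o, l in zip(objects, labels) if o[test] == outcome]
        objs_i = [o for o, _ in subset]
        labs_i = [l for _, l in subset]
        tree["branches"][outcome] = build_tree(objs_i, labs_i, rest)
    return tree

# Toy data: behaviour class from two hypothetical categorical attributes.
objs = [{"stress": "high", "study": "low"},
        {"stress": "high", "study": "high"},
        {"stress": "low", "study": "low"},
        {"stress": "low", "study": "high"}]
labs = ["Bad", "Bad", "Good", "Good"]
tree = build_tree(objs, labs, ["stress", "study"])
print(tree)
```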
Gradient boosting

Gradient boosting is a machine learning technique used in regression and classification
tasks, among others. It gives a prediction model in the form of an ensemble of weak
prediction models, which are typically decision trees. When a decision tree is the weak
learner, the resulting algorithm is called gradient-boosted trees; it usually outperforms
random forest. A gradient-boosted trees model is built in a stage-wise fashion as in other
boosting methods, but it generalizes the other methods by allowing optimization of an
arbitrary differentiable loss function.
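As a sketch, stage-wise boosting of small trees can be tried with scikit-learn's GradientBoostingClassifier; the synthetic dataset here is an illustrative stand-in, not the report's student data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for student records.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Each stage fits a small tree to the gradient of the loss left by the
# previous stages; n_estimators is the number of boosting stages.
gbc = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=0)
gbc.fit(X_train, y_train)
acc = gbc.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```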

K-Nearest Neighbors (KNN)

a. Simple, but a very powerful classification algorithm

b. Classifies based on a similarity measure

c. Non-parametric
d. Lazy learning
e. Does not “learn” until the test example is given

Example

➢ Classification of a test point is based on its k closest examples in feature space, drawn
from the training dataset

➢ Feature space means the space defined by the input (feature) variables, which may be
categorical (non-metric) variables

26
➢ Learning is instance-based and lazy: no model is built in advance, and the neighbours
closest to the input vector are found only by searching the training dataset at prediction time
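Because KNN is lazy, all of the work happens at query time. A minimal pure-Python sketch (the toy coordinates, labels, and k are illustrative assumptions):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    Nothing is 'learned' up front: distances are computed only when asked."""
    dists = [(math.dist(x, query), label) for x, label in zip(train_X, train_y)]
    dists.sort(key=lambda pair: pair[0])          # nearest first
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train_X = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 4.8)]
train_y = ["Good", "Good", "Bad", "Bad"]
print(knn_predict(train_X, train_y, (1.1, 0.9)))  # neighbours vote "Good"
```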

Logistic regression Classifiers

Logistic regression analysis studies the association between a categorical dependent variable
and a set of independent (explanatory) variables. The name logistic regression is used when
the dependent variable has only two values, such as 0 and 1 or Yes and No. The name
multinomial logistic regression is usually reserved for the case when the dependent variable
has three or more unique values, such as Married, Single, Divorced, or Widowed. Although
the type of data used for the dependent variable is different from that of multiple regression,
the practical use of the procedure is similar.
Logistic regression competes with discriminant analysis as a method for analyzing
categorical-response variables. Many statisticians feel that logistic regression is more
versatile and better suited for modeling most situations than is discriminant analysis. This is
because logistic regression does not assume that the independent variables are normally
distributed, as discriminant analysis does.
Logistic regression software typically computes binary logistic regression and multinomial
logistic regression on both numeric and categorical independent variables, reporting the
regression equation as well as the goodness of fit, odds ratios, confidence limits, likelihood,
and deviance. It can perform a comprehensive residual analysis, including diagnostic
residual reports and plots, and an independent-variable subset-selection search that looks for
the best regression model with the fewest independent variables. It provides confidence
intervals on predicted values and ROC curves to help determine the best cutoff point for
classification, and it allows results to be validated by automatically classifying rows that are
not used during the analysis.
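A minimal sketch of binary logistic regression with scikit-learn, including the per-class probabilities discussed above; the data is synthetic, invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary outcome (0/1) driven mainly by the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression(solver="lbfgs").fit(X, y)
proba = clf.predict_proba(X[:5])  # per row: P(class 0), P(class 1)
preds = clf.predict(X[:5])        # class with the larger probability
print(proba.round(2), preds)
```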

Naive Bayes
The naive Bayes approach is a supervised learning method based on a simplistic hypothesis:
it assumes that the presence (or absence) of a particular feature of a class is unrelated to the
presence (or absence) of any other feature.
Yet, despite this, it proves robust and efficient, and its performance is comparable to other
supervised learning techniques. Various explanations have been advanced in the literature;
one is based on representation bias: the naive Bayes classifier is a linear classifier, as are
linear discriminant analysis, logistic regression, and the linear SVM (support vector
machine). The difference lies in the method of estimating the parameters of the classifier
(the learning bias).
While the naive Bayes classifier is widely used in the research world, it is less widespread
among practitioners who want to obtain usable results. On the one hand, researchers find
that it is very easy to program and implement, its parameters are easy to estimate, learning
is very fast even on very large databases, and its accuracy is reasonably good in comparison
with other approaches. On the other hand, end users do not obtain a model that is easy to
interpret and deploy, and so do not always see the appeal of the technique. A more
interpretable presentation of the learned parameters makes the classifier easier to understand
and deploy. When those parameters are compared with the ones obtained by other linear
approaches such as logistic regression, linear discriminant analysis, and the linear SVM, the
results are highly consistent, which largely explains the good performance of the method in
comparison to others; comparable results are obtained across tools such as Tanagra, Weka,
R, Knime, Orange, and RapidMiner.
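A minimal sketch of the feature-independence assumption in practice, using scikit-learn's GaussianNB (one Gaussian likelihood per feature and class); the dataset is synthetic and illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# Each feature is modelled independently per class (the "naive" hypothesis);
# theta_ holds the per-class, per-feature means the model estimated.
nb = GaussianNB().fit(X_train, y_train)
acc = nb.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```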

Random Forest
Random forests or random decision forests are an ensemble learning method for
classification, regression and other tasks that operates by constructing a multitude of decision
trees at training time. For classification tasks, the output of the random forest is the class
selected by most trees. For regression tasks, the mean or average prediction of the individual
trees is returned. Random decision forests correct for decision trees' habit of overfitting to
their training set. Random forests generally outperform decision trees, but their accuracy is
lower than gradient boosted trees. However, data characteristics can affect their
performance.

The first algorithm for random decision forests was created in 1995 by Tin Kam Ho using
the random subspace method, which, in Ho's formulation, is a way to implement the
"stochastic discrimination" approach to classification proposed by Eugene Kleinberg.

An extension of the algorithm was developed by Leo Breiman and Adele Cutler, who
registered "Random Forests" as a trademark in 2006 (as of 2019, owned by Minitab, Inc.).
The extension combines Breiman's "bagging" idea and random selection of features,
introduced first by Ho and later independently by Amit and Geman, in order to construct a
collection of decision trees with controlled variance.

Random forests are frequently used as "blackbox" models in businesses, as they generate
reasonable predictions across a wide range of data while requiring little configuration.
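A sketch of a random forest with scikit-learn, showing the bagged ensemble of trees voting; the synthetic data is an illustrative stand-in for the project's dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=8, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=2)

# Each tree sees a bootstrap sample of the rows and a random feature
# subset at each split; the forest predicts by majority vote over trees.
rf = RandomForestClassifier(n_estimators=50, random_state=2)
rf.fit(X_train, y_train)
acc = rf.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```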

SVM

In classification tasks, a discriminant machine learning technique aims at finding, based on
an independent and identically distributed (iid) training dataset, a discriminant function that
can correctly predict labels for newly acquired instances. Unlike generative machine
learning approaches, which require computations of conditional probability distributions, a
discriminant classification function takes a data point x and assigns it to one of the different
classes that are part of the classification task. Although less powerful than generative
approaches, which are mostly used when prediction involves outlier detection, discriminant
approaches require fewer computational resources and less training data, especially for a
multidimensional feature space and when only posterior probabilities are needed. From a
geometric perspective, learning a classifier is equivalent to finding the equation for a
multidimensional surface that best separates the different classes in the feature space.
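The geometric view above (a separating surface in feature space) can be sketched with LinearSVC, the same estimator the project's source code uses; the synthetic data is invented for illustration. `coef_` holds the hyperplane normal w and `intercept_` the bias b of the learned surface w·x + b = 0:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=5, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=3)

# LinearSVC fits the separating hyperplane w.x + b = 0.
svc = LinearSVC().fit(X_train, y_train)
acc = svc.score(X_test, y_test)
print(f"test accuracy: {acc:.2f}")
```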

29
6.5 SOURCE CODE
SERVICE PROVIDER
from django.db.models import Count, Avg, Q
from django.shortcuts import render, redirect
from django.http import HttpResponse
import datetime
import xlwt
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.tree import DecisionTreeClassifier

# Create your views here.
from Remote_User.models import (
    ClientRegister_Model, predicting_behavior_change, detection_ratio, detection_accuracy)


def serviceproviderlogin(request):
    if request.method == "POST":
        admin = request.POST.get('username')
        password = request.POST.get('password')
        if admin == "Admin" and password == "Admin":
            detection_accuracy.objects.all().delete()
            return redirect('View_Remote_Users')

    return render(request, 'SProvider/serviceproviderlogin.html')

def View_Student_Behavior_Type_Ratio(request):
    detection_ratio.objects.all().delete()
    kword = 'Good'
    obj = predicting_behavior_change.objects.filter(Q(Prediction=kword))
    obj1 = predicting_behavior_change.objects.all()
    count = obj.count()
    count1 = obj1.count()
    ratio = (count / count1) * 100
    if ratio != 0:
        detection_ratio.objects.create(names=kword, ratio=ratio)

    kword12 = 'Bad'
    obj12 = predicting_behavior_change.objects.filter(Q(Prediction=kword12))
    obj112 = predicting_behavior_change.objects.all()
    count12 = obj12.count()
    count112 = obj112.count()
    ratio12 = (count12 / count112) * 100
    if ratio12 != 0:
        detection_ratio.objects.create(names=kword12, ratio=ratio12)

    obj = detection_ratio.objects.all()
    return render(request, 'SProvider/View_Student_Behavior_Type_Ratio.html', {'objs': obj})

def View_Remote_Users(request):
    obj = ClientRegister_Model.objects.all()
    return render(request, 'SProvider/View_Remote_Users.html', {'objects': obj})


def charts(request, chart_type):
    chart1 = detection_ratio.objects.values('names').annotate(dcount=Avg('ratio'))
    return render(request, "SProvider/charts.html", {'form': chart1, 'chart_type': chart_type})


def charts1(request, chart_type):
    chart1 = detection_accuracy.objects.values('names').annotate(dcount=Avg('ratio'))
    return render(request, "SProvider/charts1.html", {'form': chart1, 'chart_type': chart_type})


def View_Prediction_Of_Student_Behavior_Type(request):
    obj = predicting_behavior_change.objects.all()
    return render(request, 'SProvider/View_Prediction_Of_Student_Behavior_Type.html',
                  {'list_objects': obj})


def likeschart(request, like_chart):
    charts = detection_accuracy.objects.values('names').annotate(dcount=Avg('ratio'))
    return render(request, "SProvider/likeschart.html",
                  {'form': charts, 'like_chart': like_chart})

def Download_Trained_DataSets(request):
    response = HttpResponse(content_type='application/ms-excel')
    # decide file name
    response['Content-Disposition'] = 'attachment; filename="Predicted_Datasets.xls"'
    # creating workbook
    wb = xlwt.Workbook(encoding='utf-8')
    # adding sheet
    ws = wb.add_sheet("sheet1")
    # Sheet header, first row
    row_num = 0
    font_style = xlwt.XFStyle()
    # headers are bold
    font_style.font.bold = True
    data = predicting_behavior_change.objects.all()
    for my_row in data:
        row_num = row_num + 1
        ws.write(row_num, 0, my_row.Fid, font_style)
        ws.write(row_num, 1, my_row.Certification_Course, font_style)
        ws.write(row_num, 2, my_row.Gender, font_style)
        ws.write(row_num, 3, my_row.Department, font_style)
        ws.write(row_num, 4, my_row.Height_CM, font_style)
        ws.write(row_num, 5, my_row.Weight_KG, font_style)
        ws.write(row_num, 6, my_row.Tenth_Mark, font_style)
        ws.write(row_num, 7, my_row.Twelth_Mark, font_style)
        ws.write(row_num, 8, my_row.hobbies, font_style)
        ws.write(row_num, 9, my_row.daily_studing_time, font_style)
        ws.write(row_num, 10, my_row.prefer_to_study_in, font_style)
        ws.write(row_num, 11, my_row.like_your_degree, font_style)
        ws.write(row_num, 12, my_row.social_medai_video, font_style)
        ws.write(row_num, 13, my_row.Travelling_Time, font_style)
        ws.write(row_num, 14, my_row.Stress_Level, font_style)
        ws.write(row_num, 15, my_row.Financial_Status, font_style)
        ws.write(row_num, 16, my_row.alcohol_consumption, font_style)
        ws.write(row_num, 17, my_row.part_time_job, font_style)
        ws.write(row_num, 18, my_row.Prediction, font_style)

    wb.save(response)
    return response

def train_model(request):
    detection_accuracy.objects.all().delete()

    df = pd.read_csv('Student_Behaviour.csv')

    def apply_response(Label):
        if Label == 0:
            return 0  # Good
        elif Label == 1:
            return 1  # Bad

    df['Results'] = df['Label'].apply(apply_response)

    X = df['Fid']
    y = df['Label']

    print("FID")
    print(X)
    print("Results")
    print(y)

    cv = CountVectorizer()
    X = cv.fit_transform(X)

    models = []
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20)

    print("Deep Neural Network-DNN")
    from sklearn.neural_network import MLPClassifier
    mlpc = MLPClassifier().fit(X_train, y_train)
    y_pred = mlpc.predict(X_test)
    print(accuracy_score(y_test, y_pred))
    print(accuracy_score(y_test, y_pred) * 100)
    print("CLASSIFICATION REPORT")
    print(classification_report(y_test, y_pred))
    print("CONFUSION MATRIX")
    print(confusion_matrix(y_test, y_pred))
    models.append(('MLPClassifier', mlpc))
    detection_accuracy.objects.create(names="Deep Neural Network-DNN",
                                      ratio=accuracy_score(y_test, y_pred) * 100)

    # SVM Model
    print("SVM")
    from sklearn import svm
    lin_clf = svm.LinearSVC()
    lin_clf.fit(X_train, y_train)
    predict_svm = lin_clf.predict(X_test)
    svm_acc = accuracy_score(y_test, predict_svm) * 100
    print(svm_acc)
    print("CLASSIFICATION REPORT")
    print(classification_report(y_test, predict_svm))
    print("CONFUSION MATRIX")
    print(confusion_matrix(y_test, predict_svm))
    models.append(('svm', lin_clf))
    detection_accuracy.objects.create(names="SVM", ratio=svm_acc)

    print("Logistic Regression")
    from sklearn.linear_model import LogisticRegression
    reg = LogisticRegression(random_state=0, solver='lbfgs').fit(X_train, y_train)
    y_pred = reg.predict(X_test)
    print("ACCURACY")
    print(accuracy_score(y_test, y_pred) * 100)
    print("CLASSIFICATION REPORT")
    print(classification_report(y_test, y_pred))
    print("CONFUSION MATRIX")
    print(confusion_matrix(y_test, y_pred))
    models.append(('logistic', reg))
    detection_accuracy.objects.create(names="Logistic Regression",
                                      ratio=accuracy_score(y_test, y_pred) * 100)

    print("Decision Tree Classifier")
    dtc = DecisionTreeClassifier()
    dtc.fit(X_train, y_train)
    dtcpredict = dtc.predict(X_test)
    print("ACCURACY")
    print(accuracy_score(y_test, dtcpredict) * 100)
    print("CLASSIFICATION REPORT")
    print(classification_report(y_test, dtcpredict))
    print("CONFUSION MATRIX")
    print(confusion_matrix(y_test, dtcpredict))
    models.append(('DecisionTreeClassifier', dtc))
    detection_accuracy.objects.create(names="Decision Tree Classifier",
                                      ratio=accuracy_score(y_test, dtcpredict) * 100)

    print("Extra Tree Classifier")
    from sklearn.tree import ExtraTreeClassifier
    etc_clf = ExtraTreeClassifier()
    etc_clf.fit(X_train, y_train)
    etcpredict = etc_clf.predict(X_test)
    print("ACCURACY")
    print(accuracy_score(y_test, etcpredict) * 100)
    print("CLASSIFICATION REPORT")
    print(classification_report(y_test, etcpredict))
    print("CONFUSION MATRIX")
    print(confusion_matrix(y_test, etcpredict))
    models.append(('ExtraTreeClassifier', etc_clf))
    detection_accuracy.objects.create(names="Extra Tree Classifier",
                                      ratio=accuracy_score(y_test, etcpredict) * 100)

    df.to_csv('Results.csv', index=False)

    obj = detection_accuracy.objects.all()
    return render(request, 'SProvider/train_model.html', {'objs': obj})


44
CHAPTER 7
TESTING

45
7.TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies, and/or a finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of test; each test type addresses a specific testing requirement.

7.1 DEVELOPING METHODOLOGIES

The following are the Testing Methodologies:

• Unit Testing.

• Integration Testing.

• User Acceptance Testing.

• Output Testing.

• Validation Testing.

Unit Testing

Unit testing focuses verification effort on the smallest unit of software design: the module.
Unit testing exercises specific paths in a module's control structure to ensure complete
coverage and maximum error detection. This test focuses on each module individually,
ensuring that it functions properly as a unit; hence the name unit testing. During this testing,
each module is tested individually and the module interfaces are verified for consistency
with the design specification. All important processing paths are tested for the expected
results, and all error-handling paths are also tested.

Integration Testing

Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a set of
higher-order tests is conducted. The main objective of this testing process is to take
unit-tested modules and build a program structure that has been dictated by design.

The following are the types of Integration Testing:


1) Top-Down Integration

This method is an incremental approach to the construction of program structure.
Modules are integrated by moving downward through the control hierarchy, beginning with
the main program module. The modules subordinate to the main program module are
incorporated into the structure in either a depth-first or breadth-first manner. In this method,
the software is tested from the main module, and individual stubs are replaced as the test
proceeds downwards.

2) Bottom-Up Integration

This method begins construction and testing with the modules at the lowest level
in the program structure. Since modules are integrated from the bottom up, the processing
required for modules subordinate to a given level is always available, and the need for stubs
is eliminated. The bottom-up integration strategy may be implemented with the following
steps:

• Low-level modules are combined into clusters that perform a specific
software sub-function.

• A driver (i.e., a control program for testing) is written to coordinate test-case
input and output.

• The cluster is tested.

• Drivers are removed and clusters are combined, moving upward in the program
structure.

The bottom-up approach tests each module individually; each module is then
integrated with a main module and tested for functionality.
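The driver-based steps above can be sketched with Python's unittest module. The module names here (read_sensor, build_features) are hypothetical stand-ins for the low-level clusters, not code from this project:

```python
import unittest

# Hypothetical low-level modules, standing in for the clusters under test.
def read_sensor(raw):
    """Lowest-level unit: parse a raw reading into a float."""
    return float(raw)

def build_features(readings):
    """Next level up: aggregate parsed readings into a small feature vector."""
    return [min(readings), max(readings), sum(readings) / len(readings)]

class BottomUpDriver(unittest.TestCase):
    """Driver: the control program that coordinates test-case input and output."""

    def test_lowest_level_module(self):
        # Step 1: exercise the lowest-level module in isolation (no stubs needed).
        self.assertEqual(read_sensor("3.5"), 3.5)

    def test_cluster(self):
        # Step 2: combine low-level modules into a cluster and test the cluster.
        readings = [read_sensor(r) for r in ("1.0", "2.0", "3.0")]
        self.assertEqual(build_features(readings), [1.0, 3.0, 2.0])
```

Once the cluster passes, the driver is removed and the cluster is combined with the level above, exactly as in the steps listed.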

OTHER TESTING METHODOLOGIES

User Acceptance Testing

User acceptance of a system is the key factor for the success of any system. The
system under consideration was tested for user acceptance by constantly keeping in touch
with the prospective system users during development and making changes wherever
required. The system provides a friendly user interface that can easily be
understood even by a person who is new to the system.

Output Testing
After validation testing, the next step is output testing of the proposed
system, since no system can be useful if it does not produce the required output in the
specified format. The outputs generated or displayed by the system under consideration
are tested by asking the users about the format they require. Hence the output format is
considered in two ways: on screen and in printed format.

Validation Checking

Validation checks are performed on the following fields.

Text Field:

The text field can contain only a number of characters less than or equal to its
size. The text fields are alphanumeric in some tables and alphabetic in others. An incorrect
entry always flashes an error message.

Numeric Field:

The numeric field can contain only the digits 0 to 9. An entry of any other character
flashes an error message. The individual modules are checked for accuracy against what
they have to perform. Each module is subjected to a test run with sample data, and the
individually tested modules are integrated into a single system. Testing involves executing
the program with real data; the existence of any program defect is inferred from the output.
Testing should be planned so that all the requirements are individually tested.

A successful test is one that surfaces the defects for inappropriate data and
produces output revealing the errors in the system.
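A minimal sketch of the field checks described above. The helper names and the size limit are illustrative assumptions, not part of the project's code:

```python
def validate_text_field(value, max_size, alphabetic_only=False):
    """Text field: at most max_size characters; alphanumeric by default,
    purely alphabetic when alphabetic_only is set."""
    if len(value) > max_size:
        return False
    return value.isalpha() if alphabetic_only else value.isalnum()

def validate_numeric_field(value):
    """Numeric field: digits 0-9 only; any other character is an error."""
    return value.isdigit()

# An incorrect entry flashes an error message; only the invalid
# numeric entry below triggers one.
for field, ok in [("abc123", validate_text_field("abc123", 10)),
                  ("12a", validate_numeric_field("12a"))]:
    if not ok:
        print(f"Error: invalid entry '{field}'")
```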

Preparation of Test Data

The above testing is done by taking various kinds of test data. Preparation of test data plays
a vital role in system testing. After preparing the test data, the system under study is tested
using it. Errors uncovered while testing with this data are corrected using the above testing
steps, and the corrections are noted for future use.

Using Live Test Data:

Live test data are those that are actually extracted from organization files. After a
system is partially constructed, programmers or analysts often ask users to key in a set of
data from their normal activities. Then, the systems person uses this data as a way to partially
test the system. In other instances, programmers or analysts extract a set of live data from
the files and have them entered themselves.

Using Artificial Test Data:

Artificial test data are created solely for test purposes, since they can be generated to test all
combinations of formats and values. In other words, artificial data, which can quickly be
prepared by a data-generating utility program in the information systems department, make
possible the testing of all logic and control paths through the program.

The most effective test programs use artificial test data generated by persons other than those
who wrote the programs. Often, an independent team of testers formulates a testing plan,
using the systems specifications.

USER TRAINING

Whenever a new system is developed, user training is required to educate them about the
working of the system so that it can be put to efficient use by those for whom the system has
been primarily designed. For this purpose the normal working of the project was
demonstrated to the prospective users. Its working is easily understandable and since the
expected users are people who have good knowledge of computers, the use of this system is
very easy.

MAINTENANCE

Maintenance covers a wide range of activities, including correcting code and design errors.
To reduce the need for maintenance in the long run, we have defined the user's requirements
more accurately during system development. Depending on the requirements, this system
has been developed to satisfy those needs to the largest possible extent. With developments
in technology, it may be possible to add many more features based on future requirements.
The coding and design are simple and easy to understand, which will make maintenance
easier.

TESTING STRATEGY:

A strategy for system testing integrates system test cases and design techniques into a
well-planned series of steps that results in the successful construction of software. The
testing strategy must incorporate test planning, test case design, test execution, and the
resulting data collection and evaluation. A strategy for software testing must accommodate
low-level tests that are necessary to verify that a small source code segment has been
correctly implemented, as well as high-level tests that validate major system functions
against user requirements.

Software testing is a critical element of software quality assurance and represents the
ultimate review of specification, design, and coding. A series of tests is therefore performed
on the proposed system before it is ready for user acceptance testing.

SYSTEM TESTING:

Once validated, software must be combined with other system elements (e.g. hardware,
people, databases). System testing verifies that all the elements mesh properly and that
overall system function and performance are achieved. It also tests for discrepancies between
the system and its original objectives, current specifications, and system documentation.

UNIT TESTING:

In unit testing, different modules are tested against the specifications produced for them
during design. Unit testing is essential for verification of the code produced during the
coding phase; hence the goal is to test the internal logic of the modules. Using the detailed
design description as a guide, important control paths are tested to uncover errors within
the boundary of the modules. This testing is carried out during the programming stage
itself. In this testing step, each module was found to be working satisfactorily with regard
to the expected output from the module.

In due course, the latest technology advancements will be taken into consideration. As part
of the technical build-up, many components of the networking system will be generic in
nature so that future projects can either use or interact with them. The future holds a lot to
offer to the development and refinement of this project.

7.2 TYPES OF TESTING

7.2.1 Unit Testing

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of
the application, done after the completion of each unit and before integration. This is
structural testing that relies on knowledge of the unit's construction and is invasive. Unit
tests perform basic tests at component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and contains clearly
defined inputs and expected results.
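A small unit test illustrating the points above: it exercises both decision branches and the error-handling path of a single function. The helper classify_accuracy and its 80% threshold are hypothetical examples, not code from this project:

```python
import unittest

def classify_accuracy(score):
    """Hypothetical helper: label a model's accuracy score (0-100)."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    return "acceptable" if score >= 80 else "needs retraining"

class ClassifyAccuracyTest(unittest.TestCase):
    def test_high_score_branch(self):
        # One decision branch: score at or above the threshold.
        self.assertEqual(classify_accuracy(85.0), "acceptable")

    def test_low_score_branch(self):
        # The other decision branch: score below the threshold.
        self.assertEqual(classify_accuracy(60.0), "needs retraining")

    def test_error_handling_path(self):
        # Error-handling paths are validated too, as the section notes.
        with self.assertRaises(ValueError):
            classify_accuracy(150)
```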

7.2.2 Integration testing

Integration tests are designed to test integrated software components to determine whether
they actually run as one program. Testing is event driven and is more concerned with the
basic outcome of screens or fields. Integration tests demonstrate that although the
components were individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is specifically
aimed at exposing the problems that arise from the combination of components.

7.2.3 Functional test

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.
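The valid/invalid input classes above map naturally onto equivalence partitioning: one representative per class is exercised. A sketch with a hypothetical student-age field, assuming a valid range of 5 to 18:

```python
def accept_student_age(age):
    """Hypothetical input check: integer ages 5-18 form the valid class."""
    return isinstance(age, int) and 5 <= age <= 18

# Identified classes of valid input must be accepted:
valid_cases = [5, 12, 18]          # boundary and interior representatives
# Identified classes of invalid input must be rejected:
invalid_cases = [-1, 4, 19, "x"]   # out-of-range and wrong-type representatives

assert all(accept_student_age(a) for a in valid_cases)
assert not any(accept_student_age(a) for a in invalid_cases)
print("functional checks passed")
```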

7.2.4 System Test

System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.

White Box Testing

White box testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used to test
areas that cannot be reached from a black-box level.

Black Box Testing

Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.

7.2.5 Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were
encountered.

CHAPTER 8

RESULTS

8.RESULTS

Predicting behavioral changes in students with special educational needs (SEN) involves
analyzing a range of factors, including behavioral metrics, environmental conditions,
interventions, and individual profiles. Data analysis of a sample group revealed that on-task
behavior improved significantly, increasing from 50% at baseline to 75% after targeted
interventions. Structured classroom environments were associated with a 30% reduction in
disruptive incidents, while tailored support programs accelerated routine adaptation by 20%.
Feedback from teachers indicated that 80% observed enhanced peer interactions following
social skills training. Using a Random Forest Classifier, the predictive model achieved an
accuracy of 85%, identifying tailored interventions, teacher consistency, and peer
engagement as the strongest predictors of positive behavioral changes. These findings
underscore the importance of personalized strategies and consistent environmental support
in fostering positive outcomes for students with SEN.
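The 85% figure above comes from a Random Forest evaluation like the one used in the training view. A self-contained sketch of that evaluation pipeline on synthetic data (the dataset here is randomly generated, not the project's SEN data, so the printed accuracy will differ from the reported 85%):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the behavioural feature matrix.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

rf_clf = RandomForestClassifier(random_state=42)
rf_clf.fit(X_train, y_train)
rf_pred = rf_clf.predict(X_test)

print("ACCURACY")
print(accuracy_score(y_test, rf_pred) * 100)
print("CLASSIFICATION REPORT")
print(classification_report(y_test, rf_pred))
```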

SCREENSHOTS

Fig : Homepage

Fig : Line Chart

Fig : Bar Chart

ALGORITHMS USED

SAMPLE OUTPUT

CHAPTER 9
CONCLUSION

9. CONCLUSION

In this paper, we applied MMLA to predict behavior change in SEN students participating
in ABA therapies. A novel MMLA approach for the prediction of SEN students' behavior
change achievement in ABA therapy is presented. We introduced IoT sensor data, including
ambient environmental measurements (namely CO2 level, humidity, light intensity, and
temperature), physiological measurements (namely IBI, BVP, GSR, and skin temperature),
and motion measurements (accelerometer values in the X, Y, and Z directions), to develop
statistical models for ABA therapy. We also applied ML and DNN techniques to predict
SEN students' behavior change.

We studied the statistical characteristics of the multimodal educational data and
found that most of our data are not normally distributed. Significant correlations between
the variables were identified, but the problem of multicollinearity did not exist among our
variables. We further showed that sensor and wearable data can significantly enhance the
prediction of SEN students' behavior change achievement. Various ML algorithms and a
DNN were built, optimised, and evaluated. Our results demonstrated that ML (including
deep learning) can be applied to MMLA for predicting SEN students' behavior change,
and the performance of our classifiers and DNN surpasses most existing MMLA models.
However, we also observed variations in the prediction targets among the compared
models.

Promoting positive behaviors in SEN students is important for their personal and
social development, and ABA therapy is an effective intervention approach that aims at
behavior change in this population group. The learning environment and the learner's
physiological condition during ABA therapy sessions are essential for understanding
behavior-skills acquisition and its effect on subsequent behavior change. The current study
has affirmed the predictive relations between the learning environment, learner physiology,
and the learning outcome in ABA therapy. A number of limitations and necessary future
works are also presented. Overall, our work echoes the growing demand for applying ML
to the learning and education of those with brain and developmental disorders.

CHAPTER 10
FUTURE ENHANCEMENT

10.FUTURE ENHANCEMENT
Enhancing the prediction of behavior changes in students with special education needs
(SEN) involves integrating advanced methodologies, technology, and interdisciplinary
approaches. Here are some key areas for future development:

1. Advanced Data Analytics and AI

• Machine Learning Models: Utilize supervised and unsupervised learning to detect
patterns in behavior data, environmental factors, and intervention outcomes.

• Predictive Analytics: Develop algorithms that can forecast behavior changes based
on historical data, current interventions, and contextual factors.

• Explainable AI (XAI): Ensure that predictive models provide transparent reasoning
to support educators and caregivers in understanding predictions.

2. Wearable Technology and IoT Integration

• Real-Time Monitoring: Use wearable devices to track physiological signals (e.g.,
heart rate, skin conductance) that correlate with emotional and behavioral states.

• Environmental Sensors: Deploy IoT devices to monitor classroom conditions (e.g.,
noise, light, and temperature) that may impact student behavior.

• Data Fusion: Combine physiological and environmental data for holistic behavior
prediction.
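Data fusion of the two streams can be as simple as aligning timestamps and concatenating feature columns. A sketch with pandas; the column names and values are illustrative, not real project data:

```python
import pandas as pd

# Hypothetical streams: wearable physiology and classroom environment,
# each indexed by timestamp.
physio = pd.DataFrame(
    {"heart_rate": [72, 75, 80], "gsr": [0.31, 0.35, 0.42]},
    index=pd.to_datetime(["2024-01-01 09:00", "2024-01-01 09:01",
                          "2024-01-01 09:02"]))
env = pd.DataFrame(
    {"co2_ppm": [600, 640, 700], "noise_db": [45, 50, 55]},
    index=pd.to_datetime(["2024-01-01 09:00", "2024-01-01 09:01",
                          "2024-01-01 09:02"]))

# Align on timestamp and concatenate columns into one feature matrix,
# ready to feed a behavior-prediction model.
fused = physio.join(env, how="inner")
print(fused)
```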

3. Personalized Education Platforms

• Adaptive Learning Systems: Create tools that adjust educational content dynamically
based on the predicted needs and behavioral states of the student.

• Individualized Behavior Plans (IBPs): Use predictive insights to design tailored
intervention strategies.

4. Neurocognitive Insights

• Brain-Computer Interfaces (BCIs): Investigate neural activity to understand triggers
and early signs of behavioral changes.

• Cognitive Load Estimation: Predict when a student may become overwhelmed or
disengaged and suggest preemptive actions.
CHAPTER 11
REFERENCES

11.REFERENCES

[1] P. A. Alberto and A. C. Troutman, Applied Behavior Analysis for Teachers,
9th ed. Upper Saddle River, NJ, USA: Pearson, 2013.
[2] B. S. Abrahams and D. H. Geschwind, ‘‘Advances in autism genetics:
On the threshold of a new neurobiology,’’ Nature Rev. Genet., vol. 9, no. 5,
pp. 341–355, May 2008.
[3] L. Bassarath, ‘‘Conduct disorder: A biophysical review,’’ Can. J. Psychiatry,
vol. 46, no. 7, pp. 609–617, 2001.
[4] J. O. Cooper, T. E. Heron, and W. L. Heward, Applied Behavior Analysis,
3rd ed. Hoboken, NJ, USA: Pearson, 2020.
[5] R. Pennington, ‘‘Applied behavior analysis: A valuable partner in special
education,’’ Teach. Except. Child., vol. 54, no. 4, pp. 315–317, 2022.
[6] F. J. Alves, E. A. De Carvalho, J. Aguilar, L. L. De Brito, and
G. S. Bastos, ‘‘Applied behavior analysis for the treatment of autism:
A systematic review of assistive technologies,’’ IEEE Access, vol. 8,
pp. 118664–118672, 2020.
[7] M. C. Buzzi, M. Buzzi, B. Rapisarda, C. Senette, and M. Tesconi, ‘‘Teaching
low-functioning autistic children: ABCD SW,’’ in Proc. Eur. Conf.
Technol. Enhanced Learn. Berlin, Germany: Springer, 2013, pp. 43–56.
[8] V. Bartalesi, M. C. Buzzi, M. Buzzi, B. Leporini, and C. Senette, ‘‘An analytic
tool for assessing learning in children with autism,’’ in Universal
Access in Human-Computer Interaction, Universal Access to Information
and Knowledge, vol. 8514, C. Stephanidis and M. Antona, Eds. Cham,
Switzerland: Springer, 2014.
[9] G. Presti, M. Scagnelli, M. Lombardo, M. Pozzi, and P. Moderato,
‘‘SMART SPACES: A backbone to manage ABA intervention in autism
across settings and digital learning platforms,’’ in Proc. AIP Conf.,
vol. 2040, 2018, Art. no. 140002.
[10] G. Siemens and R. S. J. D. Baker, ‘‘Learning analytics and educational
data mining: Towards communication and collaboration,’’ in Proc. 2nd Int.
Conf. Learn. Analytics Knowl., Apr. 2012, pp. 252–254.
