
STUDENT GRADE PREDICTION USING C4.5 DECISION TREE

ABSTRACT:

The aim is to build a system that predicts a student's final grade based on the marks scored
during previous courses and years. To predict a student's grade, we need data to analyze, so
the system takes the student's basic information and previous academic records as input. We
use an effective data mining algorithm, the C4.5 decision tree, to predict the grade. C4.5 is a
program for inducing classification rules in the form of decision trees from a set of given
examples; it is a software extension of the basic ID3 algorithm designed by Quinlan. The
system has two kinds of users: an admin and a user, where the user is the student. Both log in
with their credentials. The admin adds student details with basic information and must also
add the student's academic details, such as SSC, HSC and graduation marks. The user can
view the grade, and the system generates a report containing the grade prediction produced
by the C4.5 algorithm. This system can be used in schools, colleges and other educational
institutions.
Advantages of the Proposed Project:

 The C4.5 data mining algorithm is used to obtain accurate grade predictions.


 The system shows the student the areas where improvement is needed
 User-friendly
Disadvantages:

o This system is a web application and requires an internet connection.

Application:

This system can be used in schools, colleges and other educational institutes.
INTRODUCTION

Every year, educational institutes admit students under various courses from different
locations, educational background and with varying merit scores in entrance examinations.
Moreover, schools and junior colleges may be affiliated to different boards, each board
having different subjects in its curriculum and different levels of depth in those subjects.
Analyzing the past performance of admitted students would provide a better perspective of
the probable academic performance of students in the future. This can very well be achieved
using the concepts of data mining. For this purpose, we have analyzed the data of students
enrolled in the first year of engineering. This data was obtained from the information provided by
the admitted students to the institute. It includes their full name, gender, application ID,
scores in board examinations of classes X and XII, scores in entrance examinations, category
and admission type. We then applied the ID3 and C4.5 algorithms after pruning the dataset to
predict the results of these students in their first semester as precisely as possible.
1.1 PROJECT SUMMARY

Student Grade Prediction Using C4.5 Decision Tree is a system that manages student records
for the admission and examination functions.

It is designed to help colleges manage their students. Extensive information is available at
your fingertips through this system. Viewing student data, managing admission and
reshuffling, managing seats, quota, board, semester, faculty and category, and, for
examination, block allocation, subject management, exam scheduling, results and related
issues are made simple and easy. Custom search capabilities aid in finding student
information and working on student records. This makes the system easier to navigate and
use, maximizing the effectiveness of time and other resources. The Student Management
System (SMS) keeps student data in a form that can be easily accessed and analyzed in a
consistent way.

The SMS module is a component covering many student aspects from application to
graduation. The system records the student's basic personal information, admission
information and education information. Leading-edge systems provide the ability to "read"
applications, enter relevant data into applicable database fields, notify students and provide
results. The student management function involves:

 Manage new admission and enrolment


 Manage quota
 Manage board
 Manage category
 Manage Fees Structure
 Roll number generation
 Fees payment
 Student basic information
 Manage faculty
 Manage designation
 Manage course and specialty
 Manage semester and year
 Admission seat management
 Exam scheduling
 Result management
 Subject management
 Block management

In SMS, every user has a login ID and password. Also, all users have different
permission rights to access the application. These rights are dynamic and can be changed.

There are three main roles in the system: admin, accountant and operator. Admin has
complete access to the whole system; the accountant is concerned only with payment of
fees for the admission of students; and the operator is responsible for the day-to-day use of
the system, such as admissions and searches.

The Admin role can do the following:

 Introduce new quota, board, category, course, etc.


 Set fees structures
 Manage faculties
 Manage subjects
 Seat management
 Management of semester
 Generation of student roll number
 Set examination
The Operator role can:

 New admission and enrolment


 Search student
 Block allocation
 Result management, etc.

When a user logs on with a particular role, they can see only those pages which are
allowed to that role.

1.2 PURPOSE

The project handles all the information about students regarding admission and
examination. It also manages resources that were previously managed and handled manually.
The main purpose of the project is to integrate the distinct sections of the organization in a
consistent manner so that complex functions can be handled smoothly by any technical or
non-technical person.

The project aims at the following matters:

 Automation of admission and enrolment as per board, quota, category and available
seats.
 Assistance in decision-making.
 To manage information of student, faculty and courses.
 Consistently update information of all the students.
 Reports - to gather all the related information about any application in the system.

All the above-mentioned matters are to be incorporated in the application along with some
additional requirements.

The main purpose of the Admin Module is to introduce new things and configure
important aspects. For example, only the admin is authorized to introduce quotas, boards,
subjects, categories, etc., and only the admin is allowed to configure exams and set fee
structures, so the master screens for all of these are visible only to the admin role. The Admin
Module also creates users and physical and logical locations. Thus, the main purpose of the
Admin Module is to manage the dynamic working of the system.

1.3 SCOPE
The scope of the project includes the following
 Any college can use this system as it is not client centric.
 All admission and examination related work for the student can be done using this
system.
 Deliver Electronic Workplace
 Provide Bi-lingual support
 Application Support & Maintenance after deployment to production
 The Admin Module can be reused in other projects that have many users with
different rights. Hence it is reusable.

1.4 TECHNOLOGY & LITERATURE REVIEW

There is no past work on this system; we are designing this project for the first time, so we
are free to use any technology we want. The system is a web application developed using
ASP.NET with C# as the front end and SQL Server 2005 as the back end.

The .NET Framework is a set of objects and blueprints from Microsoft for building
applications.

Fig 1.1 .NET Framework Architecture


The .NET Framework provides the underlying functionality of ASP.NET. All applications developed
under the .NET Framework, including ASP.NET applications, have certain key features that ensure
compatibility, security, and stability.

Common Language Runtime

The Common Language Runtime (CLR) is an environment that manages the execution of
code. In other words, it runs and maintains any code that you write. With the .NET Framework and
the CLR you still write code and compile it. However, instead of compiling it into something the
computer understands directly, you compile it into a language called the Microsoft Intermediate
Language (MSIL). This language is a shorthand way of representing all the code you have written.
ASP.NET pages are compiled into MSIL as well. When you compile to MSIL, your application
produces metadata, which is descriptive information about your application. It tells what the
application can do, where it belongs, and so on.

1.4.2 Introduction about ASP.NET

ASP.NET, the latest version of Active Server Pages, is Microsoft's technology for building
dynamic, database-driven Web sites. Active Server Pages is one of the most popular technologies
for building scalable, interactive Web sites. Several of the highest-traffic Web sites on the Internet
employ Active Server Pages; examples include Dell Online, Barnes and Noble, 1-800-Flowers, and
the Microsoft site itself.

1.4.2.1 Easy Programming Model

ASP.NET makes building real-world Web applications dramatically easier. ASP.NET server
controls enable an HTML-like style of declarative programming that lets you build great pages with
far less code than with classic ASP. Displaying data, validating user input, and uploading files are all
amazingly easy. Best of all, ASP.NET pages work in all browsers, including Netscape, Opera, AOL,
and Internet Explorer.

1.4.2.2 Flexible Language Options


ASP.NET lets you leverage your current programming language skills. Unlike classic ASP,
which supports only interpreted VBScript and JScript, ASP.NET now supports more than 25 .NET
languages (including built-in support for VB.NET, C#, and JScript.NET -- no tool required), giving
you unprecedented flexibility in your choice of language.

1.4.2.3 Great Tool Support

You can harness the full power of ASP.NET using any text editor -- even Notepad! But Visual
Studio 2005 adds the productivity of Visual Basic-style development to the Web. Now you can
visually design ASP.NET Web Forms using familiar drag-drop-double-click techniques, and enjoy
full-fledged code support including statement completion and color-coding. VS.NET also provides
integrated support for debugging and deploying ASP.NET Web applications.

The Professional version of Visual Studio 2005 delivers life-cycle features to help
organizations plan, analyze, design, build, test, and coordinate teams that develop ASP.NET Web
applications. These include UML class modeling, database modeling (conceptual, logical, and
physical models), testing tools (functional, performance and scalability), and enterprise frameworks
and templates, all available within the integrated Visual Studio .NET environment.

1.4.2.4 Rich Class Framework

Application features that used to be hard to implement, or required a 3rd-party component,


can now be added in just a few lines of code using the .NET Framework. The .NET Framework
offers over 4500 classes that encapsulate rich functionality like XML, data access, file upload, regular
expressions, image generation, performance monitoring and logging, transactions, message queuing,
SMTP mail, and much more!

1.4.2.5 Compiled execution


ASP.NET is much faster than classic ASP, while preserving the "just hit save" update model
of ASP: no explicit compile step is required. ASP.NET will automatically detect any
changes, dynamically compile the files if needed, and store the compiled results to reuse for
subsequent requests. Dynamic compilation ensures that your application is always up to date, and
compiled execution makes it fast.

1.4.2.6 Rich output caching

ASP.NET output caching can dramatically improve the performance and scalability of your
application. When output caching is enabled on a page, ASP.NET executes the page just once, and
saves the result in memory in addition to sending it to the user. When another user requests the same
page, ASP.NET serves the cached result from memory without re-executing the page. Output
caching is configurable, and can be used to cache individual regions or an entire page. Output
caching can dramatically improve the performance of data-driven pages by eliminating the need to
query the database on every request.
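For instance, enabling output caching for a page is a one-line directive at the top of the .aspx file. A minimal sketch (the 60-second duration is an arbitrary illustration):

    <%@ OutputCache Duration="60" VaryByParam="None" %>
    <%-- The rendered HTML of this page is cached for 60 seconds;
         VaryByParam="None" keeps a single cache entry regardless of
         query-string values. --%>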

1.4.2.7 Web-Farm Session State

ASP.NET session state lets you share session data (user-specific state values) across all
machines in your Web farm. A user can now hit different servers in the web farm over multiple
requests and still have full access to her session. And since business components created with the
.NET Framework are free-threaded, you no longer need to worry about thread affinity.
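A minimal web.config sketch of moving session state out of process so that every server in the farm shares it (the state-server address below is a placeholder):

    <configuration>
      <system.web>
        <!-- Out-of-process session state: any server in the farm can read it. -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=127.0.0.1:42424"
                      timeout="20" />
      </system.web>
    </configuration>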

1.4.2.8 Memory Leak, DeadLock and Crash Protection

ASP.NET automatically detects and recovers from errors like deadlocks and memory leaks to
ensure your application is always available to your users.
For example, say that your application has a small memory leak, and that after a week the leak has
tied up a significant percentage of your server's virtual memory. ASP.NET will detect this condition,
automatically start up another copy of the ASP.NET worker process, and direct all new requests to the
new process. Once the old process has finished processing its pending requests, it is gracefully
disposed and the leaked memory is released. Automatically, without administrator intervention or any
interruption of service, ASP.NET has recovered from the error.

1.4.2.9 Dynamic update of running application

ASP.NET now lets you update compiled components without restarting the web
server. In the past with classic COM components, the developer would have to restart the
web server each time he deployed an update. With ASP.NET, you simply copy the
component over the existing DLL -- ASP.NET will automatically detect the change and start
using the new code.

1.4.2.10 Easy Migration Path

You do not have to migrate your existing applications to start using ASP.NET.
ASP.NET runs on IIS side-by-side with classic ASP on Windows 2000 and Windows XP
platforms. Your existing ASP applications continue to be processed by ASP.DLL, while new
ASP.NET pages are processed by the new ASP.NET engine. You can migrate application by
application, or single pages. And ASP.NET even lets you continue to use your existing
classic COM business components. ASP.Net represents a radical departure from previous
versions of Active Server Pages.

Following are some of the significant new features of ASP.NET 2.0 Framework:

ASP.NET uses compiled code written in a Common Language Runtime language such as
Visual Basic or C#. Unlike previous versions of Active Server Pages, this version does not use an
interpreted scripting language such as VBScript. It is an advanced version of .NET 1.1, which has
proved to be a milestone in today's web technology. ASP.NET pages are built out of server-side
controls. Web server controls enable you to represent and program against Hypertext Markup
Language (HTML) elements using an intuitive object model.

ASP.NET includes a new technology called Web Services. You can use Web Services
to access methods and properties and transfer database data across the Internet. ASP.NET is
part of Microsoft’s .NET framework. You can access thousands of .NET classes in your code
that enable you to perform such wondrously diverse tasks as generating images on - the - fly
and saving an array to a file. ASP.Net includes page and data caching mechanisms that
enable you to easily and dramatically improve the performance of your Web Site.
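As an illustration, a minimal ASMX web service is just an attributed class; the service and method names below are hypothetical, not taken from this project:

    // GradeService.asmx code-behind (C#)
    using System.Web.Services;

    [WebService(Namespace = "http://tempuri.org/")]
    public class GradeService : WebService
    {
        // [WebMethod] exposes this method to remote callers over HTTP/SOAP.
        [WebMethod]
        public string Echo(string message)
        {
            return "Server received: " + message;
        }
    }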

1.4.2.11 Faces of ASP.NET

With ASP.NET 3.5, Microsoft aims to continue its success by refining and enhancing
ASP.NET. The good news is that Microsoft hasn’t removed features, replaced functionality,
or reversed direction. Instead, almost all the changes add higher-level features that can make
your programming more productive.

All in all, there have been four major releases of ASP.NET:

• ASP.NET 1.0: This first release created the core ASP.NET platform and introduced a wide
range of essential features.

• ASP.NET 1.1: This second release added performance tune-ups and bug fixes, but no new
features.

• ASP.NET 2.0: This third release piled on a huge set of new features, all of which were built
on top of the existing ASP.NET plumbing. The overall emphasis was to supply developers
with prebuilt goodies that they could use without writing much (if any) code. Some of the
new features included built-in support for website navigation, a theming feature for
standardizing web page design, and an easier way to pull information out of a database.

• ASP.NET 3.5: This fourth release keeps the same basic engine as ASP.NET 2.0, but adds a
few frills and two more dramatic changes. The most significant enhancement is the ASP.NET
AJAX toolkit, which gives web developers better tools for creating highly responsive web
pages that incorporate rich effects usually seen in desktop applications (such as drag-and-
drop and auto complete). The other innovation is support for LINQ, a set of language
enhancements included with .NET 3.5 that allows you to search in-memory data in the same
way that you query a database.

If you're wondering what happened to ASP.NET 3.0: it doesn't exist! Somewhat confusingly,
Microsoft used the .NET 3.0 name to release a set of new technologies, including Windows
Presentation Foundation (WPF), a platform for building slick Windows applications;
Windows Workflow Foundation (WF), a platform for modelling application logic using
flowchart-style diagrams; and Windows Communication Foundation (WCF), a platform for
designing services that can be called from other computers. However, .NET 3.0 did not
include an updated version of ASP.NET.

1.4.2.12 Visual Studio 2008

Visual Studio has come a long way since its inception in 1997. Visual Studio 97 hit the street
with the goals of enabling developers to share and see large projects through a complete
development cycle regardless of the different languages and deployment schemes.

That was followed up by Visual Studio 6.0 with its integrated development environment and
built-in data designers for architecting large-scale and multi-tier applications, with the goals
of supporting distributed and heterogeneous environments and architectures.
1.4.2.13 LINQ

Many of the new language features and enhancements in Visual Studio 2008—both in Visual
C# and Visual Basic .NET—make many of the LINQ features possible and enable you to take
advantage of some of the LINQ capabilities.

Included with the new Visual Studio release are a number of designers that can help
developers visually create many aspects of their SQL entity classes and associations. For
example, the Object Relational Designer (O/R Designer) provides a visual interface for
creating and designing LINQ to SQL entity classes and associations of database objects.

Visual Studio 2008 also comes with the DataSet Designer, a visual tool used for creating and
manipulating typed DataSets and the associated items of which the datasets are made,
providing a visual image of the objects within the DataSets.

LINQ ships with Visual Studio 2008 and version 3.5 of the .NET Framework. Because much
of the LINQ functionality is based on the new features of the .NET Framework, this section
explores those features and enhancements that help support LINQ and provide LINQ with the
foundation it needs from a language perspective, looking at the new language-specific features
in both C# and Visual Basic .NET.

 WPF (Windows Presentation Foundation) - new technology for building rich content,
"Windows Vista"-type user interfaces, and experiences combining application UI and
media content.

 WCF (Windows Communication Foundation) - new technology for building and
deploying reliable, secure, and interoperable connected systems across distributed systems
and environments.

 WF (Windows Workflow Foundation) - a programming engine for building
workflow-enabled applications.

 WCS (Windows CardSpace) - Microsoft's technology for managing digital identities.

Today, Visual Studio 2008 focuses on providing developers with a rich experience for
Windows Vista, the web, and Office 2007, while continuing to improve its development
languages and innovations. Visual Studio 2008 contains a number of new features, including
C# and Visual Basic .NET language features, improved data features such as multi-tier
support for typed datasets and hierarchical update capabilities, and a web application project
model.

However, the most exciting new feature of Visual Studio 2008 (in my opinion) is LINQ,
Microsoft’s new Language Integrated Query, which extends powerful query capabilities into
your favourite .NET programming language.
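As a quick illustration of what LINQ adds, the C# sketch below queries an in-memory array the way one would query a database table (the data is made up):

    using System;
    using System.Linq;

    class LinqDemo
    {
        static void Main()
        {
            // Hypothetical marks; LINQ treats the array like a table.
            var marks = new[]
            {
                new { Name = "A", Percentage = 72.5 },
                new { Name = "B", Percentage = 58.0 },
                new { Name = "C", Percentage = 65.0 }
            };

            var distinction = from s in marks
                              where s.Percentage >= 70
                              orderby s.Percentage descending
                              select s.Name;

            foreach (var name in distinction)
                Console.WriteLine(name);   // prints: A
        }
    }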

1.4.3 Architecture Used/Followed (4-TIER ARCHITECTURE)

Fig 1.2 4-Tier Architecture (Presentation, Control, Business Logic and Data Access layers;
ASP.NET pages at the presentation end, SQL Server at the data end)

For designing the entire software we have divided it into four main layers, each layer
providing services to the layer above it, so that we can proceed easily towards the target.
These layers are:
 Presentation layer
 Business layer
 Control layer
 Data Access layer

Presentation Layer

The Presentation layer is responsible for the user interface and communicates directly
with the business logic layer. Separating the presentation layer from the rest of the application
enables the development of different user interfaces (e.g. Web Forms, Windows Forms, mobile
devices) that all use the same business logic and database access code.

Business Layer

The business logic layer separates the code specific to the application, i.e. the way the
company does its business, from the user interface and the database-specific code. Other
line-of-business applications the company builds can use the business logic layer if needed,
maximizing code reuse.

Control Layer

The Control layer is responsible for communication between business layer and
presentation layer. It connects the logic and data with each other and gives a better
connectivity and separation between layers.

Data Access Layer

The project uses a Microsoft SQL Server Express Edition database, and the Data Access layer
encapsulates all interaction with it.
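A minimal sketch of what one data-access-layer class might look like in this architecture, assuming ADO.NET against SQL Server; the class and method names are illustrative, while the table name comes from the database design in Chapter 5:

    using System.Data;
    using System.Data.SqlClient;

    public class StudentDataAccess
    {
        private readonly string _connectionString;

        public StudentDataAccess(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Returns one student's personal details; upper layers never
        // see SQL, only the resulting DataTable.
        public DataTable GetStudent(int studentId)
        {
            using (var conn = new SqlConnection(_connectionString))
            using (var cmd = new SqlCommand(
                "SELECT * FROM SMS_STUDENT_PERSONAL_DETAILS WHERE STUDENT_ID = @id",
                conn))
            {
                cmd.Parameters.AddWithValue("@id", studentId);
                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table);
                return table;
            }
        }
    }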
LITERATURE SURVEY

2.1. Data Mining

Data mining is the process of discovering interesting knowledge, such as associations,


patterns, changes, significant structures and anomalies, from large amounts of data stored in
databases or data warehouses or other information repositories [1]. It has been widely used in
recent years due to the availability of huge amounts of data in electronic form, and there is a
need for turning such data into useful information and knowledge for large applications.
These applications are found in fields such as Artificial Intelligence, Machine Learning,
Market Analysis, Statistics and Database Systems, Business Management and Decision
Support [2].

2.1.1. Classification
Classification is a data mining technique that maps data into predefined groups or classes. It
is a supervised learning method which requires labelled training data to generate rules for
classifying test data into predetermined groups or classes [2]. It is a two-phase process. The
first phase is the learning phase, where the training data is analyzed and classification rules
are generated. The next phase is the classification, where test data is classified into classes
according to the generated rules. Since classification algorithms require that classes be
defined based on data attribute values, we created an attribute “class” for every student,
which can have a value of either “Pass” or “Fail”.

2.1.2. Clustering
Clustering is the process of grouping a set of elements in such a way that the elements in the
same group or cluster are more similar to each other than to those in other groups or clusters
[1]. It is a common technique for statistical data analysis used in the fields of pattern
recognition, information retrieval, bioinformatics, machine learning and image analysis.
Clustering can be achieved by various algorithms that differ in how they measure the
similarity required between elements of a cluster and in how they find the elements of the
clusters efficiently. Most algorithms used for clustering try to create clusters with small
distances among the cluster elements, intervals, dense areas of the data space or particular
statistical distributions.

2.2. Selecting Classification over Clustering :


In clustering, classes are unknown a priori and are discovered from the data. Since our goal is
to predict students’ performance into either of the predefined classes - “Pass” and “Fail”,
clustering is not a suitable choice and so we have used classification algorithms instead of
clustering algorithms.

2.3. Issues Regarding Classification


2.3.1. Missing Data

Missing data values cause problems during both the training phase and the classification
process itself. For example, the reasons for non-availability of data may include [2]:
• Equipment malfunction
• Deletion due to inconsistency with other recorded data

4.1. Student Database


We were provided with a training dataset consisting of information about students admitted to
the first year. This data was in the form of a Microsoft Excel 2003 spreadsheet and had details
of each student such as full name, application ID, gender, caste, percentage of marks obtained
in board examinations of classes X and XII, percentage of marks obtained in Physics,
Chemistry and Mathematics in class XII, marks obtained in the entrance examination,
admission type, etc. For ease of performing data mining operations, the data was filled into a
MySQL database.
4.2. Data Preprocessing
Once we had details of all the students, we then segmented the training dataset further,
considering various feasible splitting attributes, i.e. the attributes which would have a higher
impact on the performance of a student. For instance, we had considered ‘location’ as a
splitting attribute, and then segmented the data according to students’ locality. A snapshot of
the student database is shown in Figure 2. Here, irrelevant attributes such as the students'
residential address, name, application ID, etc. had been removed. For example, the admission
date of the student was irrelevant in predicting the future performance of the student. The
attributes that had been retained are those for merit score or marks scored in entrance
examination, gender, percentage of marks scored in Physics, Chemistry and Mathematics in
the board examination of class XII and admission type. Finally, the “class” attribute was
added and it held the predicted result, which can be either “Pass” or “Fail”.

Since the attributes for marks take many distinct numeric values, specific classes were
defined to produce better results. Thus, the “merit” attribute had the value “good” if the merit
score of the student was 120 or above out of a maximum score of 200, and “bad” if the merit
score was below 120. The “percentage” attribute can take one of three values: “distinction” if
the percentage of marks scored by the student in Physics, Chemistry and Mathematics was 70
or above, “first_class” if the percentage was less than 70 and greater than or equal to 60, and
“second_class” if the percentage was less than 60. The attribute for admission type is labelled
“type”, and its value for a student is either “AI” (short for All-India), if the student was
admitted to a seat available for All-India candidates, or “OTHER” if the student was admitted
to another seat.
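These binning rules translate directly into code. A minimal C# sketch, where the thresholds come from the text above and the class and method names are ours:

    public static class Discretizer
    {
        // Merit score is out of a maximum of 200.
        public static string MeritClass(int meritScore)
        {
            return meritScore >= 120 ? "good" : "bad";
        }

        // PCM percentage from the class XII board examination.
        public static string PercentageClass(double pcmPercentage)
        {
            if (pcmPercentage >= 70) return "distinction";
            if (pcmPercentage >= 60) return "first_class";
            return "second_class";
        }

        // All-India seat or otherwise.
        public static string AdmissionType(bool allIndiaSeat)
        {
            return allIndiaSeat ? "AI" : "OTHER";
        }
    }

For example, Discretizer.PercentageClass(68.0) returns "first_class" and Discretizer.MeritClass(131) returns "good".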
4.3. Data Processing Using RapidMiner
The next step was to feed the pruned student database as input to RapidMiner. This helped us
in evaluating interesting results by applying classification algorithms on the student training
dataset.

4.3.1. ID3 Algorithm


Since ID3 is a decision tree algorithm, we obtained a decision tree as the final result with all
the splitting attributes and it is shown in Figure 3.

4.3.2. C4.5 Algorithm


The C4.5 algorithm also generates a decision tree, and we obtained one from RapidMiner in
the same way as for ID3. This tree, shown in Figure 4, has fewer decision nodes than the tree
for the improved ID3.
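For reference, the main difference from ID3 is that C4.5 selects splitting attributes by gain ratio (information gain normalized by split information), which penalizes many-valued attributes. The sketch below shows that standard arithmetic; it is not code from the report:

    using System;
    using System.Linq;

    public static class C45Criterion
    {
        // Entropy of a class distribution, e.g. counts of Pass/Fail.
        public static double Entropy(int[] classCounts)
        {
            double total = classCounts.Sum();
            return classCounts.Where(c => c > 0)
                              .Sum(c => -(c / total) * Math.Log(c / total, 2));
        }

        // partitions[i] holds the class counts in the i-th branch of a split.
        public static double GainRatio(int[] parentCounts, int[][] partitions)
        {
            double total = parentCounts.Sum();
            double childEntropy = 0, splitInfo = 0;
            foreach (var part in partitions)
            {
                double weight = part.Sum() / total;
                childEntropy += weight * Entropy(part);
                if (weight > 0) splitInfo -= weight * Math.Log(weight, 2);
            }
            double gain = Entropy(parentCounts) - childEntropy;
            return splitInfo == 0 ? 0 : gain / splitInfo;
        }
    }

    // Example: a parent node with 60 Pass / 40 Fail split into branches
    // {50,10} and {10,30}:
    //   C45Criterion.GainRatio(new[] {60, 40},
    //       new[] { new[] {50, 10}, new[] {10, 30} });
    // C4.5 computes this for every candidate attribute and splits on the
    // one with the highest gain ratio.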

4.4. Implementing the Performance Prediction Web Application :


RapidMiner helped significantly in finding hidden information from the training dataset.
These newly learnt predictive patterns for predicting students’ performance were then
implemented in a working web application for staff members to use to get the predicted
results of admitted students.

4.4.1. CodeIgniter
The web application was developed using a popular PHP framework named CodeIgniter. The
application has provisions for multiple simultaneous staff registrations and staff logins. This
ensures that the work of no two staff members is interrupted during performance evaluation.
Figure 5 and Figure 6 depict the staff registration and staff login pages respectively.

4.4.2. Mapping Decision Trees to PHP


The essence of the web application was to map the results achieved after data processing to
code. This was done in the form of class methods in PHP. The results of the improved ID3
and C4.5 algorithms were in the form of trees, and these were translated to code in the form
of if-else ladders. We then placed these ladders into PHP class methods that accept only the
splitting attributes (PCM percentage, merit marks, admission type and gender) as method
parameters. The class methods return the final result of that particular evaluation, indicating
whether the student would pass or fail the first semester examination. Figure 7 shows a
class method with the if-else ladder.
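The report's ladders are PHP class methods (Figure 7, not reproduced here). The sketch below shows the shape of such a ladder in C# for illustration; the branch conditions are made up, since the real ones come from the RapidMiner tree in Figure 4:

    public class GradePredictor
    {
        // Splitting attributes as described above; gender is accepted as a
        // parameter but unused in this made-up ladder.
        public string Predict(string percentage, string merit,
                              string type, string gender)
        {
            if (merit == "good")
            {
                if (percentage == "distinction") return "Pass";
                if (percentage == "first_class") return "Pass";
                return type == "AI" ? "Pass" : "Fail";
            }
            if (percentage == "distinction") return "Pass";
            return "Fail";
        }
    }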

Chapter 5

SYSTEM DESIGN
________________________________________________
INTRODUCTION

During analysis, the focus is on what needs to be done, independent of how it is done.
During design, decisions are made about how the problem will be solved, first at a high level,
then at increasingly detailed levels.

System design is the first stage in which the basic approach to solving the problem is
selected. During system design the overall structure and style are decided. The system
architecture is the overall organization of the system into components called subsystems.
System design deals with transforming the customer requirements, as described in the SRS
document, into a form that is implementable using the programming language. Items such as
modules, relationships among identified modules, data structures, relationships between the
data structures, and algorithms for implementation should be designed during this phase.

As system designers, we tried to take the following design decisions:

 Organize the system into modules


 Organize sub-modules for each module
 Allocate tasks to processors
 Choose an approach to manage data store
 Handle access to global resources
 Choose implementation logic
DATABASE DESIGN
The data diagram for the admission module comprises the following tables:

SMS_STUDENT_PERSONAL_DETAILS: STUDENT_ID, FIRST_NAME, MIDDLE_NAME, LAST_NAME, BIRTHDATE, SEX, FATHER_INCOME, CASTE_ID, SUBCASTE_ID, ADDRESS_1, ADDRESS_2, CITY, STATE, PINCODE, NATION, PHONE_NUMBER_RES, MOBILE_NUMBER, EMAIL_ID, ALTERNATE_EMAIL_ID, STATUS

SMS_STUDENT_EDUCATION_DETAIL: ID, STUDENT_ID, DISCIPLINE, BOARD_OF_STUDY, INSTITUTE, PERCENTAGE, YEAR_OF_COMPLETION, ACHIEVMENTS

SMS_STUDENT_ADMISSION_DETAILS: STUDENT_ID, DATE_OF_ADMISSION, GENERAL_MERIT_NO, CATEGORY_MERIT_NO, FRESHER, BOARD_ID, CATEGORY_ID, SPECIALITY_ID, QUOTA_ID, HOSTEL, FACULTY_ID, REMARKS, YCS_ID

SMS_QUOTA_MASTER: QUOTA_ID, QUOTA_NAME, BOARD_ID

SMS_FACULTY_DETAIL: FACULTY_ID, FACULTY_NAME, DESIGNATION_ID, SPECIALIZATION_ID

SMS_CATEGORY_MASTER: CATEGORY_ID, CATEGORY_NAME, DESCRIPTION

SMS_BOARD_MASTER: BOARD_ID, BOARD_NAME

SMS_YEAR_COURSE_SEM: YCS_ID, YEAR_ID, COURSE_ID, SEM_ID

SMS_SPECIALITY_MASTER: SPECIALITY_ID, SPECIALITY_NAME, COURSE_ID, DESCRIPTION

SMS_COURSE_MASTER: COURSE_ID, COURSE_NAME, COURSE_DURATION, DESCRIPTION

Figure 5.1 Data Diagram (Admission Module)


The examination module comprises the following tables:

SMS_SUBJECT_MASTER: SUB_CODE, SUB_NAME, TEXT_BOOK, REFERENCE_BOOK, DESCRIPTION

SMS_SUBJECT_EXAM_TYPE_DETAIL: SUB_EXAM_ID, SUB_ID, EXAM_TYPE_ID, SPECIALITY_ID, YCS_ID, DURATION, TOTAL_MARKS, PASSING_MARKS

SMS_EXAM_TYPE_MASTER: EXAM_TYPE_CODE, EXAM_TYPE_NAME, DESCRIPTION

SMS_RESULT_DATA: EXAM_ID, YCS_ID, SUB_ID, STUDENT_ID, MARKS, EXAM_TYPE_CODE

SMS_YEAR_COURSE_SEM: YCS_ID, YEAR_ID, COURSE_ID, SEM_ID

SMS_EXAM_SCHEDULE_DETAIL: YCS_ID, YEAR_OF_STUDY

SMS_EXAM_DETAIL: EXAM_ID, SUB_EXAM_ID, DATE, EXAM_TIME

SMS_EXAM_MASTER: EXAM_ID, YCS_ID, EXAM_TYPE_ID

SMS_SUBJECT_SEMESTER_ALLOCATION: ID, YCS_ID, SUB_CODE, SPECIALITY_ID
Figure 5.2 Data Diagram (Examination Module)


5.1 ACTIVITY DIAGRAMS

Fig 5.3 Activity Diagram for Login


Fig 5.4 Activity Diagram for Adding Board, Quota and Designation
Fig 5.5 Activity Diagram for Assign Roll Numbers
Fig 5.6 Activity Diagram for configuring Fees Details
Fig 5.7 Activity Diagram for Getting Admission
Fig 5.8 Activity Diagram for Modifying Student Details
Fig 5.9 Activity Diagram for Pay Fees
Fig 5.10 Activity Diagram for Searching Student
Fig 5.11 Activity Diagram for setting Seat for Admission
Fig 5.12 Activity Diagram for setting Subject Details

Modules:

Admin:

1. Add Student (admin adds a student with basic information)


2. Add 10th Marks (select a student, select the CBSE/SSC board, add his/her marks in
each subject, and specify any activity and its field, such as sports, science, arts, etc.)
3. Add 12th Marks (marks in each subject, as for the 10th)
4. Add Marks for First Year (SEM I and II) (* if the student comes from a diploma, enter
his/her final-year diploma marks here)
5. Add Marks for Second Year (SEM III and IV)
6. View Marks / Grade for Third Year (SEM V and VI)
(* a report is generated predicting the student's grade for each SEM using C4.5)

SYSTEM CONFIGURATION

3.1 HARDWARE CONFIGURATION

PROCESSOR : Intel Pentium IV 1.8 GHz

RAM : 1 GB DDR2 RAM

HARD DISK DRIVE : 160 GB

3.2 SOFTWARE CONFIGURATION

FRONT END : .NET, C#

BACK END : SQL Server 2005

OPERATING SYSTEM : Microsoft Windows XP

DOCUMENTATION : Microsoft Word 2007

SCRIPTING LANGUAGE : JavaScript

3.3. LANGUAGE SPECIFICATION

3.3.1 DOTNET TECHNOLOGY


Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There is no language barrier with .NET: numerous languages are
available to the developer, including Managed C++, C#, Visual Basic and JScript. The
.NET Framework provides the foundation for components to interact seamlessly, whether
locally or remotely on different platforms. It standardizes common data types and
communication protocols so that components created in different languages can easily
interoperate.

3.3.2 THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development
in the highly distributed environment of the Internet.

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

3.3.2.1 Common language runtime

The CLR is described as the “execution engine” of .NET. It provides the environment
within which programs run. Its most important features are:

 Conversion from a low-level assembler-style language, called Intermediate Language
(IL), into code native to the platform being executed on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth describing:

Managed Code - is code that targets .NET, and which contains certain extra
information - “metadata” - to describe itself. Whilst both managed and unmanaged code can
run in the runtime, only managed code contains the information that allows the CLR to
guarantee, for instance, safe execution and interoperability.
Managed Data - with managed code comes managed data. The CLR provides memory
allocation and deallocation facilities, and garbage collection. Some .NET languages use
managed data by default, such as C#, Visual Basic.NET and JScript.NET, whereas others,
namely C++, do not. Targeting the CLR can, depending on the language you’re using, impose
certain constraints on the features available. As with managed and unmanaged code, one can
have both managed and unmanaged data in .NET applications: data that doesn’t get garbage
collected but instead is looked after by unmanaged code.

Common Type System - The CLR uses something called the Common Type System
(CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each
other, by describing types in a common way. The CTS defines how types work within the runtime,
which enables types in one language to interoperate with types in another language, including
cross-language exception handling. As well as ensuring that types are only used in
appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that
hasn’t been allocated to it.

Common Language Specification - The CLR provides built-in support for language
interoperability. To ensure that you can develop managed code that can be fully used by
developers using any programming language, a set of language features and rules for using
them called the Common Language Specification (CLS) has been defined. Components that
follow these rules and expose only CLS features are considered CLS-compliant.

3.3.2.2 Class library

.NET provides a single-rooted hierarchy of classes containing over 7000
types. The root of the namespace is called System; this contains basic types like Byte,
Double, Boolean, and String, as well as Object. All objects derive from System.Object. As
well as objects, there are value types. Value types can be allocated on the stack, which can
provide useful flexibility. There are also efficient means of converting value types to object
types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen,


and network I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing distinct
areas of functionality, with dependencies between the namespaces kept to a minimum.

Fig 3.2: .NET Framework (layers: ASP.NET and Windows Forms, XML Web Services,
Base Class Libraries, Common Language Runtime, Operating System)

3.3.3 FEATURES OF ASP.NET

3.3.3.1 ASP.NET

ASP.NET is the .NET framework layer that handles Web requests for specific types of files,
namely those with (.aspx or .ascx) extensions. The ASP.NET engine provides a robust object
model for creating dynamic content and is loosely integrated into the .NET framework.

ASP.NET is part of the .NET framework. ASP.NET programs are centralized


applications hosted on one or more Web servers that respond dynamically to client requests.
The responses are dynamic because ASP.NET intercepts requests for pages with a specific
extension (.aspx or .ascx) and hands off the responsibility for answering those requests to
just-in-time (JIT) compiled code files that can build a response “on-the-fly.”

ASP.NET deals specifically with configuration files (web.config and
machine.config), Web Services (ASMX) files, and Web Forms (ASPX) files. The server
doesn’t “serve” any of these file types directly; it returns the appropriate content type to the
client. The web.config file contains initialization and settings for a specific application or
portion of an application, while machine.config contains machine-level initialization and
settings. The server ignores direct requests for these configuration files, because serving them
might constitute a security breach.

Client requests for these file types cause the server to load, parse, and execute
code to return a dynamic response. For Web Forms, the response usually consists of HTML
or WML. Web Forms maintain state by round-tripping user interface and other persistent
values between the client and the server automatically for each request.
A request for a Web Form can use View State, Session State, or Application
State to maintain values between requests. Both Web Forms and Web Services requests can
take advantage of ASP.NET’s integrated security and data access through ADO.NET, and can
run code that uses system services to construct the response. The major difference between
a static request and a dynamic request is that a typical Web request references a static file:
the server reads the file and responds with the contents of the requested file.

ASP.NET uses .NET languages. ASP.NET code exists in multithreaded JIT


compiled DLL assemblies, which can be loaded on demand. Once loaded, the ASP.NET
DLLs can service multiple requests from a single in-memory copy.

ASP.NET supports all the .NET languages (currently C#, C++, VB.NET, and
JScript, but there are well over 20 different languages in development for .NET), so you will
eventually be able to write Web applications in your choice of almost any modern
programming language.

In addition to huge increases in speed and power, ASP.NET provides


substantial development improvements, like seamless server-to-client debugging, automatic
validation of form data.

Fig 3.3 Interoperability


3.3.3.2 ASP.NET EVENTS

Every time an ASP.NET page is viewed, many tasks are being performed
behind the scenes. Tasks are performed at key points ("events") of the page's execution
lifecycle.

The most common events are

OnInit

The first event in our list to be raised is OnInit. When this event is raised, all
of the page's server controls are initialized with their property values. PostBack values are
not applied to the controls at this time.

OnLoad

The next event to be raised is OnLoad, which is the most important event of
them all, as all the page's server controls will have their PostBack values by now.

PostBack Events

Next, all the PostBack events are raised. These events are only raised when the
page view is the result of a PostBack. The order in which these events are raised can't be
defined or relied upon; the only consistency is that they are all raised between the OnLoad
and OnPreRender events.

OnPreRender

This event is raised just prior to the page or server control's HTML output being
written into the response stream that's sent to the client web browser. This is the last chance
you have to make any modifications. By this point, all the server controls on the page have
their final data applied.

OnUnload

This is the last event in our list to be raised; you should destroy any unmanaged
objects and close any currently open database connections at this point. It is not possible to
modify any controls on the page at this point, as the response stream has already been sent to
the client web browser.

As each event of the page is raised, it also automatically tells all its child
controls to raise their own implementation of the same event. Execution flow is then passed
back to the main page class to continue on to the next event, and the process is repeated for
that event.
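A minimal code-behind sketch of hooking these lifecycle events (standard Web Forms overrides; the page name is hypothetical):

    using System;
    using System.Web.UI;

    public partial class StudentPage : Page
    {
        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);      // controls exist; PostBack values not applied yet
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // First visit: bind initial data here. On later requests the
                // controls already carry their PostBack values at this point.
            }
        }

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e); // last chance to modify controls before HTML is written
        }

        protected override void OnUnload(EventArgs e)
        {
            base.OnUnload(e);    // response already sent; release unmanaged resources
        }
    }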

3.3.3.4 MAIN FEATURES OF ASP.NET

Successor of Active Server Pages (ASP), but completely different architecture

• Object-oriented

• Event-based

• Rich library of Web Controls

• Separation of layout (HTML) and logic (e.g. C#)

• Compiled languages instead of interpreted languages

• GUI can be composed interactively with Visual Studio .NET

• Better state management

NAMESPACES

ASP.NET uses a concept called namespaces. Namespaces are hierarchical
object models that support various properties and methods. For example, HTML server
controls reside in the "System.Web.UI.HtmlControls" namespace, web server controls reside
in the "System.Web.UI.WebControls" namespace and ADO.NET resides in the "System.Data"
namespace.

LANGUAGE INDEPENDENT
An ASP.NET page can be created in any language supported by .NET
framework. Currently .NET framework supports VB, C#, JScript and Managed C++.

ASP.NET SERVER CONTROLS

Using ASP.NET server controls, browser variation is handled because these controls output
the HTML themselves based on the browser requesting the page.

TYPES OF CONTROLS

ASP.NET has two basic types of controls: HTML server controls and Web
server controls. HTML server controls are generated around specific HTML elements, and
the ASP.NET engine changes the attributes of the elements based on server-side code that you
provide. The ASP.NET engine takes the extra steps to decide, based upon the container of the
requester, what HTML to output.

Fig 3.4 Web Controls

3.3.4 DATA ACCESS WITH ADO.NET

ADO.NET provides a set of classes which a script can use to interact with
databases. Scripts can create instances of the ADO.NET data classes and access their
properties and methods. A set of classes which works with a specific type of database is
known as a .NET Data Provider. ADO.NET comes with two data providers: the SQL
Server .NET Data Provider (which provides optimised access to Microsoft SQL Server
databases) and the OLE DB .NET Data Provider, which works with a range of databases.
The main ADO.NET OLE DB data access classes are OleDbConnection, OleDbCommand,
OleDbDataReader and OleDbDataAdapter.

ADO.NET offers several advantages over previous versions of ADO:

 Interoperability
 Maintainability
 Programmability
 Performance and scalability
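A minimal sketch of the ADO.NET pattern using the SQL Server provider (the OLE DB provider has the same shape with the OleDb* classes); the connection string and query are illustrative:

    using System;
    using System.Data.SqlClient;

    class AdoDemo
    {
        static void Main()
        {
            string connStr =
                "Data Source=.;Initial Catalog=SMS;Integrated Security=True";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT FIRST_NAME, LAST_NAME FROM SMS_STUDENT_PERSONAL_DETAILS",
                conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0} {1}", reader[0], reader[1]);
                }
            }
        }
    }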

3.3.4.1 VISUAL STUDIO .NET

Visual Studio .NET is a complete set of development tools for building ASP web
applications, XML web services, desktop applications and mobile applications. In addition to
building high-performing desktop applications, you can use Visual Studio's powerful
component-based development tools and other technologies to simplify team-based design,
development and deployment of enterprise solutions. Visual Basic .NET, Visual C++ .NET
and Visual C# .NET all use the same integrated development environment (IDE), which
allows them to share tools and facilitates the creation of mixed-language solutions.

3.4 SQL SERVER 2005

FEATURES OF SQL SERVER 2005

The OLAP Services feature available in SQL Server version 7.0 is now called
SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the
term Analysis Services. Analysis Services also includes a new data mining component. The
Repository component available in SQL Server version 7.0 is now called Microsoft SQL
Server 2000 Meta Data Services.

A SQL Server database consists of six types of objects.

They are:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

6. MODULE

3.4.1.1 TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view,
where we can specify what kind of data the table will hold.

Datasheet View

To add, edit or analyze the data itself, we work in the table's datasheet view
mode.

3.4.1.2 QUERY:

A query is a question that is asked of the data. Access gathers data that answers
the question from one or more tables. The data that makes up the answer is either a dynaset
(if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the
latest information in the dynaset. Access either displays the dynaset or snapshot for us to
view, or performs an action on it, such as deleting or updating.

3.4.1.3 FORMS:
A form is used to view and edit information in the database record by record.
A form displays only the information we want to see, in the way we want to see it. Forms
use familiar controls such as textboxes and checkboxes, which makes viewing and entering
data easy.

Views of Form:

We can work with forms in several views; primarily there are two:

They are,

1. Design View

2. Form View

Design View

To build or modify the structure of a form, we work in the form's design view.
We can add controls to the form that are bound to fields in a table or query, including
textboxes, option buttons, graphs and pictures.

Form View

Form view displays the whole design of the form as the user will see it.

3.4.1.4 REPORT:

A report is used to view and print information from the database. A report can
group records into many levels and compute totals and averages by checking values from
many records at once. A report is also attractive and distinctive, because we have control
over its size and appearance.

3.4.1.5 MACRO:
A macro is a set of actions, each of which does something, such as opening a
form or printing a report. We write macros to automate common tasks, making the work
easy and saving time.

3.4.1.6 MODULE:

Modules are units of code written in the Access Basic language. We can write
and use modules to automate and customize the database in very sophisticated ways.

SYSTEM TESTING

TESTING METHODOLOGIES

The following are the Testing Methodologies:

o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.

Unit Testing
Unit testing focuses verification effort on the smallest unit of software design, that is,
the module. Unit testing exercises specific paths in a module's control structure to
ensure complete coverage and maximum error detection. This test focuses on each module
individually, ensuring that it functions properly as a unit; hence the name unit testing.

During this testing, each module is tested individually and the module interfaces are
verified for consistency with the design specification. All important processing paths are
tested for the expected results. All error handling paths are also tested.

Integration Testing

Integration testing addresses the issues associated with the dual problems of
verification and program construction. After the software has been integrated, a set of
high-order tests is conducted. The main objective of this testing process is to take unit-tested
modules and build a program structure that has been dictated by design.

The following are the types of Integration Testing:

1. Top Down Integration

This method is an incremental approach to the construction of program structure.


Modules are integrated by moving downward through the control hierarchy, beginning with
the main program module. The modules subordinate to the main program module are
incorporated into the structure in either a depth-first or breadth-first manner.
In this method, the software is tested from the main module, and individual stubs are
replaced as the test proceeds downwards.

2. Bottom-up Integration
This method begins the construction and testing with the modules at the lowest level
in the program structure. Since the modules are integrated from the bottom up, processing
required for modules subordinate to a given level is always available and the need for stubs is
eliminated. The bottom up integration strategy may be implemented with the following steps:

 The low-level modules are combined into clusters that perform a specific
software sub-function.
 A driver (i.e. a control program for testing) is written to coordinate test
case input and output.
 The cluster is tested.
 Drivers are removed and clusters are combined, moving upward in the
program structure.

The bottom-up approach tests each module individually; each module is then integrated
with a main module and tested for functionality.

7.1.3 User Acceptance Testing

User Acceptance of a system is the key factor for the success of any system. The
system under consideration is tested for user acceptance by constantly keeping in touch with
the prospective system users at the time of developing and making changes wherever
required. The system developed provides a friendly user interface that can easily be
understood even by a person who is new to the system.

7.1.4 Output Testing

After performing validation testing, the next step is output testing of the proposed
system, since no system can be useful if it does not produce the required output in the
specified format. The outputs generated or displayed by the system under consideration are
tested by asking the users about the format they require. The output format is considered in
two ways: one on screen, and the other in printed format.

7.1.5 Validation Checking

Validation checks are performed on the following fields.

Text Field:

The text field can contain only a number of characters less than or equal to its
size. The text fields are alphanumeric in some tables and alphabetic in others. An incorrect
entry always flashes an error message.

Numeric Field:

The numeric field can contain only numbers from 0 to 9. An entry of any other character
flashes an error message. The individual modules are checked for accuracy and for what
they have to perform. Each module is subjected to a test run along with sample data. The
individually tested modules are integrated into a single system. Testing involves executing
the program with real data; the existence of any program defect is inferred from the output.
The testing should be planned so that all the requirements are individually tested.

A successful test is one that brings out the defects for inappropriate data and
produces output revealing the errors in the system.

Preparation of Test Data


The above testing is done by taking various kinds of test data. Preparation of test data plays a
vital role in system testing. After preparing the test data, the system under study is tested
using it. Errors uncovered while testing the system with test data are corrected by using the
above testing steps, and the corrections are noted for future use.

Using Live Test Data:

Live test data are those that are actually extracted from organization files. After a system is
partially constructed, programmers or analysts often ask users to key in a set of data from
their normal activities. Then, the systems person uses this data as a way to partially test the
system. In other instances, programmers or analysts extract a set of live data from the files
and have them entered themselves.

It is difficult to obtain live data in sufficient amounts to conduct extensive testing.


And, although it is realistic data that will show how the system will perform for the typical
processing requirement, assuming that the live data entered are in fact typical, such data
generally will not test all combinations or formats that can enter the system. This bias toward
typical values then does not provide a true systems test and in fact ignores the cases most
likely to cause system failure.

Using Artificial Test Data:

Artificial test data are created solely for test purposes, since they can be generated to test all
combinations of formats and values. In other words, the artificial data, which can quickly be
prepared by a data-generating utility program in the information systems department, make
possible the testing of all logic and control paths through the program.

The most effective test programs use artificial test data generated by persons other
than those who wrote the programs. Often, an independent team of testers formulates a
testing plan, using the systems specifications.
The package "Student Grade Prediction Using C4.5 Decision Tree" satisfied all the
requirements specified in the software requirement specification and was accepted.

7.2 USER TRAINING

Whenever a new system is developed, user training is required to educate them about
the working of the system so that it can be put to efficient use by those for whom the
system has been primarily designed. For this purpose the normal working of the
project was demonstrated to the prospective users. Its working is easily
understandable and since the expected users are people who have good knowledge of
computers, the use of this system is very easy.

7.3 MAINTENANCE

This covers a wide range of activities including correcting code and design errors. To
reduce the need for maintenance in the long run, we have more accurately defined the user’s
requirements during the process of system development. Depending on the requirements, this
system has been developed to satisfy the needs to the largest possible extent. With
development in technology, it may be possible to add many more features based on the
requirements in future. The coding and designing is simple and easy to understand which will
make maintenance easier.

TESTING STRATEGY:

A strategy for system testing integrates system test cases and design techniques into a
well-planned series of steps that result in the successful construction of software. The
testing strategy must incorporate test planning, test case design, test execution, and the
resulting data collection and evaluation. A strategy for software testing must accommodate
low-level tests that are necessary to verify that a small source code segment has been
correctly implemented, as well as high-level tests that validate major system functions
against user requirements.
Software testing is a critical element of software quality assurance and represents the
ultimate review of specification, design and coding. Thus, a series of tests is performed on
the proposed system before the system is ready for user acceptance testing.

SYSTEM TESTING:

Software, once validated, must be combined with other system elements (e.g.
hardware, people, databases). System testing verifies that all the elements are proper and that
overall system function and performance are achieved. It also tests to find discrepancies
between the system and its original objective, current specifications and system
documentation.

UNIT TESTING:

In unit testing, different modules are tested against the specifications produced
during the design of the modules. Unit testing is essential for verification of the code
produced during the coding phase; hence the goal is to test the internal logic of the
modules. Using the detailed design description as a guide, important control paths are
tested to uncover errors within the boundary of the modules. This testing is carried out
during the programming stage itself. In this testing step, each module was found to be
working satisfactorily with regard to the expected output from the module.

In due course, the latest technology advancements will be taken into consideration. As part
of the technical build-up, many components of the networking system will be generic in
nature so that future projects can either use or interact with them. The future holds a lot to
offer to the development and refinement of this project.
CONCLUSION:

In this project, we presented a case study in educational data mining. It shows the potential of
data mining in higher education. It was used in particular to improve students' performance
and to detect early predictors of their final GPA. We utilized the classification technique, the
decision tree in particular, to predict students' final GPA based on their grades in previous
courses. We discovered classification rules to predict students' final GPA based on their
grades in mandatory courses. We also evaluated the most important courses in the study plan
that have a big impact on students' final GPA.

FUTURE WORK:
In this project, prediction parameters such as the decision trees generated using RapidMiner
are not updated dynamically within the source code. In the future, we plan to make the entire
implementation dynamic, so that the prediction parameters are retrained automatically when
new training sets are fed into the web application. Also, in the current implementation, we
have not considered extra-curricular activities and other vocational courses completed by
students, which we believe may have a significant impact on students' overall performance.
Considering such parameters would result in better prediction accuracy.

REFERENCES:

[1] M. Al-Razgan, A. S. Al-Khalifa, and H. S. Al-Khalifa, "Educational data mining: A
systematic review of the published literature 2006-2013," in Proc. the 1st International
Conference on Advanced Data and Information Engineering, 2013, pp. 711.

[2] F. Siraj and M. A. Abdoulha, "Mining enrolment data using predictive and descriptive
approaches," Knowledge-Oriented Applications in Data Mining, pp. 53-72, 2007.

[3] Q. A. Al-Radaideh, A. A. Ananbeh, and E. M. Al-Shawakfa, "A classification model for
predicting the suitable study track for school students," International Journal of Research
and Reviews in Applied Sciences, vol. 8, 2011.

[4] B. K. Baradwaj and S. Pal, "Mining educational data to analyze students' performance,"
International Journal of Advanced Computer Science and Applications, vol. 2, 2011.

[5] Nandeshwar and S. Chaudhari. (2009). Enrollment Prediction Models Using Data Mining.
[Online]. Available: http://nandeshwar.info/wp-content/uploads/2008/11/DMWVU_Proje.pdf

[6] Kabakchieva, "Predicting student performance by using data mining methods for
classification," Cybernetics and Information Technologies, vol. 13, 2013.
