
COLLEGE NIRF RANK PREDICTION USING

MACHINE LEARNING

Project Report

Submitted in partial fulfillment for the award of the degree

Bachelor of Technology
in
COMPUTER SCIENCE & ENGINEERING
by
P. GANESH GANGULY - 20L31A05I4
P. LOKESH NARAYANA - 20L31A05I9
MD. ZAKIR HUSSAIN - 20L31A05E8
N.K.S RAGHAVENDRA - 20L31A05G2
N.V LEKHENDRA - 20L31A05F6

Under the Guidance of


Mr. Ramaraju S.V.S.V.P
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


VIGNAN’S INSTITUTE OF INFORMATION TECHNOLOGY
(Autonomous)
Affiliated to JNTUGV, Vizianagaram & Approved by AICTE, New Delhi
Re-Accredited by NAAC (CGPA of 3.41/4.00)
ISO 9001:2008, ISO 14001:2004, OHSAS 18001:2007 Certified Institution
VISAKHAPATNAM – 530039
April 2024

1
VIGNAN’S INSTITUTE OF INFORMATION TECHNOLOGY(A)
Department of Computer Science & Engineering

CERTIFICATE

This is to certify that the major project entitled “COLLEGE NIRF RANK PREDICTION
USING MACHINE LEARNING” is a bonafide record of project work carried out under my
supervision by P. Ganesh Ganguly (20L31A05I4), P. Lokesh Narayana (20L31A05I9),
MD. Zakir Hussain (20L31A05E8), N.K.S Raghavendra(20L31A05G2), N.V Lekhendra
(20L31A05F6) during the academic year 2023 – 2024, in partial fulfilment of the requirements
for the award of the degree of Bachelor of Technology in Computer Science & Engineering
of VIGNAN’S INSTITUTE OF INFORMATION TECHNOLOGY (Autonomous). The
results embodied in this major project report have not been submitted to any other University
or Institute for the award of any Degree.

Signature of Project Guide Head of the Department

Mr. Ramaraju S.V.S.V.P Mr. B Dinesh Reddy


Assistant Professor Associate Professor
CSE, VIIT CSE, VIIT

External Examiner

2
DECLARATION

We hereby declare that the project report entitled “College NIRF Rank Prediction using
Machine Learning” has been done by us and has not been submitted, either in part or in
whole, for the award of any degree at any other university.

P. Ganesh Ganguly (20L31A05I4)


P. Lokesh Narayana (20L31A05I9)
MD. Zakir Hussain (20L31A05E8)
N.K.S Raghavendra (20L31A05G2)
N.V Lekhendra (20L31A05F6)

Date:
Place:

3
ACKNOWLEDGEMENT

It gives us a great sense of pleasure to acknowledge the assistance and cooperation we have
received from several persons while undertaking this Major Project. We owe a special debt
of gratitude to Mr. Ramaraju S.V.S.V.P, Assistant Professor, Department of Computer Science
& Engineering, for his constant support and guidance throughout the course of our work. His
sincerity, thoroughness and perseverance have been a constant source of inspiration for us.
We also take the opportunity to acknowledge the contribution of Associate Professor Mr. B
Dinesh Reddy, Head of Department, Computer Science & Engineering, for his full support
and assistance during the development of the project. We also acknowledge the
contribution of all faculty members of the department for their kind assistance and
cooperation during the development of our project. Last but not least, we acknowledge
our friends for their contribution in the completion of the project.

4
ABSTRACT

The National Institutional Ranking Framework (NIRF) is an annual ranking system initiated by
the Indian government to rank higher education institutions based on several parameters such
as teaching, research, and outreach activities. In this project, we propose to develop a machine
learning model that can predict the NIRF rank of an institution. Based on the scores of previous
years, we predict the rank by supplying the performance indicators to the model. This report
focuses on the use of a Random Forest Regressor based machine learning technique to predict
NIRF rank. The factors considered are the Teaching, Learning and Resources (TLR) score,
Research and Professional Practice (RPC) score, Graduation Outcome (GO) score, Outreach
and Inclusivity (OI) score and Perception score for a particular college. The model is evaluated
using a standard indicator, Root Mean Square Error (RMSE); the low value of this indicator
shows that the model is efficient in predicting NIRF rank. We obtained a model score of 93%
and an RMSE of 15.47. We save and load the trained model using Joblib, and we created a
Flask server for model deployment, deployed on Render as a web service. We conducted
evaluations of frequently used machine learning models and conclude that our proposed
solution outperforms them due to the comprehensive feature engineering that we built. Overall,
the system achieves high accuracy for college NIRF rank prediction.

Keywords: National Institutional Ranking Framework (NIRF), Machine learning model,
Random Forest Regressor, Machine learning, Teaching, Learning and Resources (TLR) score,
Research and Professional Practice (RPC) score, Graduation Outcome (GO) score, Outreach
and Inclusivity (OI) score, Perception Score, Root Mean Square Error (RMSE), Joblib,
Feature engineering, Accuracy

5
TABLE OF CONTENTS

Title Page No.

1. Introduction 8 – 10

1.1 Intro 8–9

1.2 Objectives of the project 10

2. Design and Methodology 11 – 15

2.1 Random Forest Regression 13 – 14

2.2 System Architecture 15

3. Literature Survey 16 – 18

4. Software Environment 19 – 39

4.1 Software Installation 19 – 20

4.2 Modules in the project 20 – 22

4.3 Method of Implementation 22 – 39

4.3.1 ML model Implementation 22 – 25

4.3.2 ML model Testing Implementation 26 – 30

4.3.3 Front-End Implementation 31 – 34

4.3.4 Back-End Implementation 35 – 39

5. System Interface and Results 40 – 46

5.1 Methodology 40

5.2 Module Description 40 – 41

5.3 Output Screenshots 42 – 46

6. Conclusion 47 – 48

6.1 Future Scope 47 – 48

7. References 49 – 50

6
LIST OF FIGURES

Figures Page No.

1. Random Forest Regression 13

2. Bagging and Boosting 14

3. System Architecture 15

4. Pre-Input Form Picture 42

5. Post-Input Form Picture 43

6. Outcome of form submission 43

7. Comparison of feature values with a rank 44

8. Comparison of TLR – Teaching, Learning & Resources 44

9. Comparison of RPC – Research & Professional Practices 45

10. Comparison of GO – Graduation Outcomes 45

11. Comparison of OI – Outreach & Inclusivity 46

12. Comparison of Perception 46

7
INTRODUCTION

1.1 Introduction:

The National Institutional Ranking Framework (NIRF) is a methodology adopted by the
Ministry of Education, Government of India, to rank institutions of higher education in India.
The Framework was approved by the MHRD and launched by the Minister of Human Resource
Development on 29 September 2015. Depending on their areas of operation, institutions have
been ranked under 11 different categories – overall, university, colleges, engineering,
management, pharmacy, law, medical, architecture, dental and research.

The Framework uses several parameters for ranking purposes, such as resources, research,
and stakeholder perception. These parameters have been grouped into five clusters, and these
clusters were assigned certain weightages. The weightages depend on the type of institution.
About 3500 institutions voluntarily participated in the first round of rankings. The methodology
draws from the overall recommendations and broad understanding arrived at by a Core
Committee set up by MHRD to identify the broad parameters for ranking institutions of
Higher Education.

The parameters covered are as follows:

1. Teaching, Learning and Resources: This parameter checks the core activities in the
   education institutions.

2. Research and Professional Practices: Excellence in teaching and learning is closely
   associated with the scholarship.

3. Graduation Outcomes: Tests the effectiveness of learning/core teaching.

4. Outreach and Inclusivity: Lays special emphasis on the representation of women.

5. Perception: Importance is also given to the perception of an institution.

The NIRF ranking is determined by a complex process that involves the analysis of various
performance metrics of educational institutions. These metrics include teaching, research,
graduation outcomes, outreach, and perception. The institutions are then ranked based on
their overall score, which is calculated using a weighted average of these metrics.
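To make the weighted-average calculation concrete, here is a small illustrative Python sketch. The weightages shown are assumptions for illustration only (roughly the published engineering-category weightages) and are not produced by this project's model:

# Illustrative only: an NIRF-style overall score as a weighted average of the
# five parameter scores. The weights below are assumed approximations.
weights = {"TLR": 0.30, "RPC": 0.30, "GO": 0.20, "OI": 0.10, "Perception": 0.10}

def overall_score(scores):
    # scores: parameter name -> score out of 100
    return sum(weights[p] * scores[p] for p in weights)

example = {"TLR": 70.2, "RPC": 55.4, "GO": 80.1, "OI": 60.3, "Perception": 45.0}
print(round(overall_score(example), 2))  # 64.23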

Predicting the NIRF rank of an educational institution can be a challenging task, as it involves
analysing various performance metrics and their relative importance in determining the final
rank. Machine learning algorithms can be used to build predictive models that can accurately
predict the NIRF rank of educational institutions. By predicting the NIRF rank of educational
institutions, stakeholders such as students, parents, and educational institutions can make
informed decisions about which institutions to choose or collaborate with.

1.2 Objectives of Project:

• Develop a machine learning model capable of predicting NIRF (National Institutional
  Ranking Framework) ranks for higher educational institutions in India based on various
  parameters and performance metrics.

• To provide stakeholders such as students, parents, and educational institutions with a
  reliable tool to make informed decisions regarding the choice of educational institutions.

• To help educational institutions identify areas where they need to improve to increase
  their ranking in the future.

• To provide policymakers with insights into the performance of educational institutions
  in India and help them make informed decisions about resource allocation and policy
  changes.

• To encourage healthy competition among educational institutions and incentivize them
  to improve their performance in various areas.

• Evaluate the performance of the prediction model against historical NIRF ranking data
  to assess its accuracy, reliability, and effectiveness in predicting rankings across
  different institutions and disciplines.

• Enable educational institutions to identify areas for improvement, allocate resources
  more effectively, and implement strategies to enhance their performance and
  competitiveness in the NIRF rankings.

• Increase transparency in the college ranking process by providing insights into the
  factors influencing NIRF ranks and how different institutions compare against each
  other based on these parameters.

10
DESIGN AND METHODOLOGY

The methodology for predicting the NIRF rank of the Indian institutions using
machine learning algorithms typically involves the following steps:

1. Data Acquisition: The first step is to collect data on various performance metrics
for the educational institutions, such as research output, teaching quality, graduation
outcomes, and perception. The data can be collected from various sources such as
NIRF reports, university websites, and government databases. We collected our
dataset from Kaggle.

2. Data Pre-Processing: The collected data may be incomplete or contain missing


values, outliers, or inconsistencies. Data preprocessing techniques such as data
cleaning, normalization, and feature engineering are used to address these issues
and prepare the data for model training.

3. Feature Selection: The next step is to select the most relevant features that have
the most significant impact on the NIRF rank. Feature selection techniques such as
correlation analysis, principal component analysis (PCA), and recursive feature
elimination (RFE) can be used to identify the most important features.

4. Data Splitting: The dataset is then split into training and testing sets. The training
set is used to train the model, while the testing set is used to evaluate its
performance.

5. Model Training: Random Forest Regression involves creating an ensemble of
decision trees, where each tree is trained on a subset of the data and a subset of the
features. The predictions of the trees are then aggregated (averaged for regression) to
make the final prediction. The hyperparameters of the algorithm, such as the number
of trees and the maximum depth of the trees, can be tuned to optimize the model's
performance.

11
6. Model Evaluation: The trained model is evaluated using various performance
metrics such as root mean square error (RMSE), mean absolute error (MAE), and
R-squared. The evaluation helps to determine the accuracy and reliability of the
model and identify areas for improvement.

7. Model Deployment: Once the predictive model has been trained and evaluated, it
can be deployed for NIRF rank prediction. The model can be integrated into an
existing educational analytics platform or developed as a standalone application.

8. Continuous Improvement: Predictive models require continuous improvement to


keep up with changes in performance metrics and to address any limitations and
constraints associated with NIRF rank prediction. This involves regularly updating
the model with new data and evaluating its performance to ensure accuracy and
reliability.
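As an illustration of steps 1 to 6 above, a minimal end-to-end sketch in Python is shown below. It assumes the Kaggle CSV and the column names used later in the implementation chapter (tlr, rpc, go, oi, perception and rank); it is a simplified outline, not the full implementation.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# 1-2. Data acquisition and pre-processing (file and column names assumed)
df = pd.read_csv("dataset/engineering.csv")
df = df.dropna(subset=["tlr", "rpc", "go", "oi", "perception", "rank"])

# 3. Feature selection: keep the five NIRF parameter scores
X = df[["tlr", "rpc", "go", "oi", "perception"]]
y = df["rank"]

# 4. Data splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 5. Model training
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# 6. Model evaluation with R-squared and RMSE
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
print("R-squared:", model.score(X_test, y_test), "RMSE:", rmse)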

In conclusion, developing a machine learning model to predict the NIRF rank of Indian
institutions involves several crucial steps. Initially, data must be gathered from various
sources, such as Kaggle, to ensure a comprehensive understanding of each institution's
performance metrics. Once collected, the data undergoes preprocessing to clean up any
errors or inconsistencies, making it suitable for analysis. Feature selection is then
conducted to identify the most influential factors affecting NIRF rankings, streamlining
the model's focus for better accuracy.

Furthermore, continuous improvement is essential to maintain the model's relevance


and effectiveness over time. Regular updates with new data and ongoing evaluations
ensure that the model adapts to changes in performance metrics and addresses any
limitations or constraints. By staying responsive to evolving requirements, the model
remains a valuable tool for predicting NIRF ranks with confidence.

12
2.1 Random Forest Regression:

Every decision tree has high variance, but when we combine all of them together in
parallel, the resultant variance is low, as each decision tree gets trained on its own
sample of the data, and hence the output does not depend on one decision tree but on
multiple decision trees. In the case of a classification problem, the final output is taken
by using the majority voting classifier. In the case of a regression problem, the final
output is the mean of all the outputs. This part is called Aggregation.

Figure 1: Random Forest Regression

Random forest is an ensemble technique capable of performing both regression and


classification tasks with the use of multiple decision trees and a technique called
Bootstrap and Aggregation, commonly known as bagging.

13
The basic idea behind this is to combine multiple decision trees in determining the final
output rather than relying on individual decision trees. Random Forest has multiple
decision trees as base learning models. We randomly perform row sampling and feature
sampling from the dataset forming sample datasets for every model. This part is called
Bootstrap.

Ensemble learning uses two types of techniques:

1. Bagging: It creates a different training subset from sample training data with
replacement & the final output is based on majority voting. For example, Random
Forest.

2. Boosting: It combines weak learners into strong learners by creating sequential


models such that the final model has the highest accuracy. For example, ADA
BOOST, XG BOOST.

Figure 2. Bagging & Boosting
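To make the variance-reduction idea behind bagging concrete, the following sketch (on synthetic data, so the numbers are purely illustrative) compares a single decision tree with a random forest that aggregates one hundred bagged trees:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data, used only to illustrate the effect of aggregation
X, y = make_regression(n_samples=500, n_features=5, noise=20.0, random_state=0)

tree = DecisionTreeRegressor(random_state=0)                       # single high-variance learner
forest = RandomForestRegressor(n_estimators=100, random_state=0)   # bagged ensemble of trees

# Cross-validated R-squared: the aggregated forest is typically higher and more stable
print("Single tree  :", cross_val_score(tree, X, y, cv=5).mean())
print("Random forest:", cross_val_score(forest, X, y, cv=5).mean())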

14
2.2 System Architecture:

Figure 3. System Architecture

This system architecture is designed for predictive modelling, starting with data
acquisition from Kaggle. After data pre-processing, feature ranking algorithms are used
to assess feature importance. Features are then selected based on their rank, and a
regression algorithm is applied to build a predictive model. Root Mean Square Error
(RMSE) is used to evaluate model performance, and the Random Forest algorithm can
be an alternative or complementary method for prediction. Overall, this architecture
enables the creation of accurate regression models for various predictive tasks.
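A compact sketch of this flow in Python, assuming the pre-processed DataFrame clean_df built in the implementation chapter (five parameter scores plus the rank target): features are ranked by importance, the top-ranked ones are kept, and a regression model is evaluated with RMSE.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Assumes clean_df holds the five parameter scores and the 'rank' target
X, y = clean_df.drop("rank", axis=1), clean_df["rank"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=51)

# Feature ranking via impurity-based importances
ranker = RandomForestRegressor(n_estimators=100, random_state=42).fit(X_train, y_train)
ranking = pd.Series(ranker.feature_importances_, index=X.columns).sort_values(ascending=False)
top_features = ranking.head(3).index.tolist()  # keep the highest-ranked features

# Regression on the selected features, evaluated with RMSE
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train[top_features], y_train)
rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test[top_features])))
print("Selected features:", top_features, "RMSE:", round(rmse, 2))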

15
LITERATURE SURVEY

INTRODUCTION:

This chapter provides an overview of related works in College NIRF Prediction using
Machine Learning.

3.1 Dr. Bishnu Prasad Mishra and Dr. Banashri Rath:

The article titled "ML Use For Forecasting The NIRF Ranking Of Engineering
Colleges In India And PCA To Find The Correct Weightage For The Best Result"
explores the application of Machine Learning (ML) and Principal Component
Analysis (PCA) to optimize the National Institutional Ranking Framework (NIRF)
for engineering colleges in India. It evaluates NIRF criteria, proposes weightage
adjustments, and utilizes ML for rank prediction. PCA analysis complements ML
findings, suggesting modifications for enhanced accuracy. Insights highlight
disparities in funding and parameter weightage. The study advocates for refining
NIRF weightage to improve evaluation precision, showcasing the potential of ML
and PCA in ranking assessments.

3.2 Gadi Himaja, Gadu Srinivasa Rao and Gali Akarsh Naidu:

"Recommendation System: National Institute Rank Prediction Using Machine


Learning" delves into the development of a predictive model for ranking national
universities and colleges in India. Leveraging data from the NIRF website spanning
2016 to 2021, the study utilizes various machine learning algorithms including
Ridge Regression, Decision Tree Regression, KNN Regression, Linear Regression,
Lasso Regression, and Random Forest Regression. Performance metrics such as R
Square scores, Mean Absolute Errors, Mean Square Errors, and Root Mean Square
Errors are compared to select the most accurate model. Additionally, a
recommendation model is devised, highlighting parameters for improvement based
on Z scores and fixed threshold values. Finally, a user-friendly web interface is

16
created using Flask, enabling easy access to the model's predictions for users
without programming expertise. This comprehensive approach offers valuable
insights and practical tools for ranking assessment in the education sector.

3.3 Nidhi Agarwal and Devendra K. Tayal:

The article "FFT Based Ensembled Model to Predict Ranks of Higher Educational
Institutions" introduces a new way to predict how well universities and colleges rank
internationally. It's like guessing where your favorite team might end up in a
tournament. The tool, called EnFftRP, combines different methods to make better
guesses. By using a mix of six basic models and a special math technique called Fast
Fourier Transformation (FFT), it's able to make predictions more accurately.
Researchers tested this tool on data from 2005 to 2018 and found it performed better
than other methods. This means it's really good at predicting how well universities and
colleges will rank. It's like having a super-smart coach who can tell you where your
team stands among all the others. This tool is a big deal because it helps universities
and colleges understand how they're doing on a global scale.

3.4 Nishi Doshi, Samhitha Gundam and Bhaskar Chaudhury:

"Strategizing University Rank Improvement using Interpretable Machine Learning


and Data Visualization" discusses a method to help universities and higher
educational institutions (HEIs) improve their rankings. Firstly, it uses Exploratory
Data Analysis (EDA) techniques like correlation heatmaps and box plots to
understand ranking trends. Then, it introduces a new idea: using Decision Tree
(DT) algorithms to classify ranking data and find ways to improve ranks. By
visualizing the data, universities can see the paths to better rankings. The method
also calculates the certainty of these paths using Laplace correction. This helps
universities plan long-term improvements and create action plans. Overall, this
approach offers a quantitative way for universities to assess their ranking potential
and strategize for improvement effectively.

17
3.5 Anika Tabassum, Mahamudul Hasan and Shibbir Ahmed:

"University Ranking Prediction System by Analyzing Influential Global


Performance Indicators" introduces a method to predict university rankings by
analyzing global performance indicators, using standardized data from the Times
Higher Education World University Rankings. The research begins by analyzing
country-wise university ranking data to identify the most influential features. It
then splits the ranking dataset into training and test data and predicts scores for
each feature using an outlier detection and rank score calculation algorithm based
on previous years' scores. The universities are then globally ranked based on these
predicted scores. The accuracy of the prediction system is evaluated using metrics
like ROC curve, recall, and the number of matched ranks against rank deviation.
The study concludes that their proposed prediction system effectively assesses
upcoming global university rankings, providing valuable insights for universities
and stakeholders.

3.6 Anuva Goyal, Prem Prakash Vuppuluri:

"An Analytical Approach Towards the Prediction of Undefined Parameters for the
National Institutional Ranking Framework" explores a method to predict undefined
parameters in the National Institutional Ranking Framework (NIRF) for Higher
Education Institutions (HEIs) in India. NIRF ranks HEIs based on five key
parameters, some of which have undefined functions. This research aims to identify
the best-fitting regression machine learning model to approximate these undefined
functions. By studying various regression models and analyzing real NIRF data,
the study seeks to assist stakeholders in better understanding how NIRF scores are
calculated. This understanding can lead to more effective planning and decision-
making for improving HEI rankings. Through experimentation with real NIRF
data, the research offers insights into predicting and enhancing NIRF scores,
contributing to the continuous improvement of higher education institutions in
India.

18
SOFTWARE ENVIRONMENT

4.1 Software Installation:

To run the provided project, you'll need to ensure that you have the necessary software
installed. Below are the steps to install the required software components:

1. Python: Make sure Python is installed on your system. You can download and
install Python from the official Python website: python.org. It's recommended to
install Python 3.x, as the provided code is compatible with Python 3.

2. pip: pip is the package manager for Python. It's usually installed automatically when
you install Python. You can verify that pip is installed by running pip --version in
your terminal or command prompt.

3. NumPy, pandas, scikit-learn, joblib, Pillow, Matplotlib, Flask, Plotly: You can install
these Python libraries and frameworks using pip. Open your terminal or command
prompt and run the following command:
• pip install numpy pandas scikit-learn joblib Pillow matplotlib Flask plotly

You'll need a text editor or an integrated development environment (IDE) to write and
edit your code. Some popular choices include Visual Studio Code, PyCharm, Sublime
Text, Atom, etc. Once you have installed the required software and libraries, you can
proceed to run the provided Flask application.
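Equivalently, the dependencies can be listed in a requirements.txt file and installed in one step with pip install -r requirements.txt. The version numbers below are illustrative assumptions, not the exact versions used in this project:

# requirements.txt (versions are illustrative)
numpy>=1.24
pandas>=2.0
scikit-learn>=1.3
joblib>=1.3
Pillow>=10.0
matplotlib>=3.7
Flask>=2.3
plotly>=5.18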

Make sure you have the necessary dataset file (engineering.csv) and the trained model
file (college_rank_predictor.pkl) in your project directory. To run the Flask application,
navigate to your project directory in the terminal or command prompt and run the
following command: python app.py

19
This command will start the Flask development server, and you should see output
indicating that the server is running. You can then open a web browser and go to
http://127.0.0.1:5000/ to access your Flask application.

That's it! You have successfully installed the required software and run the provided
project.

4.2 Modules in the Project:

NumPy:

NumPy is a general-purpose array-processing package. It provides a high-performance


multidimensional array object, and tools for working with these arrays. It is the
fundamental package for scientific computing with Python. It contains various features
like Efficient Array Operations, Integration with pandas, Mathematical Functions and
many more.

Pandas:

Pandas is an open-source Python library providing high-performance data manipulation
and analysis tools built on its powerful data structures. It offers two main data
structures: Series (one-dimensional, like a list) and DataFrames (two-dimensional, like
a spreadsheet). You can clean, sort, filter, and perform calculations on your data with
ease. It allows you to read data from various sources like CSV, Excel, and databases,
and export the results in different formats.
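For example, the kind of loading and cleaning done later in this project can be sketched as follows (file and column names are those used in the implementation chapter):

import pandas as pd

df = pd.read_csv("dataset/engineering.csv")                            # read data from CSV
print(df.isnull().sum())                                               # inspect missing values
clean_df = df.drop(["institute_id", "name", "city", "state"], axis=1)  # drop identifier columns
top10 = clean_df.sort_values("rank").head(10)                          # sort and filter
top10.to_csv("top10.csv", index=False)                                 # export the result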

Matplotlib:

Matplotlib is a Python library for creating static, animated, and interactive


visualizations. Bar charts, line plots, scatter plots, histograms, and more are all possible
with Matplotlib. It can be integrated with libraries like Plotly to create interactive charts
that users can explore. You can fine-tune the look and feel of your visualizations with
extensive customization options for colors, styles, and labels.
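A small sketch of the kind of grouped bar chart used in this project's comparison plots (the score values are illustrative):

import matplotlib.pyplot as plt

features = ["tlr", "rpc", "go", "oi", "perception"]
top_college = [90.0, 85.0, 95.0, 80.0, 92.0]    # illustrative scores
your_college = [62.5, 40.1, 71.3, 55.0, 30.2]

# Negative width with align='edge' places the second series to the left of each tick
plt.bar(features, top_college, width=0.4, align="edge", label="Top College")
plt.bar(features, your_college, width=-0.4, align="edge", label="Your College")
plt.ylabel("Score")
plt.title("Comparison of feature values")
plt.legend()
plt.show()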

20
Scikit-learn:

Scikit-learn provides a wide range of algorithms for tasks like classification (predicting
categories), regression (predicting continuous values), and clustering (grouping similar
data points). Using it, you can train machine learning models on your data and assess
their performance using metrics like accuracy and precision. It is designed with a
user-friendly interface, allowing you to experiment with different algorithms and
fine-tune your models efficiently.

Flask:

Flask, in Python, is a web framework designed for building web applications. It is


known for its ease of use and allows developers to customize the application structure
freely. It maps URLs to specific functions that handle user requests and generate
responses. Flask integrates with templating engines like Jinja2 to easily define the
structure and layout of web pages.
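A minimal sketch of a Flask route (a hypothetical, much simpler endpoint than the application in section 4.3.4):

from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        tlr = float(request.form["tlr"])   # read a submitted form field
        return f"Received TLR score: {tlr}"
    return "Send a POST request with a 'tlr' field."

if __name__ == "__main__":
    app.run(debug=True)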

Joblib:

Joblib is a Python library designed for streamlining tasks like parallelization and
caching. It efficiently stores and reuses expensive function calls to avoid redundant
computations. You can save and load Python objects (like models or data) for later use,
enabling project restarts and sharing. It saves time by reusing previous calculations.
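For instance, persisting and reloading a trained model, as done in this project, looks like the following sketch:

import joblib
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=100)
# ... model.fit(X_train, y_train) would be called here ...
joblib.dump(model, "college_rank_predictor.pkl")       # save the model to disk
restored = joblib.load("college_rank_predictor.pkl")   # load it back later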

Pillow:

Pillow provides functions to open, edit, resize, and save images. You can crop, rotate,
and apply various filters. It allows you to draw basic shapes, text, and even create new
images from scratch. Due to its rich functionalities, Pillow is a popular choice for
various tasks involving image processing in Python applications, from simple editing
to complex computer vision projects.

21
Plotly:

Plotly is a Python library used to create interactive visualizations. It offers a wide


variety of chart types, including bar charts, line charts, scatter plots, and even 3D
visualizations. Plotly visualizations can be easily shared online or embedded in web
applications, making them perfect for presentations and reports.
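A short sketch of a grouped Plotly bar chart, similar to those generated in the back-end of this project (values are illustrative):

import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Bar(x=["tlr", "rpc", "go"], y=[90, 85, 95], name="Top College"))
fig.add_trace(go.Bar(x=["tlr", "rpc", "go"], y=[62, 40, 71], name="Your College"))
fig.update_layout(barmode="group", title="Comparison of feature values")
fig.write_html("comparison.html")   # save as a standalone interactive HTML file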

4.3 Method of Implementation:

4.3.1 ML model Implementation:

1. Loading the NIRF dataset into a pandas DataFrame:


• Pandas is used to read the dataset from a CSV file into a DataFrame. This can
be done using the read_csv() function.

2. Pre-processing the data:


• Pandas is used for data cleaning, transformation, and normalization.
Operations such as handling missing values, encoding categorical variables,
and scaling numeric features can be performed using pandas methods.
• NumPy is often used alongside pandas for numerical operations and array
manipulation.

3. Splitting the dataset into training and testing sets:
• Scikit-learn's train_test_split() function is used for this purpose.

4. Defining a Random Forest Regression model:


• Scikit-learn's Random Forest Regressor class is used to define the model.

5. Training the model:


• The fit() method is used to train the model on the training data.

6. Evaluating the model's performance:
• Scikit-learn provides various metrics functions for evaluation, such as mean
squared error (MSE), root mean squared error (RMSE), mean absolute error
(MAE), and R-squared.
• These metrics can be calculated using functions from the sklearn.metrics
module.

7. Tuning the hyperparameters:


• Grid search or randomized search techniques from scikit-learn's
GridSearchCV or RandomizedSearchCV can be used for hyperparameter
tuning.
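A sketch of such a search, assuming the X_train and y_train split from the earlier steps; the parameter grid values are illustrative, not the ones tuned in this project:

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 200, 300],
    "max_depth": [None, 10, 20],
    "min_samples_split": [2, 5],
}
search = GridSearchCV(
    RandomForestRegressor(random_state=42),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",  # sklearn negates RMSE so higher is better
)
search.fit(X_train, y_train)
print("Best parameters:", search.best_params_)
print("Best cross-validated RMSE:", -search.best_score_)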

8. Re-training the model with optimized hyperparameters:
• The model can be re-trained with the best hyperparameters obtained from the
tuning process.

9. Saving the trained model:


• The joblib library is used to save the trained model to a file.

10. Using the saved model for predictions:


• The saved model can be loaded and used to make predictions on new data.

These are the steps involved in building a Random Forest Regression model for NIRF
rank prediction using the mentioned libraries in Python. Each library plays a specific
role in different stages of the process, contributing to the overall workflow of model
building and evaluation.

25
4.3.2 ML model Testing Implementation:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
df=pd.read_csv("./dataset/engineering.csv")
df.sample(6)
df.shape
df.info()
df.isnull().sum()
df.describe()
df.duplicated().sum()
clean_df = df.drop(["institute_id", "name", "city", "state"], axis=1)
clean_df.sample(6)
X = clean_df.drop('rank', axis=1)
y = clean_df['rank']
print('Shape of X = ', X.shape)
print('Shape of y = ', y.shape)
from sklearn.model_selection import train_test_split
X_train,X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=51)
print('Shape of X_train = ', X_train.shape)
print('Shape of y_train = ', y_train.shape)
print('Shape of X_test = ', X_test.shape)
print('Shape of y_test = ', y_test.shape)
from sklearn.ensemble import RandomForestRegressor
regressorRFR =RandomForestRegressor(n_estimators=100, criterion='squared_error')
regressorRFR.fit(X_train, y_train)
regressorRFR.score(X_test, y_test)
y_pred2=regressorRFR.predict(X_test)
from sklearn.metrics import mean_squared_error,mean_absolute_error,r2_score
mse = mean_squared_error(y_test, y_pred2)
rmse = np.sqrt(mse)
print('MSE = ', mse)

print('RMSE = ', rmse)
from sklearn.model_selection import cross_val_score
cross_val_score(regressorRFR, X_train, y_train, cv=5, ).mean()
int(regressorRFR.predict([X_test.iloc[18, :]])[0].round())
y_test.iloc[18]
import joblib
joblib.dump(regressorRFR, "college_rank_predictor.pkl")
model = joblib.load("college_rank_predictor.pkl")
model.predict([X_test.iloc[18, :]])[0]
feature_importances = model.feature_importances_
feature_names = X.columns
feature_importance = dict(zip(feature_names, feature_importances))
sorted_feature_importance = sorted(feature_importance.items(), key=lambda x: x[1],
reverse=True)
for feature, importance in sorted_feature_importance:
    print(f'{feature}: {importance}')
df_2016=pd.read_csv("./db/2016/EngineeringRanking_2016.csv")
df_2017=pd.read_csv("./db/2017/EngineeringRanking_2017.csv")
df_2018=pd.read_csv("./db/2018/EngineeringRanking_2018.csv")
df_2019=pd.read_csv("./db/2019/EngineeringRanking_2019.csv")
df_2020=pd.read_csv("./db/2020/EngineeringRanking_2020.csv")
df_2021=pd.read_csv("./db/2021/EngineeringRanking_2021.csv")
df_2016['year'] = 1
df_2017['year'] = 2
df_2018['year'] = 3
df_2019['year'] = 4
df_2020['year'] = 5
df_2021['year'] = 6
df_combined=pd.concat([df_2016,df_2017,df_2018,df_2019,df_2020,df_2021],
ignore_index=True)
excel_file_path_combined = 'combined_data.xlsx'
csv_file_path_combined = 'combined_data.csv'

df_combined.to_csv(csv_file_path_combined, index=False)
print(f"Combined DataFrame has been saved to {csv_file_path_combined}")
print(df_combined.columns)
clean_df = df_combined.drop(["InstituteId", "InstituteName", "City", "State", "Score"],
axis=1)
clean_df.sample(6)
X = clean_df.drop('Rank', axis=1)
y = clean_df['Rank']
print('Shape of X = ', X.shape)
print('Shape of y = ', y.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=51)
print('Shape of X_train = ', X_train.shape)
print('Shape of y_train = ', y_train.shape)
print('Shape of X_test = ', X_test.shape)
print('Shape of y_test = ', y_test.shape)
model_combined = RandomForestRegressor(n_estimators=100, random_state=42)
model_combined.fit(X_train, y_train)
print(X_train)
predictions_combined = model_combined.predict(X_test)
from sklearn.metrics import mean_squared_error,mean_absolute_error,r2_score
mae_combined = mean_absolute_error(y_test, predictions_combined)
mse_combined = mean_squared_error(y_test, predictions_combined)
r2_combined = r2_score(y_test, predictions_combined)
print(f'Combined Data Mean Absolute Error: {mae_combined}')
print(f'Combined Data Mean Squared Error: {mse_combined}')
print(f'Combined Data R-squared: {r2_combined}')
import joblib
joblib.dump(model_combined, "college_rank_predictor1.pkl")
model = joblib.load("college_rank_predictor1.pkl")
feature_importances_combined = model_combined.feature_importances_
feature_names_combined = X.columns

feature_importance_combined = dict(zip(feature_names_combined,
feature_importances_combined))
sorted_feature_importance_combined = sorted(feature_importance_combined.items(),
key=lambda x: x[1], reverse=True)
for feature, importance in sorted_feature_importance_combined:
    print(f'{feature}: {importance}')
import pandas as pd
import matplotlib.pyplot as plt
top_college_features = {'tlr': 90, 'rpc': 85, 'go': 95, 'oi': 80, 'perception': 92}
def get_user_features():
    user_features = {}
    print("Please enter the features of your college:")
    for feature in top_college_features.keys():
        value = float(input(f"Enter value for {feature}: "))
        user_features[feature] = value
    return user_features
def predict_rank(features):
    predicted_rank = 5  # Example prediction
    return predicted_rank
user_features = get_user_features()
predicted_rank = predict_rank(user_features)
differences = {feature: user_features[feature] - top_college_features[feature]
               for feature in top_college_features}
df = pd.DataFrame({'Top College': top_college_features, 'Your College': user_features,
'Differences': differences})
fig, ax = plt.subplots(figsize=(12, 6))
df[['Top College', 'Your College']].plot(kind='bar', ax=ax, color=['blue', 'orange'],
width=0.4)
df['Differences'].plot(kind='bar', ax=ax, color='red', alpha=0.5, width=0.2)
ax.set_ylabel('Feature Values / Differences')
ax.set_title('Comparison of Feature Values with Top-Ranked College')
ax.annotate(f'Predicted Rank: {predicted_rank}', xy=(0.5, 0), xytext=(0, -40),
xycoords='axes fraction', textcoords='offset points', ha='center', va='top',

fontsize=12, color='red', bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5))
plt.xticks(rotation=45) # Rotate x-axis labels for better readability
plt.grid(True)
plt.legend(['Top College', 'Your College', 'Differences'])
plt.show()

1. Data Splitting: The dataset is split into training and testing sets using the
train_test_split function from sklearn.model_selection. This step ensures that the
model's performance can be evaluated on unseen data.

2. Model Evaluation Metrics: Several evaluation metrics are calculated to assess the
performance of the trained model on the test set. These metrics include Mean
Squared Error (MSE), Root Mean Squared Error (RMSE), and R-squared. These
metrics provide insights into how well the model generalizes to new, unseen data.

3. Cross-Validation: Cross-validation is performed using cross_val_score from


sklearn.model_selection. It helps estimate the model's performance on various
splits of the training data, providing a more robust evaluation.

4. Serialization and Deserialization: The trained model is serialized using


joblib.dump to save it to a file after training. Later, the model is deserialized using
joblib.load to reload it for further analysis or deployment.

5. Feature Importance Analysis: Feature importances are calculated using the


trained model to identify which features contribute the most to predicting college
ranks. This analysis helps understand the model's decision-making process and
identify significant predictors.

6. User Input Prediction: Although not directly related to testing the model's
performance, the function predict_rank allows users to input features of a college
and obtain a predicted rank using the trained model. This functionality can be
considered as a form of testing the model's deployment and usability.

30
4.3.3 FRONT-END IMPLEMENTATION:

<!DOCTYPE html>
<html>
<head>
<title>College NIRF Rank Predictor</title>
<link href="https://cdn.jsdelivr.net/npm/[email protected]
alpha1/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384GLhlTQ8iRABdZLl6O3oVMWSktQOp6b7In1Zl3/Jr59b6EGGoI1
aFkw7cmDA6j6gD" crossorigin="anonymous">
<style>
.plot-container {
display: flex;
flex-wrap: wrap;
justify-content: space-around;
margin-bottom: 20px;
}
.plot {
flex: 0 1 30%;
margin-bottom: 20px;
}
</style>
</head>
<body>
<div>
<img src="static/images/NIRF.png" class="w3-border w3-padding"
alt="BANNER" style="width:100%">

</div>
<center>
<h1>College NIRF Rank Predictor</h1>
<br>
{% if message %}

<div class="mb-3" style="width: 300px; border: 5px solid red; padding: 10px;
margin: 0;">
{{message}}
</div>
{% endif %}
<div class="mb-3" style="width: 300px; border: 5px solid gray; padding: 10px;
margin: 0;">
<form method="POST">
<label for="exampleFormControlInput1" class="form-label ">Teaching,
Learning and Resources (TLR) Score : </label>
<input type="number" step="0.01" name="tlr" placeholder="Score Range(1-
100)"><br>
<label for="exampleFormControlInput1" class="form-label">Research and
Professional Practice (RPC) Score : </label>
<input type="number" step="0.01" name="rpc" placeholder="Score Range(1-
100)" ><br>
<label for="exampleFormControlInput1" class="form-label">Graduation
Outcome (GO) Score :</label>
<input type="number" step="0.01" name="go" placeholder="Score Range(1-
100)"><br>
<label for="exampleFormControlInput1" class="form-label">Outreach and
Inclusivity (OI) Score :</label>
<input type="number" step="0.01" name="oi" placeholder="Score Range(1-
100)"><br>
<label for="exampleFormControlInput1" class="form-label">Perception Score
:</label>
<input type="number" step="0.01" name="perception" placeholder="Score
Range(1-100)"><br><br>
<label for="exampleFormControlInput1" class="form-label">Enter the rank to
compare with :</label>
<input type="number" step="0.01" name="rta" placeholder=""><br><br>
<input type="submit" value="Predict" class="btn btn-info"><br>
</form>

</div>
<div style="width: 500px; border: 5px solid rgb(5, 145, 143); padding: 10px;
margin: 0;">
{% if prediction is not none %}

<p >
<h3>The Predicted NIRF college rank is: <u><b>{{ prediction
}}</u></b></h3></p>
{% endif %}
</div>
{{ chart_html | safe }}<br>
{% if plot_filenames %}
<div class="plot-container">
{% for plot_filename in plot_filenames[:3] %}
<div class="plot">
<iframe src="{{ url_for('static', filename=plot_filename) }}" width="100%"
height="400px"></iframe>
</div>
{% endfor %}
</div>
<div class="plot-container">
{% for plot_filename in plot_filenames[3:] %}
<div class="plot">
<iframe src="{{ url_for('static', filename=plot_filename) }}" width="100%"
height="400px"></iframe>
</div>
{% endfor %}
</div>
<div>
{% endif %}
<br><br>
<p>
<h4><u>NOTE:</u></h4><br>

NIRF (National Institutional Ranking Framework) is an initiative of the Indian
government to rank higher educational institutions in India based on various
parameters such as teaching, learning, research, outreach, and perception. This
Machine Learning model is trained on the 2020 NIRF Ranking dataset.</p>
</div>
</center>
</body>
</html>

1. Title and Styling: The HTML document starts with a title indicating the purpose
of the page, "College NIRF Rank Predictor." It imports the Bootstrap CSS
framework to style the page elements.

2. Banner and Header: The page includes an image of the NIRF logo as a banner.
Below the banner, a centered header displays the title "College NIRF Rank
Predictor."

3. Form for User Input: Users can input scores for five parameters - Teaching,
Learning and Resources (TLR) Score, Research and Professional Practice (RPC)
Score, Graduation Outcome (GO) Score, Outreach and Inclusivity (OI) Score, and
Perception Score. Additionally, users can enter a rank to compare with.

4. Prediction Output: The predicted NIRF college rank is displayed in a highlighted


box below the input form if a prediction is made.

5. Charts and Plots: The page renders charts and plots related to the prediction
results. It divides plots into separate containers for better organization and
presentation.

6. Note Section: A note section provides information about NIRF and the machine
learning model, including its training data source.

34
4.3.4 BACK-END IMPLEMENTATION:

from flask import Flask, render_template, request
import joblib
import plotly.graph_objects as got
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

model = joblib.load('college_rank_predictor.pkl')
app = Flask(__name__)
# Sample data (replace this with your actual data)
# Features of the top-ranked college
df=pd.read_csv("./dataset/engineering.csv")
@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        # Get user-entered features
        model = joblib.load('college_rank_predictor.pkl')
        user_features = {}
        rank_to_access = int(request.form.get("rta"))
        row = df.loc[df['rank'] == rank_to_access]
        if row.empty:
            return render_template('index1.html', message="Entered rank is invalid")
        top_college_features = {}
        for column in row.columns:
            # Exclude the 'Rank' column
            if column in ['tlr', 'rpc', 'go', 'oi', 'perception']:
                top_college_features[column] = row.iloc[0][column]
        print(top_college_features)
        for feature in top_college_features.keys():
            user_features[feature] = float(request.form[feature])
        # Calculate differences in features
        differences = {feature: user_features[feature] - top_college_features[feature]
                       for feature in top_college_features}
        difference_count = sum(1 for diff in differences.values() if diff != 0)
        # Predict rank
        tlr = float(request.form.get("tlr"))
        rpc = float(request.form.get("rpc"))
        go = float(request.form.get("go"))
        oi = float(request.form.get("oi"))
        perception = float(request.form.get("perception"))
        prediction = model.predict([[tlr, rpc, go, oi, perception]])
        prediction = prediction - 1
        plot_filenames = []
        for feature, value in user_features.items():
            fig = got.Figure()
            # Add features of top-ranked college
            fig.add_trace(got.Bar(x=[feature], y=[top_college_features[feature]],
                                  name='Top College', marker_color='blue'))
            # Add features of user-entered college
            fig.add_trace(got.Bar(x=[feature], y=[value], name='Your College',
                                  marker_color='orange'))
            # Add differences
            bar_color = 'green' if differences[feature] > 0 else 'red'
            fig.add_trace(got.Bar(x=[feature], y=[differences[feature]],
                                  name='Differences', marker_color=bar_color))
            # Update layout
            fig.update_layout(title=f'Comparison of {feature} with Top-Ranked College',
                              xaxis_title='Features',
                              yaxis_title='Values / Differences',
                              barmode='group')
            # Save plot as HTML file
            plot_filename = f"plot_{feature}.html"
            plot_filenames.append(plot_filename)
            fig.write_html(f"static/{plot_filename}")
        # Create Plotly chart
        fig = got.Figure()
        # Add features of top-ranked college
        fig.add_trace(got.Bar(x=list(top_college_features.keys()),
                              y=list(top_college_features.values()),
                              name='Top College',
                              marker_color='blue'))
        # Add features of user-entered college
        fig.add_trace(got.Bar(x=list(user_features.keys()),
                              y=list(user_features.values()),
                              name='Your College', marker_color='orange'))
        bar_colors = ['green' if diff > 0 else 'red' for diff in differences.values()]
        fig.add_trace(got.Bar(x=list(differences.keys()), y=list(differences.values()),
                              name='Differences', marker_color=bar_colors))
        # Update layout
        fig.update_layout(title='Comparison of Feature Values with Top-Ranked College',
                          xaxis_title='Features', yaxis_title='Values / Differences',
                          barmode='group')
        # Convert Plotly chart to HTML
        chart_html = fig.to_html(full_html=False, include_plotlyjs='cdn')
        # print(prediction)
        return render_template('index1.html', chart_html=chart_html,
                               plot_filenames=plot_filenames,
                               prediction=int(prediction[0].round()))
    return render_template('index1.html')

if __name__ == '__main__':
    app.run(debug=True)

37
Flask is a lightweight web application framework in Python that can be used for
deploying machine learning models for NIRF rank prediction. Here are the steps
involved in deploying a Random Forest Regression model using Flask:

1. Develop the Random Forest Regression model using Python libraries such as
scikit-learn and pandas.
2. Save the trained model as a file using Python's joblib library.
3. Create a new Flask application and import the necessary libraries and the trained
model file.
4. Define a route in Flask that will handle incoming requests to predict the NIRF
rank.
5. An instance of the Flask application is created with app = Flask(__name__).
6. Routes are defined using @app.route('/'). The / route corresponds to the root URL
of the application.
7. The index() function is the view function for the root URL. It handles both GET
and POST requests.
8. The render_template() function is used to render HTML templates, passing data
to the templates.
9. request.form is used to access form data submitted by the user.
10. In the route function, pre-process the incoming data and pass it through the trained
model to make a prediction.
11. Return the predicted NIRF rank as a response to the client.
12. Test the Flask application locally to ensure that it is working correctly.
13. The app.run(debug=True) statement starts the Flask development server.

38
Plotly: Plotly is a graphing library that allows you to create interactive plots and charts.
In this code:

• Plotly is imported with import plotly.graph_objects as got (aliased as got so it does not clash with the go variable that holds the Graduation Outcome score).


• Plotly figures are created using the go.Figure() constructor.
• Bar charts are created for each feature comparison between the user's college
and the top-ranked college.
• Differences in feature values are visualized using green bars for positive
differences and red bars for negative differences.
• Bar charts are created using go.Bar() traces added to the figure.
• The write_html() method is used to save the Plotly figure as an HTML file.
• The update_layout() method is used to customize the layout of the chart,
including titles and axis labels.
• The HTML code for the Plotly chart is passed to the render_template() function
to be displayed in the web page.
• The to_html() method is used to convert the Plotly figure to HTML for
embedding in the Flask application.

Overall, Flask and Plotly are used together in this code to create a web application that
allows users to compare their college's features with those of the top-ranked college
and visualize the differences in an interactive manner.

39
SYSTEM INTERFACE AND RESULTS

5.1 METHODOLOGY

The main modules in our model include:

1. Receive User Input

2. Handle User Input

3. Model Prediction

4. Display Results

5. Store Plots

6. HTML template rendering

7. CSS Styling

8. Dependencies

5.2 Module Description

1. Receive User Input:

• HTML Form: Allows users to input data such as Teaching, Learning and
Resources (TLR) Score, Research and Professional Practice (RPC) Score,
Graduation Outcome (GO) Score, Outreach and Inclusivity (OI) Score,
Perception Score, and the rank to compare with.

2. Handle User Input:

• Flask Route (‘/’): Handles GET and POST requests.

• POST Method: Handles form submission and user input processing.

40
3. Model Prediction:

• Machine Learning Model: Trained model loaded using joblib for predicting
NIRF college rank based on user input.
• Prediction Logic: Predicts the NIRF college rank using the trained model
and user-provided feature scores.
4. Display Results:

• Conditional Block: Displays the predicted NIRF college rank if available.


• Plotly Charts: Visualize comparisons between user-entered features and
features of the top-ranked college using interactive charts.
5. Store Plots:
• Static Directory: Static directory stores the generated plot files.
• Plot Saving: Plotly charts are saved as HTML files in the static directory.
6. HTML Template Rendering:
• Flask's render_template function: Renders the HTML template
(‘index1.html’) with dynamic content such as predicted rank and Plotly
charts.
7. CSS Styling:
• Custom CSS: Styling for certain components such as plot containers and
plots.
8. Dependencies:
• Flask: Web framework for Python used for handling HTTP requests and
responses.
• Plotly: Library for creating interactive charts.
• Pandas, Matplotlib, NumPy: Libraries for data manipulation and
visualization.
• Joblib: Library for loading machine learning models.

These modules collectively allow users to input their data, predict the NIRF college
rank, visualize comparisons, and display the results on the webpage.

41
5.3 OUTPUT SCREENSHOTS

To execute the project, open a command prompt, navigate to the project folder location,
and run the following command:
>python app.py
Now navigate to http://127.0.0.1:5000/, which is the Flask development server; the
following interface appears in the web browser:

Figure 4. Pre-Input Form Picture

42
Now enter the scores of the institution to predict its NIRF ranking, and also enter the
NIRF rank of the institution to compare with:

Figure 5. Post-Input Form Picture

Click on the predict button to reveal the results:

Figure 6. Outcome of form submission

43
Figure 7. Comparison of feature values with a specified rank for analysis

Figure 8. Comparison of TLR - Teaching, Learning and Resources

44
Figure 9. Comparison of RPC - Research and Professional Practices

Figure 10. Comparison of GO - Graduation Outcomes


45
Figure 11. Comparison of OI - Outreach and Inclusivity

Figure 12. Comparison of Perception

46
CONCLUSION

In conclusion, the NIRF rank prediction project aims to leverage machine learning
techniques to predict the National Institutional Ranking Framework (NIRF) rank of
Indian higher education institutions. The project's purpose is to provide insights into
the factors that contribute to an institution's NIRF rank, identify areas for improvement,
and help policymakers allocate resources to enhance the overall quality of higher
education in India. By building a Random Forest Regression model using scikit-learn,
the project demonstrates the potential of machine learning to predict NIRF rankings
with a high degree of accuracy.

The model has been trained and evaluated using a large dataset of Indian educational
institutions, and its performance has been measured using evaluation metrics such as
root mean squared error (RMSE). The future scope of the project is vast and
encompasses several potential avenues for further development, such as incorporating
more data sources, enriching data with text analysis, incorporating temporal trends,
exploring alternative machine learning models, and building a user-friendly interface.
Overall, the NIRF rank prediction project is a valuable contribution to the improvement
of the Indian higher education system, and its predictive model provides actionable
insights for institutions and policymakers.

6.1 FUTURE SCOPE

The future scope of the NIRF rank prediction project is vast and encompasses several
potential avenues for further development and improvement. Here are some possible
directions for future work:

1. Integration of Additional Features: Currently, the model may be using a limited


set of features for predicting NIRF ranks. Expanding the feature set to include more
parameters such as faculty quality, student-teacher ratio, infrastructure, and
industry collaborations can improve prediction accuracy.

47
2. Incorporating Time-Series Analysis: NIRF rankings evolve over time, reflecting
changes and improvements in educational institutions. Implementing time-series
analysis techniques can enable the model to capture temporal trends and predict
future rankings based on historical data.
3. Dynamic Updating of Model: Implementing a system for dynamically updating
the prediction model with the latest NIRF ranking data can ensure that the
predictions remain accurate and reflective of current trends in educational quality.

4. Collaboration with Educational Institutions and Policy Makers: Collaborating


with educational institutions and policy makers to gather more comprehensive data,
understand the factors influencing educational quality, and tailor the prediction
model to meet the needs of stakeholders can lead to more impactful applications of
NIRF rank prediction in decision-making processes.

5. User Feedback and Improvement Loop: Incorporating a feedback mechanism


where users provide feedback on the predicted rankings can help refine the model
further. Analyzing user feedback and incorporating it into model training can lead
to continuous improvement and better predictions over time.

6. Enriching Data with Text Analysis: The model could potentially leverage natural
language processing techniques to extract insights from unstructured data sources
such as institutional websites, research papers, and news articles. This could
provide a more comprehensive picture of an institution's strengths and weaknesses.

48
REFERENCES

[1] National Institutional Ranking Framework (NIRF) official website:


https://www.nirfindia.org/

[2] Bhatia, A., & Singh, S. P. (2021). Predicting NIRF Ranking using Machine
Learning. In Proceedings of the 3rd International Conference on Computing
Methodologies and Communication (pp. 547-553). Springer.

[3] Jha, P. C., & Aggarwal, M. (2019). Predicting NIRF Ranking of Indian Universities
and Institutes using Machine Learning Techniques. Journal of Data Science, 17(4),
611-626.

[4] Scikit-learn documentation: https://scikit-learn.org/stable/documentation.html

[5] Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical
learning: data mining, inference, and prediction. Springer.

[6] Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32.

[7] Friedman, J. H. (2001). Greedy function approximation: a gradient boosting


machine. Annals of Statistics, 29(5), 1189–1232.

[8] Chollet, F. (2018). Deep learning with Python. Manning Publications.

[9] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT press.

[10] Kingma, D. P., & Ba, J. (2014). Adam: a method for stochastic optimization.
arXiv preprint arXiv:1412.6980.

[11] Nigam, A., & Singh, S. (2020). Predicting NIRF Ranking of Indian Engineering
Institutions using Machine Learning Techniques. International Journal of Engineering
Research and Technology, 13(2), 96-102.

49
[12] Kumar, A., & Kumar, M. (2021). NIRF Ranking Prediction using Ensemble
Machine Learning Techniques. In 2021 4th International Conference on Computing,
Communication and Networking Technologies (ICCCNT) (pp. 1-6). IEEE.

[13] Jain, A., & Sood, S. K. (2020). NIRF Ranking Prediction of Indian Universities
using Machine Learning Algorithms. International Journal of Computer Applications,
180(7), 1-5.

[14] Agrawal, A., & Singh, S. P. (2020). Predicting NIRF Ranking of Indian
Universities and Institutes using Supervised Learning Techniques. In 2020 3rd
International Conference on Computing, Communication and Security (ICCCS) (pp.
1-6). IEEE.

[15] Géron, A. (2019). Hands-on machine learning with Scikit-Learn, Keras, and
TensorFlow: Concepts, tools, and techniques to build intelligent systems. O'Reilly
Media, Inc.

50
