
B.TECH.

DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE


BASIC MACHINE LEARNING TECHNIQUES (U23ADT306)
END SEMESTER EXAMINATIONS - ANSWER KEY (Total Marks: 75)

PART A (10 X 2 = 20 Marks)


Answer all the Questions
Q.No. 1: What is a decision tree, and how is it used for classification? (Definition - 2 Marks)
A decision tree is a flowchart-like structure where each internal node represents a decision based on a feature, each branch represents an outcome, and each leaf node represents a class label. It is used for classification by splitting data into subsets based on the most significant attributes, making predictions based on the learned rules.
Q.No. 2: Explain the concept of inductive bias in the context of decision trees. (Explanation - 2 Marks)
Inductive bias refers to the assumptions a learning algorithm makes to generalize from limited training data. In decision trees, this bias includes preferring shorter trees (Occam's Razor) and using the features that provide the most information gain first.
Q.No. 3: What is rule induction in machine learning? (Explanation - 1 Mark, Use - 1 Mark)
Rule induction is the process of extracting useful IF-THEN rules from data to make predictions. It is often used with decision trees, where classification rules are derived from patterns in the training dataset.
Q.No. 4: Explain the concept of association rule mining and provide an example of a typical association rule. (Explanation with example - 2 Marks)
Association rule mining finds relationships between variables in large datasets. A common example is market basket analysis, where an association rule like {Bread} → {Butter} means that customers who buy bread often buy butter.
Q.No. 5: What is the concept of bagging, and how does it help in improving the model's performance? (Explanation - 2 Marks)
Bagging (Bootstrap Aggregating) is an ensemble learning technique that trains multiple models on random subsets of data and averages their predictions to reduce variance and improve accuracy. It helps by making the model more stable and less prone to overfitting.

Q.No. 6: Define boosting. (Definition - 2 Marks)
Boosting is an ensemble learning technique that combines multiple weak classifiers sequentially, where each new classifier focuses on correcting the errors of the previous ones. It helps improve accuracy by reducing bias and variance.
Q.No. 7: Define ANN with its structure. (Explanation - 2 Marks)
An ANN is composed of a large number of interconnected processing elements (nodes/neurons) arranged in three distinctive layers, namely the input, hidden and output layers. These layers and their neurons are the basic elements of the network architecture.

Q.No. 8: Explain the role of the activation function in a neural network. (Definition - 2 Marks)
The activation function introduces non-linearity into the network, enabling it to learn complex patterns. Common activation functions include ReLU, Sigmoid, and Tanh, which help in decision-making by transforming the weighted inputs.
Q.No. 9: What is a Multi-Layer Perceptron (MLP) network, and how is it different from a single-layer perceptron? (Explanation - 2 Marks)
 A Multi-Layer Perceptron (MLP) is a type of artificial neural network that consists of an input layer, one or more hidden layers, and an output layer. It can learn complex patterns and solve non-linear problems.
 An MLP differs from a Single-Layer Perceptron (SLP) because an SLP has only one layer of weights (input to output) and can only solve linearly separable problems, while an MLP, with multiple hidden layers, can handle non-linearly separable data.
Q.No. 10: What is backpropagation? (Definition - 2 Marks)
Backpropagation is a training algorithm for neural networks that adjusts weights by propagating errors backward from the output layer to the input layer, minimizing the loss using techniques like gradient descent.
PART B (5 X 5 = 25 Marks)
Q.No. 11: Explain the basic decision tree algorithm. How does it work, and what is the role of entropy in building a decision tree? (Explanation - 5 Marks)

A Decision Tree Algorithm is a supervised learning method used for classification and regression tasks. It models decisions and their possible consequences in a tree-like structure, where each internal node represents a decision based on a feature, branches denote the outcomes of these decisions, and leaf nodes indicate the final prediction or class label.

Working of the Decision Tree Algorithm:

1. Selecting the Best Feature for Splitting:

o The algorithm evaluates all features to determine which one best separates the data into homogeneous subsets. This evaluation is often based on criteria like Information Gain or Gini Impurity.

2. Splitting the Data:

o The dataset is divided into subsets based on the selected feature's possible values. Each subset becomes a child node of the current node.

3. Recursive Partitioning:

o The splitting process is recursively applied to each child node, considering only the data within that node, until a stopping condition is met (e.g., all instances in a node belong to the same class, or a maximum tree depth is reached).

4. Assigning Class Labels:

o Once the tree is fully grown, each leaf node is assigned a class label, which is used for making predictions on new data.
Role of Entropy in Building a Decision Tree:

Entropy is a measure of impurity or disorder within a set of data. In the context of decision trees, it quantifies the uncertainty or randomness in the dataset. The formula for entropy is:

E(S) = − Σ_i p_i · log2(p_i)

where p_i is the proportion of instances belonging to class i in the dataset S.

In decision trees, entropy is used to calculate Information Gain, which helps in selecting the feature that best splits the data. Information Gain (IG) is defined as the reduction in entropy after a dataset is split on a particular feature:

IG(S, A) = E(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · E(S_v)

where:

 S is the original dataset.

 A is the feature being considered for the split.

 S_v represents the subset of S containing the instances for which feature A takes the value v.

By calculating the Information Gain for each feature, the algorithm selects the feature that results in
the greatest reduction in entropy, leading to more homogeneous and informative splits. This process
continues recursively, building a tree that effectively partitions the data to make accurate predictions.
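The following is a minimal Python sketch (not part of the original answer key) of how entropy and information gain could be computed for a candidate split; the toy labels and feature values are assumed purely for illustration.

from collections import Counter
import math

def entropy(labels):
    # E(S) = -sum(p_i * log2(p_i)) over the class proportions in S
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(labels, feature_values):
    # IG(S, A) = E(S) - sum(|S_v|/|S| * E(S_v)) over the values v of feature A
    total = len(labels)
    subsets = {}
    for value, label in zip(feature_values, labels):
        subsets.setdefault(value, []).append(label)
    weighted = sum(len(subset) / total * entropy(subset) for subset in subsets.values())
    return entropy(labels) - weighted

# Toy example: class labels and one candidate feature (hypothetical values).
labels = ["Yes", "Yes", "No", "No", "Yes"]
outlook = ["Sunny", "Overcast", "Sunny", "Rain", "Overcast"]
print(round(information_gain(labels, outlook), 3))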

https://www.analyticsvidhya.com/blog/2021/08/decision-tree-algorithm/
Q.No. 12: Discuss the concept of association rule mining and explain the Apriori algorithm with an example of how it generates frequent itemsets. (Explanation - 3 Marks, Example - 2 Marks)

Association Rule Mining

Association Rule Mining is a technique used in data mining to discover relationships between items in large datasets. It is commonly used in market basket analysis to identify patterns in customer purchases.
Key Terms in Association Rule Mining:

1. Support – Measures how frequently an itemset appears in the dataset.

2. Confidence – Measures how often an item B appears in transactions that contain item A.

3. Lift – Measures how much more likely item B is purchased when item A is purchased, compared to if they were independent.

Apriori Algorithm

The Apriori algorithm is a popular method for finding frequent itemsets and generating association
rules. It uses a bottom-up approach where it iteratively finds itemsets with high support values.

Steps of the Apriori Algorithm:

1. Set a minimum support threshold.

2. Generate frequent itemsets:

o Count the occurrences of individual items (1-itemsets).

o Remove items that do not meet the minimum support.

o Form 2-itemsets from the remaining items and count their occurrences.

o Repeat the process for higher-order itemsets until no more frequent itemsets can be
formed.

3. Generate strong association rules from the frequent itemsets using confidence and lift.

Example of Apriori Algorithm

Dataset (Transactions):

Transaction ID | Items Purchased
1 | Bread, Milk, Egg
2 | Bread, Diaper, Beer
3 | Milk, Diaper, Beer
4 | Bread, Milk, Diaper, Beer
5 | Bread, Milk, Diaper
Step 1: Finding Frequent Itemsets (Support Calculation)

Minimum support threshold: 2 transactions (40%)

1-itemsets:

 Bread (4/5 = 80%) ✅

 Milk (4/5 = 80%) ✅

 Egg (1/5 = 20%) ❌ (Removed)

 Diaper (4/5 = 80%) ✅

 Beer (3/5 = 60%) ✅

2-itemsets (Combinations of frequent 1-itemsets):

 (Bread, Milk) → 3/5 = 60% ✅

 (Bread, Diaper) → 3/5 = 60% ✅

 (Milk, Diaper) → 3/5 = 60% ✅

 (Diaper, Beer) → 3/5 = 60% ✅

 (Milk, Beer) → 2/5 = 40% ✅

3-itemsets:

 (Bread, Milk, Diaper) → 2/5 = 40% ✅

 (Milk, Diaper, Beer) → 2/5 = 40% ✅

Step 2: Generating Association Rules (Using Confidence & Lift)

 Rule 1: {Milk} → {Diaper}

o Confidence = 3/4 = 75%

 Rule 2: {Diaper} → {Beer}

o Confidence = 3/4 = 75%

 Rule 3: {Milk, Diaper} → {Beer}

o Confidence = 2/3 = 66.7%
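
A minimal Python sketch of the Apriori support-counting loop on the toy transactions above (illustrative only; the variable names and output format are assumed, not taken from the answer key):

transactions = [
    {"Bread", "Milk", "Egg"},
    {"Bread", "Diaper", "Beer"},
    {"Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper"},
]
min_support = 2  # at least 2 of 5 transactions (40%)

def support_count(itemset):
    # number of transactions that contain every item in the itemset
    return sum(itemset <= t for t in transactions)

# Level 1: frequent single items.
items = sorted({i for t in transactions for i in t})
frequent = [frozenset([i]) for i in items if support_count(frozenset([i])) >= min_support]

# Higher levels: join frequent (k-1)-itemsets and keep candidates meeting min_support.
k = 2
while frequent:
    print(f"Frequent {k - 1}-itemsets:", [set(f) for f in frequent])
    candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
    frequent = [c for c in candidates if support_count(c) >= min_support]
    k += 1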

Q.No. 13: Describe the concept of boosting in ensemble learning. Explain the working of the AdaBoost algorithm and how it improves the performance of weak classifiers. (Formulas - 3 Marks, Explanation - 2 Marks)

Boosting is an ensemble learning technique that aims to improve the accuracy of weak classifiers by combining them into a strong classifier. In boosting, multiple weak classifiers (models that perform slightly better than random guessing) are trained sequentially, with each new classifier focusing on the mistakes made by the previous ones.

How Boosting Works:

1. Sequential Training: Boosting trains a series of weak classifiers, each built on the errors made by the previous classifier.

2. Weight Adjustment: The algorithm assigns higher weights to misclassified data points, making them more important for the next classifier.

3. Final Model: The predictions of the weak classifiers are combined (usually by weighted
voting) to form a final strong classifier. The final model’s performance is typically much
better than any single weak model.

AdaBoost Algorithm:

AdaBoost (Adaptive Boosting) is one of the most well-known boosting algorithms. It works by
combining multiple weak classifiers (typically decision trees) to create a stronger classifier.

Working of the AdaBoost Algorithm:

1. Initialize Weights:

 Initially, all data points are given equal weights. Suppose we have a dataset with N examples; each example i is given a weight w_i = 1/N.

2. Train Weak Classifier:

A weak classifier (e.g., a decision stump or small decision tree) is trained on the dataset using these weights.

3. Calculate Error Rate:

 The error rate ε_t of the classifier is calculated as the weighted sum of misclassified data points:

ε_t = Σ_i w_i · I(h_t(x_i) ≠ y_i)

where I(·) is 1 if the example is misclassified and 0 if it is correctly classified, and h_t is the current weak classifier.

4. Calculate Alpha (Weight of Classifier):

 AdaBoost calculates a weight α_t for the classifier based on its error rate:

α_t = (1/2) · ln((1 − ε_t) / ε_t)

 A higher weight is assigned to classifiers with lower error rates, meaning they will have more influence in the final model.
5. Update Weights:

 The weights of the misclassified samples are increased so that the next classifier will focus more on them. Correctly classified samples have their weights reduced. With labels and predictions in {-1, +1}, the new weights are updated as follows:

w_i ← w_i · exp(−α_t · y_i · h_t(x_i)), then normalized so that Σ_i w_i = 1

6. Repeat:

 Steps 2 to 5 are repeated for a predefined number of iterations or until no further improvement is
made.

7. Final Model:

 The final model is a weighted combination of all the weak classifiers. The prediction is made by taking the weighted vote from all the classifiers:

H(x) = sign( Σ_{t=1}^{T} α_t · h_t(x) )

where T is the total number of classifiers.
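
The steps above can be sketched in Python as follows (a simplified illustration using decision stumps from scikit-learn and synthetic data; the function names are assumed, and the labels must be in {-1, +1}):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=10):
    # y must be in {-1, +1}; returns a list of (alpha_t, stump_t) pairs
    n = len(y)
    w = np.full(n, 1.0 / n)                      # Step 1: equal initial weights w_i = 1/N
    ensemble = []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)         # Step 2: train a weak classifier
        pred = stump.predict(X)
        eps = np.sum(w * (pred != y))            # Step 3: weighted error rate
        if eps == 0 or eps >= 0.5:               # stop if perfect or no better than chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)    # Step 4: classifier weight alpha_t
        w *= np.exp(-alpha * y * pred)           # Step 5: re-weight the samples
        w /= w.sum()                             # normalize so the weights sum to 1
        ensemble.append((alpha, stump))          # Step 6: repeat for up to T rounds
    return ensemble

def adaboost_predict(ensemble, X):
    # Step 7: sign of the weighted vote of all weak classifiers
    return np.sign(sum(alpha * stump.predict(X) for alpha, stump in ensemble))

X, y = make_classification(n_samples=200, n_features=5, random_state=1)
y = 2 * y - 1                                    # convert labels {0, 1} -> {-1, +1}
model = adaboost_fit(X, y, T=10)
print("Training accuracy:", np.mean(adaboost_predict(model, X) == y))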

How AdaBoost Improves the Performance of Weak Classifiers:

1. Focus on Hard-to-Classify Examples:


AdaBoost adapts by assigning higher weights to incorrectly classified examples. This ensures that the
subsequent classifiers focus on the mistakes made by the previous ones, improving the overall model.

2. Combining Weak Classifiers:


Each weak classifier may not perform well on its own, but by combining multiple weak classifiers,
AdaBoost can create a strong classifier that performs better than any individual classifier.

3. Handling Overfitting:
AdaBoost is often relatively resistant to overfitting when the base classifiers are simple (e.g., decision stumps). By focusing on difficult examples, it often avoids over-fitting the easy ones, although it can still overfit noisy data or when too many weak learners are added.

Q.No. 14: Explain the structure of an ANN. Discuss the different types of activation functions used in neural networks and their significance. (Explanation - 3 Marks, Example - 2 Marks)

1. Structure of an Artificial Neural Network (ANN)
An Artificial Neural Network (ANN) is a computational model inspired by the human brain, consisting of interconnected nodes (neurons) arranged in layers.
Components of ANN:
1. Input Layer
o Receives raw data (features) and passes it to the next layer.
2. Hidden Layers
o Perform computations and extract patterns.
o The depth of the network depends on the number of hidden layers.
3. Output Layer
o Produces final predictions or classifications.
4. Weights and Biases
o Weights determine the importance of inputs.
o Bias helps shift activation to improve learning.
5. Activation Function
o Introduces non-linearity, allowing the network to learn complex patterns.
2. Activation Functions in Neural Networks (Based on DataCamp's Guide)
Activation functions play a crucial role in determining how the output of a neuron is computed and whether it
should be activated or not.
Types of Activation Functions:
Commonly used activation functions include Sigmoid, Tanh, ReLU and Softmax (illustrated in the sketch after the list below).
3. Importance of Activation Functions in ANN:
1. Introduces Non-Linearity: Allows networks to model complex patterns.
2. Enables Gradient-Based Learning: Helps with weight updates using backpropagation.
3. Prevents Vanishing Gradient: Functions like ReLU avoid slow learning in deep networks.
4. Determines Output Interpretation: Softmax ensures probability-based outputs in classification
tasks.
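
A minimal NumPy sketch (illustrative only, with an assumed example input vector) of the activation functions named above and how they transform weighted inputs:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes values into (0, 1)

def tanh(z):
    return np.tanh(z)                  # squashes values into (-1, 1), zero-centred

def relu(z):
    return np.maximum(0.0, z)          # keeps positive values, zeroes out negatives

def softmax(z):
    e = np.exp(z - z.max())            # subtract the max for numerical stability
    return e / e.sum()                 # outputs sum to 1 (class probabilities)

z = np.array([-2.0, 0.0, 1.5])         # example weighted inputs to a layer
print(sigmoid(z), tanh(z), relu(z), softmax(z), sep="\n")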

Q.No. 15: Explain the process of training a neural network using backpropagation and the challenges associated with it. (Explanation with diagram - 5 Marks)

Backpropagation is an algorithm used to train artificial neural networks by adjusting weights and biases to minimize error. It works by propagating the error backward from the output layer to the input layer using the chain rule of calculus.

Steps in the Backpropagation Process


Step 1: Forward Propagation
 The input data is passed through the network, layer by layer.
 Each neuron applies weights, biases, and activation functions.
 The final output is compared with the actual target value.
Step 2: Compute the Loss Function
 A loss function (e.g., Mean Squared Error (MSE) for regression or Cross-Entropy Loss for classification) is used to measure the difference between predicted and actual values.
 Example: MSE = (1/n) · Σ_i (y_i − ŷ_i)²
 The objective is to minimize this error.


Step 3: Backward Propagation (Error Calculation & Gradient Computation)
 The error is propagated backward through the network using partial derivatives.
 The gradients of the loss function concerning each weight are computed using the chain rule of
differentiation.
Step 4: Update Weights Using Gradient Descent
 The gradients are used to update weights and biases using the following formula:

w ← w − η · ∂L/∂w   (where η is the learning rate and L is the loss function)

Gradient Descent Variants:


 Batch Gradient Descent: Uses the entire dataset to update weights.
 Stochastic Gradient Descent (SGD): Updates weights after each training example.
 Mini-Batch Gradient Descent: Updates weights in small batches, balancing efficiency and
performance.
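
A minimal NumPy sketch of the four steps above on a one-hidden-layer network (toy XOR data, sigmoid activations, full-batch gradient descent; the layer sizes and learning rate are assumed for illustration):

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output weights and biases
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
eta = 0.5                                            # learning rate

for epoch in range(5000):
    # Step 1: forward propagation
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)
    # Step 2: compute the loss (mean squared error)
    loss = np.mean((y - y_hat) ** 2)
    # Step 3: backward propagation (chain rule)
    d_out = (y_hat - y) * y_hat * (1 - y_hat)        # output-layer error term
    d_hid = (d_out @ W2.T) * h * (1 - h)             # error propagated to the hidden layer
    # Step 4: gradient descent updates  w <- w - eta * dL/dw
    W2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(axis=0, keepdims=True)
    W1 -= eta * X.T @ d_hid;  b1 -= eta * d_hid.sum(axis=0, keepdims=True)

print(round(loss, 4), y_hat.round(2).ravel())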
Challenges in Backpropagation & Solutions
1. Vanishing Gradient Problem
 In deep networks, gradients become very small, making early layers learn slowly.
 Solution: Use ReLU (Rectified Linear Unit) activation function instead of sigmoid/tanh.
2. Exploding Gradient Problem
 If gradients grow too large, it makes training unstable.
 Solution: Apply gradient clipping or weight regularization.
3. Overfitting
 The network memorizes the training data but fails to generalize to new data.
 Solution: Use dropout, L2 regularization, and early stopping.
4. Computational Cost
 Training deep networks requires high processing power.
 Solution: Use GPU acceleration and batch processing.

https://www.datacamp.com/tutorial/mastering-backpropagation
PART C (3 X 10 = 30 Marks)
Q.No. 16: Compare ID3, C4.5, and CART algorithms in terms of their strengths and weaknesses. Also discuss the role of entropy and information gain, and how they are used to build a decision tree. (Explanation - 8 Marks, Diagram - 2 Marks)

Comparison of ID3, C4.5 and CART:

Algorithm | Split Criterion | Strengths | Weaknesses
ID3 | Information Gain (entropy) | Simple and fast; produces easy-to-interpret trees | Handles only categorical attributes; biased toward attributes with many values; no pruning, so prone to overfitting
C4.5 | Gain Ratio | Handles continuous and missing values; applies pruning to reduce overfitting | Slower than ID3; trees can still grow large on noisy data
CART | Gini Index (classification) / variance reduction (regression) | Supports both classification and regression; binary splits; forms the basis of Random Forests | Produces only binary splits; can be unstable to small changes in the training data
Role of Entropy and Information Gain in Decision Trees


1. Entropy
Entropy is a measure of impurity or uncertainty in a dataset. It quantifies the randomness in class distribution.
 If a node has only one class, entropy = 0 (pure).
 If a node has an equal mix of classes, entropy is high (impure).
Mathematically, entropy for a dataset S with n classes is given by:

E(S) = − Σ_{i=1}^{n} p_i · log2(p_i)

where p_i is the probability of class i in the set S.


2. Information Gain
Information Gain (IG) measures the reduction in entropy after splitting on an attribute. The attribute with the highest Information Gain is selected for splitting:

IG(S, A) = E(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · E(S_v)

where:
 A is the attribute being considered.
 S is the dataset.
 S_v are the subsets of S created after splitting on the values of A.
3. Building a Decision Tree Using Entropy & Information Gain
1. Compute the entropy of the entire dataset.
2. Calculate Information Gain for each attribute.
3. Choose the attribute with the highest Information Gain to split the node.
4. Repeat recursively for each subset until all nodes are pure or a stopping condition is met.
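
As a brief illustration (assuming scikit-learn and its built-in Iris dataset, not part of the original key), the entropy/information-gain criterion used by ID3/C4.5-style splitting can be contrasted with the Gini criterion used by CART:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

for criterion in ("entropy", "gini"):          # ID3/C4.5-style vs CART-style splits
    tree = DecisionTreeClassifier(criterion=criterion, max_depth=2, random_state=0)
    tree.fit(X, y)
    print(f"--- criterion = {criterion} ---")
    print(export_text(tree, feature_names=feature_names))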

Q.No. 17: Explain rule induction in machine learning. Discuss the process of rule learning and its applications. (Explanation - 6 Marks, Example - 4 Marks)

Rule induction is a technique in machine learning where patterns in data are represented as IF-THEN rules. These rules help in making predictions and understanding relationships in datasets.

Example of a Rule:
IF blood sugar level > 140 AND BP > 130/90, THEN risk of diabetes = High.

Rule induction is commonly used in decision-making systems, medical diagnosis, and expert systems.

2. Process of Rule Learning

Rule induction consists of two main approaches:

1. Direct Rule Extraction (Separate-and-Conquer)

2. Decision Tree-Based Rule Extraction

(A) Direct Rule Extraction (Separate-and-Conquer Approach)


This approach learns rules directly from data without building a decision tree first. The algorithm selects the
most important rule, removes the covered examples, and repeats the process.

Common algorithms:

 Sequential Covering (PRISM Algorithm)

 RIPPER (Repeated Incremental Pruning to Produce Error Reduction)

Steps:

1. Identify the most significant attribute.

2. Form IF-THEN rules.

3. Remove covered instances.

4. Repeat until all instances are classified.

Example:

Dataset:

Age | BP | Diabetes | Outcome
50 | High | Yes | Risky
45 | Normal | No | Safe

Rule Induced:
IF Age > 45 AND BP = High, THEN Outcome = Risky.
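
A minimal separate-and-conquer sketch in Python (simplified to single-condition rules on an assumed toy dataset; this is an illustration, not the PRISM or RIPPER algorithm itself):

records = [
    {"Age": ">45", "BP": "High", "Outcome": "Risky"},
    {"Age": "<=45", "BP": "Normal", "Outcome": "Safe"},
    {"Age": ">45", "BP": "High", "Outcome": "Risky"},
    {"Age": ">45", "BP": "Normal", "Outcome": "Safe"},
]

def best_condition(rows, target="Risky"):
    # pick the attribute = value test whose covered rows are purest for the target class
    best, best_score = None, -1.0
    for attr in ("Age", "BP"):
        for value in {r[attr] for r in rows}:
            covered = [r for r in rows if r[attr] == value]
            score = sum(r["Outcome"] == target for r in covered) / len(covered)
            if score > best_score:
                best, best_score = (attr, value), score
    return best

rules, remaining = [], records[:]
while any(r["Outcome"] == "Risky" for r in remaining):
    attr, value = best_condition(remaining)                       # 1. most significant test
    rules.append(f"IF {attr} = {value} THEN Outcome = Risky")     # 2. form an IF-THEN rule
    remaining = [r for r in remaining if r[attr] != value]        # 3. remove covered instances
print(rules)                                                      # 4. repeat until covered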

(B) Decision Tree-Based Rule Extraction


First, a decision tree is built using algorithms like ID3, C4.5, or CART. Then, the tree is converted into IF-
THEN rules by following decision paths from the root to leaf nodes.

Example:
A decision tree for loan approval can be transformed into:
IF Income > 50K AND Credit Score > 700 THEN Loan Approved.

3. Applications of Rule Induction

 Medical Diagnosis: Identifying diseases based on symptoms and test results.

 Fraud Detection: Detecting suspicious transactions in banking and finance.

 Customer Segmentation: Understanding buying behavior in marketing.

 Risk Assessment: Assessing loan approvals based on financial history.

 Expert Systems: Automating decision-making in different industries.

Q.No. 18: Compare Random Forest (Bagging) and AdaBoost (Boosting), including their approaches, advantages, disadvantages and use cases. (Explanation - 7 Marks, Example - 3 Marks)

Comparison of Random Forest (Bagging) and AdaBoost (Boosting)

1. Introduction
Random Forest and AdaBoost are both ensemble learning techniques used to improve the accuracy and robustness of machine learning models. However, they follow different approaches: Random Forest uses bagging (Bootstrap Aggregating), while AdaBoost uses boosting to create an ensemble of models.
2. Approach
Random Forest (Bagging)
 Uses Bootstrap Aggregating (Bagging) to create multiple independent decision trees.
 Each tree is trained on a random subset of the dataset (sampled with replacement).
 The final prediction is made by majority voting (for classification) or averaging (for regression).
 Helps reduce variance and overfitting by combining multiple models.
AdaBoost (Boosting)
 Uses Adaptive Boosting (Boosting) to combine weak classifiers into a strong classifier.
 Models are trained sequentially, where each new model focuses on the errors made by the previous model.
 Assigns higher weights to misclassified instances, forcing the model to learn from difficult cases.
 The final prediction is based on a weighted sum of all weak learners.

3. Advantages
Advantages of Random Forest
 Reduces overfitting by averaging multiple decision trees.
 Handles high-dimensional data efficiently.
 Works well with both numerical and categorical data.
 Can handle missing values and noisy data.
 Less sensitive to outliers than individual decision trees.
Advantages of AdaBoost
 Improves weak learners (e.g., decision stumps) to create a strong model.
 Focuses more on misclassified samples, improving accuracy.
 Less prone to overfitting compared to a single decision tree.
 Works well for imbalanced datasets by assigning higher importance to hard-to-classify samples.
 Can be used with different base models (not just decision trees).

4. Disadvantages
Disadvantages of Random Forest
 Slower training time compared to a single decision tree.
 Less interpretable due to multiple decision trees.
 Can be computationally expensive for very large datasets.
Disadvantages of AdaBoost
 Sensitive to noisy data and outliers, as it assigns higher weights to misclassified instances.
 Can overfit if the number of weak learners is too high.
 Requires careful tuning of parameters (e.g., learning rate).
5. Use Cases
Use Cases of Random Forest
 Medical Diagnosis: Predicting diseases based on patient records.
 Fraud Detection: Identifying fraudulent transactions in banking.
 Image Classification: Used in object detection and face recognition.
 Feature Selection: Helps in ranking important features in datasets.
Use Cases of AdaBoost
 Face Detection: Used in computer vision applications.
 Spam Detection: Identifying spam emails in filtering systems.
 Customer Churn Prediction: Understanding customer retention in marketing.
 Credit Scoring: Assessing loan eligibility based on past records.

6. Key Differences

Feature | Random Forest (Bagging) | AdaBoost (Boosting)
Base Learner | Multiple decision trees | Decision stumps or weak classifiers
Training Approach | Parallel (trees trained independently) | Sequential (each model corrects previous errors)
Overfitting Risk | Low (due to averaging multiple models) | Higher if too many weak learners are added
Performance on Noisy Data | Robust to noise | Sensitive to noisy data and outliers
Handling of Data Imbalance | May not perform well without modifications | Assigns higher weight to misclassified samples
Computational Cost | Higher due to multiple trees | Lower for weak classifiers but increases with more iterations
Final Decision | Majority voting (classification) or averaging (regression) | Weighted sum of weak classifiers

7. Role of Bagging and Boosting in Improving Performance


 Bagging (Random Forest) reduces variance, making models more stable and resistant to overfitting.
 Boosting (AdaBoost) reduces bias, improving the model’s ability to learn complex patterns in data.
 In general, Random Forest is better when variance is a problem, while AdaBoost works better
when bias is a problem.
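
A minimal scikit-learn sketch (synthetic data assumed; hyperparameters are illustrative) comparing a bagging ensemble (Random Forest) with a boosting ensemble (AdaBoost) on the same task:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "Random Forest (bagging)": RandomForestClassifier(n_estimators=100, random_state=42),
    "AdaBoost (boosting)": AdaBoostClassifier(n_estimators=100, random_state=42),
}
for name, model in models.items():
    model.fit(X_train, y_train)                       # train each ensemble on the same split
    print(name, accuracy_score(y_test, model.predict(X_test)))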

Q.No. 19: Discuss the types of activation functions commonly used in neural networks, including their advantages and disadvantages. (10 Marks)
Reference: https://www.v7labs.com/blog/neural-networks-activation-functions
Q.No. 20: Discuss the issues related to generalization and overfitting in neural networks, and how they can be addressed. (Explanation - 7 Marks, Example - 3 Marks)

1. Introduction
 Generalization: The ability of a neural network to perform well on new, unseen data.
 Overfitting: When a model learns the training data too well, including noise, and performs poorly on new data.

2. Problems Due to Overfitting

1. Memorization of Training Data – The model remembers training examples instead of learning
general patterns.
2. Poor Test Performance – High accuracy on training data but low accuracy on test data.
3. Complex Model – Too many layers and parameters lead to learning unnecessary details.
4. Lack of Enough Data – Small datasets increase the chance of overfitting.

3. Methods to Reduce Overfitting and Improve Generalization

1. Regularization Techniques

 L1 and L2 Regularization: Adds a penalty to large weights to avoid complex models.


 Dropout: Randomly turns off some neurons during training to prevent reliance on specific features.

2. Data Augmentation

 Artificially increases dataset size by rotating, flipping, or scaling images.


 Helps the model learn general patterns instead of memorizing specific data points.

3. Early Stopping

 Stops training when validation loss stops improving.


 Prevents the model from learning unnecessary patterns.

4. Using More Training Data

 More data helps the model learn general trends instead of overfitting to noise.
 If real data is limited, data augmentation can be used.

5. Batch Normalization

 Standardizes activations to stabilize training and improve generalization.

6. Reducing Model Complexity

 Using fewer layers and neurons can prevent the model from memorizing unnecessary details.

 Convolutional Neural Networks (CNNs) are more efficient for image tasks.

7. Cross-Validation

 Splitting data into training, validation, and test sets ensures better model evaluation.
 K-fold cross-validation helps use data more effectively.

8. Transfer Learning

 Using pre-trained models on similar datasets improves performance on small datasets.
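
A minimal scikit-learn sketch (synthetic data and layer sizes assumed) combining two of the remedies above, L2 regularization (the alpha parameter) and early stopping on a held-out validation split:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(64, 32),   # modest capacity to limit memorization
    alpha=1e-3,                    # L2 penalty on the weights
    early_stopping=True,           # stop when the validation score stops improving
    validation_fraction=0.2,
    max_iter=500,
    random_state=0,
)
model.fit(X_train, y_train)
print("Train accuracy:", model.score(X_train, y_train))
print("Test accuracy:", model.score(X_test, y_test))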

STAFF INCHARGE PROGRAMME ACADEMIC COORDINATOR HOD DEAN ACADEMICS


(Mrs. A. Ilakkia)          (Dr. M. Auxilia)          (Dr. J. Madhusudanan)          (Dr. A.A. Arivalagar)
