AAM Ut Answer

1. State any two advantages of K-Means Algorithm
 Simplicity and Efficiency :- Easy to understand and implement
 Scalability :- Works well with large amounts of data
2. Enlist any four causes for failure of K-Means Algorithm
 Causes of failure of the K-Means Algorithm are as follows :-
o Sensitive to initial centroid positions
o Assumes spherical clusters
o Sensitive to outliers
o Requires pre-specification of the number of clusters (K)
3. Differentiate between Machine Learning and Deep Learning
 Machine Learning :- Works well with smaller datasets; requires manual
feature engineering; uses comparatively simple models (e.g., decision
trees, SVM); trains quickly on ordinary CPUs
 Deep Learning :- A subset of machine learning based on multi-layer
neural networks; learns features automatically from raw data; needs
large datasets and heavy computation (GPUs); excels at images, speech,
and text
4. Define the terms:
i.) RNN :- RNN (Recurrent Neural Network) is a type of deep
learning neural network designed to handle sequential data,
such as time series, text, or speech. It has a memory mechanism
that allows information to persist, making it effective for tasks
where context and order are important.
ii.) LSTM:- LSTM (Long Short-Term Memory) is a type of
Recurrent Neural Network (RNN) designed to learn and
remember long-term dependencies in sequential data. It
overcomes the limitations of traditional RNNs by using memory
cells and gates (input, forget, and output) to control the flow of
information.
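 Illustrative sketch :- As a rough illustration (not part of the definition), both can be written as single Keras layers; this assumes TensorFlow is installed, and the input shape of 20 time steps with 4 features is an arbitrary assumption.
from tensorflow.keras import layers, models

# A simple RNN and an LSTM differ only in the recurrent layer used
# (shapes are arbitrary assumptions for illustration)
rnn = models.Sequential([layers.Input(shape=(20, 4)), layers.SimpleRNN(16), layers.Dense(1)])
lstm = models.Sequential([layers.Input(shape=(20, 4)), layers.LSTM(16), layers.Dense(1)])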

5. Describe Dimensionality Reduction with example
 Dimensionality Reduction is the process of reducing the
number of input variables or features in a dataset while
retaining as much relevant information as possible. It helps
to improve model performance, reduce overfitting, and
decrease computational cost.
 Why Dimensionality Reduction
o Real-world data can have hundreds or thousands of
features.
o Many features may be irrelevant or redundant.
o High-dimensional data leads to the curse of
dimensionality, making models inefficient.
 Common Techniques
o Principal Component Analysis (PCA)
o Linear Discriminant Analysis
 Example:- Suppose a medical dataset has 100 features
about patient health. Using PCA, we reduce it to 10
principal components that still explain 95% of the variance,
allowing accurate disease prediction with fewer inputs.
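 Illustrative sketch :- A minimal PCA example with scikit-learn; the random data standing in for the medical dataset and the choice of 10 components are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the 100-feature patient dataset
X = np.random.rand(200, 100)

# Reduce 100 features to 10 principal components
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                      # (200, 10)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained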
6. Describe ANN (Artificial Neural Network) with suitable
example
 Artificial Neural Network (ANN) is a computational model
inspired by the structure and functioning of the human
brain.
 It consists of interconnected layers of nodes (neurons) that
process input data and learn patterns through training.
 Structure of ANN
a. Input Layer – Receives raw input features
b. Hidden Layer(s) – Performs intermediate processing
and pattern recognition
c. Output Layer – Produces the final prediction or
classification result
 How It Works:
a. Inputs are passed through layers
b. Weights are adjusted using algorithms like
backpropagation and gradient descent to minimize
error.
c. Over time, the network learns the correct mapping
from inputs to outputs
 Example :- In an email spam classifier, an ANN can learn
from past labelled emails to identify whether a new email is
spam or not based on features like keywords, sender info,
etc.
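 Illustrative sketch :- A minimal ANN for the spam example using scikit-learn's MLPClassifier; the synthetic "email" features and the toy labelling rule are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
X = rng.random((500, 10))                 # 10 features per email (e.g., keyword counts)
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # 1 = spam, 0 = not spam (toy rule)

# One hidden layer of 16 neurons; weights are learned via backpropagation
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=42)
model.fit(X, y)
print(model.predict(X[:5]))               # predictions for the first five emails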
7. Describe any four hyperparameters in a Neural Network
 Hyperparameters are external configurations set before
training a neural network. They significantly impact model
performance and learning behavior
 Learning Rate:- This hyperparameter controls the step size
taken by the optimizer during each iteration of training. Too
small a learning rate can result in slow convergence, while
too large a learning rate can lead to instability.
 Epochs:- This hyperparameter represents the number of
times the entire training dataset is passed through the
model during training. Increasing the number of epochs can
improve the model’s performance but may lead to
overfitting if not done carefully.
 Number of layers:- This hyperparameter determines the
depth of the model, which can have a significant impact on
its complexity and learning ability
 Activation Function :- This hyperparameter introduces non-
linearity into the model, allowing it to learn complex
decision boundaries. Common activation functions include
sigmoid, tanh, and Rectified Linear Unit (ReLU)
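 Illustrative sketch :- The four hyperparameters above map directly onto scikit-learn's MLPClassifier arguments; the specific values here are arbitrary assumptions, not tuned settings.
from sklearn.neural_network import MLPClassifier

model = MLPClassifier(
    learning_rate_init=0.001,     # learning rate: optimizer step size
    max_iter=200,                 # epochs: passes over the training data
    hidden_layer_sizes=(64, 32),  # number of layers: two hidden layers here
    activation='relu',            # activation function: non-linearity
)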
8. Explain how Convolutional Neural Network (CNN) is used in deep learning for image data.
 Convolutional Neural Networks (CNNs) are deep learning
models specifically built to process image data efficiently by
mimicking how the human brain interprets visual
information
 Step-by-Step Working of a CNN
o Input Layer
 Takes in an image (e.g., 28x28 pixels for grayscale
or 224x224x3 for color images).
o Convolutional Layer
 Applies filters (kernels) to scan over the image and
extract local patterns like edges, corners, or
textures
o Activation Function (ReLU)
 Applied to introduce non-linearity into the model.
o Pooling Layer
 Reduces the spatial dimensions (height × width) of
feature maps.
o Deeper Convolution + Pooling Layers
 As we go deeper, the network learns higher-level
features like shapes, objects, and faces
o Flatten Layer
 Converts 2D feature maps into a 1D vector to feed
into the dense layer
o Fully Connected Layer
 Connects all neurons and makes prediction
o Output Layer
 Uses Softmax for multi-class classification to assign
probability scores to each class
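 Illustrative sketch :- The layer sequence above written as a minimal Keras model; this assumes TensorFlow is installed, and the 28x28 grayscale input and 10 output classes are assumptions for illustration.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),               # input: 28x28 grayscale image
    layers.Conv2D(32, (3, 3), activation='relu'),  # convolution + ReLU: local patterns
    layers.MaxPooling2D((2, 2)),                   # pooling: shrink spatial dimensions
    layers.Conv2D(64, (3, 3), activation='relu'),  # deeper convolution: higher-level features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                              # 2D feature maps -> 1D vector
    layers.Dense(64, activation='relu'),           # fully connected layer
    layers.Dense(10, activation='softmax'),        # output: class probabilities
])
model.summary()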
9. Explain feedforward and backward propagation in detail with suitable example
 Feedforward Propagation:
 Feedforward is the forward movement of data through the
neural network, layer by layer, to generate an output or
prediction.
 How It Works
o Input data (features) is passed into the input layer
o Data moves through hidden layers where :
 Weights and biases are applied
 Activation functions (like ReLU, Sigmoid) introduce
non-linearity
o The final output is produced in the output layer
o Goal: To make a prediction based on current weights
 Backward Propagation (Backpropagation):
 Backpropagation is the training phase, where the network
learns by updating weights to reduce error
 How It Works
o After feedforward, the loss/error is calculated using a
loss function
o The error is then sent backward through the network,
and the weights are adjusted to minimize the error
o This involves computing gradients of the loss with
respect to weights using the chain rule of calculus.
o Goal: Minimize the loss by updating weights iteratively
 Example: Predicting House Price
o Input: Features like size, location, number of rooms
o Feedforward: Neural network predicts price (e.g., ₹75
Lakhs).
o Actual Price: ₹70 Lakhs → Loss = 5 Lakhs
o Backpropagation: Weights are adjusted to reduce the
prediction error in future iterations
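 Illustrative sketch :- One feedforward and backpropagation step for a single linear neuron on the house-price example; all numbers (features, weights, learning rate) are assumptions for illustration.
import numpy as np

x = np.array([1.0, 0.5, 3.0])   # features: size, location score, rooms (scaled)
w = np.array([0.4, 0.2, 0.1])   # initial weights
b = 0.1                         # initial bias
y_true = 0.70                   # actual price (scaled)
lr = 0.01                       # learning rate

for step in range(3):
    y_pred = w @ x + b               # feedforward: prediction from current weights
    loss = (y_pred - y_true) ** 2    # squared-error loss
    grad = 2 * (y_pred - y_true)     # dLoss/dy_pred via the chain rule
    w -= lr * grad * x               # backpropagation: adjust weights...
    b -= lr * grad                   # ...and bias to reduce the error
    print(f"step {step}: pred={y_pred:.3f}, loss={loss:.4f}")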
10. Write a Python program to implement the K-Means Algorithm using a suitable dataset
# (i) Import suitable modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

# (ii) Load dataset
iris = load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names)
y = pd.DataFrame(iris.target, columns=['target'])  # true labels (not used for clustering)

# (iii) Visualize dataset (2D scatter plot)
plt.scatter(X['sepal length (cm)'], X['sepal width (cm)'], c='blue', label='Data points')
plt.title('Iris Dataset - Sepal Dimensions')
plt.xlabel('Sepal Length (cm)')
plt.ylabel('Sepal Width (cm)')
plt.legend()
plt.show()

# (iv) Split data into training and testing sets
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

# (v) Algorithm Implementation - KMeans
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)  # fixed n_init avoids version-dependent defaults
kmeans.fit(X_train)

# Predict clusters for test data
y_pred = kmeans.predict(X_test)
print("Cluster predictions for test data:", y_pred)

11. Describe Gated Recurrent Unit (GRU).
 Gated Recurrent Unit (GRU) is a type of Recurrent Neural
Network (RNN) architecture designed to handle sequential
data while solving the vanishing gradient problem.
 It is simpler and faster than LSTM but still effective for
learning long-term dependencies
 Key Features of GRU:
o Update Gate
 Decides how much of the previous memory should
be passed to the next time step
 Helps retain long-term dependencies
o Reset Gate
 Decides how much of the previous information to
forget
 Helps model short-term dependencies
 How GRU Works
o At each time step, the input and the hidden state are
passed into the update and reset gates
o The reset gate determines how much past information
to forget
o The update gate determines how much new
information to keep
o The output is a new hidden state that combines the old
and new information.
 Advantages
o Faster training and fewer parameters than LSTM
o Performs well on sequential tasks like time series,
speech recognition, and NLP
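 Illustrative sketch :- A minimal GRU sequence model in Keras; this assumes TensorFlow is installed, and the sequence length, feature count, and unit count are assumptions for illustration.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(30, 8)),  # 30 time steps, 8 features per step
    layers.GRU(32),               # GRU layer: update/reset gates handled internally
    layers.Dense(1),              # e.g., predict the next value in a time series
])
model.compile(optimizer='adam', loss='mse')
model.summary()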
12. Describe Activation Function in a Neural Network
 An Activation Function in a neural network is a
mathematical function applied to the output of each
neuron to introduce non-linearity into the model
 Why Activation Functions Are Important
o Without them, a neural network would behave like a
linear regression model regardless of its depth.
o They allow the model to capture intricate relationships
in data like images, text, and sound
 Types of Activation Functions
o ReLU (Rectified Linear Unit):
 Outputs 0 for negative values, linear for positive
o Sigmoid
 Maps output between 0 and 1
 Useful for binary classification
o Tanh
 Maps output between -1 and 1
o Softmax
 Used in the output layer of multi-class
classification
 Converts scores into probabilities that sum to 1
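 Illustrative sketch :- NumPy versions of the four functions described above (for illustration only; deep learning libraries provide built-in implementations).
import numpy as np

def relu(x):
    return np.maximum(0, x)          # 0 for negatives, linear for positives

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # squashes output into (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes output into (-1, 1)

def softmax(x):
    e = np.exp(x - np.max(x))        # subtract max for numerical stability
    return e / e.sum()               # probabilities that sum to 1

z = np.array([-2.0, 0.0, 3.0])
print(relu(z), sigmoid(z), tanh(z), softmax(z), sep="\n")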
13. Define GPT (Generative Pre-Trained Transformer).
GPT (Generative Pre-Trained Transformer) is a deep learning model
developed by OpenAI, based on the Transformer architecture. It is
designed for natural language processing (NLP) tasks such as text
generation, translation, summarization, and question-answering.

14. How to select the value of 'K' in the K-Nearest Neighbor Algorithm
 There is no fixed rule for choosing the best value of 'K'; we try
several values, often with cross-validation, and keep the one that
performs best (see the sketch below). A common starting value is K = 5
 A very low value of K, such as K = 1 or K = 2, can be noisy and
makes the model sensitive to outliers
 Large values of K smooth out noise but can underfit by blurring
the boundaries between classes
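 Illustrative sketch :- Choosing K by cross-validation with scikit-learn, as mentioned above; the Iris dataset and the range of K values are assumptions for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try odd values of K and keep the one with the best cross-validated accuracy
scores = {}
for k in range(1, 16, 2):
    knn = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(knn, X, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print("Best K:", best_k, "accuracy:", round(scores[best_k], 3))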
15. What are hyperparameters? Enlist hyperparameter tuning techniques
 Hyperparameters are the external configuration values set
before training a machine learning model.
 They control the learning process and influence model
performance but are not learned from the data
 Common hyperparameter tuning techniques include:
o Grid Search
 Tests all possible combinations of hyperparameters
in a predefined grid.
o Random Search
 Randomly selects combinations of
hyperparameters to try.
o Bayesian Optimization
 Uses probability to find the best hyperparameters
more efficiently.
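 Illustrative sketch :- Grid Search with scikit-learn's GridSearchCV; the model, dataset, and parameter grid are assumptions for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

param_grid = {
    'n_neighbors': [3, 5, 7],             # hyperparameter values to try
    'weights': ['uniform', 'distance'],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)                          # tests every combination in the grid

print("Best hyperparameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))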
