AI ML Question
o IMPLIES (→): If the first proposition is true, the second must be true.
o BICONDITIONAL (↔): Both propositions must have the same truth value.
• Inference Rules: Enable logical reasoning through rules such as Modus Ponens and Modus Tollens (see the sketch under the example below).
Example:
• Logical Representation: P → Q
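To make these connectives concrete, a small truth-table sketch in Python (the helper functions are illustrative, not part of the original notes):
# IMPLIES is false only when the antecedent is true and the consequent
# is false; BICONDITIONAL is true when both sides share a truth value.
def implies(p, q):
    return (not p) or q

def biconditional(p, q):
    return p == q

for p in (True, False):
    for q in (True, False):
        print(f"P={p}, Q={q}: P→Q is {implies(p, q)}, P↔Q is {biconditional(p, q)}")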
Advantages:
Limitations:
1. Graph Representation:
2. Heuristic Function:
o Guides the search by estimating the cost of reaching the goal from a given node.
3. Steps:
▪ For "OR" nodes, select the child with the lowest cost.
o Update the cost of the parent node based on the selected path (a sketch follows these steps).
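A minimal sketch of this cost propagation on a hand-built AND-OR tree (the node costs are invented; OR nodes pick the cheapest child, while AND nodes must solve all children, so their costs add):
# Cost propagation in an AND-OR tree: an OR node picks its cheapest
# child; an AND node must solve every child, so children costs add up.
def evaluate(node):
    children = node.get("children", [])
    if not children:                 # leaf: fixed cost
        return node["cost"]
    costs = [edge + evaluate(child) for edge, child in children]
    return min(costs) if node["type"] == "OR" else sum(costs)

# Illustrative tree: root OR node with one direct path and one AND subgoal
tree = {"type": "OR", "children": [
    (1, {"type": "leaf", "cost": 8}),
    (2, {"type": "AND", "children": [
        (1, {"type": "leaf", "cost": 2}),
        (1, {"type": "leaf", "cost": 1})]})]}

print(evaluate(tree))  # min(1 + 8, 2 + ((1 + 2) + (1 + 1))) = min(9, 7) = 7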
Example:
In a game tree:
Advantages:
Limitations:
Key Features:
1. Iterative Improvement:
2. Evaluation Function:
3. Variants:
o Steepest-Ascent Hill Climbing: Considers all neighbors and selects the best (see the sketch after the applications list below).
4. Termination:
Limitations:
Applications:
• Optimization problems.
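A minimal sketch of steepest-ascent hill climbing on a one-dimensional objective (the function and step size are illustrative):
# Steepest-ascent hill climbing: at each step, evaluate all neighbors
# and move to the best one; stop when no neighbor improves the score.
def hill_climb(f, start, step=1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        neighbors = [current - step, current + step]
        best = max(neighbors, key=f)
        if f(best) <= f(current):    # no neighbor improves: local maximum
            return current
        current = best
    return current

# Illustrative objective: a parabola with its peak at x = 3
print(hill_climb(lambda x: -(x - 3) ** 2, start=0))  # -> 3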
4. Illustrate the Branch & Bound search technique in the heuristic search method
Branch & Bound is an algorithm for solving optimization problems by systematically exploring the solution space. It combines "branching", which divides the problem into subproblems, with "bounding", which eliminates subproblems that cannot beat the best solution found so far; a worked sketch appears under the example below.
Steps:
1. Branching:
2. Bounding:
o Use a heuristic function to compute an upper or lower bound on the best solution reachable within a branch.
3. Exploration:
Example:
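As a worked example, a sketch of Branch & Bound on a tiny 0/1 knapsack instance (the item values, weights, and capacity are invented; real implementations use tighter bounds):
# Branch on include/exclude for each item, and prune branches whose
# optimistic bound cannot beat the best solution found so far.
values = [60, 100, 120]   # illustrative item values
weights = [10, 20, 30]    # illustrative item weights
capacity = 50

best = 0

def bound(i, value):
    # Optimistic bound: current value plus all remaining item values.
    return value + sum(values[i:])

def branch(i, weight, value):
    global best
    if weight > capacity:            # infeasible branch
        return
    best = max(best, value)
    if i == len(values) or bound(i, value) <= best:
        return                       # prune: bound cannot beat incumbent
    branch(i + 1, weight + weights[i], value + values[i])  # include item i
    branch(i + 1, weight, value)                           # exclude item i

branch(0, 0, 0)
print(best)  # -> 220 (the second and third items)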
Advantages:
Limitations:
5. Explain how prediction is made with Simple Linear Regression. Refer to its working with an example
Key Concepts:
1. Parameters: ε (the neighborhood radius) and MinPts (the minimum number of neighbors required for a dense region).
2. Types of Points:
o Core Point: Has at least MinPts points within its ε neighborhood.
o Border Point: Lies within the ε neighborhood of a core point but has fewer than MinPts neighbors.
o Noise Point: Not a core or border point.
3. Algorithm Steps: (a sketch follows the applications list below)
Applications:
• Anomaly detection.
• Customer segmentation.
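A minimal sketch of DBSCAN on made-up 2-D points, using scikit-learn (the eps and min_samples values are illustrative):
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative 2-D points: two dense groups plus one outlier
X = np.array([[1, 1], [1, 2], [2, 1],
              [8, 8], [8, 9], [9, 8],
              [25, 25]])

# eps is the neighborhood radius; min_samples corresponds to MinPts
labels = DBSCAN(eps=2, min_samples=2).fit_predict(X)
print(labels)  # noise points are labeled -1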
Logistic Regression predicts binary outcomes using a logistic function. The output is a probability
that ranges between 0 and 1.
Logistic Function: σ(z) = 1 / (1 + e^(−z)), where z is the linear combination of the input features (e.g., z = b0 + b1x).
Steps:
Example:
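A sketch of these steps on a tiny made-up dataset (hours studied vs. pass/fail; the numbers are illustrative):
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: hours studied -> pass (1) / fail (0)
X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# predict_proba returns [P(fail), P(pass)]; the class is 1 when P(pass) > 0.5
print(model.predict_proba([[3.5]]))
print(model.predict([[3.5]]))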
Polynomial Regression extends linear regression by fitting a polynomial equation to the data: y = b0 + b1x + b2x² + … + bnxⁿ.
Steps:
Python Implementation:
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Example data (the y values are assumed for illustration: y = x² + x)
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2, 6, 12, 20, 30])

# Transform X to include polynomial terms up to degree 2
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)

# Fit ordinary linear regression on the polynomial features
model = LinearRegression()
model.fit(X_poly, y)

# Predict
predictions = model.predict(X_poly)
print(predictions)
Machine learning is a branch of artificial intelligence that enables algorithms to uncover hidden
patterns within datasets, allowing them to make predictions on new, similar data without explicit
programming for each task. Traditional machine learning combines data with statistical tools to
predict outputs, yielding actionable insights. This technology finds applications in diverse fields
such as image and speech recognition, natural language processing, recommendation systems,
fraud detection, portfolio optimization, and automating tasks.
For instance, recommender systems use historical data to personalize suggestions. Netflix, for
example, employs collaborative and content-based filtering to recommend movies and TV shows
based on user viewing history, ratings, and genre preferences. Reinforcement learning further
enhances these systems by enabling agents to make decisions based on environmental feedback,
continually refining recommendations.
Machine learning’s impact extends to autonomous vehicles, drones, and robots, enhancing their adaptability in dynamic environments. This approach, in which machines learn from data examples to generate accurate outcomes, is closely intertwined with data mining and data science.
1. Supervised Learning
• Description: In supervised learning, the model is trained using labeled data, where the
output for each input is already known. The goal is to learn the mapping from input to
output.
• Examples:
o Predicting house prices based on features like area, location, and bedrooms
(Regression).
• Diagram:
2. Unsupervised Learning
• Description: In this type, the data is unlabeled, and the model tries to find hidden patterns
or groupings without explicit outputs.
• Examples:
• Diagram:
3. Semi-Supervised Learning
• Description: Combines both labeled and unlabeled data. The model learns from a small
amount of labeled data and generalizes to unlabeled data.
• Examples:
• Diagram:
4. Reinforcement Learning
• Description: An agent learns by interacting with an environment, receiving rewards or penalties for its actions and updating its behavior accordingly.
• Examples:
• Diagram:
State -> Action -> Reward -> Update -> New State
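A minimal sketch of this loop as tabular Q-learning (the corridor environment, rewards, and learning rate are invented for illustration):
import random

# Tabular Q-learning on a 1-D corridor of five states; the agent starts
# at state 0 and earns a reward of 1 on reaching state 4. Each step is
# one pass through the loop above: state -> action -> reward -> update.
n_states, actions = 5, (-1, 1)
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(200):                    # training episodes
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        new_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if new_state == n_states - 1 else 0.0
        # Move Q(s, a) toward reward + discounted best future value
        best_next = max(Q[(new_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = new_state

# After training, the greedy action at state 0 should be +1 (move right)
print(max(actions, key=lambda a: Q[(0, a)]))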
1. Data Collection: Gather student data with features like CGPA and IQ.
2. Feature Scaling: Normalize the data if the features are on different scales.
3. Clustering Algorithm: Apply K-Means with a chosen number of clusters K (a sketch follows below).
4. Interpretation:
o Each cluster represents a group of students with similar CGPA and IQ.
o For instance:
Output Interpretation:
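A sketch of the steps above with made-up (CGPA, IQ) pairs, using scikit-learn's KMeans (the values and K = 2 are illustrative):
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Illustrative student data: columns are CGPA and IQ
X = np.array([[9.1, 130], [8.8, 125], [9.3, 128],
              [5.2, 95],  [5.8, 100], [6.1, 98]])

# Feature scaling, since CGPA and IQ are on very different scales
X_scaled = StandardScaler().fit_transform(X)

# Cluster into two groups and inspect the assignments
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_scaled)
print(labels)  # students with similar CGPA/IQ share a label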
Agglomerative Clustering:
• Description: Bottom-up approach where each data point starts as its own cluster, and
pairs of clusters are merged iteratively based on similarity.
• Steps:
Divisive Clustering:
• Description: Top-down approach where all data points start in one cluster and are
recursively split into smaller clusters.
• Steps:
1. Start with all data points in a single cluster.
2. Split clusters iteratively until each data point is its own cluster.
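A sketch of the bottom-up variant using scikit-learn's AgglomerativeClustering (the points and cluster count are illustrative; scikit-learn has no built-in divisive counterpart):
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Illustrative 2-D points forming two visible groups
X = np.array([[1, 1], [1, 2], [2, 1],
              [8, 8], [8, 9], [9, 8]])

# Each point starts as its own cluster; the closest pairs are merged
# iteratively until only n_clusters remain.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)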
• Ridge Regression: Adds an L2 penalty (the sum of squared coefficients) to the loss, shrinking coefficients toward zero without eliminating them.
• Lasso Regression: Adds an L1 penalty (the sum of absolute coefficients), which can shrink some coefficients exactly to zero, performing feature selection.
Best for:
o Ridge Regression: Useful when all features are relevant and there is multicollinearity.
o Lasso Regression: Best when the number of predictors is high and you need to identify the most significant features.
Bias and Variance Tradeoff:
o Ridge Regression: Adds some bias but helps reduce variance.
o Lasso Regression: Similar to Ridge, but potentially more bias due to feature elimination.
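To contrast the two, a sketch on made-up data where only the first feature matters (the alpha values are illustrative):
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# Illustrative data: y depends only on the first feature;
# the second feature is irrelevant noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=50)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)

print(ridge.coef_)  # both coefficients shrunk, but nonzero
print(lasso.coef_)  # the irrelevant coefficient is driven to zero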
• Description: A* (A-Star) is a search algorithm that finds the shortest path between nodes in
a graph by combining the cost to reach a node and a heuristic estimate of the cost to reach
the goal.
• Steps:
3. Expand the node with the lowest f(n) = g(n) + h(n), where g(n) is the cost from the start node to n and h(n) is the heuristic estimate of the cost from n to the goal (a sketch follows these steps).
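A compact sketch of these steps on a small hand-made graph (the edges, costs, and heuristic values are invented for illustration):
import heapq

def a_star(graph, h, start, goal):
    # Open list ordered by f(n) = g(n) + h(n)
    open_list = [(h[start], 0, start, [start])]
    visited = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in visited:
            continue
        visited.add(node)
        for neighbor, cost in graph[node]:
            heapq.heappush(open_list, (g + cost + h[neighbor], g + cost,
                                       neighbor, path + [neighbor]))
    return None, float("inf")

# Illustrative graph: edges with costs, plus heuristic estimates to G
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)],
         "B": [("G", 3)], "G": []}
h = {"S": 5, "A": 4, "B": 2, "G": 0}
print(a_star(graph, h, "S", "G"))  # -> (['S', 'A', 'B', 'G'], 6)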
7. Applications of AI
o Formula: y = b0 + b1x
o Where:
▪ b0: Intercept.
▪ b1: Slope.
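To show how a prediction is made once b0 and b1 are estimated, a sketch with made-up numbers (the data follow y = 1 + 2x exactly):
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data from the line y = 1 + 2x
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([3, 5, 7, 9, 11])

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_[0])  # b0 -> 1.0, b1 -> 2.0

# Prediction plugs a new x into y = b0 + b1*x
print(model.predict([[6]]))  # -> 13.0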
o WCSS (Within-Cluster Sum of Squares): WCSS = Σ_k Σ_{x in C_k} ||x − μ_k||², where μ_k is the centroid of cluster C_k.
o Measures the sum of squared distances between data points and their cluster centroid.
• Elbow Method:
1. Run K-Means for a range of values of K and compute the WCSS for each.
2. Identify the “elbow point” where WCSS starts decreasing less sharply.
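A sketch of the procedure, using KMeans's inertia_ attribute as the WCSS (the data are made up):
import numpy as np
from sklearn.cluster import KMeans

# Illustrative 2-D data with three natural groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 2))
               for c in ([0, 0], [5, 5], [10, 0])])

# Step 1: run K-Means for a range of K and record the WCSS (inertia_)
for k in range(1, 7):
    wcss = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(wcss, 1))
# Step 2: the printed WCSS drops sharply up to K = 3, then flattens;
# that bend is the elbow point.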