AI Assignment

The document provides an introduction to key concepts in Artificial Intelligence, including the distinction between knowledge and intelligence, AI component areas, and the PEAS framework for defining agents. It covers intelligent agent architecture, uninformed search strategies, forms of learning, knowledge representation techniques, and the importance of feature selection and extraction. Additionally, it discusses reinforcement learning, semantic networks, and principal component analysis (PCA) in the context of pattern recognition.

Uploaded by Anuj Yadav

Assignment: Introduction to Artificial Intelligence (Concise)

1. (a) Knowledge vs. Intelligence, AI Component Areas, PEAS, PEAS for English Tutor
& Oil Refinery.

Knowledge vs. Intelligence: Knowledge is the body of acquired facts and information; intelligence is the ability to apply that knowledge to reason, learn, and solve problems.
Four AI Component Areas: Machine Learning, Natural Language Processing (NLP), Computer Vision, Expert Systems.
PEAS: A framework for specifying an agent's task: Performance measure, Environment, Actuators, Sensors.
PEAS Descriptions:
(i) English Tutor:
P: Student’s improved language skills.
E: Student, interactive exercises.
A: Displaying content, providing feedback.
S: Keyboard/mouse input, speech recognition.
(ii) Oil Refinery:
P: Maximize yield, ensure safety, minimize cost.
E: Physical plant, chemical processes, material flow.
A: Control valves, pumps, heaters.
S: Temperature, pressure, flow sensors.
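As an illustrative sketch, the two PEAS descriptions above can be recorded in a small data structure (the field names and strings simply mirror the answer; this is not a standard API):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    # One entry per PEAS component; strings are free-form descriptions.
    performance: list
    environment: list
    actuators: list
    sensors: list

english_tutor = PEAS(
    performance=["improved language skills"],
    environment=["student", "interactive exercises"],
    actuators=["display content", "provide feedback"],
    sensors=["keyboard/mouse input", "speech recognition"],
)

oil_refinery = PEAS(
    performance=["maximize yield", "ensure safety", "minimize cost"],
    environment=["physical plant", "chemical processes", "material flow"],
    actuators=["control valves", "pumps", "heaters"],
    sensors=["temperature", "pressure", "flow sensors"],
)
```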

1. (b) Intelligent Agent Architecture; Role of Perception, Cognition, Action.

Intelligent Agent Architecture: Consists of Sensors (perceive the environment), a Perception component (interpret sensor data), a Cognition/Reasoning component (a Knowledge Base plus an Inference Engine that makes decisions), an Action component (translate decisions into commands), and Actuators (act on the environment).
Role of Perception, Cognition, Action:
1. Perception: Agent gathers environmental data via sensors (e.g., self-driving
car’s camera sees a red light).
2. Cognition: Agent processes data, uses knowledge, and decides on an action
(e.g., car decides to brake).
3. Action: Agent executes the decision via actuators (e.g., car applies brakes).
This forms a continuous cycle.
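The perceive–think–act cycle can be sketched as a loop; the world and agent classes below are hypothetical stand-ins for the self-driving-car example:

```python
def run_agent(environment, agent, steps=3):
    """Perceive -> think -> act cycle (illustrative)."""
    for _ in range(steps):
        percept = environment.sense()    # Perception: sensors read the world
        action = agent.decide(percept)   # Cognition: choose an action
        environment.apply(action)        # Action: actuators change the world

class TrafficLightWorld:
    """Toy environment: a light that alternates red/green after each action."""
    def __init__(self):
        self.light = "red"
        self.log = []
    def sense(self):
        return self.light
    def apply(self, action):
        self.log.append(action)
        self.light = "green" if self.light == "red" else "red"

class CarAgent:
    def decide(self, percept):
        # Simple rule: brake on red, drive otherwise.
        return "brake" if percept == "red" else "drive"

world = TrafficLightWorld()
run_agent(world, CarAgent())
```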
2. (a) Uninformed Search Strategies; Comparison.

Uninformed Search Strategies: Explore the search space using only the problem definition, with no domain-specific knowledge (e.g., BFS, DFS, UCS).
Comparison (Key Strategies):

Strategy | Complete?  | Optimal? (unit cost) | Time        | Space
BFS      | Yes        | Yes                  | O(b^d)      | O(b^d)
DFS      | No (loops) | No                   | O(b^m)      | O(bm)
UCS      | Yes        | Yes                  | O(b^(C*/ε)) | O(b^(C*/ε))

(b: branching factor, d: solution depth, m: maximum depth, C*: optimal cost, ε: minimum step cost)
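As a sketch of one of these strategies, a minimal BFS that returns the shortest path (in edges) on a hypothetical graph:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: complete, and optimal when all steps cost 1.
    `neighbors` maps each state to its successor states."""
    frontier = deque([[start]])   # queue of paths, FIFO order
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

# Tiny example graph (illustrative):
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```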

2. (b) Forms of Learning; Concept of Learning with Diagram.

Forms of Learning:
Supervised: Learns from labeled data (input-output pairs).
Unsupervised: Learns from unlabeled data to find patterns (e.g., clustering).
Reinforcement: Learns by interacting with an environment, receiving
rewards/penalties.
Concept of Learning: An agent improves its performance on tasks through
experience by modifying its internal knowledge or decision-making.
Learning Agent (Conceptual): Environment provides Percepts to Agent. Agent has:
Sensors -> Learning Element (uses Critic’s feedback) -> Performance Element (uses
Knowledge Base) -> Actuators to perform Actions in Environment.
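A minimal sketch of the learning-agent idea, with a hypothetical one-number knowledge base: the critic's feedback drives the learning element, which updates the knowledge the performance element uses:

```python
class LearningAgent:
    def __init__(self):
        self.knowledge = {"threshold": 0.0}   # knowledge base (illustrative)

    def act(self, percept):                   # performance element
        return "high" if percept > self.knowledge["threshold"] else "low"

    def learn(self, percept, feedback):       # learning element, driven by critic
        if feedback == "wrong":
            # Nudge the threshold halfway toward the misjudged percept.
            self.knowledge["threshold"] += 0.5 * (percept - self.knowledge["threshold"])

agent = LearningAgent()
# The critic flags the response to percept 2.0 as wrong, so the agent adapts:
agent.learn(2.0, "wrong")
```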

3. (a) Frames vs. Scripts; Vehicle Frame Structure.

Frames: Data structures representing stereotyped objects/concepts using “slots” (attributes) and “fillers” (values), supporting defaults and inheritance.
Frames vs. Scripts: Frames represent objects (e.g., “car”); scripts represent event sequences (e.g., “going to a restaurant”).
Frame Structure for a Vehicle (Simplified):
Frame: Vehicle
Slots:
is_a: Mode_of_Transport
num_wheels: (Default: 4)
propulsion_type: (e.g., Petrol, Electric)
color:
max_passengers: (Default: 5)
Assumptions: Generic vehicle, defaults can be overridden.
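One way to sketch this frame in code is as nested dictionaries, with slot lookup falling back to defaults and then to the parent frame via is_a (the Motorbike frame is an added illustration of overriding a default):

```python
# Frames as nested dicts; a minimal sketch of slot/filler inheritance.
frames = {
    "Mode_of_Transport": {"slots": {"is_vehicle": True}},
    "Vehicle": {
        "is_a": "Mode_of_Transport",
        "slots": {"num_wheels": 4, "max_passengers": 5},  # defaults
    },
    "Motorbike": {
        "is_a": "Vehicle",
        "slots": {"num_wheels": 2},  # overrides the inherited default
    },
}

def get_slot(frame_name, slot):
    """Look up a slot locally, then up the is_a chain."""
    frame = frames[frame_name]
    if slot in frame.get("slots", {}):
        return frame["slots"][slot]
    parent = frame.get("is_a")
    return get_slot(parent, slot) if parent else None
```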

3. (b) Knowledge Representation Techniques with Examples.

1. Logical: Uses formal logic.
Propositional: Raining → WetGround
Predicate (FOPL): ∀x (Bird(x) → HasWings(x))
2. Semantic Networks: Graph of nodes (concepts) and labeled arcs (relationships).
Example: [Canary] --is_a--> [Bird] --has_part--> [Wings]
3. Frames: Slot-and-filler structures for objects. (See 3a)
4. Production Rules: IF-THEN rules.
Example: IF alert_level IS red THEN sound_alarm.
5. Scripts: Represent common event sequences.
Example: “Restaurant script” (entering, ordering, eating, paying).
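As an illustration of technique 4, a tiny forward-chaining matcher over IF–THEN rules (the rule contents are the example above plus one more):

```python
# A minimal production-rule matcher (illustrative rules).
rules = [
    ({"alert_level": "red"}, "sound_alarm"),
    ({"raining": True}, "wet_ground"),
]

def fire_rules(facts):
    """Return the actions of every rule whose IF-part matches the facts."""
    actions = []
    for conditions, action in rules:
        if all(facts.get(key) == value for key, value in conditions.items()):
            actions.append(action)
    return actions
```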

4. (a) Reinforcement Learning (RL); Differences from Supervised/Unsupervised; Agent, Environment, State, Action, Reward.

Reinforcement Learning (RL): An agent learns optimal actions by interacting with an environment to maximize cumulative reward.
Differences:
Supervised: Uses labeled data; RL uses scalar reward signals, often delayed.
Unsupervised: Finds patterns in unlabeled data; RL aims to optimize a policy.
RL Concepts:
Agent: The learner/decision-maker.
Environment: External system agent interacts with.
State (S): Current situation of the environment.
Action (A): Choice made by the agent.
Reward (R): Feedback from the environment indicating an action’s desirability.
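These concepts can be sketched with tabular Q-learning on a made-up 5-state corridor MDP (all hyperparameters are illustrative):

```python
import random

# States 0..4, actions -1 (left) / +1 (right); reward 1 only on reaching state 4.
random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1     # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}

for _ in range(500):                   # episodes
    s = 0                              # State: current situation
    while s != 4:
        # Action: epsilon-greedy choice by the agent
        if random.random() < EPS:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), 4)     # Environment: state transition
        r = 1.0 if s2 == 4 else 0.0    # Reward: desirability of the move
        # Update Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# Greedy policy learned for the non-terminal states:
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(4)}
```

After training, the greedy policy moves right toward the rewarding terminal state.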
4. (b) Semantic Networks; Diagram for Scenario.

Semantic Networks: Represent knowledge as a graph with nodes (concepts) linked by labeled arcs (relationships).
Semantic Network Diagram (rendered as labeled relations between nodes):
Co. XYZ --is_a--> SW Dev Co.
Co. XYZ --has_dept--> Sales, Admin, Programming
Tom --is_a--> Mgr
Tom --manages--> Programming
Programming --employs--> John, Mike
John --is_a--> Programmer
Mike --is_a--> Programmer
Mike --married_to--> Alice
Alice --is_a--> Editor
Alice --works_for--> Weekly News Mag.
Alice --owns--> Ford Car
Ford Car --color--> Red
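A semantic network like this can be stored as (subject, relation, object) triples and queried by pattern matching; the triples below are a subset of the relations in the diagram:

```python
# Semantic network as a list of (subject, relation, object) triples.
triples = [
    ("Co. XYZ", "is_a", "SW Dev Co."),
    ("Co. XYZ", "has_dept", "Programming"),
    ("Programming", "employs", "John"),
    ("John", "is_a", "Programmer"),
    ("Mike", "married_to", "Alice"),
    ("Alice", "is_a", "Editor"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]
```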

5. (a) Importance of Feature Selection and Feature Extraction.


Feature Selection: Choosing a subset of relevant original features.
Importance: Improves model accuracy, reduces overfitting, speeds up
training, enhances interpretability.
Feature Extraction: Transforming original features into a new, smaller set.
Importance: Reduces dimensionality, can create more informative features,
handles complex relationships, helps in noise reduction.
Both aim to improve pattern recognition system performance by focusing on
the most useful information.
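As a small illustration of feature selection, a variance-threshold filter that drops near-constant columns (the threshold value here is illustrative, not a recommendation):

```python
import numpy as np

def select_features(X, threshold=0.01):
    """Keep only columns whose variance exceeds the threshold."""
    variances = X.var(axis=0)
    keep = variances > threshold
    return X[:, keep], keep

# Columns 2 and 3 are constant, so they carry no discriminative information:
X = np.array([[1.0, 5.0, 0.0],
              [2.0, 5.0, 0.0],
              [3.0, 5.0, 0.0],
              [4.0, 5.0, 0.0]])
X_sel, kept = select_features(X)
```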

5. (b) Principal Component Analysis (PCA); How it Works; Role in Pattern Recognition.

Principal Component Analysis (PCA): An unsupervised dimensionality reduction technique that transforms data into a new set of uncorrelated variables (principal components, PCs) ordered by the variance they explain.
How PCA Works:
1. Standardize data.
2. Compute covariance matrix.
3. Calculate eigenvectors (directions of PCs) and eigenvalues (variance
explained by PCs).
4. Select top k eigenvectors based on eigenvalues.
5. Transform the original data using these k eigenvectors to obtain k-dimensional data.
Role in Pattern Recognition:
Reduces dimensionality, simplifying models and computation.
Extracts key features (PCs) capturing most data variance.
Reduces noise by discarding low-variance components.
Enables visualization of high-dimensional data.
Used as a preprocessing step to improve other algorithms.
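The five steps above can be sketched directly with NumPy (the demo data are synthetic):

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # 1. standardize
    cov = np.cov(X_std, rowvar=False)              # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # 3. eigenvectors/eigenvalues
    order = np.argsort(eigvals)[::-1]              #    sort by variance, descending
    components = eigvecs[:, order[:k]]             # 4. top-k eigenvectors
    return X_std @ components                      # 5. transform to k dimensions

# Two strongly correlated features collapse to one informative dimension:
rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([x, 2 * x + rng.normal(scale=0.1, size=100)])
Z = pca(X, 1)
```

Because the two columns are nearly collinear, the first component captures almost all of the (standardized) variance.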
