AI_Notes
• Goals of AI:
2. Approaches to AI
• Methods include:
• Problems:
• Rational-agent approach: Focuses on making the best possible decision rather than imitating humans.
3. Foundations of AI
4. History of AI
5. Applications of AI
• Natural Language Processing (NLP) – Chatbots, virtual assistants (e.g., Siri, Alexa).
1.2.1 Philosophy
1.2.2 Mathematics
1.2.3 Economics
• Game theory (1944): Developed by John von Neumann and Oskar Morgenstern for multi-agent decision-making.
1.2.4 Neuroscience
1.2.5 Psychology
1.2.8 Linguistics
1.3 History of AI
• General Problem Solver (GPS, 1956): Newell & Simon’s AI problem-solving model.
• Herbert Simon (1957): Predicted a machine would beat the world chess champion within 10 years (it took 40).
• R1 (1982): Saved $40 million per year in configuring new computer orders.
Why AI?
• Automates decision-making.
• Agent: Anything that perceives its environment via sensors and acts upon it using actuators.
• Examples:
o Human agent: Eyes, ears (sensors); hands, legs, vocal tract (actuators).
o Software agent: Reads files, network packets (sensors); writes files, sends packets (actuators).
• Simple strategy: If the square is dirty, clean it; otherwise, move to the next square.
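A minimal sketch of this strategy, assuming a two-square world with locations named "A" and "B" and percepts of the form (location, status); these details are illustrative assumptions, not part of the notes:

```python
# Minimal sketch of the vacuum-cleaner strategy above.
# The location names ("A", "B") and the percept format (location, status)
# are illustrative assumptions.

def reflex_vacuum_agent(percept):
    """Map a percept of the form (location, status) directly to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"      # if the square is dirty, clean it
    if location == "A":
        return "Right"     # otherwise move to the other square
    return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))   # -> Left
```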
2.2 Good Behavior: The Concept of Rationality
• The performance measure for a vacuum-cleaner agent could consider:
o Noise level.
o Efficiency of cleaning.
o Power consumption.
2.2.2 Rationality
• Depends on:
o Performance measure.
o Prior knowledge of the environment.
o Possible actions.
o Percept sequence to date.
• Autonomy: Over time, the agent should rely more on its own percepts and experience and less on the designer's prior knowledge.
• PEAS description of the task environment:
o Performance measure.
o Environment.
o Actuators.
o Sensors.
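A PEAS description is just a structured specification of the task environment. The sketch below fills one in for the vacuum-cleaner agent discussed above; the specific entries are illustrative assumptions, not a definitive specification:

```python
# Illustrative PEAS description for the vacuum-cleaner agent.
peas_vacuum = {
    "performance_measure": ["efficiency of cleaning", "power consumption", "noise level"],
    "environment": ["two squares (A and B)", "dirt"],
    "actuators": ["wheels", "suction mechanism"],
    "sensors": ["location sensor", "dirt sensor"],
}
```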
• Known vs. unknown environment:
o Unknown: The agent must learn the rules through experience (e.g., a new video game).
Agent Function
• Agent function: Maps the percept sequence seen so far to an action; the agent program that implements it may or may not use the entire percept history.
• Key AI challenge: Writing small, efficient agent programs that generate rational behavior.
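To make the challenge concrete, the sketch below contrasts a table-driven agent, which keys its lookup table on the entire percept history, with the compact vacuum program shown earlier. The table contents are a toy illustrative assumption; for realistic problems such a table is far too large to write or store, which is why small programs are the goal:

```python
# Sketch of a table-driven agent: it records the whole percept sequence
# and looks up an action for it. The table below is an illustrative toy.

def make_table_driven_agent(table):
    percepts = []                          # the entire percept sequence so far
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")
    return agent

table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # -> Right
print(agent(("B", "Dirty")))   # -> Suck
```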
2.4.2 Types of Agents
• Condition-action rules: If a condition is met, take the corresponding action (e.g., braking when the car ahead stops).
• Example:
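The braking rule might look like the following sketch of a simple reflex agent; the percept fields and the extra rule are illustrative assumptions:

```python
# Sketch of condition-action rules in a simple reflex driving agent.
# The percept fields and rules are illustrative assumptions.

def simple_reflex_driver(percept):
    """Choose an action from the current percept alone, via condition-action rules."""
    if percept.get("car_in_front_is_braking"):
        return "initiate_braking"          # condition met -> take the action
    if percept.get("traffic_light") == "red":
        return "stop"
    return "keep_driving"                  # default when no rule matches

print(simple_reflex_driver({"car_in_front_is_braking": True}))   # -> initiate_braking
print(simple_reflex_driver({"traffic_light": "green"}))          # -> keep_driving
```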
3. Goal-Based Agents
4. Utility-Based Agents
• Goals alone may not be enough (e.g., multiple routes reach the destination, but some are faster, safer, or cheaper).
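A hypothetical utility function over routes illustrates the point: every route below reaches the destination (satisfies the goal), but the agent prefers the one with the highest utility. The routes, attributes, and weights are made-up examples:

```python
# Sketch of a utility-based choice among routes that all satisfy the goal.
# Route data and weights are illustrative assumptions.

routes = [
    {"name": "highway",   "time_min": 30, "accident_risk": 0.2, "toll_cost": 5.0},
    {"name": "back_road", "time_min": 45, "accident_risk": 0.1, "toll_cost": 0.0},
    {"name": "downtown",  "time_min": 40, "accident_risk": 0.4, "toll_cost": 2.0},
]

def utility(route):
    # Higher is better: penalize travel time, risk, and cost with example weights.
    return -(1.0 * route["time_min"] + 20.0 * route["accident_risk"] + 2.0 * route["toll_cost"])

best = max(routes, key=utility)
print(best["name"])   # -> highway (best trade-off under these weights)
```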
5. Learning Agents
• Components:
o Learning Element (LE): Improves agent performance.
o Performance Element (PE): Selects external actions.
o Critic: Gives feedback on how well the agent is doing with respect to the performance measure.
o Problem Generator: Suggests exploratory actions that lead to new, informative experiences.
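A structural sketch of how these components could fit together; the class, method, and parameter names are illustrative assumptions, not a fixed design:

```python
# Structural sketch of a learning agent; names are illustrative assumptions.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # chooses external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # judges behavior against the performance measure
        self.problem_generator = problem_generator       # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                  # how well is the agent doing?
        self.learning_element(feedback)                  # use the feedback to improve
        action = self.performance_element(percept)       # choose the next action
        return self.problem_generator(action)            # possibly swap in an exploratory action

# Toy usage with trivial stand-in components.
agent = LearningAgent(
    performance_element=lambda p: "default_action",
    learning_element=lambda fb: None,
    critic=lambda p: 0.0,
    problem_generator=lambda a: a,
)
print(agent.step("some percept"))   # -> default_action
```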
Atomic Representation
Factored Representation
• The state is split into variables/attributes, e.g.:
o Gas level.
o GPS location.
o Toll expenses.
Structured Representation
• Used in:
o Relational databases.
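The three representations can be contrasted in a small sketch; the driving attributes follow the examples above, and all concrete values are illustrative assumptions:

```python
# Contrast of the three state representations; all values are illustrative.

# Atomic: the state is an indivisible black box with no internal structure.
atomic_state = "S42"

# Factored: the state is split into variables/attributes.
factored_state = {
    "gas_level": 0.7,                    # fraction of a full tank
    "gps_location": (40.44, -79.94),     # latitude, longitude
    "toll_expenses": 3.50,               # money spent on tolls so far
}

# Structured: the state contains objects and explicit relations between them,
# much like rows and foreign keys in a relational database.
structured_state = {
    "objects": {"truck1": {"type": "truck"}, "cow1": {"type": "cow"}},
    "relations": [("in_front_of", "cow1", "truck1")],
}
```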
Summary
• Agent structure consists of architecture + program.