AI_Notes

The document provides an overview of Artificial Intelligence (AI), covering its definition, goals, approaches, foundations, history, applications, and challenges. Key topics include the Turing Test, rational agents, and the interdisciplinary nature of AI involving psychology, mathematics, and computer science. It also discusses the evolution of AI from early logic-based systems to modern applications in various fields, highlighting both the benefits and risks associated with AI technologies.

Artificial Intelligence – Week 1 Notes

1. Introduction to Artificial Intelligence (AI)

• AI is the study of intelligent agents—systems that perceive their environment and take actions to achieve goals.

• Goals of AI:

o Build machines that think and act like humans.

o Create systems that behave rationally (i.e., make optimal decisions).

2. Approaches to AI

2.1 Acting Humanly: The Turing Test Approach

• Turing Test: Proposed by Alan Turing to evaluate machine intelligence.

• A machine is considered intelligent if it can mimic human conversation well enough to fool a human judge.

• "Total Turing Test":

o Involves computer vision and speech recognition for perception.

o Requires robotics to interact with the physical world.

2.2 Thinking Humanly: Cognitive Modeling Approach

• AI should mimic human thought processes based on psychology and neuroscience.

• Methods include:

o Cognitive Science Models – Simulating human problem-solving.

o Neural Networks – Inspired by the human brain’s functioning.

2.3 Thinking Rationally: The Laws of Thought Approach

• AI should follow principles of logic to reason correctly.

• Problems:

o Real-world decisions involve uncertainty and incomplete knowledge.


o Logical reasoning alone is not always feasible.

2.4 Acting Rationally: The Rational Agent Approach

• Focuses on making the best possible decision rather than imitating humans.

• Rational agents act to maximize expected outcomes based on available information.

• Used in machine learning, robotics, and automated systems.

3. Foundations of AI

AI is based on multiple disciplines:

• Mathematics – Probability, logic, and statistics.

• Psychology – Understanding human cognition.

• Computer Science – Algorithms and data structures.

• Linguistics – Language processing.

• Neuroscience – Brain-inspired models.

4. History of AI

• 1950s-70s (Early AI) – Focus on logic, problem-solving, and rule-based systems.

• 1980s-90s (Machine Learning Boom) – Introduction of expert systems and neural networks.

• 2000s-Present (Modern AI) – Deep learning, data-driven approaches, and AI applications in robotics, healthcare, and finance.

5. Applications of AI

• Natural Language Processing (NLP) – Chatbots, virtual assistants (e.g., Siri, Alexa).

• Computer Vision – Facial recognition, autonomous driving.

• Robotics – Industrial robots, smart assistants.

• Healthcare – AI diagnosis, drug discovery.


• Finance – Fraud detection, stock market predictions.

6. AI Challenges & Future

• Ethical Issues – AI bias, job displacement, security risks.

• General AI vs. Narrow AI – Today's AI systems are narrow (specialized for single tasks); General AI with human-level breadth remains an open challenge.

• Explainability – Making AI decisions transparent and understandable.

Artificial Intelligence – Week 2 Notes


1.2 Foundations of AI

1.2.1 Philosophy

• Aristotle (384–322 BCE): Created rules of logical reasoning (syllogisms).

• Ramon Llull (1232–1315): Developed Ars Magna for logical reasoning.

• Leonardo da Vinci (1452–1519): Designed (but never built) a mechanical calculator.

• Blaise Pascal (1642): Built the Pascaline, a mechanical calculator.

• Thomas Hobbes (1588–1679): Suggested reasoning is a form of computation.

• René Descartes (1596–1650): Proposed dualism, distinguishing mind from the body.

• John Locke (1632–1704): Advocated empiricism, learning through sensory experiences.

1.2.2 Mathematics

• Formal logic: Developed in ancient Greece, India, and China.

• First-order logic: Introduced by Gottlob Frege (1879).

• Probability theory: Introduced by Gerolamo Cardano (1501–1576) for gambling.

• Statistics: Advanced by Ronald Fisher (1922).

• Algorithms: Concept first introduced by Al-Khwarizmi (9th century).


• Turing Machine (1936): Alan Turing developed the concept of computability.

• Complexity theory: Defined polynomial vs. exponential growth in the 1960s.

1.2.3 Economics

• Decision theory: Combines probability and utility theory.

• Game theory: Formalized by John von Neumann and Oskar Morgenstern (1944) for multi-agent decision-making.

• Markov Decision Processes (MDP): Bellman (1957) modeled sequential decision-making.

• Satisficing: Simon (1978) suggested finding "good enough" solutions instead of optimal ones.

1.2.4 Neuroscience

• Paul Broca (1824-1880): Identified Broca’s area (speech production).

• Santiago Ramón y Cajal (1852-1934): Discovered neurons as individual units.

• Hans Berger (1920s): Invented the electroencephalograph (EEG).

• fMRI (1990s): Allowed imaging of brain activity.

• Optogenetics (1999-2007): Enabled neuron control using light.

1.2.5 Psychology

• Wilhelm Wundt (1879): Established the first laboratory for experimental psychology (Leipzig).

• Behaviorism (early 1900s): Studied only observable stimulus-response behavior, rejecting introspection.

• Cognitive Psychology: Kenneth Craik (1943) re-legitimized the study of internal mental processes, modeling the mind as an information processor.

• Human-Computer Interaction (HCI): Combines psychology and AI.

1.2.6 Computer Engineering

• Early electronic computers: Colossus, built during WWII for codebreaking (a special-purpose, not general-purpose, machine).

• First programmable computer (1941): Z-3 by Konrad Zuse.

• Modern AI hardware: GPUs were widely adopted for deep learning around 2012; AI-specific chips such as Google's TPUs followed (2016).

• Quantum computing: A prospective, still-maturing route to further AI speedups.

1.2.7 Control Theory & Cybernetics


• First self-regulating machine: Water clock by Ktesibios (250 BCE).

• Feedback systems: Steam engine governor (James Watt, 18th century).

• Cybernetics (1940s): Norbert Wiener founded the field; W. Ross Ashby built homeostatic devices that self-regulate via feedback.

1.2.8 Linguistics

• Noam Chomsky (1950s): Criticized behaviorist language learning theories.

• AI and NLP: Machine learning applied to language processing.

1.3 History of AI

1.3.1 Inception (1943–1956)

• McCulloch & Pitts (1943): Modeled artificial neurons.

• Hebbian learning (1949): Donald Hebb's rule for strengthening connections between neurons that fire together.

• Turing Test (1950): Alan Turing’s proposal to test machine intelligence.

• Dartmouth Conference (1956): Official birth of AI, where John McCarthy coined the term "artificial intelligence."

1.3.2 Early Growth (1952–1969)

• General Problem Solver (GPS, 1956): Newell & Simon’s AI problem-solving model.

• Arthur Samuel (1956): Created self-learning checkers program.

• LISP (1958): First AI programming language by John McCarthy.

• Resolution (1965): J.A. Robinson developed a complete inference algorithm for first-order logic (first-order logic itself dates to Frege, 1879).

• Microworlds (1963-1970s): AI models for limited problems (e.g., block world, chess-playing programs).

1.3.3 Challenges (1966–1973)

• AI faced issues due to limited hardware and overly ambitious goals.

• Herbert Simon (1957): Predicted a machine would win at chess in 10 years (took 40).

1.3.4 Expert Systems (1969–1986)

• MYCIN (1970s): Diagnosed blood infections better than junior doctors.

• R1 (1982): First successful commercial expert system; saved DEC an estimated $40 million per year configuring new computer orders.

• Limitations: Expert systems couldn’t learn from experience.


1.3.5 Neural Networks (1986–Present)

• Backpropagation (mid-1980s): Its rediscovery enabled effective training of multilayer neural networks.

1.3.6 Machine Learning & Probabilistic AI (1987–Present)

• Bayesian Networks (Pearl, 1988): Gave AI a principled framework for reasoning under uncertainty.

• UC Irvine ML datasets: Established AI benchmarks.

• Speech Recognition (1980s): Hidden Markov Models (HMMs) became dominant.

1.3.7 Big Data (2001–Present)

• Large datasets: Enabled improvements in AI accuracy (e.g., ImageNet for image recognition).

• Commercial AI: Used in business intelligence, recommendation systems, and social media.

1.3.8 Deep Learning (2011–Present)

• LeCun (1990s): Developed Convolutional Neural Networks (CNNs).

• AlexNet (2012): Revolutionized image classification.

• Applications: Used in speech recognition, medical diagnosis, and gaming (e.g., AlphaGo, 2016).

1.4 The State of AI Today

• Robotic vehicles: Autonomous cars (DARPA Grand Challenge, 2005).

• Speech recognition: Automated systems (e.g., United Airlines booking).

• Game playing: IBM's Deep Blue defeated world chess champion Garry Kasparov (1997).

• Spam detection: AI filters billions of messages daily.

• Machine Translation: Google Translate processes billions of words per day.

• Healthcare: AI outperforms doctors in some diagnoses.

• Climate science: AI detects weather patterns.

1.5 Risks and Benefits of AI

• Weapons: Autonomous lethal systems.

• Surveillance: AI used for mass monitoring and persuasion.


• Bias in AI: Machine learning models can reinforce societal biases.

• Job automation: AI may replace human workers, increasing inequality.

• Cybersecurity risks: AI can both protect and attack digital systems.

• Superintelligence risk: The challenge of controlling AI beyond human intelligence.

Why AI?

• Handles large, complex, and evolving datasets.

• Automates decision-making.

• Advances industries like healthcare, finance, robotics, and NLP.

Artificial Intelligence – Week 3 Notes


2.1 Agents and Environments

• Agent: Anything that perceives its environment via sensors and acts upon it using actuators.

• Examples:

o Human agent: Eyes, ears (sensors); hands, legs, vocal tract (actuators).

o Robotic agent: Cameras, infrared sensors (sensors); motors (actuators).

o Software agent: Reads files, network packets (sensors); writes files, sends packets (actuators).

Agent Percepts and Actions

• Percept: The input an agent receives at a given moment.

• Percept sequence: Complete history of what an agent has perceived.

• Agent function: Maps percept sequences to actions.

• Agent program: Implementation of the agent function in a physical system.

Example: Vacuum Cleaner Agent

• Percepts: Current location and dirt status.

• Actions: Move left, move right, suck dirt, do nothing.

• Simple strategy: If the square is dirty, clean it; otherwise, move to the next square.
2.2 Good Behavior: The Concept of Rationality

2.2.1 Performance Measures

• Defines success criteria for an agent.

• Examples for a vacuum cleaner agent:

o Noise level.

o Efficiency of cleaning.

o Power consumption.

2.2.2 Rationality

• A rational agent chooses actions that maximize expected performance (see the sketch after this list).

• Depends on:

o Performance measure.

o Prior knowledge of the environment.

o Possible actions.

o Percept sequence.
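
Putting the four dependencies above together, rationality has a compact formal reading. A sketch in standard notation (the formalization is mine, not from the notes): for every percept sequence, a rational agent picks

```latex
a^{*} \;=\; \operatorname*{argmax}_{a \,\in\, \mathrm{Actions}} \;
\mathbb{E}\!\left[\, \mathrm{Performance} \mid e_{1:t},\; K,\; a \,\right]
```

where e_{1:t} is the percept sequence observed so far and K is the agent's prior knowledge of the environment.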

2.2.3 Omniscience, Learning, and Autonomy

• Omniscience: Knowing actual outcomes in advance (impossible in reality).

• Rationality: Maximizes expected performance.

• Learning: Agents gather data to improve performance.

• Autonomy: Agents rely more on percepts and less on prior knowledge over time.

2.3 The Nature of Environments

2.3.1 Task Environment (PEAS)

• Performance measure

• Environment

• Actuators
• Sensors

• Example: Automated Taxi

o Performance measure: Safety, speed, comfort, legality.

o Environment: Roads, traffic, pedestrians.

o Actuators: Steering, accelerator, brakes.

o Sensors: Cameras, GPS, speedometer.
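
Since a PEAS description is just structured data, it can be written down directly in code. A minimal sketch in Python (the PEAS class and its field names are my own illustration, not a standard API):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task environment: Performance measure, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated-taxi example from the notes, captured as data.
automated_taxi = PEAS(
    performance=["safety", "speed", "comfort", "legality"],
    environment=["roads", "traffic", "pedestrians"],
    actuators=["steering", "accelerator", "brakes"],
    sensors=["cameras", "GPS", "speedometer"],
)
```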

2.3.2 Properties of Task Environments

1. Fully Observable vs. Partially Observable

o Fully observable: Agent has complete information (e.g., chess).

o Partially observable: Limited information due to sensor constraints (e.g., poker).

2. Single-Agent vs. Multi-Agent

o Single-agent: Solves problems alone (e.g., crossword puzzle).

o Multi-agent: Interacts with other agents (e.g., self-driving cars, chess).

3. Deterministic vs. Stochastic

o Deterministic: Outcome is predictable (e.g., tic-tac-toe).

o Stochastic: Outcome involves randomness (e.g., rolling dice).

4. Episodic vs. Sequential

o Episodic: Decisions don’t affect future states (e.g., spam filtering).

o Sequential: Current decisions impact future outcomes (e.g., chess, driving).

5. Static vs. Dynamic

o Static: Environment doesn’t change while agent is deciding (e.g., crossword puzzles).

o Dynamic: Environment changes in real-time (e.g., self-driving cars).

6. Discrete vs. Continuous

o Discrete: Finite states and actions (e.g., chess).

o Continuous: Infinite states/actions (e.g., robot movement).


7. Known vs. Unknown

o Known: Agent understands the rules (e.g., chess game rules).

o Unknown: Agent must learn rules through experience (e.g., new video game).

Hardest Task Environment

• Partially observable, multi-agent, stochastic, sequential, dynamic, continuous, unknown.

• Example: Self-driving cars operating in real-world traffic.
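
The seven properties can be tabulated per task. A rough sketch using the examples from these notes (the labels are judgment calls, not canonical; e.g., tournament chess with a clock is sometimes called semi-dynamic):

```python
# (observable, agents, outcome, decisions, change, state space, rules)
ENVIRONMENTS = {
    "crossword":    ("fully",     "single", "deterministic", "sequential", "static",  "discrete",   "known"),
    "chess":        ("fully",     "multi",  "deterministic", "sequential", "static",  "discrete",   "known"),
    "poker":        ("partially", "multi",  "stochastic",    "sequential", "static",  "discrete",   "known"),
    "self-driving": ("partially", "multi",  "stochastic",    "sequential", "dynamic", "continuous", "unknown"),
}
```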

Artificial Intelligence – Week 3 Notes (Part B)


2.4 The Structure of Agents

• Agent behavior: Actions performed after any given percept sequence.

• Agent = Architecture + Program

o Architecture: The physical computing device with sensors and actuators.

o Program: The implementation of the agent function.

Agent Function

• Mathematical function that maps percepts to actions.

• Implemented as the agent program.

• Workflow: Environment → Sensors → Agent Function → Actuators → Environment
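
In code, this workflow is a simple sense-act loop. A minimal sketch (the environment object and its two methods are hypothetical placeholders, used only to make the loop concrete):

```python
def run(environment, agent_program, steps=100):
    """Environment -> Sensors -> Agent Function -> Actuators -> Environment."""
    for _ in range(steps):
        percept = environment.percept()   # sensors: read the current state
        action = agent_program(percept)   # agent program: implements the agent function
        environment.execute(action)       # actuators: act back on the environment
```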

2.4.1 Agent Programs

• Agent program: Takes current percept as input and returns an action.

• Agent function: May or may not use the entire percept history.

• Table-driven approach is impractical because:

o Chess requires 10¹⁰⁵ entries in a lookup table.

o The observable universe has fewer than 10⁸⁰ atoms.

o No physical agent can store or learn all required table entries.

• Key AI challenge: Writing small, efficient programs that generate rational behavior.
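
For contrast, a table-driven agent is trivial to write; the impossibility lives entirely in the table, not the program. A sketch along the lines of the classic textbook pseudocode (names are mine):

```python
percept_history = []

def table_driven_agent(percept, table):
    """Look up an action keyed on the ENTIRE percept sequence seen so far."""
    percept_history.append(percept)
    # The program is tiny, but `table` would need one entry per possible
    # percept sequence -- vastly more entries than atoms in the universe.
    return table.get(tuple(percept_history), "no-op")
```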
2.4.2 Types of Agents

1. Simple Reflex Agents

• Act based only on the current percept, ignoring percept history.

• Example: Vacuum cleaner agent cleans if current square is dirty.

• Condition-action rules: If condition is met, take action (e.g., braking when a car ahead stops).

• Works well in fully observable environments.
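
A minimal sketch of the vacuum-cleaner agent above as a simple reflex agent, assuming the classic two-square world with locations "A" and "B" and the (location, status) percept format from section 2.1 (function and value names are mine):

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules over the current percept only; no history kept."""
    location, status = percept               # e.g., ("A", "dirty")
    if status == "dirty":
        return "suck"
    return "right" if location == "A" else "left"

print(reflex_vacuum_agent(("A", "dirty")))   # -> suck
print(reflex_vacuum_agent(("A", "clean")))   # -> right
```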

2. Model-Based Reflex Agents

• Maintain an internal model of the world for partially observable environments.

• Example:

o Car braking: Uses previous camera images to detect red lights.

o Lane changing: Keeps track of nearby cars.

• World model: Knowledge of how the environment changes.
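
The car-braking example as a model-based sketch: the agent keeps one piece of internal state (the previous camera frame) so it can act sensibly in a partially observable world. All names and the one-frame world model are my simplifications:

```python
class BrakingAgent:
    """Model-based reflex agent: combines stored state with the current percept."""

    def __init__(self):
        self.prev_frame = None   # internal state: what the camera saw last step

    def act(self, frame):
        # World model: brake lights in two consecutive frames means the car
        # ahead really is braking, not a single-frame sensor glitch.
        braking = (self.prev_frame is not None
                   and self.prev_frame["brake_lights"]
                   and frame["brake_lights"])
        self.prev_frame = frame  # update the internal model
        return "brake" if braking else "cruise"
```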

3. Goal-Based Agents

• Use goals to decide between possible actions.

• Example: A taxi at an intersection decides to turn left, right, or go straight based on its destination.

• Combines goal information with percepts to choose the best action.

4. Utility-Based Agents

• Utility = quality of being useful.

• Goals alone may not be enough (e.g., multiple routes reach the destination, but some are faster, safer, or cheaper).

• Utility function: Assigns a numeric value to each state to evaluate performance.
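
A sketch of the route example: every route below reaches the goal, so a goal-based agent cannot choose between them, but a utility function can. The weights and numbers are illustrative assumptions, not part of the notes:

```python
def utility(route):
    """Map a state (a candidate route) to a single number; higher is better."""
    return -1.0 * route["minutes"] - 5.0 * route["risk"] - 0.5 * route["toll"]

routes = [
    {"name": "highway",  "minutes": 20, "risk": 0.3, "toll": 5.0},
    {"name": "backroad", "minutes": 35, "risk": 0.1, "toll": 0.0},
]
print(max(routes, key=utility)["name"])   # -> highway
```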

5. Learning Agents

• Learn from experience to improve performance over time.

• Components:
o Learning Element (LE): Improves agent performance.

o Performance Element: Selects external actions.

o Critic: Provides feedback on agent’s performance.

o Problem Generator: Suggests new experiences for learning.

• Example: Reinforcement learning in AI systems.
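
One concrete instance of this critic/learning-element loop is the Q-learning update from reinforcement learning, where the reward signal plays the critic's role. A sketch of the standard update rule (the dictionary layout and hyperparameter values are my choices):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Nudge the (state, action) value toward reward + discounted future value."""
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    old = Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] = old + alpha * (reward + gamma * best_next - old)
```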

2.4.7 How Agent Program Components Work

Atomic Representation

• Each state is indivisible with no internal structure.

• Example: Finding a driving route using a sequence of cities.

Factored Representation

• Splits a state into multiple variables/attributes.

• Example (Car driving):

o Gas level.

o GPS location.

o Oil warning light status.

o Toll expenses.

• Used in propositional logic, planning, Bayesian networks, and machine learning.
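
The atomic/factored contrast is easiest to see side by side. A sketch using the examples above (the values are made up for illustration):

```python
# Atomic: the state is one indivisible label, e.g., the current city on a route.
atomic_state = "CityB"

# Factored: the same driving situation split into named variables an
# algorithm can reason over individually.
factored_state = {
    "gas_level": 0.4,          # fraction of a full tank
    "gps": (46.77, 23.59),     # latitude, longitude
    "oil_warning": False,
    "toll_spent": 12.50,       # money spent on tolls so far
}
```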

Structured Representation

• Views the world as objects and relationships, not just variables.

• Used in:

o Relational databases.

o First-order logic & probability models.

o Knowledge-based learning & natural language processing (NLP).

Summary
• Agent structure consists of architecture + program.

• Different types of agents exist: Reflex, Model-based, Goal-based, Utility-based, Learning.

• AI uses different representations (atomic, factored, structured) for decision-making.

• Learning agents improve autonomously, adapting to complex environments.
