Artificial Intelligence: UNIT-1
UNIT-1
INTRODUCTION
Introduction
Definition
Future of Artificial Intelligence
Characteristics of Intelligent Agents
Typical Intelligent Agents
Problem Solving Approach to Typical AI problems.
Lecture 1: History, Introduction and Definition
Year 1943: Warren McCulloch and Walter Pitts proposed a model of artificial neurons.
Year 1949: Donald Hebb proposed weight updating rule for modifying the connection strength
between neurons.
Year 1950: Alan Turing published "Computing Machinery and Intelligence", which proposed a test to check
a machine's ability to exhibit intelligent behavior equivalent to that of a human.
Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence
program", named "Logic Theorist". This program proved 38 of 52
mathematics theorems.
Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist
John McCarthy at the Dartmouth Conference.
Year 1966: Joseph Weizenbaum created the first chatbot, named ELIZA.
Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
----Continue
The first AI winter (1974-1980): The period from 1974 to 1980 was the first AI winter.
An AI winter refers to a period in which computer scientists dealt with a
severe shortage of government funding for AI research. During AI winters, public interest
in artificial intelligence also decreased.
Year 1980-87: After the AI winter, AI came back with "Expert Systems". Expert systems
were programs that emulated the decision-making ability of a human expert.
The second AI winter (1987-1993)
Year 1997: IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first
computer to beat a reigning world chess champion.
Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
Year 2006: Companies like Facebook, Twitter, and Netflix also started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
Year 2011: IBM's Watson won Jeopardy!, a quiz show.
--Continue
Year 2012: Google launched the Android feature "Google Now".
Year 2014: The chatbot "Eugene Goostman" won a competition in the famous "Turing test".
Year 2018: IBM's "Project Debater" debated complex topics with two master
debaters and performed extremely well. Google demonstrated "Duplex", an AI
virtual assistant that booked a hairdresser appointment over the phone,
and the woman on the other end did not notice that she was talking to a machine.
AI has now developed to a remarkable level. Concepts such as deep learning, big data, and data
science are currently booming.
Introduction
"The science and engineering of making intelligent machines, especially intelligent computer programs." - John McCarthy
AI is the study of how the human brain thinks, learns, decides, and works when it tries to solve
a problem.
The outputs of this study are intelligent software systems.
• The aim of AI is to improve computer functions that are related to human knowledge,
for example learning, reasoning, and problem solving.
• AI is an approach to making a computer, a robot, or a product think the way a smart
human thinks.
Intelligence is intangible. It is composed of:
- Reasoning
- Learning
- Problem Solving
- Perception
- Linguistic Intelligence
Definition
Disadvantages
High cost
Cannot replace humans
Lack of creativity
Risk of unemployment
No feelings and emotions
Lecture 2: Types of AI
Type-1: Based on Capability
Artificial Narrow Intelligence (ANI)
Artificial General Intelligence (AGI)
Artificial Super Intelligence (ASI)
Type-2: Based on Functionality
Reactive Machines
Limited Memory
Theory of Mind
Self- Awareness
Type-2: Based on Functionality
Reactive Machines
These machines are the most common form of AI application.
Such AI systems do not store memories or past experiences for future actions.
These machines focus only on the current scenario and react to it with the best possible
action.
Example: Deep Blue, IBM's chess-playing supercomputer.
Type-2: Based on Functionality
Limited Memory
Limited memory machines can retain data for a short period of time.
While they can use this data for that specific period, they cannot add
it to a library of their experiences.
Many self-driving cars use the limited memory model; as sketched below, they store data such as:
- the speed of nearby cars
- the distance to those cars
- the speed limits and other information that helps them navigate the road
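As a rough, illustrative sketch (not taken from any real vehicle's software), a limited memory agent can be modelled in Python as a fixed-size buffer of recent observations; the class name LimitedMemoryAgent, the window size, and the decision thresholds are assumptions made for this example.

from collections import deque

class LimitedMemoryAgent:
    # Keeps only the last `window` observations; older ones fall out,
    # so nothing is added to a permanent library of experiences.

    def __init__(self, window=5):
        self.memory = deque(maxlen=window)  # oldest entries are discarded automatically

    def observe(self, nearby_car_speed, distance_to_car, speed_limit):
        # Store the current sensor reading in short-term memory.
        self.memory.append({
            "nearby_car_speed": nearby_car_speed,
            "distance_to_car": distance_to_car,
            "speed_limit": speed_limit,
        })

    def decide(self):
        # React using only the short-term buffer (illustrative thresholds).
        latest = self.memory[-1]
        if latest["distance_to_car"] < 10 or latest["nearby_car_speed"] < 20:
            return "slow down"
        return "maintain speed"

agent = LimitedMemoryAgent(window=5)
agent.observe(nearby_car_speed=15, distance_to_car=8, speed_limit=50)
print(agent.decide())  # -> slow down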
Type-2: Based on Functionality
Theory of Mind
AI should understand human emotions and beliefs, and should be able to interact
socially.
Self-Awareness
Self-aware AI is the future of artificial intelligence.
These machines will be super intelligent and will have
their own consciousness, sentiments, and self-awareness.
These machines will be smarter than the human mind.
This type is hypothetical at this point.
AI Applications
Chatbots
AI in healthcare
Handwriting recognition
Speech recognition
Natural language processing
AI in gaming
AI in finance
AI in robotics
AI in security
AI in social media
AI in education
Lecture 3: Future of Artificial Intelligence
Pro-activeness: Intelligent agents are not just reactive; they can take
initiative and perform actions to achieve their goals.
As per Russell and Norvig, an environment can have various features from the point of
view of an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs Sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
Fully Observable vs Partially Observable
If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise it is partially observable.
A fully observable environment is easier to handle, as there is no need to maintain an internal
state to keep track of the history of the world.
If an agent has no sensors in any environment, then such an environment is called
unobservable.
--Continue
For example, in a chess game the state of the system, that is, the position of all the pieces on
the chess board, is available the whole time, so the player can make an optimal decision; such an
environment is fully observable.
An example of a partially observable system would be a card game in which some of the cards
are discarded into a pile face down. In this case the observer is only able to view their own
cards and potentially those of the dealer.
Deterministic vs Stochastic
When the agent's current state and selected action completely determine the next state of the
environment, the environment is said to be deterministic.
A stochastic environment is random in nature; the next state is not unique and cannot be completely
determined by the agent.
Example:
Chess - there are only a limited number of possible moves for a piece in the current state, and these
moves can be determined exactly.
Self-driving cars - the results of a self-driving car's actions are not unique; they vary from time to time.
Discrete vs Continuous
The game of chess is discrete as it has only a finite number of moves. The number of
moves might vary with every game, but it is still finite.
An environment in which the possible actions cannot be enumerated, i.e. is not
discrete, is said to be continuous.
Agents can be grouped into five classes based on their degree of perceived
intelligence and capability.
All these agents can improve their performance and generate better actions over
time.
These are given below:
• Simple Reflex Agent
• Model-based reflex agent
• Goal-based agents
• Utility-based agent
• Learning agent
Simple Reflex Agent
They ignore the rest of the percept history and act only on the basis of the current
percept.
The agent function is based on the condition-action rule.
A condition-action rule is a rule that maps a state, i.e. a condition, to an action.
If the condition is true, then the action is taken, else not.
This agent function only succeeds when the environment is fully observable.
For simple reflex agents operating in partially observable environments, infinite
loops are often unavoidable.
It may be possible to escape from infinite loops if the agent can randomize its
actions, as in the sketch below.
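A minimal Python sketch of a simple reflex agent, using the classic two-location vacuum-cleaner world as an assumed example; the percepts, the RULES table, and the 10% randomization rate are illustrative choices, not taken from the text.

import random

# Condition-action rules: map the current percept directly to an action.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "MoveRight",
    ("B", "Clean"): "MoveLeft",
}

def simple_reflex_agent(percept):
    # Acts on the current percept only; the percept history is ignored.
    return RULES[percept]

def randomized_reflex_agent(percept):
    # Same rules, but occasionally acts randomly, which can help the agent
    # escape infinite loops in partially observable environments.
    if random.random() < 0.1:
        return random.choice(["Suck", "MoveRight", "MoveLeft"])
    return RULES[percept]

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_agent(("B", "Clean")))   # -> MoveLeft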
Goal-based Agents
These kinds of agents take decisions based on how far they currently are from
their goal (a description of desirable situations).
Their every action is intended to reduce the distance to the goal.
This allows the agent a way to choose among multiple possibilities, selecting the
one which reaches a goal state.
The knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible.
They usually require search and planning.
The goal-based agent’s behavior can easily be changed.
Contd..
Pseudocode for a goal-based self-driving agent:
Initialize sensors and actuators
Load maps and traffic rules into Knowledge Base
Set Goal = "Transport passengers safely and efficiently"
Loop:
    if destination is reached:
        Update Goal status as "achieved"
        Stop vehicle
        Exit loop
    else:
        Plan path to destination using probabilistic roadmaps
        Execute path
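The pseudocode above can be turned into a small runnable Python sketch. The grid world and the breadth-first plan_path planner below are simplifications chosen for illustration; the probabilistic roadmaps mentioned in the pseudocode are not implemented here.

from collections import deque

def plan_path(start, goal, grid):
    # Breadth-first planner on a 4-connected grid (a stand-in for a roadmap planner).
    rows, cols = len(grid), len(grid[0])
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path, node = [], cell
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # the goal cannot be reached

def goal_based_agent(position, destination, grid):
    # Keep planning and acting until the goal (reach the destination) is achieved.
    while position != destination:
        path = plan_path(position, destination, grid)
        if path is None:
            return "goal unreachable"
        position = path[1]  # execute the next step of the plan
    return "goal achieved: stop vehicle"

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]  # 1 marks a blocked cell
print(goal_based_agent((0, 0), (2, 0), grid))  # -> goal achieved: stop vehicle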
Contd.....
There can be various solutions to a single problem, which are achieved by different
heuristics.
Also, some problems have unique solutions.
It all rests on the nature of the given problem.
Examples of Problems in AI
Chess
N-Queen problem
Tower of Hanoi Problem
Travelling Salesman Problem
Water-Jug Problem
Problem Solving Techniques
Time Complexity: Time complexity is a measure of the time an algorithm takes to complete its
task.
Space Complexity: Space complexity is the maximum storage space required at any point during the
search, expressed as a function of the complexity of the problem.
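As a standard textbook illustration: for a search tree with branching factor b and the shallowest goal at depth d, breadth-first search needs O(b^d) time and O(b^d) space, whereas depth-first search needs O(b^m) time but only O(b*m) space, where m is the maximum depth of the tree.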
Types of search algorithms:
Uninformed Search
• The uninformed search does not contain any domain knowledge, such as the closeness or the
location of the goal.
• Uninformed search explores the search tree using only the information given in the problem
definition (the initial state, the operators, and a test for the goal), so it is also called
blind search.
• It examines the nodes of the tree one by one until it reaches the goal node; a minimal breadth-first example is sketched below.
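A minimal Python sketch of a blind (uninformed) search: the breadth-first search below uses only the start state, a successor relation, and a goal test, with no estimate of how close any node is to the goal. The example graph is made up.

from collections import deque

def breadth_first_search(start, goal, neighbors):
    # Blind search: expands nodes in the order they were generated,
    # using no knowledge of how close a node is to the goal.
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # the goal is not reachable

# Illustrative graph given as an adjacency list.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["C", "G"], "C": ["G"]}
print(breadth_first_search("S", "G", graph))  # -> ['S', 'B', 'G']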
Informed Search
• Informed search algorithms use domain knowledge. In an informed search, problem
information is available which can guide the search.
• Informed search strategies can find a solution more efficiently than an uninformed
search strategy.
• Informed search is also called a Heuristic search.
• A heuristic is a technique that might not always find the best solution, but it is designed
to find a good solution in a reasonable time (see the greedy best-first sketch below).
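A minimal Python sketch of an informed search: greedy best-first search always expands the node with the smallest heuristic value h(n). The example graph and the heuristic values are assumed for illustration.

import heapq

def greedy_best_first_search(start, goal, neighbors, h):
    # Informed search: the heuristic h(n) decides which node to expand next.
    frontier = [(h[start], [start])]  # priority queue ordered by heuristic value
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], path + [nxt]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
h = {"S": 5, "A": 2, "B": 4, "G": 0}  # assumed estimates of the distance to the goal
print(greedy_best_first_search("S", "G", graph, h))  # -> ['S', 'A', 'G']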
Heuristics
The heuristic method helps comprehend a problem and devise a solution based purely
on experiments and trial-and-error methods.
However, these heuristics often do not provide the optimal solution to a specific
problem.
Instead, they offer efficient solutions to attain immediate goals.
Therefore, developers utilize them when classic methods do not provide an efficient
solution for the problem.
Since heuristics provide time-efficient solutions at the cost of accuracy, they are often
combined with optimization algorithms to improve efficiency.
Example: TSP
The most common example of using a heuristic is the Travelling Salesman Problem.
A list of cities and the distances between them is provided.
The user has to find a route for the salesman that visits every city on the list and
returns to the starting city.
Greedy algorithms tackle this NP-hard problem by constructing an approximate solution:
according to this heuristic, picking the best (nearest) next city from the current city at
every step yields a good, though not necessarily optimal, route.
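A minimal Python sketch of this nearest-neighbour heuristic; the 4-city distance matrix is a made-up example, and the route it returns is good but not guaranteed to be optimal.

def nearest_neighbour_tsp(dist, start=0):
    # Greedy TSP heuristic: from the current city, always move to the
    # nearest unvisited city, then return to the start.
    n = len(dist)
    route, visited = [start], {start}
    current = start
    while len(visited) < n:
        # Pick the closest city that has not been visited yet.
        nxt = min((c for c in range(n) if c not in visited), key=lambda c: dist[current][c])
        route.append(nxt)
        visited.add(nxt)
        current = nxt
    route.append(start)  # return to the starting city
    cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
    return route, cost

# Symmetric distance matrix for 4 cities (illustrative values).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour_tsp(dist))  # -> ([0, 1, 3, 2, 0], 18)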
Types of Searching Algorithms
Informed Search
• Greedy Search
• A* Search
Uninformed Search
• Breadth-First Search
• Depth-First Search
• Uniform Cost Search
• Iterative Deepening Depth-First Search
• Bidirectional Search
Evolutionary Computation
• Examples:
• Image classification: the system is able to accurately predict that a given image
belongs to a certain class.
• Object detection: the system can locate and identify multiple objects within an image.
• The field of NLP involves making computers perform useful tasks with
the natural languages humans use. The input and output of an NLP system
can be:
• Speech
• Written text