
Artificial Intelligence (AI) Notes

(BAD402)

Compiled by: Dr. Amjad Khan, Associate Professor,
Department of AI & ML, BIET Davangere

MODULE-1
Introduction: What is AI? Foundations and History of AI. Intelligent Agents:
Agents and environment, Concept of Rationality, The nature of environment, The
structure of agents.

Explain AI in detail.

Artificial intelligence, or AI, is technology that enables computers and
machines to simulate human intelligence and problem-solving capabilities. AI
can perform tasks that would otherwise require human intelligence or
intervention. Digital assistants, GPS guidance, autonomous vehicles, and
generative AI tools (such as OpenAI's ChatGPT) are just a few examples of AI in
the daily news and in our daily lives. The last time generative AI loomed this
large, the breakthroughs were in computer vision; now the leap forward is
in natural language processing (NLP). Today, generative AI can learn and
synthesize not just human language but other data types, including images,
video, software code, and even molecular structures.
As a field of computer science, artificial intelligence encompasses
machine learning and deep learning. These disciplines involve the
development of AI algorithms, modeled after the decision-making processes
of the human brain, that can ‘learn’ from available data and make increasingly
accurate classifications or predictions over time.
Examples:
1. Unimate: The first digitally operated and programmable robot, invented
by George Devol in 1954, represents the foundation of the modern robotics
industry. It is considered the first industrial robot and was developed in
the USA: a hydraulic arm called Unimate, used to lift heavy loads, which
was sold to General Motors.
2. Sophia is a social humanoid robot developed by the Hong Kong-based
company Hanson Robotics. Sophia was activated on February 14, 2016.
Hanson Robotics’ most advanced human-like robot, Sophia, personifies
our dreams for the future of AI. Sophia became the first robot ever to be
granted full Saudi Arabian citizenship; the announcement was made during the
Future Investment Initiative held in the Saudi Arabian capital, Riyadh.
Discuss the advantages and disadvantages of AI in the present scenario.

Advantages of AI
 Reduction in Human Error: One of the biggest benefits of Artificial Intelligence is that it
can significantly reduce errors and increase accuracy and precision.

 Zero Risks: Another big benefit of AI is that humans can overcome many risks by letting AI
robots take on risky tasks for us.

 24x7 Availability: AI systems can work endlessly without breaks. They process information much
faster than humans and can perform multiple tasks at a time with accurate results, and they can
handle tedious, repetitive jobs easily.

 Digital Assistance: Some of the most technologically advanced companies engage with
users using digital assistants, which eliminates the need for human personnel. Many
websites utilize digital assistants to deliver user-requested content.

 New Inventions: In practically every field, AI is the driving force behind numerous
innovations that will aid humans in resolving the majority of challenging issues.

 Unbiased Decisions: Human beings are driven by emotions, whether we like it or not. AI
on the other hand, is devoid of emotions and highly practical and rational in its approach. A
huge advantage of Artificial Intelligence is that it doesn't have any biased views, which
ensures more accurate decision-making.

 Perform Repetitive Jobs: We will be doing a lot of repetitive tasks as part of our daily
work, such as checking documents for flaws and mailing thank-you notes, among other
things. We may use artificial intelligence to efficiently automate these menial chores and
even eliminate "boring" tasks for people, allowing them to focus on being more creative.

 Daily Applications: Today, our everyday lives are entirely dependent on mobile devices
and the internet. We utilize a variety of apps, including Google Maps, Alexa, Siri, Cortana on
Windows, OK Google, taking selfies, making calls, responding to emails, etc. With the use of
various AI-based techniques, we can also anticipate today’s weather and the days ahead.

 AI in Risky Situations: This is one of the main benefits of artificial intelligence. By creating
AI robots that can perform perilous tasks on our behalf, we can overcome many of the
dangerous limitations that humans face. They can be used effectively in any type of natural
or man-made calamity, whether it be going to Mars, defusing a bomb, exploring the deepest
regions of the oceans, or mining for coal and oil.

 Medical Applications: AI has also made significant contributions to the field of medicine,
with applications ranging from diagnosis and treatment to drug discovery and clinical trials.
AI-powered tools can help doctors and researchers analyze patient data, identify potential
health risks, and develop personalized treatment plans. This can lead to better health
outcomes for patients and help accelerate the development of new medical treatments and
technologies.
Disadvantages of AI
 High Costs
 No Creativity
 Unemployment
 Make Humans Lazy
 No Ethics
 Emotionless
 No Improvement

What is meant by Intelligence? Illustrate the intelligence parameters.


Intelligence is the ability to acquire and apply knowledge.
Intelligence is the ability to learn and solve problems. The aim of AI is to make computers
intelligent so that they can act intelligently: if computers can, somehow, solve real-world
problems by improving on their own from past experience, they would be called "intelligent".

Intelligence Parameters:
1. Reasoning
2. Learning
3. Problem solving
4. Perception
5. Linguistic Intelligence

Define Intelligence Parameters.


 Reasoning: Reasoning in Artificial Intelligence refers to the process by which AI
systems analyze information, make inferences, and draw conclusions to solve
problems or make decisions.
 Learning: Learning refers to gaining skills or knowledge; it is the process of
acquiring knowledge. In AI, machine learning (ML) and deep learning (DL) algorithms and
methods are used for this purpose.
 Problem Solving: AI problem-solving involves a series of distinct steps and
methodologies that enable machines to understand, analyze, and resolve complex
problems.
 Perception: Perception in AI involves the interpretation of data from sensors,
cameras, microphones, or other input devices to understand the environment. It
mimics human perception by enabling machines to recognize objects, understand
speech, or interpret visual and auditory signals.
 Linguistic Intelligence: Linguistic intelligence refers to the ability to “understand
and use spoken and written language.” It is the ability to think in words and to use
language to express and appreciate complex meanings.
What are the steps involved in AI’s problem solving?
Steps in problem-solving include:
 Problem definition: Detailed specification of inputs and acceptable system
solutions.
 Problem analysis: Analyze the problem thoroughly.
 Knowledge Representation: Collect detailed information about the problem and
define all possible techniques.
 Selection of the best technique to solve the problem.

Clarify the different types of AI and summarize their functions.


1. Narrow AI (Weak AI)
2. Strong AI: General AI (AGI) and Super AI (ASI)
Narrow AI: Weak AI—also known as narrow AI or artificial narrow intelligence (ANI)—is
AI trained and focused to perform specific tasks. It enables some very robust applications,
such as Apple's Siri, Amazon's Alexa, IBM watsonx™, and self-driving vehicles.
Strong AI: Strong AI is made up of artificial general intelligence (AGI) and artificial super
intelligence (ASI).
AGI, or general AI, is a theoretical form of AI where a machine would have an intelligence
equal to humans; it would be self-aware with a consciousness that would have the ability to
solve problems, learn, and plan for the future.
ASI—also known as superintelligence—would surpass the intelligence and ability of the
human brain. While strong AI is still entirely theoretical with no practical examples in use
today, that doesn't mean AI researchers aren't also exploring its development.

Describe the History and Foundations of AI


Important events and milestones in the evolution of artificial intelligence include the following:

 1950: Alan Turing publishes "Computing Machinery and Intelligence." In this paper,
Turing, famous for breaking the German ENIGMA code during WWII and often
referred to as the "father of computer science," asks the following question: "Can
machines think?" From there, he offers a test, now famously known as the "Turing Test,"
where a human interrogator would try to distinguish between a computer and a human text
response.
 1956: John McCarthy coins the term "artificial intelligence" at the first-ever AI
conference at Dartmouth College. (McCarthy would go on to invent the Lisp language.)
Later that year, Allen Newell, J.C. Shaw, and Herbert Simon create the Logic Theorist,
the first-ever running AI software program.

 1958: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a
neural network that "learned" through trial and error.
 1980s: Neural networks, which use a backpropagation algorithm to train themselves,
become widely used in AI applications.
 1995: Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern
Approach, which becomes one of the leading textbooks in the study of AI. In it, they
delve into four potential goals or definitions of AI, which differentiate computer
systems on the basis of rationality and thinking vs. acting.
 1997: IBM's Deep Blue beats then world chess champion Garry Kasparov, in a chess
match (and rematch).

 2004: John McCarthy writes a paper, What Is Artificial Intelligence? and proposes
an often-cited definition of AI.

 2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!

 2015: Baidu's Minwa supercomputer uses a special kind of deep neural network
called a convolutional neural network to identify and categorize images with a
higher rate of accuracy than the average human.

 2016: DeepMind's AlphaGo program, powered by a deep neural network, beats Lee
Sedol, the world champion Go player, in a five-game match. The victory is significant
given the huge number of possible moves as the game progresses (over 14.5 trillion
after just four moves!). Google had acquired DeepMind in 2014 for a reported USD 400
million.

 2023: A rise in large language models (LLMs), such as ChatGPT, creates an
enormous change in the performance of AI and its potential to drive enterprise
value. With these new generative AI practices, deep-learning models can be
pre-trained on vast amounts of raw, unlabeled data.

History of AI
Maturation of Artificial Intelligence (1943-1952)

 Year 1943: The first work which is now recognized as AI was done by Warren
McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
 Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection
strength between neurons. His rule is now called Hebbian learning.
 Year 1950: Alan Turing, an English mathematician who pioneered machine learning,
published "Computing Machinery and Intelligence" in 1950, in which he proposed a test
to check a machine's ability to exhibit intelligent behavior equivalent to human
intelligence, now called the Turing test.
 Year 1951: Marvin Minsky and Dean Edmonds created the initial artificial neural
network (ANN) named SNARC. They utilized 3,000 vacuum tubes to mimic a network
of 40 neurons.

The birth of Artificial Intelligence (1952-1956)

 Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing
Program, which marked the world's first self-learning program for playing games.
 Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program," which was named the Logic Theorist. This program proved
38 of 52 mathematics theorems and found new and more elegant proofs for some
theorems.
 Year 1956: The term "Artificial Intelligence" was first adopted by American computer
scientist John McCarthy at the Dartmouth Conference. For the first time, AI was
established as an academic field.
The golden years-Early enthusiasm (1956-1974)

 Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the
early artificial neural networks with the ability to learn from data. This invention laid the
foundation for modern neural networks. Simultaneously, John McCarthy developed the
Lisp programming language, which swiftly found favor within the AI community,
becoming highly popular among developers.
 Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning" in
a pivotal paper in which he proposed that computers could be programmed to surpass
their creators in performance. Additionally, Oliver Selfridge made a notable contribution
to machine learning with his publication "Pandemonium: A Paradigm for Learning." This
work outlined a model capable of self-improvement, enabling it to discover patterns in
events more effectively.
 Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created
STUDENT, one of the early programs for natural language processing (NLP), with the
specific purpose of solving algebra word problems.
 Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum,
Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in
identifying unfamiliar organic compounds.
 Year 1966: Researchers emphasized developing algorithms that could solve
mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was
named ELIZA. Furthermore, the Stanford Research Institute created Shakey, the earliest
mobile intelligent robot incorporating AI, computer vision, navigation, and NLP. It can
be considered a precursor to today's self-driving cars and drones.
 Year 1968: Terry Winograd developed SHRDLU, which was the pioneering multimodal
AI capable of following user instructions to manipulate and reason within a world of
blocks.
 Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as
backpropagation, which enabled the development of multilayer artificial neural networks.
This represented a significant advancement beyond the perceptron and laid the
groundwork for deep learning. Additionally, Marvin Minsky and Seymour Papert
authored the book "Perceptrons," which elucidated the constraints of basic neural
networks. This publication led to a decline in neural network research and a resurgence in
symbolic AI research.
 Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
 Year 1973: James Lighthill published the report titled "Artificial Intelligence: A General
Survey," resulting in a substantial reduction in the British government's backing for AI
research.
The first AI winter (1974-1980)

 The period from 1974 to 1980 was the first AI winter. An AI winter refers to a
time period in which computer scientists dealt with a severe shortage of government
funding for AI research.
 During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

 In 1980, the first national conference of the American Association for Artificial
Intelligence (AAAI) was held at Stanford University.
 Year 1980: After the AI winter, AI came back with expert systems. Expert
systems were programmed to emulate the decision-making ability of a human expert.
Additionally, Symbolics Lisp machines were brought into commercial use, marking the
onset of an AI resurgence. However, in subsequent years, the Lisp machine market
experienced a significant downturn.
 Year 1981: Danny Hillis created parallel computers tailored for AI and various
computational functions, featuring architecture akin to contemporary GPUs.
 Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter" during
a gathering of the Association for the Advancement of Artificial Intelligence. They
cautioned the business world that exaggerated expectations about AI would result in
disillusionment and the eventual downfall of the industry, which indeed occurred three
years later.
 Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting
statistical methods for encoding uncertainty in computer systems.

The second AI winter (1987-1993)

 The period from 1987 to 1993 was the second AI winter.
 Investors and governments again stopped funding AI research because of high costs
and inefficient results. Expert systems such as XCON, though initially very cost
effective, became expensive to maintain and update.

The emergence of intelligent agents (1993-2011)

 Year 1997: In 1997, IBM's Deep Blue achieved a historic milestone by defeating world
chess champion Garry Kasparov, marking the first time a computer triumphed over a
reigning world chess champion. Moreover, Sepp Hochreiter and Jürgen Schmidhuber
introduced the Long Short-Term Memory recurrent neural network, revolutionizing the
capability to process entire sequences of data such as speech or video.
 Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic
vacuum cleaner.
 Year 2006: AI entered the business world. Companies like Facebook, Twitter, and
Netflix started using AI.
 Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng released the paper
"Large-scale Deep Unsupervised Learning using Graphics Processors," introducing
the concept of employing GPUs for the training of expansive neural networks.
 Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan Masci
created the initial CNN that attained "superhuman" performance by emerging as the
victor in the German Traffic Sign Recognition competition. Furthermore, Apple launched
Siri, a voice-activated personal assistant capable of generating responses and executing
actions in response to voice commands.

Deep learning, big data and artificial general intelligence (2011-present)

 Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve
complex questions as well as riddles. Watson proved that it could understand natural
language and solve tricky questions quickly.
 Year 2012: Google launched an Android app feature, "Google Now", which was able to
provide information to the user as a prediction. Further, Geoffrey Hinton, Ilya Sutskever,
and Alex Krizhevsky presented a deep CNN structure that emerged victorious in the
ImageNet challenge, sparking the proliferation of research and application in the field of
deep learning.
 Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed of
the world's leading supercomputers to reach 33.86 petaflops. It retained its status as the
world's fastest system for the third consecutive time. Furthermore, DeepMind unveiled
deep reinforcement learning, a CNN that acquired skills through repetitive learning and
rewards, ultimately surpassing human experts in playing games. Also, Google researcher
Tomas Mikolov and his team introduced Word2vec, a tool designed to automatically
discern the semantic connections among words.
 Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition based on the
famous "Turing test." Meanwhile, Ian Goodfellow and his team pioneered generative
adversarial networks (GANs), a type of machine learning framework employed for
producing images, altering pictures, and crafting deepfakes, and Diederik Kingma and
Max Welling introduced variational autoencoders (VAEs) for generating images, videos,
and text. Also, Facebook engineered the DeepFace deep learning facial recognition
system, capable of identifying human faces in digital images with accuracy nearly
comparable to human capabilities.
 Year 2016: DeepMind's AlphaGo secured victory over the esteemed Go player Lee
Sedol in Seoul, South Korea, prompting reminiscence of the Kasparov chess match
against Deep Blue nearly two decades earlier. Meanwhile, Uber initiated a pilot program
for self-driving cars in Pittsburgh, catering to a limited group of users.
 Year 2018: The "Project Debater" from IBM debated on complex topics with two master
debaters and also performed extremely well.
 Google also demonstrated an AI program, "Duplex," a virtual assistant that booked a
hairdresser appointment over the phone; the person on the other end did not notice that
she was talking to a machine.
 Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing
images based on textual prompts.
 Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented interface
to its GPT-3.5 LLM.

AI has now developed to a remarkable level. Deep learning, big data, and data science
are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and
creating remarkable products. The future of Artificial Intelligence is inspiring and
promises ever higher levels of intelligence.

Explain AI agent with examples.

AI Agent
An agent is a program or software designed to perform specific tasks. It interacts with the
environment through sensors to perceive the state of the environment and uses actuators to
perform desired actions. Sensors are devices that detect changes in the environment and convert
them into electrical signals to be processed by a computer. Actuators are devices that perform
actions based on the signals received from a computer.

An agent is something that perceives and acts in an environment. The agent function for
an agent specifies the action taken by the agent in response to any percept sequence. An AI agent
is a computer program or system that is designed to perceive its environment, make decisions
and take actions to achieve a specific goal or set of goals.

 The agent operates autonomously, meaning it is not directly controlled by a human
operator.
 Agents can be classified into different types based on their characteristics, such as
whether they are reactive or proactive, whether they have a fixed or dynamic
environment, and whether they are single or multi-agent systems.
 Reactive agents are those that respond to immediate stimuli from their environment and
take actions based on those stimuli.
 Proactive agents, on the other hand, take initiative and plan ahead to achieve their goals.
 The environment in which an agent operates can also be fixed or dynamic. Fixed
environments have a static set of rules that do not change, while dynamic environments
are constantly changing and require agents to adapt to new situations.
 Multi-agent systems involve multiple agents working together to achieve a common goal.
These agents may have to coordinate their actions and communicate with each other to
achieve their objectives.
 Agents are used in a variety of applications, including robotics, gaming, and intelligent
systems. They can be implemented using different programming languages and
techniques, including machine learning and natural language processing.

Interaction of Agents with the Environment

Perceiving its environment through sensors and acting upon that environment through
actuators.
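
This interaction can be shown with a short Python sketch. It is a minimal, illustrative example only: the two-cell vacuum world, the Environment class, its percept() and apply_action() methods, and the action names are assumptions made for this sketch, not a standard API.

class Environment:
    """A toy two-cell vacuum world: each cell is 'Dirty' or 'Clean'."""
    def __init__(self):
        self.state = {"A": "Dirty", "B": "Dirty"}
        self.location = "A"

    def percept(self):
        # Sensors report the agent's location and the status of that cell.
        return (self.location, self.state[self.location])

    def apply_action(self, action):
        # Actuators change the world: Suck cleans, Left/Right move the agent.
        if action == "Suck":
            self.state[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def run(agent_program, env, steps=4):
    # The basic sense-decide-act cycle described above.
    for _ in range(steps):
        percept = env.percept()
        action = agent_program(percept)
        print(percept, "->", action)
        env.apply_action(action)

# A trivial agent program: clean a dirty cell, otherwise move right.
run(lambda p: "Suck" if p[1] == "Dirty" else "Right", Environment())

Running the loop prints each percept alongside the action chosen, which makes the sense-decide-act cycle explicit.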

Examples of AI agents

 Intelligent personal assistants: These are agents that are designed to help users with
various tasks, such as scheduling appointments, sending messages, and setting reminders.
Examples of intelligent personal assistants include Siri, Alexa, and Google Assistant.
 Autonomous robots: These are agents that are designed to operate autonomously in the
physical world. They can perform tasks such as cleaning, sorting, and delivering goods.
Examples of autonomous robots include the Roomba vacuum cleaner and the Amazon
delivery robot.
 Gaming agents: These are agents that are designed to play games, either against human
opponents or other agents. Examples of gaming agents include chess-playing agents and
poker-playing agents.
 Fraud detection agents: These are agents that are designed to detect fraudulent behavior
in financial transactions. They can analyze patterns of behavior to identify suspicious
activity and alert authorities. Examples of fraud detection agents include those used by
banks and credit card companies.
 Traffic management agents: These are agents that are designed to manage traffic flow
in cities. They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to
minimize congestion. Examples of traffic management agents include those used in smart
cities around the world.
 A software agent has keystrokes, file contents, and received network packets acting as
sensors, and displays on the screen, files, and sent network packets acting as actuators.
A human agent has eyes, ears, and other organs that act as sensors, and hands, legs,
mouth, and other body parts that act as actuators.

 A robotic agent has cameras and infrared range finders that act as sensors, and
various motors that act as actuators.

Structure of an AI Agent

To understand the structure of intelligent agents, we should be familiar with architecture
and agent programs.

 Architecture is the machinery that the agent executes on. It is a device with sensors
and actuators, for example, a robotic car, a camera, and a PC.
 An agent program is an implementation of an agent function. An agent function is a
map from the percept sequence (history of all that an agent has perceived to date)
to an action.

Agent = Architecture + Agent Program
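
The agent-function idea can be made concrete with a minimal sketch of a table-driven agent program: a lookup table maps each possible percept sequence to an action. The table entries below are illustrative assumptions for the two-cell vacuum world used earlier; real agent programs avoid such tables because they grow impossibly large.

def make_table_driven_agent(table):
    percept_sequence = []                    # history of everything perceived so far

    def program(percept):
        percept_sequence.append(percept)
        # The agent function maps the whole percept sequence to an action.
        return table.get(tuple(percept_sequence), "NoOp")

    return program

# Illustrative table entries for the two-cell vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent_program = make_table_driven_agent(table)
print(agent_program(("A", "Dirty")))         # -> Suck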


Types of AI Agents
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents

Simple Reflex Agents

A simple reflex agent acts according to a rule whose condition matches the current
state, as defined by the percept. Such an agent will work only if the correct decision
can be made on the basis of the current percept alone, that is, only if the environment
is fully observable.

Simple reflex agents ignore the rest of the percept history and act only on the basis
of the current percept. Percept history is the history of all that an agent has
perceived to date. The agent function is based on the condition-action rule. A
condition-action rule is a rule that maps a state i.e., a condition to an action. If the
condition is true, then the action is taken, else not. This agent function only succeeds
when the environment is fully observable. For simple reflex agents operating in
partially observable environments, infinite loops are often unavoidable. It may be
possible to escape from infinite loops if the agent can randomize its actions.
Problems with simple reflex agents are:

 Very limited intelligence.
 No knowledge of non-perceptual parts of the state.
 Usually too big to generate and store.
 If there occurs any change in the environment, then the collection of rules needs to
be updated.
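
A minimal sketch of a simple reflex agent for the two-cell vacuum world is given below; the condition-action rules and the percept format (location, status) are illustrative assumptions.

def simple_reflex_agent(percept):
    location, status = percept               # only the current percept is used
    if status == "Dirty":                    # condition-action rule 1
        return "Suck"
    if location == "A":                      # condition-action rule 2
        return "Right"
    return "Left"                            # condition-action rule 3

print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
print(simple_reflex_agent(("B", "Clean")))   # -> Left

Note that the agent ignores the percept history entirely: each decision depends only on the current percept.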

With a neat diagram, describe the Model-Based Reflex agents.

Model-Based Reflex Agents

A model based reflex agent keeps track of the current state of the world using an
internal model. It then chooses an action in the same way as the reflex agent. Here
the agent should maintain some sort of internal state that depends on the percept
history and thereby reflects at least some of the unobserved aspects of the current
state.

It works by finding a rule whose condition matches the current situation. A model-
based agent can handle partially observable environments by the use of a model
about the world. The agent has to keep track of the internal state which is adjusted
by each percept and that depends on the percept history. The current state is stored
inside the agent which maintains some kind of structure describing the part of the
world which cannot be seen.

Updating the state requires information about (see the sketch below):

How the world evolves independently of the agent.
How the agent's actions affect the world.
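
A minimal sketch of a model-based reflex agent for the same vacuum world is shown below. The internal model, its update rules, and the action names are illustrative assumptions; the point is that the agent keeps state about the cell it cannot currently see.

def make_model_based_agent():
    model = {"A": "Unknown", "B": "Unknown"}   # internal state: believed cell status

    def program(percept):
        location, status = percept
        model[location] = status               # update the model from the percept
        if status == "Dirty":
            model[location] = "Clean"          # predict the effect of Suck
            return "Suck"
        other = "B" if location == "A" else "A"
        if model[other] != "Clean":            # the unseen cell may still be dirty
            return "Right" if other == "B" else "Left"
        return "NoOp"                          # the model says everything is clean

    return program

agent_program = make_model_based_agent()
print(agent_program(("A", "Dirty")))           # -> Suck
print(agent_program(("A", "Clean")))           # -> Right (status of B still unknown)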
Demonstrate the Goal-Based agents with a neat diagram.

Goal-Based Agents

 A Goal based agent keeps track of the world state as well as a set of goals it is trying
to achieve, and chooses an action that will (eventually) lead to the achievement of
its goals.
 These kinds of agents take decisions based on how far they are currently from their
goal (description of desirable situations).
 Their every action is intended to reduce their distance from the goal. This allows the
agent a way to choose among multiple possibilities, selecting the one which reaches
a goal state.
 The knowledge that supports its decisions is represented explicitly and can be
modified, which makes these agents more flexible. They usually require search and
planning. The goal-based agent’s behavior can easily be changed.
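
A minimal sketch of goal-based action selection is given below: the agent uses a simple model to predict the result of each action and picks the one whose predicted state is closest to an explicit goal. The grid world, action model, and distance measure are illustrative assumptions.

GOAL = (3, 3)                                                  # desired state
ACTIONS = {"Up": (0, 1), "Down": (0, -1), "Right": (1, 0), "Left": (-1, 0)}

def predict(state, action):
    # A simple model of what each action does to the world.
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def distance_to_goal(state):
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def goal_based_agent(state):
    # Pick the action whose predicted result is closest to the goal.
    return min(ACTIONS, key=lambda a: distance_to_goal(predict(state, a)))

print(goal_based_agent((0, 0)))   # -> Up (Right ties; min keeps the first found)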

Utility-Based Agents
A utility based agent uses a model of the world, along with a utility function that
measures its preferences among states of the world. Then it chooses the action that
leads to the best expected utility, where expected utility is computed by averaging
over all possible outcome states, weighted by the probability of the outcome.
 Agents that are developed with their end uses (utilities) as building blocks are called
utility-based agents. When there are multiple possible alternatives, utility-based agents
are used to decide which one is best.
 They choose actions based on a preference (utility) for each state. Agent happiness
should be taken into consideration.
 Utility describes how “happy” the agent is. Because of the uncertainty in the world, a
utility agent chooses the action that maximizes the expected utility.
 A utility function maps a state onto a real number which describes the associated
degree of happiness.
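
The expected-utility calculation described above can be sketched in a few lines of Python; the outcome model, probabilities, and utility values below are illustrative assumptions, not data from any real system.

# outcome model: action -> list of (probability, resulting state)
OUTCOMES = {
    "fast_route": [(0.7, "on_time"), (0.3, "stuck_in_traffic")],
    "slow_route": [(1.0, "slightly_late")],
}

# utility function: state -> real number describing the "degree of happiness"
UTILITY = {"on_time": 10.0, "stuck_in_traffic": -5.0, "slightly_late": 4.0}

def expected_utility(action):
    # Average the utility of each outcome, weighted by its probability.
    return sum(p * UTILITY[state] for p, state in OUTCOMES[action])

def utility_based_agent():
    return max(OUTCOMES, key=expected_utility)

print(expected_utility("fast_route"))   # 0.7*10 + 0.3*(-5) = 5.5
print(utility_based_agent())            # -> fast_route (5.5 > 4.0)

The agent prefers the fast route because its expected utility (5.5) exceeds that of the certain but slower alternative (4.0).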

Summary: Simple reflex agents respond directly to percepts, whereas model-based
agents maintain internal state to track aspects of the world that are not
evident in the current percept. Goal-based agents act to achieve their goals, and
utility-based agents try to maximize their own expected "happiness." All of these
agents can improve their performance through learning.

Explain in brief the role of AI in various applications.


Agents are used in a wide range of applications in artificial intelligence, including:

 Robotics: Agents can be used to control robots and automate tasks in
manufacturing, transportation, and other industries.
 Smart homes and buildings: Agents can be used to control heating, lighting, and
other systems in smart homes and buildings, optimizing energy use and improving
comfort.
 Transportation systems: Agents can be used to manage traffic flow, optimize
routes for autonomous vehicles, and improve logistics and supply chain
management.
 Healthcare: Agents can be used to monitor patients, provide personalized
treatment plans, and optimize healthcare resource allocation.
 Finance: Agents can be used for automated trading, fraud detection, and risk
management in the financial industry.
 Games: Agents can be used to create intelligent opponents in games and
simulations, providing a more challenging and realistic experience for players.
 Natural language processing: Agents can be used for language translation,
question answering, and chatbots that can communicate with users in natural
language.
 Cyber security: Agents can be used for intrusion detection, malware analysis, and
network security.
 Environmental monitoring: Agents can be used to monitor and manage natural
resources, track climate change, and improve environmental sustainability.
 Social media: Agents can be used to analyze social media data, identify trends and
patterns, and provide personalized recommendations to users.

Define Environment in AI and summarize its types


An environment in artificial intelligence is the surroundings of the agent. The agent takes input
from the environment through sensors and delivers the output to the environment through
actuators. There are several types of environments:

Environment types in AI

 Deterministic and Stochastic
 Static and Dynamic
 Fully Observable and Partially Observable
 Single-agent and Multi-agent
 Discrete and Continuous

Deterministic vs Stochastic

 When the agent's current state and chosen action completely determine the next state
of the environment, the environment is said to be deterministic.

Example: Chess – there are only a limited number of possible moves for a piece in the
current state, and these moves can be determined.
 A stochastic environment is random in nature; the next state is not unique and cannot
be completely determined by the agent.

Example: Self-driving cars – the outcomes of a self-driving car's actions are not unique;
they vary from time to time.

Dynamic vs Static

 An environment that keeps constantly changing itself when the agent is up with
some action is said to be dynamic.

Example: A roller coaster ride is dynamic as it is set in motion and the environment keeps
changing every instant.

 An idle environment with no change in its state is called a static environment.

Example: An empty house is static as there’s no change in the surroundings when an agent
enters.

Fully Observable vs Partially Observable

Fully Observable

 When an agent's sensors can sense or access the complete state of the environment at
each point in time, the environment is said to be fully observable; otherwise it is
partially observable.

Example: Chess – the board is fully observable, and so are the opponent’s moves.

Partially Observable

 When the agent cannot access the complete state of the environment, the environment is
partially observable, and the agent must keep track of the history of its surroundings
(in a fully observable environment there is no need to do so).
 An environment is called unobservable when the agent has no sensors at all.

Example: Driving – the environment is partially observable because what’s around the
corner is not known.
Single-agent vs Multi-agent

Single-agent

 An environment consisting of only one agent is said to be a single-agent environment.

Example: A person left alone in a maze is an example of a single-agent system.

Multi-agent

 An environment involving more than one agent is a multi-agent environment.

Example: The game of football is multi-agent as it involves 11 players in each team.

Discrete vs Continuous

Discrete

 If an environment consists of a finite number of actions that can be performed in
the environment to obtain the output, it is said to be a discrete environment.

Example: The game of chess is discrete as it has only a finite number of moves.
The number of moves might vary with every game, but it is still finite.

Continuous

 An environment in which the possible actions cannot be enumerated, i.e., is not
discrete, is said to be continuous.

Example: Self-driving cars are an example of continuous environments, as their actions
(driving, parking, etc.) cannot be enumerated.
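
The environment properties above can also be summarized as a simple data structure. The sketch below is illustrative only; the classifications follow the examples used in this section (chess and a self-driving car) and standard textbook treatment.

from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    name: str
    deterministic: bool      # vs. stochastic
    static: bool             # vs. dynamic
    fully_observable: bool   # vs. partially observable
    single_agent: bool       # vs. multi-agent
    discrete: bool           # vs. continuous

chess = TaskEnvironment("Chess", deterministic=True, static=True,
                        fully_observable=True, single_agent=False, discrete=True)

self_driving_car = TaskEnvironment("Self-driving car", deterministic=False,
                                   static=False, fully_observable=False,
                                   single_agent=False, discrete=False)

print(chess)
print(self_driving_car)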
What is a Rational Agent and the concept of Rationality?

Rational Agent:
 A Rational agent is one that can choose actions that maximize the performance
measure based on its prior knowledge and percept sequence.

 In other words, it must perform actions that lead to the desired changes in the
environment. A rational agent is always preferred over an irrational one as it
ensures the best result in terms of the performance measure.

 A rational agent is one that does the right thing; doing the right thing is better than
doing the wrong thing. The right action is the one that will cause the agent to be most
successful. To measure success in agents, measures of rationality are used.

Concept of Rationality:
The concept of rationality refers to the ability of an AI agent to work as per the desired
actions. In other words, a rational agent must perform actions that satisfy a performance
measure. A performance measure is a function that maps a given percept sequence to a
measure of the performance of the agent. Whereas Rationality is the ability to make
decisions based on logical reasoning and optimize behavior to achieve its goals, considering
its perception of the environment and the performance measure.

Justify the four measures of Rationality.


Four Measures of Rationality

 The Performance Measure: This defines the criterion of success.
 The Agent's Prior Knowledge of the environment.
 Actuator Dependency: The actions that the agent can perform.
 The Agent's Percept Sequence to date.

Performance Measure

The performance measure is a function that maps the percept sequence to the measure of
the performance of the agent. To put it simply, it is a way to evaluate the effectiveness of
the agent. For instance, in the case of a self-driving car, the performance measure would be
to reach the destination safely and on time.
Agent's Prior Knowledge

An agent's prior knowledge is the knowledge that it has acquired from the environment. It
determines the actions that the agent can perform. For example, a self-driving car's agent
has prior knowledge of the traffic rules and road conditions.

Actuator Dependency

A rational agent must take actions that satisfy the performance measure. To do so, it
depends on the actuators to perform the required actions.

Agent's Percept Sequence

The percept sequence is the history of what the agent has perceived from the environment.
It is based on the sensors that detect changes in the environment.
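
A minimal sketch of a performance measure is given below: it scores a run of the toy vacuum world from the earlier sketches by awarding one point for every clean cell at every time step. The scoring rule and the hand-simulated state history are illustrative assumptions.

def performance_measure(state_history):
    # Award one point for every clean cell at every time step.
    return sum(list(state.values()).count("Clean") for state in state_history)

# A tiny hand-simulated run of the two-cell vacuum world.
state_history = [
    {"A": "Dirty", "B": "Dirty"},   # initial state
    {"A": "Clean", "B": "Dirty"},   # after Suck in A
    {"A": "Clean", "B": "Dirty"},   # after moving Right
    {"A": "Clean", "B": "Clean"},   # after Suck in B
]
print(performance_measure(state_history))   # 0 + 1 + 1 + 2 = 4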

APPLICATIONS OF AI

 Robotics
 Education
 Healthcare
 Marketing
 Business
 Banking
 Agriculture
 Cyber Security
 Finance
 Digital Marketing
 Industry
 Construction industry
 Manufacturing industry
 Automotive industry
 Pharmaceutical industry
 Fashion industry
 Hospitals
 Food industry
 Retail industry
 Music industry
Module-1
Important Questions:
1. Discuss the advantages and disadvantages of AI in the present scenario.
2. What is meant by intelligence? Illustrate the intelligence parameters.
3. Describe the history of Artificial Intelligence (AI).
4. Clarify the different types of AI and summarize their functions.
5. Explain AI agent with examples.
6. With a neat diagram, describe the Model-Based Reflex agents.
7. Demonstrate the Goal-Based agents with a neat diagram.
8. Define Environment in AI and summarize its types.
9. Define a rational agent and the concept of Rationality.
10. Justify the four measures of Rationality.
11. Illustrate in brief the role of AI in various applications.
