Unit-1: Introduction to AI (VI Sem BCA)


Unit- Introduction VI Sem BCA (Artificial Intelligence)

Introduction

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to mimic human cognitive functions such as learning, problem-solving, decision-
making, perception, understanding natural language, and interacting with the environment. AI
encompasses a broad range of techniques, algorithms, and methodologies aimed at creating
systems capable of performing tasks that typically require human intelligence.

Foundations of AI:

The foundation of Artificial Intelligence (AI) draws upon several disciplines, including mathematics,
neuroscience, control theory, and linguistics. Let's briefly explore each:

1. Mathematics: Mathematics provides the theoretical framework for many AI algorithms and
techniques. Concepts such as calculus, linear algebra, probability theory, and optimization are
crucial in areas like machine learning, neural networks, and algorithm design. Mathematical models
help in understanding complex systems and developing algorithms for problem-solving and
decision-making tasks.

2. Neuroscience: Neuroscience studies the structure and function of the brain and nervous system.
AI researchers draw inspiration from neuroscience to understand how biological systems process
information, learn, and adapt. Neural networks, a fundamental concept in AI, are inspired by the
interconnected structure of neurons in the brain. Understanding neural mechanisms aids in
developing biologically inspired AI models and algorithms.

3. Control Theory: Control theory deals with the behavior of dynamical systems and the design of
control systems to regulate their behavior. In AI, control theory is relevant for designing systems
that can make decisions and take actions to achieve desired outcomes. Reinforcement learning, a
branch of machine learning, is closely related to control theory as it involves learning to control
systems by interacting with their environment to maximize rewards.

4. Linguistics: Linguistics is the scientific study of language and its structure. In AI, linguistics plays
a crucial role in natural language processing (NLP), which enables computers to understand,
interpret, and generate human language. Linguistic theories and models inform the development of

Prashanth Kumar R Dept. of CS PESIAMS Shimoga Page 1



algorithms for tasks such as speech recognition, language translation, sentiment analysis, and
dialogue systems.

Each of these disciplines contributes to the understanding, development, and advancement of AI,
forming the interdisciplinary foundation upon which AI research and applications are built.

AI Past, Present and Future:

The evolution of artificial intelligence (AI) from its inception to its current state and future
prospects is a fascinating journey.

Past:

1. Foundations (1950s-1970s): The term "artificial intelligence" was coined in the 1950s. Early
efforts focused on symbolic AI, where machines were programmed with explicit rules to simulate
human intelligence.

2. AI Winter (1970s-1980s): Due to overinflated expectations and underwhelming results, funding and interest in AI dwindled, leading to a period known as the "AI winter."

3. Resurgence (1990s-2000s): Progress in computational power, algorithms, and data availability revived interest in AI. Machine learning techniques like neural networks gained traction.


4. Narrow AI Dominance (2010s): AI applications focused on narrow tasks, such as image recognition, language translation, and recommendation systems. Deep learning, fueled by big data and powerful GPUs, became dominant.

Present:

1. Ubiquity of AI: AI is integrated into various aspects of daily life, from virtual assistants on smartphones to personalized recommendations on streaming platforms.

2. Ethical Concerns: Issues such as bias in algorithms, privacy infringement, and job displacement
are hotly debated.

3. AI in Industry: AI is transforming industries like healthcare, finance, transportation, and manufacturing, improving efficiency and enabling new capabilities.

4. Research Frontiers: Areas like reinforcement learning, generative models, and AI ethics are at
the forefront of research.

Future:

1. General AI: The pursuit of artificial general intelligence (AGI), where machines possess human-
like cognitive abilities, remains a long-term goal.

2. Ethical AI: Emphasis on developing AI systems that are fair, transparent, and aligned with
human values to mitigate risks and maximize benefits.

3. AI Augmentation: AI will augment human capabilities across various domains, enhancing creativity, problem-solving, and decision-making.

4. AI and Society: Addressing socio-economic implications, such as job displacement and inequality, through policy frameworks and re-skilling initiatives.

5. Interdisciplinary Collaboration: AI development will increasingly involve collaboration across disciplines, including ethics, psychology, and sociology.

Overall, AI's journey reflects a continuous cycle of innovation, challenges, and societal implications,
shaping our understanding of intelligence and its applications in the modern world.

Agents in AI:

In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals.

An Agent runs in the cycle of perceiving, thinking, and acting. An agent can be:

o Human-Agent: A human agent has eyes, ears, and other organs that serve as sensors, and hands, legs, and the vocal tract that serve as actuators.

o Robotic Agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.

o Software Agent: A software agent can receive keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even we ourselves are agents.

Before moving forward, we should first know about sensors, effectors, and actuators.

Sensor: A sensor is a device that detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the components of a machine that convert energy into motion. Actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

Effectors: Effectors are devices that affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screens.
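The perceive-think-act cycle described above can be sketched in a few lines of code. This is a minimal illustration only; the `Agent` class and the thermostat logic are hypothetical, not from any library:

```python
class Agent:
    """Minimal sketch of the perceive-think-act cycle for a thermostat agent."""

    def perceive(self, environment):
        # Sensor: read the current temperature from the environment.
        return environment["temperature"]

    def think(self, percept):
        # Decide on an action based on the current percept.
        return "heat_on" if percept < 20 else "heat_off"

    def act(self, action, environment):
        # Actuator: change the environment (switch the heater).
        environment["heater"] = (action == "heat_on")

env = {"temperature": 18, "heater": False}
agent = Agent()
agent.act(agent.think(agent.perceive(env)), env)
print(env["heater"])  # True, since 18 < 20
```

Each pass through perceive, think, and act is one cycle; a real agent would repeat this loop continuously.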


Intelligent Agents:

An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

Following are the main four rules for an AI agent:

o Rule 1: An AI agent must have the ability to perceive the environment.

o Rule 2: The observation must be used to make decisions.

o Rule 3: Decision should result in an action.

o Rule 4: The action taken by an AI agent must be a rational action.

Rational Agent:

A rational agent is an agent that has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

A rational agent is said to do the right thing. Much of AI is concerned with creating rational agents, drawing on game theory and decision theory for various real-world scenarios.

For an AI agent, rational action is most important because in reinforcement learning, the agent receives a positive reward for each good action and a negative reward for each wrong action.


Rationality:

The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of the following points:

o The performance measure, which defines the success criterion.

o The agent's prior knowledge of its environment.

o The best possible actions that the agent can perform.

o The sequence of percepts.

Specifying task of Environment (PEAS Representation)

PEAS is a type of model on which an AI agent works upon. When we define an AI agent or rational
agent, then we can group its properties under PEAS representation model. It is made up of four
words:

o P: Performance measure

o E: Environment

o A: Actuators

o S: Sensors

Here performance measure is the objective for the success of an agent's behavior.

PEAS for self-driving cars:

For a self-driving car, the PEAS representation would be:

Performance: Safety, time, legal drive, comfort

Environment: Roads, other vehicles, road signs, pedestrian

Actuators: Steering, accelerator, brake, signal, horn

Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
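As a sketch, a PEAS specification can be captured in a simple data structure; the `PEAS` class and its field names below are illustrative, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS description groups the four properties of a task environment."""
    performance: list  # P: objectives for the success of the agent's behavior
    environment: list  # E: what the agent operates in
    actuators: list    # A: how the agent acts on the environment
    sensors: list      # S: how the agent perceives the environment

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car.performance[0])  # safety
```

The same structure can be filled in for any agent, such as a vacuum cleaner or a medical diagnosis system.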



Example of Agents with their PEAS representation


Properties of task environments


An environment in artificial intelligence is the surroundings of the agent. The agent takes input from the environment through sensors and delivers output to the environment through actuators. There are several types of environments:
 Fully Observable vs Partially Observable

 Deterministic vs Stochastic

 Competitive vs Collaborative

 Single-agent vs Multi-agent

 Static vs Dynamic

 Discrete vs Continuous

 Episodic vs Sequential

 Known vs Unknown

1. Fully Observable vs Partially Observable

 When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable; otherwise it is partially observable.

 Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.

 An environment is called unobservable when the agent has no sensors at all.

 Examples:
 Chess – the board is fully observable, and so are the opponent’s moves.

 Driving – the environment is partially observable because what’s around the corner
is not known.

2. Deterministic vs Stochastic

 When the agent's current state and chosen action completely determine the next state of the environment, the environment is said to be deterministic.

 A stochastic environment is random in nature; the next state is not unique and cannot be completely determined by the agent.

 Examples:


 Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.

 Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.

3. Competitive vs Collaborative

 An agent is said to be in a competitive environment when it competes against another agent to optimize the output.

 The game of chess is competitive as the agents compete with each other to win the game
which is the output.

 An agent is said to be in a collaborative environment when multiple agents cooperate to produce the desired output.

 When multiple self-driving cars are found on the roads, they cooperate with each other to
avoid collisions and reach their destination which is the output desired.

4. Single-agent vs Multi-agent

 An environment consisting of only one agent is said to be a single-agent environment.

 A person left alone in a maze is an example of the single-agent system.

 An environment involving more than one agent is a multi-agent environment.

 The game of football is multi-agent as it involves 11 players in each team.

5. Dynamic vs Static

 An environment that keeps changing while the agent is deliberating or acting is said to be dynamic.

 A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.

 An idle environment with no change in its state is called a static environment.

 An empty house is static as there’s no change in the surroundings when an agent enters.

6. Discrete vs Continuous

 If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.

 The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.


 An environment in which the actions performed cannot be counted, i.e. is not discrete, is said to be continuous.

 Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be counted.

7. Episodic vs Sequential

 In an Episodic task environment, each of the agent’s actions is divided into atomic
incidents or episodes. There is no dependency between current and previous incidents. In
each incident, an agent receives input from the environment and then performs the
corresponding action.

 Example: Consider a pick-and-place robot used to detect defective parts on conveyor belts. Each time, the robot (agent) makes its decision based only on the current part, i.e. there is no dependency between current and previous decisions.

 In a Sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.

 Example:

 Checkers- Where the previous move can affect all the following moves.

8. Known vs Unknown

 In a known environment, the output for all probable actions is given. In an unknown environment, the agent has to gain knowledge about how the environment works in order to make decisions.

Structure of an AI Agent

The task of AI is to design an agent program which implements the agent function. The structure of
an intelligent agent is a combination of architecture and agent program. It can be viewed as:

Agent = Architecture + Agent program

Following are the main three terms involved in the structure of an AI agent:

Architecture: Architecture is the machinery that the AI agent executes on.

Agent Function: The agent function is used to map a percept to an action.

Agent program: The agent program is an implementation of the agent function. It executes on the physical architecture to produce the function f.
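As a minimal sketch, an agent program can implement the agent function f as a lookup from percepts to actions (a table-driven style; the vacuum-world table below is an illustrative assumption):

```python
def table_driven_agent_program(percept, table):
    # The agent program implements the agent function f: percept -> action,
    # here realized as a simple table lookup with a default action.
    return table.get(percept, "no_op")

# Illustrative condition table for a toy vacuum world.
table = {"dirty": "suck", "clean": "move"}
print(table_driven_agent_program("dirty", table))    # suck
print(table_driven_agent_program("unknown", table))  # no_op
```

In practice, explicit tables grow impractically large, which is why the agent classes below compute their actions instead of looking them up.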

Note: For detailed information on the structure of an AI agent, refer to the class notes.

Types of Agents
Agents can be grouped into four classes based on their degree of perceived intelligence and capability:
 Simple Reflex Agents
 Model-Based Reflex Agents
 Goal-Based Agents
 Utility-Based Agents
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of
the current percept. Percept history is the history of all that an agent has perceived to date. The
agent function is based on the condition-action rule. A condition-action rule is a rule that maps a
state i.e., a condition to an action. If the condition is true, then the action is taken, else not. This
agent function only succeeds when the environment is fully observable. For simple reflex agents
operating in partially observable environments, infinite loops are often unavoidable. It may be
possible to escape from infinite loops if the agent can randomize its actions.
Problems with simple reflex agents:
 Very limited intelligence.
 No knowledge of non-perceptual parts of the state.
 The set of rules is usually too big to generate and store.
 If any change occurs in the environment, the collection of rules needs to be updated.

Fig: Simple Reflex Agents
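Condition-action rules can be sketched as predicate/action pairs checked against the current percept only. The vacuum-world rules below are illustrative, not a standard implementation:

```python
def simple_reflex_agent(percept):
    # Condition-action rules: the agent acts only on the current percept,
    # ignoring the percept history entirely.
    rules = [
        (lambda p: p["status"] == "dirty", "suck"),
        (lambda p: p["location"] == "A", "move_right"),
        (lambda p: p["location"] == "B", "move_left"),
    ]
    for condition, action in rules:
        if condition(percept):  # if the condition is true, take the action
            return action
    return "no_op"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))  # suck
print(simple_reflex_agent({"location": "A", "status": "clean"}))  # move_right
```

Because nothing is remembered between calls, the same percept always produces the same action, which is exactly why such agents can loop forever in partially observable environments.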


Model-Based Reflex Agents


A model-based reflex agent works by finding a rule whose condition matches the current situation. It can handle partially observable environments by using a model of the world. The agent keeps track of an internal state, which is adjusted by each percept and depends on the percept history. The current state is stored inside the agent as a structure describing the part of the world that cannot be seen.
Updating the state requires information about:
 How the world evolves independently of the agent.
 How the agent's actions affect the world.
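This internal-state bookkeeping can be sketched as follows; the two-location vacuum world and the transition logic are illustrative assumptions, not a reference implementation:

```python
class ModelBasedReflexAgent:
    """Handles partial observability by maintaining an internal state."""

    WORLD = ["A", "B"]  # the agent's model: locations it knows exist

    def __init__(self):
        self.visited = set()  # internal state built from the percept history

    def choose_action(self, percept):
        # Update the internal state with each new percept.
        self.visited.add(percept["location"])
        if percept["status"] == "dirty":
            return "suck"
        # Consult the model for a part of the world not yet observed.
        for loc in self.WORLD:
            if loc not in self.visited:
                return f"go_to_{loc}"
        return "no_op"

agent = ModelBasedReflexAgent()
print(agent.choose_action({"location": "A", "status": "clean"}))  # go_to_B
print(agent.choose_action({"location": "B", "status": "dirty"}))  # suck
```

Unlike the simple reflex agent, the decision here depends on the percept history (which locations have been visited), not only on the current percept.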

Goal-Based Agents
These kinds of agents take decisions based on how far they currently are from their goal (a description of desirable situations). Every action they take is intended to reduce their distance from the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning. The goal-based agent's behavior can easily be changed.
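Choosing, among the possible moves, the one that reduces the distance to the goal can be sketched as follows (the grid world and the Manhattan-distance measure are illustrative assumptions):

```python
def goal_based_agent(position, goal):
    # Candidate actions and the state each one would lead to.
    moves = {
        "up": (position[0], position[1] + 1),
        "down": (position[0], position[1] - 1),
        "left": (position[0] - 1, position[1]),
        "right": (position[0] + 1, position[1]),
    }

    def distance(p):
        # Manhattan distance from a state to the goal state.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    # Select the action whose resulting state is closest to the goal.
    return min(moves, key=lambda a: distance(moves[a]))

print(goal_based_agent((0, 0), (3, 0)))  # right
```

Real goal-based agents extend this one-step lookahead into full search and planning over sequences of actions.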


Fig: Goal-Based Agents


Utility-Based Agents
Agents that act based on an explicit measure of how desirable each state is (its utility) are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state.
Sometimes achieving the desired goal is not enough. We may look for a quicker, safer, cheaper trip
to reach a destination. Agent happiness should be taken into consideration. Utility describes
how “happy” the agent is. Because of the uncertainty in the world, a utility agent chooses the action
that maximizes the expected utility. A utility function maps a state onto a real number which
describes the associated degree of happiness.

Fig: Utility-Based Agents
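Choosing the action that maximizes expected utility can be sketched as follows; the trip options and their probabilities are illustrative numbers, not data from any source:

```python
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action.
    return sum(p * u for p, u in outcomes)

def utility_based_agent(actions):
    # Under uncertainty, pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Illustrative trip choices: each outcome is (probability, utility).
actions = {
    "highway": [(0.9, 8), (0.1, -10)],  # usually fast, small risk of a jam
    "back_roads": [(1.0, 5)],           # slower but certain
}
print(utility_based_agent(actions))  # highway (expected utility 6.2 vs 5.0)
```

Here the utility function maps each outcome to a real number (its "happiness" score), and the agent weighs those numbers by how likely each outcome is.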
