Unit-1 Introduction To AI (VI Sem BCA) .
Introduction
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to mimic human cognitive functions such as learning, problem-solving, decision-
making, perception, understanding natural language, and interacting with the environment. AI
encompasses a broad range of techniques, algorithms, and methodologies aimed at creating
systems capable of performing tasks that typically require human intelligence.
Foundations of AI:
The foundation of Artificial Intelligence (AI) draws upon several disciplines, including mathematics,
neuroscience, control theory, and linguistics. Let's briefly explore each:
1. Mathematics: Mathematics provides the theoretical framework for many AI algorithms and
techniques. Concepts such as calculus, linear algebra, probability theory, and optimization are
crucial in areas like machine learning, neural networks, and algorithm design. Mathematical models
help in understanding complex systems and developing algorithms for problem-solving and
decision-making tasks.
2. Neuroscience: Neuroscience studies the structure and function of the brain and nervous system.
AI researchers draw inspiration from neuroscience to understand how biological systems process
information, learn, and adapt. Neural networks, a fundamental concept in AI, are inspired by the
interconnected structure of neurons in the brain. Understanding neural mechanisms aids in
developing biologically inspired AI models and algorithms.
3. Control Theory : Control theory deals with the behavior of dynamical systems and the design of
control systems to regulate their behavior. In AI, control theory is relevant for designing systems
that can make decisions and take actions to achieve desired outcomes. Reinforcement learning, a
branch of machine learning, is closely related to control theory as it involves learning to control
systems by interacting with their environment to maximize rewards.
4. Linguistics: Linguistics is the scientific study of language and its structure. In AI, linguistics plays
a crucial role in natural language processing (NLP), which enables computers to understand,
interpret, and generate human language. Linguistic theories and models inform the development of
algorithms for tasks such as speech recognition, language translation, sentiment analysis, and
dialogue systems.
Each of these disciplines contributes to the understanding, development, and advancement of AI,
forming the interdisciplinary foundation upon which AI research and applications are built.
The evolution of artificial intelligence (AI) from its inception to its current state and future
prospects is a fascinating journey.
Past:
1. Foundations (1950s-1970s): The term "artificial intelligence" was coined in the 1950s. Early
efforts focused on symbolic AI, where machines were programmed with explicit rules to simulate
human intelligence.
Present:
1. Ubiquity of AI: AI is integrated into various aspects of daily life, from virtual assistants on smart
phones to personalized recommendations on streaming platforms.
2. Ethical Concerns: Issues such as bias in algorithms, privacy infringement, and job displacement
are hotly debated.
3. Research Frontiers: Areas like reinforcement learning, generative models, and AI ethics are at
the forefront of research.
Future:
1. General AI: The pursuit of artificial general intelligence (AGI), where machines possess human-
like cognitive abilities, remains a long-term goal.
2. Ethical AI: Emphasis on developing AI systems that are fair, transparent, and aligned with
human values to mitigate risks and maximize benefits.
Overall, AI's journey reflects a continuous cycle of innovation, challenges, and societal implications,
shaping our understanding of intelligence and its applications in the modern world.
Prashanth Kumar R Dept. of CS PESIAMS Shimoga Page 3
Agents in AI:
In artificial intelligence, an agent is a computer program or system designed to perceive its
environment, make decisions, and take actions to achieve a specific goal or set of goals.
An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
o Human-Agent: A human agent has eyes, ears, and other organs which work as sensors, and
hands, legs, and the vocal tract, which work as actuators.
o Robotic Agent: A robotic agent can have cameras and infrared range finders for sensors,
and various motors for actuators.
o Software Agent: A software agent can take keystrokes and file contents as sensory input,
act on those inputs, and display output on the screen.
Hence the world around us is full of agents, such as thermostats, cellphones, and cameras; even we
ourselves are agents.
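The perceive-think-act cycle described above can be sketched as a small program. This is only an illustrative sketch; the percepts, rules, and the `agent_cycle` function below are invented for this example and are not part of any standard library.

```python
# Minimal sketch of an agent's perceive-think-act cycle.
# The percepts and rule table are hypothetical, for illustration only.

def agent_cycle(percepts, rules):
    """Run one perceive-think-act step for each percept."""
    actions = []
    for percept in percepts:                  # perceive
        action = rules.get(percept, "wait")   # think: look up a rule
        actions.append(action)                # act
    return actions

# A thermostat-like agent: perceives temperature bands, acts on them.
rules = {"too_cold": "heat_on", "too_hot": "heat_off", "ok": "wait"}
print(agent_cycle(["too_cold", "ok", "too_hot"], rules))
# ['heat_on', 'wait', 'heat_off']
```

The thermostat example mirrors the agent examples in the text: its sensor is the temperature reading, and its actuator is the heater switch.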
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: A sensor is a device that detects changes in the environment and sends the information
to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the components of a machine that convert energy into motion. The
actuators are responsible for moving and controlling a system. An actuator can be an electric
motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels,
arms, fingers, wings, fins, and display screen.
Intelligent Agents:
An intelligent agent is an autonomous entity which acts upon an environment using sensors and
actuators to achieve goals. An intelligent agent may learn from the environment to achieve its
goals. A thermostat is an example of an intelligent agent.
Rational Agent:
A rational agent is an agent which has clear preference, models uncertainty, and acts in a way to
maximize its performance measure with all possible actions.
A rational agent is said to perform the right things. AI is about creating rational agents, which are
used in game theory and decision theory for various real-world scenarios.
For an AI agent, rational action is most important because in reinforcement learning, an agent
receives a positive reward for each good action and a negative reward for each wrong action.
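The idea of choosing the action that maximizes the performance measure can be sketched in a few lines. The reward values below are invented for illustration; a real reinforcement learning agent would estimate them from experience rather than read them from a table.

```python
# Hedged sketch: a rational agent picks the action with the highest
# expected reward. The reward table here is purely illustrative.

def rational_action(expected_rewards):
    """Return the action whose expected reward is maximal."""
    return max(expected_rewards, key=expected_rewards.get)

rewards = {"move_left": -1.0, "move_right": 2.5, "stay": 0.0}
print(rational_action(rewards))  # move_right
```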
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can be judged on
the basis of the following points:
o The performance measure that defines the criterion of success.
o The agent's prior knowledge of the environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
PEAS Representation:
PEAS is a model on which an AI agent works. When we define an AI agent or rational agent, we
can group its properties under the PEAS representation model. It is made up of four
terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
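A PEAS description can be written down as a simple data structure. The self-driving-taxi entries below are illustrative examples of each PEAS component, not an exhaustive specification.

```python
# Recording a PEAS description as a data structure.
# The self-driving-taxi entries are illustrative, not exhaustive.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # P: criteria for success
    environment: list  # E: what the agent operates in
    actuators: list    # A: how the agent acts
    sensors: list      # S: how the agent perceives

taxi = PEAS(
    performance=["safety", "speed", "legality", "comfort"],
    environment=["roads", "traffic", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "horn"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
print(taxi.sensors)  # ['cameras', 'GPS', 'speedometer', 'odometer']
```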
Types of Environment:
An environment in AI can be classified along the following dimensions:
Fully Observable vs Partially Observable
Deterministic vs Stochastic
Competitive vs Collaborative
Single-agent vs Multi-agent
Static vs Dynamic
Discrete vs Continuous
Episodic vs Sequential
Known vs Unknown
1. Fully Observable vs Partially Observable
When an agent's sensors can access the complete state of the environment at each
point in time, the environment is said to be fully observable; otherwise it is partially observable.
A fully observable environment is easy to deal with, as there is no need to keep track of the
history of the surroundings.
An environment is called unobservable when the agent has no sensors at all.
Examples:
Chess – the board is fully observable, and so are the opponent’s moves.
Driving – the environment is partially observable because what’s around the corner
is not known.
2. Deterministic vs Stochastic
When the next state of the environment is completely determined by the agent's current
state and chosen action, the environment is said to be deterministic.
A stochastic environment is random in nature; the next state is not unique and cannot be
completely determined by the agent.
Examples:
Chess – there are only a few possible moves for a piece at the current state, and
these moves can be determined.
Self-Driving Cars – the outcomes of a self-driving car's actions are not unique; they vary
from time to time.
3. Competitive vs Collaborative
The game of chess is competitive, as the agents compete with each other to win the game,
which is the desired output.
When multiple self-driving cars share the roads, they cooperate with each other to
avoid collisions and reach their destinations, which is the desired output.
4. Single-agent vs Multi-agent
An environment involving only one agent is a single-agent environment; an agent solving
a crossword puzzle by itself is one example.
An environment involving more than one agent is a multi-agent environment; chess, with
two competing agents, is one example.
5. Dynamic vs Static
An environment that keeps changing while the agent is acting is said to be dynamic.
A roller coaster ride is dynamic, as it is set in motion and the environment changes
every instant.
An empty house is static, as there is no change in the surroundings when an agent enters.
6. Discrete vs Continuous
The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.
An environment in which the possible states and actions cannot be counted, i.e. is not
discrete, is said to be continuous.
Self-driving cars are an example of a continuous environment, as quantities such as
steering angle and speed take values over continuous ranges.
7. Episodic vs Sequential
In an episodic task environment, the agent's experience is divided into atomic
episodes, and there is no dependency between the current and previous episodes. In
each episode, the agent receives a percept from the environment and then performs the
corresponding action.
Example: Consider a pick-and-place robot used to detect defective parts on a conveyor
belt. Each time, the robot (agent) makes a decision about the current part only,
i.e. there is no dependency between current and previous decisions.
In a sequential environment, previous decisions can affect all future decisions. The
next action of the agent depends on what actions it has taken previously and what actions it
is supposed to take in the future.
Example:
Checkers- Where the previous move can affect all the following moves.
8. Known vs Unknown
In a known environment, the outcomes of all possible actions are given. In an
unknown environment, before the agent can make a decision, it has to gain knowledge about how
the environment works.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function. The structure of
an intelligent agent is a combination of architecture and an agent program. It can be viewed as:
Agent = Architecture + Agent Program
The three main terms involved in the structure of an AI agent are the architecture (the machinery
the agent executes on), the agent function (the mapping from percept sequences to actions), and
the agent program (the implementation of the agent function).
Note: For Detail information for structure of AI Agent refer class notes
Types of Agents
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Simple Reflex Agents
Simple reflex agents ignore the rest of the percept history and act only on the basis of
the current percept. Percept history is the history of all that an agent has perceived to date. The
agent function is based on the condition-action rule. A condition-action rule is a rule that maps a
state i.e., a condition to an action. If the condition is true, then the action is taken, else not. This
agent function only succeeds when the environment is fully observable. For simple reflex agents
operating in partially observable environments, infinite loops are often unavoidable. It may be
possible to escape from infinite loops if the agent can randomize its actions.
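The condition-action rule idea can be sketched with the classic two-location vacuum world: the agent looks only at its current percept (location and dirt status) and maps it straight to an action. The locations, percept format, and action names below are illustrative assumptions.

```python
# Sketch of a simple reflex agent for a two-location vacuum world.
# The percept is (location, status); rules are condition-action pairs.
# Locations "A"/"B" and the action names are illustrative assumptions.

def reflex_vacuum_agent(percept):
    """Map the current percept directly to an action, ignoring history."""
    location, status = percept
    if status == "dirty":        # condition: dirt present -> action: suck
        return "suck"
    elif location == "A":        # condition: at A and clean -> move right
        return "move_right"
    else:                        # condition: at B and clean -> move left
        return "move_left"

print(reflex_vacuum_agent(("A", "dirty")))  # suck
print(reflex_vacuum_agent(("B", "clean")))  # move_left
```

Note that the agent keeps no state: given the same percept it always returns the same action, which is exactly why it can loop forever in a partially observable environment.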
Problems with simple reflex agents are:
Very limited intelligence.
No knowledge of non-perceptual parts of the state.
The condition-action rule table is usually too big to generate and store.
If any change occurs in the environment, the collection of rules needs to be
updated.
Goal-Based Agents
These kinds of agents make decisions based on how far they currently are from
their goal (a description of desirable situations). Every action they take is intended to reduce
their distance from the goal. This allows the agent a way to choose among multiple possibilities, selecting
the one which reaches a goal state. The knowledge that supports its decisions is represented
explicitly and can be modified, which makes these agents more flexible. They usually require search
and planning. The goal-based agent’s behavior can easily be changed.
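The idea of choosing the action that most reduces the distance to the goal can be sketched on a small grid world. The grid, the goal position, the move set, and the use of Manhattan distance are all illustrative assumptions; a real goal-based agent would typically use search or planning rather than a one-step greedy choice.

```python
# Hedged sketch of a goal-based agent on a grid: among candidate
# moves, pick the one whose resulting state is closest to the goal.
# The grid world, goal, and move set are invented for illustration.

GOAL = (3, 3)
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def distance_to_goal(state):
    """Manhattan distance from a state to the goal."""
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

def goal_based_agent(state):
    """Choose the move whose successor state is nearest the goal."""
    def result(move):
        dx, dy = MOVES[move]
        return (state[0] + dx, state[1] + dy)
    return min(MOVES, key=lambda m: distance_to_goal(result(m)))

print(goal_based_agent((0, 0)))  # up  (ties broken by dict order)
```

Because this agent only looks one step ahead, it is greedy; real goal-based agents combine the same goal test with search and planning to look arbitrarily far ahead.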