Unit 1 Questions with Answers
Q. What is the Turing Test? What capabilities must a computer possess to pass it?
Ans- To conduct this test, we need two people and the machine to be evaluated. One person plays the role of the interrogator, who is in a separate room from the computer and the other person. The interrogator can question either the person or the computer by typing questions and receiving typed responses. However, the interrogator knows them only as A and B and aims to determine which is the person and which is the machine. The goal of the machine is to fool the interrogator into believing that it is the person. If the machine succeeds at this, then we conclude that the machine is acting humanly. Programming a computer to pass the test provides plenty to work on; the computer would need to possess the following capabilities:
Natural Language Processing – to enable it to communicate successfully in English.
Knowledge Representation – to store information provided before or during the interrogation.
Automated Reasoning – to use the stored information to answer questions and to draw new conclusions.
Machine Learning – to adapt to new circumstances and to learn from experience, examples, etc.
Total Turing Test: This test includes a video signal so that the interrogator can test the perceptual abilities of the machine. To pass the Total Turing Test, the computer will additionally need:
Computer Vision – to perceive objects.
Robotics – to move the objects.
Q. Explain the "thinking humanly" (cognitive modeling) approach.
Ans- To construct a machine program that thinks like a human, we first require knowledge of the actual workings of the human mind. Once the study of the human mind is complete, it becomes possible to express the theory as a computer program. If the program's input/output and timing behavior matches human behavior, then we can say that the program's mechanism is working like a human mind.
Example: General Problem Solver (GPS)
Q. What are software agents (softbots)?
Ans- Some software agents (or software robots, or softbots) exist in rich, unlimited domains, where actions must be performed in real time in a complex environment. For example, a softbot may be designed to scan online news sources and show interesting items to its users, which requires natural language processing abilities.
Q. Explain the different types of agent programs with schematic diagrams.
Ans-
The figure gives the structure of this general program in schematic form, showing how the condition-action rules allow the agent to make the connection from percept to action. In the schematic diagram, the shapes are used as follows:
Rectangle – to denote the current internal state of the agent's decision process.
Oval – to represent the background information used in the process.
The agent program, which is also very simple, is shown in the following figure.
Figure: A model-based reflex agent. It keeps track of the current state of the world
using an internal model. It then chooses an action in the same way as the reflex agent.
UPDATE-STATE - This is responsible for creating the new internal state description by combining the percept with the current state description.
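
A minimal Python sketch of this loop may help; the dictionary-based state, the model function, and the (condition, action) rule pairs below are illustrative assumptions, since the text leaves these representations abstract.

def update_state(state, last_action, percept, model):
    # UPDATE-STATE: create the new internal state description by
    # combining the latest percept with the old state, using the model
    # of how the world evolves and how actions affect it.
    # (Assumption: model returns a dict of state updates.)
    new_state = dict(state)
    new_state.update(model(state, last_action, percept))
    return new_state

def rule_match(state, rules):
    # Return the action of the first condition-action rule whose
    # condition holds in the current state.
    for condition, action in rules:
        if condition(state):
            return action
    return None

def make_model_based_reflex_agent(rules, model, initial_state):
    # The agent keeps track of the world with an internal state and
    # chooses actions via condition-action rules, as in the figure.
    state, last_action = dict(initial_state), None
    def agent(percept):
        nonlocal state, last_action
        state = update_state(state, last_action, percept, model)
        last_action = rule_match(state, rules)
        return last_action
    return agent

The closure keeps the internal state between calls, mirroring the "current internal state" box in the schematic.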
3. Goal-based agents
An agent knows the description of the current state and also needs some sort of goal information that describes situations that are desirable. The action that matches the current state is selected depending on the goal state.
The goal-based agent is also flexible with respect to more than one destination: after one destination is reached, a new destination can be specified, and the goal-based agent is activated to come up with a new behavior. Search and Planning are the subfields of AI devoted to finding action sequences that achieve the agent's goals.
Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. For example, the goal-based agent's behavior can easily be changed to go to a different location, as the search sketch below illustrates.
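
Since Search is named above as the subfield that finds such action sequences, here is a small breadth-first search sketch in Python; the goal_test and successors functions are hypothetical placeholders for a concrete problem description.

from collections import deque

def search_for_plan(start, goal_test, successors):
    # Breadth-first search for a sequence of actions that reaches a
    # desirable (goal) state. successors(state) yields (action, state)
    # pairs; states are assumed hashable.
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan                      # action sequence achieving the goal
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None                              # no sequence achieves the goal

Supplying a different goal_test is all it takes to send the agent to a different destination, which is exactly the flexibility described above.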
4. Utility-based agents
An agent may need to generate high-quality behavior (utility) in reaching a goal state; that is, if more than one sequence exists to reach the goal state, then the sequence that is more reliable, safer, quicker, and cheaper than the others should be selected.
A utility function maps a state (or a sequence of states) onto a real number, which describes the associated degree of happiness. The utility function can be used in two different cases. First, when there are conflicting goals, only some of which can be achieved (e.g., speed and safety), the utility function specifies the appropriate tradeoff. Second, when the agent aims at several goals, none of which can be achieved with certainty, the likelihood of success can be weighed against the importance of the goals.
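
A short Python sketch of such a utility function follows; the attributes (speed, safety) and the weights are invented numbers standing in for a real tradeoff.

WEIGHTS = {"speed": 0.4, "safety": 0.6}     # tradeoff between conflicting goals

def utility(state):
    # Map a state onto a real number describing its desirability.
    return sum(w * state[attr] for attr, w in WEIGHTS.items())

def expected_utility(outcomes):
    # When no goal can be achieved with certainty, weigh the utility of
    # each possible outcome by its probability of occurring.
    return sum(p * utility(state) for p, state in outcomes)

# Choosing between two routes: the faster route is the less safe one.
fast   = {"speed": 0.9, "safety": 0.5}
scenic = {"speed": 0.4, "safety": 0.95}
best_route = max([fast, scenic], key=utility)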
5. Learning agents
Learning element - This is responsible for making improvements. It uses the feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.
Critic - It tells the learning element how well the agent is doing with respect to a fixed
performance standard.
Problem generator - It is responsible for suggesting actions that will lead to new and informative experiences.

In summary, agents have a variety of components, and those components can be represented in many ways within the agent program, so there appears to be great variety among learning methods. Learning in intelligent agents can be summarized as a process of modifying each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent (all agents can improve their performance through learning).
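
As an illustration of how these components fit together, here is a hypothetical Python sketch; the value-table performance element, the fixed performance standard passed in as a function, and the exploration rate are all assumptions made for brevity.

import random

class LearningAgent:
    # Critic, learning element, and problem generator arranged as
    # described above. performance_standard is the fixed standard
    # against which the critic scores percepts.
    def __init__(self, actions, performance_standard):
        self.actions = actions
        self.standard = performance_standard
        self.values = {a: 0.0 for a in actions}   # performance element's knowledge

    def critic(self, percept):
        # Tells the learning element how well the agent is doing.
        return self.standard(percept)

    def learning_element(self, action, feedback, rate=0.1):
        # Modifies the performance element to do better in the future.
        self.values[action] += rate * (feedback - self.values[action])

    def problem_generator(self):
        # Suggests an action that leads to a new, informative experience.
        return random.choice(self.actions)

    def performance_element(self):
        # Chooses the action currently believed to perform best.
        return max(self.values, key=self.values.get)

    def step(self, percept, last_action, explore=0.1):
        if last_action is not None:
            self.learning_element(last_action, self.critic(percept))
        if random.random() < explore:
            return self.problem_generator()
        return self.performance_element()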