
UNIT I Lecture Notes

Dr. Kangkana Bora


Department of CS and IT
Phone no:9401801343
Email id: [email protected]

Definition of AI

Definition: Artificial Intelligence is the study of how to make computers do things which, at the moment, people do better.

According to the father of Artificial Intelligence, John McCarthy, it is “The science and
engineering of making intelligent machines, especially intelligent computer programs”.

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a software program think intelligently, in a manner similar to how intelligent humans think.

AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

It has gained prominence recently due, in part, to big data, or the increase in the speed, size, and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than humans, enabling businesses to gain more insight from their data.

From a business perspective, AI is a set of very powerful tools and methodologies for using those tools to solve business problems.

From a programming perspective, AI includes the study of symbolic programming, problem solving, and search.

Applications of AI

AI has applications in all fields of human study, such as finance and economics, environmental
engineering, chemistry, computer science, and so on. Some of the applications of AI are listed
below:

• Perception
  ■ Machine vision
  ■ Speech understanding
  ■ Touch (tactile or haptic) sensation
• Robotics
• Natural Language Processing
  ■ Natural Language Understanding
  ■ Speech Understanding
  ■ Language Generation
  ■ Machine Translation
• Planning
• Expert Systems
• Machine Learning
• Theorem Proving
• Symbolic Mathematics
• Game Playing

Area of AI

A list of branches of AI is given below. However, some branches are surely missing, because
no one has identified them yet. Some of these may be regarded as concepts or topics rather than
full branches.

Logical AI — What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.

Search — Artificial Intelligence programs often examine large numbers of possibilities – for
example, moves in a chess game and inferences by a theorem proving program.
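For instance, here is a minimal sketch in Python of systematically examining possibilities with breadth-first search; the tiny state graph below is an illustrative assumption, not part of the notes:

```python
from collections import deque

# Minimal breadth-first search sketch: systematically examine possibilities
# (successor states) until the goal is found. The state graph is illustrative.

successors = {
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "goal"],
    "c": ["goal"],
}

def bfs(start, goal):
    frontier = deque([[start]])                 # each entry is a path explored so far
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in successors.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("start", "goal"))                     # ['start', 'b', 'goal']
```
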
Pattern Recognition — When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns arise in natural language text, in a chess position, or in the history of some event. These more complex patterns require quite different methods than the simple patterns that have been studied the most.
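As a minimal illustration of matching observations against a pattern, here is a sketch using a regular expression over text; the pattern stands in for the visual patterns described above and is purely illustrative:

```python
import re

# Minimal pattern-recognition sketch: compare an observation against a simple pattern.
# A regular expression over text stands in for the visual patterns described above.

date_pattern = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")   # a simple "date" pattern

observation = "The match was played on 15/08/2023 in the stadium."
match = date_pattern.search(observation)
print("pattern found:", match.group() if match else None)   # pattern found: 15/08/2023
```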

Representation — Usually languages of mathematical logic are used to represent the facts
about the world.

Inference — Some facts can be inferred from other facts. Mathematical logical deduction is sufficient for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic, in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
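A minimal sketch of default reasoning using the bird/penguin example above; the representation of facts as a Python set is an illustrative assumption, not a standard formalism:

```python
# Minimal sketch of default (non-monotonic) reasoning: the bird/penguin example.
# The representation below is illustrative, not a standard AI library.

def can_fly(facts):
    """Default rule: a bird is assumed to fly unless we have evidence to the contrary."""
    if "bird" in facts:
        if "penguin" in facts:          # exception defeats the default
            return False
        return True                      # default conclusion
    return False

print(can_fly({"bird"}))                 # True  - the default conclusion is drawn
print(can_fly({"bird", "penguin"}))      # False - the conclusion is withdrawn on new evidence
```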

Common sense knowledge and Reasoning — This is the area in which AI is farthest from
the human level, in spite of the fact that it has been an active research area since the 1950s.
While there has been considerable progress in developing systems of non-monotonic reasoning
and theories of action, yet more new ideas are needed.

Learning from experience — Programs can learn from experience, and some of what is learned can be expressed as rules in logic. However, programs can only learn what facts or behaviours their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.

Planning — Planning starts with general facts about the world (especially facts about the
effects of actions), facts about the particular situation and a statement of a goal. From these,
planning programs generate a strategy for achieving the goal. In the most common cases, the
strategy is just a sequence of actions.

Epistemology — This is a study of the kinds of knowledge that are required for solving
problems in the world.

Ontology — Ontology is the study of the kinds of things that exist. In AI the programs and
sentences deal with various kinds of objects and we study what these kinds are and what their
basic properties are. Ontology assumed importance from the 1990s.

Heuristics — A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates compare two nodes in a search tree to see whether one is better than the other, i.e. whether one constitutes an advance toward the goal, and may be more useful.
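To make heuristic functions and heuristic predicates concrete, here is a minimal sketch assuming a grid-based search problem; the Manhattan-distance estimate is an illustrative choice:

```python
# Minimal sketch: a heuristic function estimates how far a search node is from the goal.
# Here a node is a position on a grid and the estimate is the Manhattan distance
# (an assumption chosen for illustration).

def manhattan_heuristic(node, goal):
    """Estimated cost from node to goal; never overestimates on a 4-connected grid."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

# A heuristic predicate comparing two nodes: which one seems closer to the goal?
def better_node(a, b, goal):
    return manhattan_heuristic(a, goal) < manhattan_heuristic(b, goal)

print(manhattan_heuristic((0, 0), (3, 4)))   # 7
print(better_node((1, 1), (0, 0), (3, 4)))   # True - (1, 1) looks closer to the goal
```

In A*-style search, such a function guides which node to expand next.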

Genetic programming — Genetic programming is an automated method for creating a working computer program from a high-level statement of a problem. Genetic programming starts from a high-level statement of "what needs to be done" and automatically creates a computer program to solve the problem.
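The underlying evolutionary idea can be sketched minimally as follows. Note that this is a plain genetic algorithm evolving a bit string rather than full genetic programming (which evolves program trees); all parameters and names are illustrative assumptions:

```python
import random

# Minimal genetic-algorithm sketch (a simplification of genetic programming:
# it evolves a bit string instead of a program tree). Parameters are arbitrary.

TARGET_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(individual):
    return sum(individual)                      # "what needs to be done": maximize the ones

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]

def crossover(a, b):
    cut = random.randint(1, TARGET_LEN - 1)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:    # perfect solution found
        break
    parents = population[: POP_SIZE // 2]       # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```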

Expert System

Definition:

Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise.

Characteristics of Expert Systems

• High performance
• Understandable
• Reliable
• Highly responsive

Capabilities of Expert Systems

Expert systems are capable of −

• Advising
• Instructing and assisting humans in decision making
• Demonstrating
• Deriving a solution
• Diagnosing
• Explaining
• Interpreting input
• Predicting results
• Justifying the conclusion
• Suggesting alternative options to a problem

They are incapable of −

• Substituting human decision makers
• Possessing human capabilities
• Producing accurate output from an inadequate knowledge base
• Refining their own knowledge

Components of Expert Systems

The components of ES include −

a) Knowledge Base
b) Inference Engine
c) User Interface

Let us see them one by one briefly −

Fig – An overview of expert system

a) Knowledge Base

It contains domain-specific and high-quality knowledge. Knowledge is required to exhibit intelligence. The success of any ES majorly depends upon the collection of highly accurate and precise knowledge.

Knowledge is a collection of facts. The information is organized as data and facts about the task domain. Data, information, and past experience combined together are termed as knowledge.

Components of Knowledge Base

The knowledge base of an ES is a store of both factual and heuristic knowledge.

• Factual Knowledge − It is the information widely accepted by the Knowledge Engineers and scholars in the task domain.
• Heuristic Knowledge − It is about practice, accurate judgement, one's ability of evaluation, and guessing.

Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge base. It is in
the form of IF-THEN-ELSE rules.

Knowledge Acquisition

The success of any expert system majorly depends on the quality, completeness, and accuracy
of the information stored in the knowledge base.

The knowledge base is formed from the knowledge provided by various experts, scholars, and the Knowledge Engineers. The knowledge engineer is a person with the qualities of empathy, quick learning, and case-analysing skills. The knowledge engineer acquires information from the subject expert by recording, interviewing, and observing them at work, etc. He then categorizes and organizes the information in a meaningful way, in the form of IF-THEN-ELSE rules, to be used by the inference engine. The knowledge engineer also monitors the development of the ES.

b) Inference Engine

Use of efficient procedures and rules by the Inference Engine is essential in deducing a correct, flawless solution.

In the case of a knowledge-based ES, the Inference Engine acquires and manipulates the knowledge from the knowledge base to arrive at a particular solution.

In the case of a rule-based ES, it −

• Applies rules repeatedly to the facts, which are obtained from earlier rule application.
• Adds new knowledge into the knowledge base if required.
• Resolves rules conflict when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies −

• Forward Chaining
• Backward Chaining

Forward Chaining

It is a strategy of an expert system to answer the question, “What can happen next?”

Here, the Inference Engine follows the chain of conditions and derivations and finally deduces the outcome. It considers all the facts and rules, and sorts them before arriving at a solution. This strategy is used when working towards a conclusion, result, or effect – for example, predicting the share market status as an effect of changes in interest rates.
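A minimal sketch of forward chaining over IF-THEN rules, loosely following the interest-rate example; the rules and fact names are illustrative assumptions, not from any particular expert-system shell:

```python
# Minimal forward-chaining sketch: apply IF-THEN rules repeatedly to the known facts,
# adding new conclusions until nothing more can be derived. Rules are illustrative.

rules = [
    ({"interest_rates_fall"}, "borrowing_increases"),
    ({"borrowing_increases"}, "corporate_investment_rises"),
    ({"corporate_investment_rises"}, "share_market_rises"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                                   # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)                # new knowledge added to the working memory
                changed = True
    return facts

derived = forward_chain({"interest_rates_fall"}, rules)
print(sorted(derived))
# ['borrowing_increases', 'corporate_investment_rises', 'interest_rates_fall', 'share_market_rises']
```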

Backward Chaining

With this strategy, an expert system finds out the answer to the question, “Why did this happen?”

On the basis of what has already happened, the Inference Engine tries to find out which
conditions could have happened in the past for this result. This strategy is followed for finding
out cause or reason. For example, diagnosis of blood cancer in humans.
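A matching sketch of backward chaining, which starts from a goal (the observed result) and works back through the rules to see which conditions could explain it; the rules and fact names are again illustrative:

```python
# Minimal backward-chaining sketch: start from a goal and recursively check whether
# it can be explained by known facts or by the conclusions of other rules.

rules = [
    ({"abnormal_blood_count", "positive_biopsy"}, "blood_cancer"),
    ({"low_platelets", "low_rbc"}, "abnormal_blood_count"),
]

def backward_chain(goal, facts, rules):
    if goal in facts:                                  # the goal is directly known
        return True
    for conditions, conclusion in rules:
        if conclusion == goal:                         # a rule could establish the goal...
            if all(backward_chain(c, facts, rules) for c in conditions):
                return True                            # ...if all of its conditions hold
    return False

facts = {"low_platelets", "low_rbc", "positive_biopsy"}
print(backward_chain("blood_cancer", facts, rules))    # True - a causal chain is found
```

Note how the same IF-THEN rule format serves both strategies; only the direction of reasoning changes.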

c) User Interface

The user interface provides interaction between the user of the ES and the ES itself. It is generally based on natural language processing so that it can be used by a user who is well-versed in the task domain. The user of the ES need not necessarily be an expert in Artificial Intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear
in the following forms −
• Natural language displayed on screen.
• Verbal narrations in natural language.
• Listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.

What is Natural Language Processing?

Natural Language Processing, usually shortened as NLP, is a branch of artificial intelligence that deals with the interaction between computers and humans using natural language.

The ultimate objective of NLP is to read, decipher, understand, and make sense of the human
languages in a manner that is valuable.

Most NLP techniques rely on machine learning to derive meaning from human languages.

In fact, a typical interaction between humans and machines using Natural Language Processing could go as follows (a minimal code sketch of this loop appears after the list):

1. A human talks to the machine

2. The machine captures the audio

3. Audio to text conversion takes place

4. Processing of the text’s data

5. Data to audio conversion takes place

6. The machine responds to the human by playing the audio file
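A minimal sketch of this loop; the audio-related steps are stubbed out as placeholders, and the "processing" step is a toy keyword matcher (every name here is an illustrative assumption):

```python
# Minimal sketch of the six-step human-machine interaction loop.
# Steps 1-3 and 5-6 (audio capture, speech-to-text, text-to-speech, playback)
# are stubbed as placeholders; only step 4 (text processing) is implemented,
# as a toy keyword-based intent matcher.

def speech_to_text(audio):
    # Placeholder: a real system would call a speech-recognition engine here.
    return audio          # pretend the "audio" is already text

def process_text(text):
    text = text.lower()
    if "weather" in text:
        return "It looks sunny today."
    if "time" in text:
        return "It is 10 o'clock."
    return "Sorry, I did not understand that."

def text_to_speech(text):
    # Placeholder: a real system would synthesize and play audio here.
    print("MACHINE SAYS:", text)

captured_audio = "What is the weather like?"   # steps 1-2: human speaks, machine captures
text = speech_to_text(captured_audio)          # step 3: audio-to-text conversion
reply = process_text(text)                     # step 4: processing of the text
text_to_speech(reply)                          # steps 5-6: reply converted back to audio
```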

What is NLP used for?

Natural Language Processing is the driving force behind the following common applications:

• Language translation applications such as Google Translate.
• Word processors such as Microsoft Word and Grammarly that employ NLP to check the grammatical accuracy of texts.
• Interactive Voice Response (IVR) applications used in call centers to respond to certain
users’ requests.
• Personal assistant applications such as OK Google, Siri, Cortana, and Alexa.

Why is NLP difficult?

Natural Language processing is considered a difficult problem in computer science. It’s the
nature of the human language that makes NLP difficult.

The rules that dictate the passing of information using natural languages are not easy for
computers to understand.

Some of these rules can be high-level and abstract; for example, when someone uses a sarcastic remark to pass information.

On the other hand, some of these rules can be low-level; for example, using the character “s” to signify the plurality of items.

Comprehensively understanding the human language requires understanding both the words
and how the concepts are connected to deliver the intended message.

While humans can easily master a language, the ambiguity and imprecise characteristics of the
natural languages are what make NLP difficult for machines to implement.

Note: More details will be discussed in UNIT V

Speech Recognition

Speech recognition is a technology that can recognize spoken words, which can then be converted to text. A subset of speech recognition is voice recognition, which is the technology for identifying a person based on their voice.

Facebook, Amazon, Microsoft, Google and Apple — five of the world’s top tech companies
— are already offering this feature on various devices through services like Google Home,
Amazon Echo and Siri.

Organization of AI system

Developing an AI solution is a software development project; therefore, there are fundamental similarities with other such projects. However, there are a few unique flavors to an AI development project.

The life cycle of an AI development project is as follows:

1. Identify the AI capabilities you need

AI isn't monolithic. The tremendous value that AI creates comes from various AI capabilities, and the proposed AI solution might need several of them. One needs to study the following AI capabilities and choose the ones needed:

• Machine Learning (ML): This includes deep learning, supervised algorithms, and
unsupervised algorithms.
• Natural Language Processing (NLP): This encompasses content extraction,
classification, machine translation, answering questions, and text generation.
• Expert systems is another key capability.
• Vision: This includes image recognition and machine vision.
• Speech: Speech-to-text and text-to-speech are included in this capability.

5. Agree on the right SDLC model for the project

A software development project to develop an AI solution is a strategic one since it addresses high-value objectives. Finalizing the requirements upfront is important here since scope creep later in the cycle is costly.

The Waterfall SDLC model is the right one for such projects. It stresses carefully baselining the requirements before the design starts; moreover, this model facilitates timely reviews of the project after key phases.

This model has the following phases:

• Requirements analysis;
• Design;
• Development;
• Testing;
• Deployment;

6. Requirements analysis

You need to onboard business analysts in your team at this point so that the requirements
analysis phase can start. While you ought to follow the industry-standard requirements analysis
processes, there are a few best practices for AI development projects.

Business analysts should consider the following factors while analyzing the requirements for an AI solution:

• Customer empathy;
• Experiments;
• The AI solution should consist of smaller components;
• Avoiding bias arising from wrong data.

7. Design

The next step in the AI development lifecycle is the design phase, and you now need the AI
development lead. Assuming you are planning to launch the app on the web, Android, and iOS,
you need the corresponding development leads. The test lead and the DevOps lead need to
participate as well.

AI development platforms can expedite the project since they offer the following:
• AI capabilities like ML, NLP, expert systems, automation, vision, and speech;
• A robust cloud infrastructure.

During this phase, you need to evaluate the various AI development platforms, e.g.:

• Microsoft Azure AI Platform;
• Google Cloud AI Platform;
• IBM Watson AI platform;
• BigML;
• Infosys Nia.

8. Development

You need your complete development team ready before starting this phase, therefore, you
need to induct the AI, web, and mobile developers.

Different AI development platforms offer extensive documentation to help the development teams. Depending on your choice of the AI platform, you need to visit the appropriate webpages for this documentation, which are as follows:

• Microsoft Azure AI Platform;
• Google Cloud AI Platform;
• IBM Watson Developer platform;
• BigML;
• Infosys Nia resources.

9. Testing

Onboard your testing and DevOps teams before this phase, and look for testers with experience
in AI and ML systems. While the fundamental testing concepts are fully applicable in AI
development projects, there are additional considerations too. These are as follows:

• The volume of test data can be large, which presents complexities.
• Human biases in selecting test data can adversely impact the testing phase; therefore, data validation is important.
• Your testing team should test the AI and ML algorithms keeping model validation,
successful learnability, and algorithm effectiveness in mind.
• Regulatory compliance testing and security testing are important since the system might deal with sensitive data; moreover, the large volume of data makes performance testing crucial.
• You are implementing an AI solution that will need to use data from your other systems; therefore, systems integration testing assumes importance.
• Test data should include all relevant subsets of training data, i.e., the data you will use
for training the AI system.
• Your team must create test suites that help you validate your ML models (a minimal sketch follows this list).
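A minimal sketch of such a test, assuming scikit-learn; the dataset, model, and accuracy threshold are illustrative choices, not requirements:

```python
# Minimal sketch of a test-suite check that validates an ML model against
# a held-out test set. Dataset, model, and the 0.9 threshold are illustrative.

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_model_meets_accuracy_threshold():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y)    # reproducible split
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= 0.9, f"model accuracy {accuracy:.2f} is below the 0.9 threshold"

test_model_meets_accuracy_threshold()
print("model validation test passed")
```
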
10. Deployment

You should take into account certain considerations when deploying AI/ML systems, and these
are as follows:

• The project team needs a robust internal handoff process between the IT operations and the development teams. Since AI/ML is new to several organizations, the IT operations team needs a sufficient understanding of the development project.
• Deploy the AI/ML solution as a centralized service that the entire organization can tap
into.

11. Maintenance

This includes post-deployment support, warranty support, and long-term maintenance. You
need to have a part of your development team available during this phase since this will help
the maintenance team to learn the system.

The Underlying assumptions

Disadvantage: There is no way to prove or disprove these underlying assumptions on strictly logical grounds.

AI Techniques

Artificial Intelligence research during the last three decades has concluded that intelligence requires knowledge. To compensate for its one overpowering asset, indispensability, knowledge possesses some less desirable properties:

A. It is huge.

B. It is difficult to characterize correctly.

C. It is constantly varying.

D. It differs from data by being organized in a way that corresponds to its application.
E. It is complicated.

An AI technique is a method that exploits knowledge that is represented so that:

• The knowledge captures generalizations: situations that share important properties are grouped together rather than being represented separately.

• It can be understood by the people who must provide it. Even though for many programs the bulk of the data comes automatically from readings, in many AI domains most of the knowledge a program has must ultimately be supplied by people in terms they understand.

• It can be easily modified to correct errors and reflect changes in real conditions.

• It can be widely used even if it is incomplete or inaccurate.

• It can be used to help overcome its own sheer bulk by helping to narrow the range of possibilities that must usually be considered.

Symbolic Vs Non-symbolic AI

If one looks at the history of AI, the research field is divided into two camps – Symbolic and Non-symbolic AI – that followed different paths towards building an intelligent system. Symbolists firmly believed in developing an intelligent system based on rules and knowledge, whose actions were interpretable, while the non-symbolic approach strived to build a computational system inspired by the human brain.

Symbolic AI: The traditional symbolic approach, introduced by Newell & Simon in 1976, describes AI as the development of models using symbolic manipulation. In AI applications, computers process symbols rather than numbers or letters. In the symbolic approach, AI applications process strings of characters that represent real-world entities or concepts. Symbols can be arranged in structures such as lists, hierarchies, or networks, and these structures show how symbols relate to each other. An early body of work in AI was purely focused on symbolic approaches, with Symbolists pegged as the “prime movers of the field”.

A research paper from the University of Missouri-Columbia notes that the computation in these models is based on explicit representations that contain symbols put together in a specific way and aggregate information. In this approach, a physical symbol system comprises a set of entities, known as symbols, which are physical patterns. Search and representation played a central role in the development of symbolic AI.

This approach, also known as traditional AI, spawned a lot of research in the Cognitive Sciences and led to significant advances in the understanding of cognition.

Non-symbolic AI: Non-symbolic approaches do not manipulate a symbolic representation to find solutions to problems. Instead, they perform calculations according to principles that have been demonstrated to solve problems, without explicitly understanding how the solution is arrived at. Examples of non-symbolic AI include genetic algorithms, neural networks and deep learning. The origins of non-symbolic AI come from the attempt to mimic the human brain and its complex network of interconnected neurons. Non-symbolic AI is also known as “Connectionist AI”, and many current applications are based on this approach – from Google’s automatic translation system (that looks for patterns) and IBM’s Watson, to Facebook’s face recognition algorithm and self-driving car technology.

Differences between symbolic and non-symbolic AI:

• Symbolic AI refers to the fact that all steps are based on symbolic, human-readable representations of the problem that use logic and search to solve problems.
• Key advantage of Symbolic AI is that the reasoning process can be easily understood –
a Symbolic AI program can easily explain why a certain conclusion is reached and what
the reasoning steps had been.
• A key disadvantage of Non-symbolic AI is that it is difficult to understand how the
system came to a conclusion. This is particularly important when applied to critical
applications such as self-driving cars, medical diagnosis among others.
• A key disadvantage of Symbolic AI is that for the learning process, the rules and knowledge have to be hand-coded, which is a hard problem.
• Non-symbolic systems such as DL-powered applications therefore cannot be trusted to take high-risk decisions.
• So far, symbolic AI has been confined to the academic world and university labs with
little research coming from industry giants.
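To make the contrast concrete, here is a minimal sketch solving the same toy decision both ways; the rule, the made-up data, and the scikit-learn model are all illustrative assumptions:

```python
# Minimal contrast sketch: a hand-coded symbolic rule vs. a learned (non-symbolic) model
# for the same toy task: deciding whether a loan applicant is "low risk".
# Features: (income in thousands, existing debt in thousands). The data is made up.

from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Symbolic approach: an explicit, human-readable IF-THEN rule; its reasoning is inspectable.
def symbolic_low_risk(income, debt):
    return income > 50 and debt < 20

# Non-symbolic (connectionist) approach: a small neural network learns the decision
# from examples; the learned weights are not human-readable.
X = [[60, 5], [80, 10], [30, 25], [40, 30], [70, 15], [25, 5]]
y = [1, 1, 0, 0, 1, 0]                      # 1 = low risk, 0 = high risk (toy labels)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
model.fit(X, y)

applicant = (65, 10)
print("symbolic rule says low risk:", symbolic_low_risk(*applicant))
print("neural net says low risk:   ", bool(model.predict([applicant])[0]))
```

The symbolic version can justify its answer by pointing to the rule it applied; the neural version can only report its output, which illustrates the explainability trade-off described above.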
