
ARTIFICIAL

INTELLIGENCE
Lecture 3
Major Fields and Areas of Artificial Intelligence
Applications
EXPERT SYSTEMS
EXPERT SYSTEMS
 In artificial intelligence, an expert system is a computer system that
emulates the decision-making ability of a human expert.
 Expert systems are designed to solve complex problems by reasoning
through bodies of knowledge, represented mainly as if-then rules rather
than through conventional procedural code.
 Expert systems have knowledge specific to one problem domain, e.g.,
medicine, science, engineering, etc. The expert’s knowledge is called a
knowledge base, and it contains accumulated experience that has been
loaded into and tested in the system.
 Much like other artificial intelligence systems, an expert system’s knowledge
may be enhanced with additions to the knowledge base or to the rules.
 The more experience entered into the expert system, the more the system
can improve its performance.
CHARACTERISTICS OF
EXPERT SYSTEMS
 High performance
 Understandable
 Reliable
 Highly responsive
CAPABILITIES OF EXPERT
SYSTEMS
Expert systems are capable of −
 Advising
 Instructing and assisting humans in decision making
 Demonstrating
 Deriving a solution
 Diagnosing
 Explaining
 Interpreting input
 Predicting results
 Justifying the conclusion
 Suggesting alternative options to a problem
COMPONENTS OF
EXPERT SYSTEMS
The components of ES include −
 Knowledge Base
 Inference Engine
 User Interface
KNOWLEDGE BASE
 It contains domain-specific and high-quality knowledge.
 Knowledge is required to exhibit intelligence. The success of any ES majorly depends
upon the collection of highly accurate and precise knowledge.
 What is Knowledge?
 Data is a collection of facts. Information is data organized as facts about
the task domain. Data, information, and past experience combined together are
termed knowledge.
 Components of Knowledge Base

The knowledge base of an ES is a store of both factual and heuristic knowledge.
 Factual Knowledge − It is the information widely accepted by the Knowledge
Engineers and scholars in the task domain.
 Heuristic Knowledge − It is about practice, accurate judgement, one’s ability to
evaluate, and guessing.
 Knowledge representation
 It is the method used to organize and formalize the knowledge in the
knowledge base. It is typically in the form of IF-THEN-ELSE rules (a minimal
sketch of such rules follows at the end of this slide).
 Knowledge Acquisition
 The success of any expert system majorly depends on the quality,
completeness, and accuracy of the information stored in the knowledge
base.
 The knowledge base is formed from readings from various experts, scholars,
and the Knowledge Engineers. The knowledge engineer is a person with
the qualities of empathy, quick learning, and case-analyzing skills.
 The knowledge engineer acquires information from the subject expert by
recording, interviewing, and observing him or her at work. He then categorizes
and organizes the information in a meaningful way, in the form of IF-THEN-ELSE
rules, to be used by the inference engine. The knowledge engineer also monitors
the development of the ES.
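
To make the idea of IF-THEN rules concrete, here is a minimal Python sketch of a knowledge base stored as facts plus rules (the medical-style facts and rules are invented purely for illustration):

# Minimal sketch of a knowledge base represented as IF-THEN rules.
# The facts and rules below are invented purely for illustration.
knowledge_base = {
    "facts": {"fever", "cough"},                      # facts supplied by the user
    "rules": [
        # (IF all of these conditions hold, THEN add this conclusion)
        ({"fever", "cough"}, "possible_flu"),
        ({"possible_flu", "shortness_of_breath"}, "refer_to_doctor"),
    ],
}

def conclusions(kb):
    """Return every conclusion whose IF-conditions are already known facts."""
    return {then for if_conditions, then in kb["rules"]
            if if_conditions <= kb["facts"]}

print(conclusions(knowledge_base))                    # {'possible_flu'}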
INFERENCE ENGINE
 Use of efficient procedures and rules by the Inference Engine is essential in
deducing a correct, flawless solution.
 In the case of a knowledge-based ES, the Inference Engine acquires and manipulates
the knowledge from the knowledge base to arrive at a particular solution.
In the case of a rule-based ES, it −
 Applies rules repeatedly to the facts, which are obtained from earlier rule
applications.
 Adds new knowledge into the knowledge base if required.
 Resolves rule conflicts when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies −
 Forward Chaining
 Backward Chaining
Forward Chaining
 It is a strategy of an expert system to answer the question, “What can
happen next?”
 Here, the Inference Engine follows the chain of conditions and derivations
and finally deduces the outcome. It considers all the facts and rules, and
sorts them before arriving at a solution.
 This strategy is used when working towards a conclusion, result, or effect. For
example, predicting share market status as an effect of changes in
interest rates.
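
A minimal Python sketch of forward chaining, using invented rules that loosely mirror the share market example above:

# Forward chaining: repeatedly apply rules to the known facts until nothing new
# can be derived. The rules and the starting fact are invented for illustration.
rules = [
    ({"interest_rates_rise"}, "borrowing_falls"),
    ({"borrowing_falls"}, "company_profits_fall"),
    ({"company_profits_fall"}, "share_prices_fall"),
]
facts = {"interest_rates_rise"}

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)      # new knowledge obtained from earlier rule applications
            changed = True

print(facts)                           # eventually contains 'share_prices_fall'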
Backward Chaining
 With this strategy, an expert system finds the answer to the question,
“Why did this happen?”
 On the basis of what has already happened, the Inference Engine tries to find
out which conditions could have happened in the past for this result. This
strategy is followed for finding out cause or reason. For example, diagnosis of
blood cancer in humans.
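
By contrast, a backward chainer starts from a goal (here, a hypothesised diagnosis) and works back through the rules to see whether its conditions can be established from known facts. A minimal sketch with invented rules:

# Backward chaining: start from a goal and recursively check whether some rule
# concluding that goal has conditions that are themselves provable.
# The rules and facts are invented for illustration; no cycle handling is included.
rules = [
    ({"abnormal_blood_count", "positive_marrow_biopsy"}, "blood_cancer"),
    ({"low_haemoglobin"}, "abnormal_blood_count"),
]
facts = {"low_haemoglobin", "positive_marrow_biopsy"}

def prove(goal):
    if goal in facts:                                  # goal is already a known fact
        return True
    for conditions, conclusion in rules:               # find a rule that concludes the goal
        if conclusion == goal and all(prove(c) for c in conditions):
            return True
    return False

print(prove("blood_cancer"))                           # True, given the facts above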
USER INTERFACE
 The user interface provides interaction between the user of the ES and the ES
itself. It generally uses Natural Language Processing so that it can be used by a
user who is well-versed in the task domain. The user of the ES need not
necessarily be an expert in Artificial Intelligence.
It explains how the ES has arrived at a particular recommendation. The
explanation may appear in the following forms −
 Natural language displayed on screen.
 Verbal narrations in natural language.
 Listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.
REQUIREMENTS OF EFFICIENT
ES USER INTERFACE
 It should help users accomplish their goals in the shortest possible way.
 It should be designed to work with the user’s existing or desired work practices.
 Its technology should be adaptable to the user’s requirements, not the other
way round.
 It should make efficient use of user input.
BENEFITS OF EXPERT
SYSTEMS
 Availability − They are easily available due to mass production of
software.
 Less Production Cost − Production cost is reasonable. This makes them
affordable.
 Speed − They offer great speed. They reduce the amount of work an
individual puts in.
 Less Error Rate − Error rate is low as compared to human errors.
 Reducing Risk − They can work in environments dangerous to
humans.
 Steady response − They work steadily without getting emotional, tense,
or fatigued.
EXPERT SYSTEMS
LIMITATIONS
No technology can offer an easy and complete solution. Large systems are
costly and require significant development time and computer resources. ESs
have their limitations, which include −
 Difficult knowledge acquisition
 ESs are difficult to maintain
 High development costs
IMAGE
RECOGNITION
IMAGE RECOGNITION
WHAT IS IMAGE
RECOGNITION?
 Image Recognition is the task of identifying objects of interest within an
image and recognizing which category they belong to. Photo recognition
and picture recognition are terms that are used interchangeably.
 When we see an object or a scene, we automatically identify objects
as different instances and associate them with individual definitions.
However, visual recognition is a highly complex task for machines to
perform.
 Image recognition using artificial intelligence is a long-standing research
problem in the computer vision field. While different methods evolved over
time, the common goal of image recognition is the classification of
detected objects into different categories. Therefore, it is also called object
recognition.
 Example: image recognition used to identify multiple objects in video, using
the YOLOv3 object detection algorithm.
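
A rough sketch of how such a YOLOv3 detector can be run through OpenCV's DNN module is shown below; it assumes the pre-trained files yolov3.cfg, yolov3.weights and coco.names, plus a frame.jpg image, are available locally (these file names are assumptions), and the post-processing is deliberately minimal (no non-maximum suppression):

# Sketch of multi-object detection with a pre-trained YOLOv3 model via OpenCV DNN.
# Assumes yolov3.cfg, yolov3.weights, coco.names and frame.jpg exist locally.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()

frame = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

for output in outputs:                  # one array per YOLO detection layer
    for detection in output:            # each row: box (4 values), objectness, 80 class scores
        scores = detection[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:      # keep confident detections only
            print(classes[class_id], float(scores[class_id]))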
MEANING AND DEFINITION
OF IMAGE RECOGNITION
 In the area of Computer Vision, terms such as Segmentation, Classification,
Recognition, and Detection are often used interchangeably, and the
different tasks overlap. While this is mostly unproblematic, things get
confusing if your workflow requires you to specifically perform a particular
task.
IMAGE RECOGNITION VS.
COMPUTER VISION
 The terms image recognition and computer vision are often used
interchangeably but are actually different. In fact, image recognition is an
application of computer vision that includes a set of tasks, including
object detection, image identification, and image classification.
An application of object detection for mask detection
HOW DOES IMAGE
RECOGNITION WORK?
 Using traditional Computer Vision
 The conventional computer vision approach of image recognition is a
sequence of image filtering, segmentation, feature extraction, and rule-
based classification.
 However, the traditional computer vision approach requires a high level of
expertise and a lot of engineering time, and it contains many parameters that
need to be determined manually, while portability to other tasks is
quite limited.
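
A rough sketch of such a hand-engineered pipeline with OpenCV 4 is shown below; the thresholds and the size/shape rule are arbitrary choices for illustration, and coins.jpg is an assumed input image:

# Traditional pipeline: filtering -> segmentation -> feature extraction -> rule-based classification.
import cv2

image = cv2.imread("coins.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                    # filtering
_, mask = cv2.threshold(blurred, 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # segmentation
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    area = cv2.contourArea(contour)                            # feature extraction
    x, y, w, h = cv2.boundingRect(contour)
    aspect = w / float(h)
    if area > 500 and 0.8 < aspect < 1.2:                      # hand-written classification rule
        print("roughly round object at", (x, y), "area", area)

Every manually chosen filter size, threshold and rule in this sketch is exactly the kind of parameter a deep learning approach would instead learn from data.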
WHAT IS IMAGE
RECOGNITION USED FOR?
 Face Analysis and identification
 Face analysis is a prominent image recognition application. Modern ML
methods allow using the video feed of any digital camera or webcam. In
such applications, image recognition software employs AI algorithms for
simultaneous face detection, face pose estimation, face alignment, gender
recognition, smile detection, age estimation, and face recognition using a
deep convolutional neural network.
 Facial analysis with computer vision allows systems to recognize
identity, intentions, emotional and health states, age, or ethnicity. Some
photo recognition tools even aim to quantify levels of perceived
attractiveness with a score.
Example of face analysis with image recognition, using the DeepFace
software library.
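
With the DeepFace library mentioned in the caption above, several of these analyses can be run with a single call; a minimal sketch (face.jpg is an assumed local image, and keyword arguments may vary slightly between library versions):

# Minimal sketch of face analysis with the DeepFace library.
from deepface import DeepFace

result = DeepFace.analyze(img_path="face.jpg",
                          actions=["age", "gender", "emotion"])
print(result)    # estimated age, gender and dominant emotion for each detected face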
MEDICAL IMAGE
ANALYSIS
 Visual recognition technology is widely used in the medical industry to
make computers understand images that are routinely acquired throughout
the course of treatment. Medical image analysis is becoming a highly
profitable subset of artificial intelligence. For example, there are multiple
works regarding the identification of melanoma, a deadly skin cancer. Deep
learning image recognition software allows tumor monitoring across time,
for example, to detect abnormalities in breast cancer scans.
ANIMAL MONITORING
 Agricultural visual AI systems use novel techniques that have been trained
to detect the type of animal and its actions. AI image recognition software
is used for animal monitoring in farming, where livestock can be monitored
remotely for disease detection, anomaly detection, compliance with animal
welfare guidelines, industrial automation, and more.
Image Recognition technology used for animal monitoring
PATTERN AND OBJECT
DETECTION
 AI photo recognition and video recognition technologies are useful for
identifying people, patterns, logos, objects, places, colors, and shapes. The
customizability of image recognition allows it to be used in conjunction with
multiple software programs. For example, after an image recognition
program is specialized to detect people, it can be used for people counting,
a popular computer vision application in retail stores.
 Image Recognition application to detect dangerous objects automatically
AUTOMATED PLANT
IMAGE IDENTIFICATION
 Image-based plant identification has seen rapid development and is
already used in research and nature management. A
research paper from July 2021 analyzed the accuracy of automated
image identification in determining plant family, growth forms, life forms, and
regional frequency. The tool performs image search recognition using the
photo of a plant with image matching software to query the results against
an online database.
 Results indicate high recognition accuracy, where 79.6% of the 542 species
in about 1500 photos were correctly identified, while the plant family was
correctly identified for 95% of the species.
FOOD IMAGE
RECOGNITION
 Deep learning image recognition of different types of food is applied for
computer-aided dietary assessment. Computer vision systems were
developed to improve the accuracy of current measurements of dietary
intake by analyzing the food images captured by mobile devices. An image
recognizer app is used to perform online pattern recognition in images that
are uploaded by students.
IMAGE SEARCH
RECOGNITION
 Image search recognition uses visual features learned from a deep neural
network to develop efficient and scalable methods for image retrieval. The
goal is to perform content-based retrieval of images for image recognition
online applications. Researchers have developed a
large-scale visual dictionary from a training set of neural network features
to solve this challenging problem.
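
One simple way to build such a retrieval system is to embed every image with a pre-trained CNN and rank a gallery by cosine similarity to the query embedding. A sketch with torchvision (the image file names are invented, and weights="DEFAULT" assumes torchvision 0.13 or later):

# Content-based image retrieval sketch: CNN features + cosine similarity ranking.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.resnet18(weights="DEFAULT")
model.fc = torch.nn.Identity()               # drop the classifier, keep 512-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(model(x), dim=1)

gallery = ["cat1.jpg", "cat2.jpg", "car1.jpg"]        # hypothetical gallery images
features = torch.cat([embed(p) for p in gallery])
query = embed("query.jpg")
scores = (features @ query.T).squeeze(1)              # cosine similarity (features are L2-normalised)
ranking = scores.argsort(descending=True)
print([gallery[int(i)] for i in ranking])             # gallery images, most similar first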
NATURAL
LANGUAGE
PROCESSING
(NLP)
NLP
 Natural Language Processing (NLP) refers to the AI method of communicating
with intelligent systems using a natural language such as English.
 Processing of natural language is required when you want an intelligent
system like a robot to act on your instructions, when you want to
hear a decision from a dialogue-based clinical expert system, etc.
 The field of NLP involves making computers perform useful tasks with
the natural languages humans use. The input and output of an NLP system
can be −
 Speech
 Written Text
COMPONENTS OF NLP
 There are two components of NLP as given −
 Natural Language Understanding (NLU)
 Natural Language Generation (NLG)
COMPONENTS OF NLP
There are two components of NLP as given −
 Natural Language Understanding (NLU)
 Understanding involves the following tasks −
 Mapping the given input in natural language into useful
representations.
 Analyzing different aspects of the language.
COMPONENTS OF NLP
 Natural Language Generation (NLG)
 It is the process of producing meaningful phrases and sentences in the
form of natural language from some internal representation.
 It involves −
 Text planning − It includes retrieving the relevant content from the
knowledge base.
 Sentence planning − It includes choosing the required words, forming
meaningful phrases, and setting the tone of the sentence.
 Text Realization − It is mapping the sentence plan into sentence structure.
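
As a toy illustration of these three NLG stages (the internal representation and the tiny knowledge base are invented for illustration):

# Toy NLG sketch: text planning -> sentence planning -> text realization.
knowledge_base = {"patient_42": {"name": "Rima", "temperature": 39.2}}

def text_planning(kb, patient_id):
    return kb[patient_id]                      # retrieve the relevant content from the knowledge base

def sentence_planning(content):
    severity = "a high" if content["temperature"] > 38.0 else "a normal"
    return {"subject": content["name"], "verb": "has", "object": severity + " temperature"}

def text_realization(plan):
    return f'{plan["subject"]} {plan["verb"]} {plan["object"]}.'   # map the sentence plan into a sentence

print(text_realization(sentence_planning(text_planning(knowledge_base, "patient_42"))))
# Rima has a high temperature.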

NLU is harder than NLG.
DIFFICULTIES IN NLU
NL has an extremely rich form and structure.
It is very ambiguous. There can be different levels of ambiguity −
 Lexical ambiguity − It is at a very primitive level, such as the word level.
 For example, should the word “board” be treated as a noun or a verb?
 Syntax-level ambiguity − A sentence can be parsed in different ways.
 For example, “He lifted the beetle with red cap.” − Did he use a cap to lift the
beetle, or did he lift a beetle that had a red cap?
 Referential ambiguity − Referring to something using pronouns. For example,
Rima went to Gauri. She said, “I am tired.” − Exactly who is tired?
 One input can have different meanings.
 Many inputs can mean the same thing.
NLP TERMINOLOGY
 Phonology − It is the study of organizing sound systematically.
 Morphology − It is the study of the construction of words from primitive meaningful
units.
 Morpheme − It is a primitive unit of meaning in a language.
 Syntax − It refers to arranging words to make a sentence. It also involves
determining the structural role of words in the sentence and in phrases.
 Semantics − It is concerned with the meaning of words and how to combine words
into meaningful phrases and sentences.
 Pragmatics − It deals with using and understanding sentences in different
situations and how the interpretation of the sentence is affected.
 Discourse − It deals with how the immediately preceding sentence can affect the
interpretation of the next sentence.
 World Knowledge − It includes the general knowledge about the world.
NLP TASKS
 Human language is filled with ambiguities that make it incredibly difficult to
write software that accurately determines the intended meaning of text or
voice data. Homonyms, homophones, sarcasm, idioms, metaphors,
grammar and usage exceptions, variations in sentence structure: these are
just a few of the irregularities of human language that take humans years
to learn, but that programmers must teach natural language-driven
applications to recognize and understand accurately from the start, if those
applications are going to be useful.
 Several NLP tasks break down human text and voice data in ways that help the computer
make sense of what it's ingesting. Some of these tasks include the following:
 Speech recognition, also called speech-to-text, is the task of reliably converting voice
data into text data. Speech recognition is required for any application that follows voice
commands or answers spoken questions. What makes speech recognition especially
challenging is the way people talk—quickly, slurring words together, with varying
emphasis and intonation, in different accents, and often using incorrect grammar.
 Part of speech tagging, also called grammatical tagging, is the process of determining
the part of speech of a particular word or piece of text based on its use and context. Part
of speech identifies ‘make’ as a verb in ‘I can make a paper plane,’ and as a noun in
‘What make of car do you own?’
 Word sense disambiguation is the selection of the meaning of a word with multiple
meanings through a process of semantic analysis that determines the meaning that makes
the most sense in the given context. For example, word sense disambiguation helps
distinguish the meaning of the verb 'make' in ‘make the grade’ (achieve) vs. ‘make a bet’
(place).
 Named entity recognition, or NER, identifies words or phrases as useful
entities. NER identifies ‘Kentucky’ as a location or ‘Fred’ as a man's name.
 Co-reference resolution is the task of identifying if and when two words
refer to the same entity. The most common example is determining the
person or object to which a certain pronoun refers (e.g., ‘she’ = ‘Mary’),
but it can also involve identifying a metaphor or an idiom in the text (e.g.,
an instance in which 'bear' isn't an animal but a large hairy person).
 Sentiment analysis attempts to extract subjective qualities—attitudes,
emotions, sarcasm, confusion, suspicion—from text.
 Natural language generation is sometimes described as the opposite of
speech recognition or speech-to-text; it's the task of putting structured
information into human language.
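
Several of the tasks above can be tried directly in Python with NLTK; a minimal sketch is shown below (it assumes the listed NLTK data packages are downloaded, and package names may differ slightly between NLTK versions):

# Sketch of a few NLP tasks with NLTK: POS tagging, word sense disambiguation,
# named entity recognition and sentiment analysis.
import nltk
from nltk.wsd import lesk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

for pkg in ["punkt", "averaged_perceptron_tagger", "wordnet",
            "maxent_ne_chunker", "words", "vader_lexicon"]:
    nltk.download(pkg, quiet=True)

sentence = "Fred can make a paper plane in Kentucky"
tokens = nltk.word_tokenize(sentence)

tagged = nltk.pos_tag(tokens)                  # part of speech tagging: 'make' comes out as a verb
print(tagged)

print(lesk(tokens, "make"))                    # word sense disambiguation (Lesk algorithm)

print(nltk.ne_chunk(tagged))                   # named entity recognition: 'Fred', 'Kentucky'

analyzer = SentimentIntensityAnalyzer()        # sentiment analysis (VADER lexicon)
print(analyzer.polarity_scores("I love this paper plane!"))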
STEPS IN NLP
 There are five general steps −
 Lexical Analysis − It involves identifying and analyzing the structure of
words. The lexicon of a language is the collection of words and phrases in
that language. Lexical analysis divides the whole chunk of text into
paragraphs, sentences, and words.
 Syntactic Analysis (Parsing) − It involves analyzing the words in the
sentence for grammar and arranging them in a manner that shows the
relationships among the words. A sentence such as “The school goes to
boy” is rejected by an English syntactic analyzer.
 Semantic Analysis − It draws the exact meaning or the dictionary meaning
from the text. The text is checked for meaningfulness. This is done by mapping
syntactic structures onto objects in the task domain. The semantic analyzer
disregards sentences such as “hot ice-cream”.
 Discourse Integration − The meaning of any sentence depends upon the
meaning of the sentence just before it. It also affects the
meaning of the immediately succeeding sentence.
 Pragmatic Analysis − During this step, what was said is re-interpreted in terms of
what it actually meant. It involves deriving those aspects of language which
require real-world knowledge.
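
As a toy illustration of syntactic analysis, the tiny hand-written grammar below accepts "the boy goes to the school" but finds no parse for "the school goes to boy", because "boy" appears without a determiner (the miniature grammar is invented for illustration and covers only these words):

# Toy syntactic analysis (parsing) with an NLTK context-free grammar.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> Det N
    VP -> V PP
    PP -> P NP
    Det -> 'the'
    N  -> 'boy' | 'school'
    V  -> 'goes'
    P  -> 'to'
""")
parser = nltk.ChartParser(grammar)

for sentence in (["the", "boy", "goes", "to", "the", "school"],
                 ["the", "school", "goes", "to", "boy"]):
    trees = list(parser.parse(sentence))
    print(" ".join(sentence), "->", "accepted" if trees else "rejected")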
NLP TOOLS AND
APPROACHES
 Python and the Natural Language Toolkit (NLTK)
 The Python programming language provides a wide range of tools and
libraries for tackling specific NLP tasks. Many of these are found in the
Natural Language Toolkit, or NLTK, an open source collection of libraries,
programs, and education resources for building NLP programs.
 The NLTK includes libraries for many of the NLP tasks listed above, plus
libraries for subtasks, such as sentence parsing, word segmentation,
stemming and lemmatization (methods of trimming words down to their
roots), and tokenization (for breaking phrases, sentences, paragraphs and
passages into tokens that help the computer better understand the text). It
also includes libraries for implementing capabilities such as semantic
reasoning, the ability to reach logical conclusions based on facts extracted
from text.
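
A small sketch of those subtasks with NLTK (it assumes the punkt and wordnet data packages are downloaded; package names may vary slightly between NLTK versions):

# NLTK subtasks: sentence/word tokenization, stemming and lemmatization.
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)

text = "The leaves are falling. Studies show the machines were studying us."
sentences = nltk.sent_tokenize(text)           # break the text into sentences
tokens = nltk.word_tokenize(text)              # break the sentences into word tokens

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
for word in ["leaves", "studies", "studying", "machines"]:
    # stemming trims words crudely; lemmatization maps them to dictionary roots
    print(word, "->", stemmer.stem(word), "/", lemmatizer.lemmatize(word))

print(len(sentences), "sentences,", len(tokens), "tokens")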
ARTIFICIAL INTELLIGENCE FOR AFRICA
 In Africa, AI can help with some of the region’s most pervasive problems:
from reducing poverty and improving education, to delivering healthcare,
eradicating diseases and addressing sustainability challenges, and from
meeting the growing demand for food from a fast-growing population to
advancing inclusion in societies. AI democratises access to innovative and
productivity-boosting technology to fuel the growth the continent needs.
AI SOLUTIONS FOR
AGRICULTURE
 FarmDrive — The Kenyan data analysis startup is an alternative credit
scoring platform for smallholder farmers. It uses mobile phones, alternative
data, and machine learning to close the critical data gap that prevents
financial institutions from lending to creditworthy smallholder farmers.
 See & Spray — Blue River Technology has built “smart farm” machines to
manage crops at the plant level. Today, the best practice is to treat all plants
as if they have the same needs. However, their See & Spray technology
changes this paradigm, empowering growers to make every individual
plant count at scale. Using computer vision and AI, their smart machines
can detect, identify, and make management decisions about every single
plant in the field.
AI SOLUTIONS FOR
HEALTHCARE
 Seeing AI — Microsoft’s project is designed to help the blind and low vision
community by harnessing the
power of AI to turn the visual world into an audible experience. The Seeing
AI intelligent camera app allows
users to hear information about the world around them just by holding up
their phones as it can describe
people, text, currency, colour, and objects. It can speak short text as soon
as it appears in front of the
camera, provide audio guidance to capture a printed page, and recognize
and narrate the text along with its
original formatting. The app can also scan barcodes with guided audio cues
to identify products, recognize
and describe people and their facial expressions, as well as describe
scenes using the power of AI. Seeing
AI is an ongoing project that keeps developing new abilities.
 Corti — A Danish machine learning company that provides accurate
diagnostic support to emergency
services, allowing patients to get the right treatment faster. It helps
emergency medical dispatchers make
life-saving decisions by identifying patterns of anomalies or conditions of
interest with a high level of speed
and accuracy. In the case of out-of-hospital cardiac arrest (OHCA), the
technology can reduce the number
of undetected OHCAs by more than 50 percent.
AI SOLUTIONS FOR
GOVERNMENT
 Spatial Wave — The Microsoft CityNext Partner designed SANSTAR for the
Los Angeles Bureau of Sanitation
(LASAN) using Microsoft Azure cloud services. The smartphone application
is used by truck drivers to map
and record their daily routes and by citizens to report clean-up issues. The
mobile app allows drivers to
complete their routes faster and respond to more customer requests.
AI SOLUTIONS FOR
FINANCIAL SERVICES
 Zenith Bank Plc — Located in Nigeria, Zenith launched several new
solutions that enable more
convenient, safe and quick customer transactions. These include the
bank’s Scan to Pay App which can
be used by Zenith and non-Zenith customers to make online and in-store
payments in seconds through
quick response (QR) code scanning on any internet-enabled phone. The bank’s
mobile app also offers
enhanced functionalities such as instant account opening for new
customers.
 ALAT — Africa’s first fully digital bank, launched in May 2017 by Wema
Bank in Nigeria. ALAT targets
the youth segment based on the three pillars of convenience, simplicity,
and reliability. Customers can
open an account via mobile phone or Internet in under five minutes and
debit cards are delivered
anywhere in Nigeria within two to three days, free of charge. ALAT also
promises “no paperwork”:
photos of KYC documents can be uploaded via mobile app or website.
 Strider — A South African fintech company that provides a toolbox of
platforms that banks and financial institutions can rapidly white-label
in order to provide financial education and meaningful services to new
and existing clients.
AI SOLUTIONS FOR
EDUCATION
 Georgia Tech University (GTU) – GTU developed “Jill Watson”, an AI
teaching assistant based on IBM's
Watson platform. The system was developed specifically to handle the
high number of forum posts
by students enrolled in an online course that is a requirement for GTU's
online master of science in
computer science program. It attained a 97% accuracy rate in answering
student queries – and
according to reports by GTU, most students were unaware that “Jill Watson”
was not a real person.
 ETS – At the company Educational Testing Service, education experts
use the e-rater engine to identify
specific features indicative of writing proficiency in student essays to score
more efficiently and offer
better feedback. The e-rater engine provides a holistic score for an essay
as well as real-time diagnostic
feedback about grammar, usage, mechanics, style and organization, and
development. This feedback
is based on natural language processing research specifically tailored to
the analysis of student
responses. Teachers use the tool to help their students to develop their
writing skills independently
and receive automated, constructive feedback. Equally, students use the
engine's feedback to evaluate
their essay-writing skills as well as to identify areas that need improvement.
