BI Unit 5-1

The document outlines the principles of Knowledge Management (KM) and its integration with Artificial Intelligence (AI) and Expert Systems, detailing the KM process, metrics, organizational culture, and maturity models. It emphasizes the importance of collecting, organizing, summarizing, analyzing, synthesizing, and utilizing knowledge for effective decision-making. Additionally, it discusses the implementation challenges of KM systems and provides a structured approach for organizations to enhance their knowledge management capabilities.

5 Knowledge Management and

Artificial Intelligence and


Expert Systems
Syllabus
Knowledge Management : Knowledge Management Metrics, Organizational Culture - Types and Analysis,
Organizational Maturity Model.
Artificial Intelligence and Expert Systems : Concepts and Definitions of Artificial Intelligence, Artificial Intelligence
versus Natural Intelligence, Machine Learning - Data Distribution, Machine Learning Process, Tools,
TensorFlow.

5.1 Introduction to Knowledge Management


Knowledge management is an activity practised by enterprises all over the world. In the process of knowledge
management, these enterprises comprehensively gather information using many methods and tools.
Then, the gathered information is organized, stored, shared, and analysed using defined techniques. The analysis of
such information is based on resources, documents, people and their skills.
Properly analysed information is then stored as the 'knowledge' of the enterprise. This knowledge is later used
for activities such as organizational decision making and training new staff members.
There have been many approaches to knowledge management since the early days. Most early approaches relied on
manual storing and analysis of information.
With the introduction of computers, most organizational knowledge and management processes have been
automated.

Therefore, information storing, retrieval and sharing have become convenient. Nowadays, most enterprises have
their own knowledge management framework in place.
The framework defines the knowledge gathering points, gathering techniques, tools used, data storage tools and
techniques, and analysis mechanisms.

5.1.1 The Knowledge Management Process

The process of knowledge management is universal for any enterprise. Sometimes, the resources used, such as
tools and techniques, can be unique to the organizational environment.
The Knowledge Management process has six basic steps assisted by different tools and techniques. When these
steps are followed sequentially, the data transforms into knowledge.
Business Intelligence and Data Analytics 5-2 Knowledge Mgmt. & AI & Expert Systems

Fig. 5.1.1 : The six steps of knowledge management - Collecting (data), Organizing, Summarizing (information), Analyzing, Synthesizing (knowledge), and Decision Making

Step 1: Collecting
This is the most important step of the knowledge management process. If you collect incorrect or irrelevant
data, the resulting knowledge may not be accurate. Therefore, the decisions made based on such knowledge could
be inaccurate as well.
There are many methods and tools used for data collection. First of all, data collection should be a defined
procedure in the knowledge management process. These procedures should be properly documented and followed by
the people involved in the data collection process.
The data collection procedure defines certain data collection points. Some points may be the summary of certain
routine reports. As an example, the monthly sales report and the daily attendance report may be two good
resources for data collection.
With the data collection points, the data extraction techniques and tools are also defined. As an example, the
sales report may be a paper-based report where a data entry operator needs to feed the data manually into a
database, whereas the daily attendance report may be an online report that is stored directly in the database.
In addition to data collection points and extraction mechanisms, data storage is also defined in this step. Most
organizations now use a software database application for this purpose.
Step 2 : Organizing
The collected data needs to be organized. This organization usually happens based on certain rules. These rules
are defined by the organization.
As an example, all sales-related data can be filed together and all staff-related data can be stored in the same
database table. This type of organization helps to maintain data accurately within a database.
If there is much data in the database, techniques such as 'normalization' can be used for organizing it and
reducing duplication.
This way, data is logically arranged and related to one another for easy retrieval. When data passes step 2, it
becomes information.
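The normalization mentioned above can be sketched in code. The following is a minimal illustration, not a database implementation: the table layout, customer names, and amounts are all hypothetical, and the idea shown is simply splitting repeated customer details out of a flat sales table into a separate table keyed by an ID.

```python
# Minimal sketch of normalization: duplicated customer details from a
# flat sales table are stored once, and sales rows reference them by ID.
# All table and field names here are hypothetical.

flat_sales = [
    {"sale_id": 1, "customer": "Acme", "city": "Pune", "amount": 500},
    {"sale_id": 2, "customer": "Acme", "city": "Pune", "amount": 300},
    {"sale_id": 3, "customer": "Zenith", "city": "Mumbai", "amount": 900},
]

customers = {}   # customer_id -> customer record (stored only once)
sales = []       # sale rows now carry a customer_id instead of full details
ids = {}         # customer name -> assigned id

for row in flat_sales:
    if row["customer"] not in ids:
        cid = len(ids) + 1
        ids[row["customer"]] = cid
        customers[cid] = {"name": row["customer"], "city": row["city"]}
    sales.append({"sale_id": row["sale_id"],
                  "customer_id": ids[row["customer"]],
                  "amount": row["amount"]})

print(customers)
print(sales)
```

After this split, the customer details exist in one place, so a change of city is one update rather than one per sale row.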

Step 3: Summarizing
In this step, the information is summarized in order to extract its essence. Lengthy information is
presented in tabular or graphical format and stored appropriately.
For summarizing, there are many tools that can be used, such as software packages, charts (Pareto, cause-and-effect),
and different techniques.
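A summarizing step of this kind can be sketched with the standard library alone. The defect categories below are invented for illustration; the point is that a long log is condensed into category counts sorted for a Pareto-style view.

```python
from collections import Counter

# Hypothetical defect log; summarizing condenses it to its essence:
# counts per category, sorted largest-first as a Pareto chart would be.
defects = ["billing", "shipping", "billing", "packaging", "billing", "shipping"]

summary = Counter(defects).most_common()
for category, count in summary:
    print(f"{category:<10} {count}")
```

The sorted counts are exactly the data a Pareto chart plots: the few categories that account for most occurrences appear first.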

TechKnowledge Publications
Step 4: Analyzing
At this stage, the information is analyzed in order to find relationships, redundancies and patterns.
An expert or an expert team should be assigned for this purpose, as the experience of the person/team plays a
vital role. Usually, reports are created after the analysis of information.
Step 5 : Synthesizing
At this point, information becomes knowledge. The results of analysis (usually the reports) are combined
together to derive various concepts and artefacts.
A pattern or behaviour of one entity can be applied to explain another, and collectively the organization will have
a set of knowledge elements that can be used across the organization.
This knowledge is then stored in the organizational knowledge base for further use. Usually, the knowledge base
is a software implementation that can be accessed from anywhere through the Internet. You can also buy such
knowledge base software or download an open-source implementation of the same for free.
Step 6: Decision Making
At this stage, the knowledge is used for decision making. As an example, when estimating a specific type of
project or task, the knowledge related to previous estimates can be used.
This accelerates the estimation process and adds high accuracy. This is how organizational knowledge
management adds value and saves money in the long run.
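Using previous estimates in this way can be sketched as a small calculation. The task names and hour figures below are invented; the sketch simply learns an average overrun factor from past estimate-versus-actual pairs and applies it to a new raw estimate.

```python
# Sketch: decision support from stored knowledge. Past estimates and
# actuals (hypothetical numbers) calibrate a new estimate.
history = [
    {"task": "api-module", "estimated_hours": 40, "actual_hours": 52},
    {"task": "ui-module",  "estimated_hours": 30, "actual_hours": 36},
]

# Average overrun factor learned from the organizational knowledge base.
factor = sum(h["actual_hours"] / h["estimated_hours"] for h in history) / len(history)

def calibrated_estimate(raw_hours):
    """Adjust a raw estimate by the historical overrun factor."""
    return round(raw_hours * factor, 1)

print(calibrated_estimate(20))  # 40->52 and 30->36 give factor 1.25, so 25.0
```

The more estimate/actual pairs the knowledge base holds, the more stable this factor becomes; that is the sense in which accumulated knowledge accelerates and sharpens estimation.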

S.1.2 Roles of People in Knowledge Management


People are ultimately the holders of knowledge. The goal is to encourage them not only to search for knowledge and
improve it for application to internal processes, but also to make them see the benefits of sharing it with
the organization. In this context it is important :
1. To give people autonomy in their jobs and to find new ways to fulfil them.
2. To provide proper knowledge storage and sharing systems.
3. To empower them and continually train them.
4. To keep them motivated.
5. To give them adequate remuneration, to ensure their commitment.
The manager should always be aware of the fact that decisions made by people can affect the entire organization.
That's why their motivation is crucial; it is what will make employees share with colleagues the knowledge they
accumulate in their activities in the company.
The worst that can happen is to lose that talent to the competition, along with everything they have learned.

5.2 Knowledge Management Metrics


Knowledge Management (KM) is the process of capturing, organizing, sharing, and analyzing knowledge to
improve decision-making and operational efficiency. In the context of Business Intelligence (BI) and Data
Analytics, KM metrics measure how effectively organizations use their knowledge assets (data, insights, and
expertise) to drive business outcomes.
These metrics bridge the gap between raw data (analytics) and actionable insights (intelligence), enabling
organizations to evaluate and optimize their knowledge-driven processes.
KM metrics help evaluate the success and effectiveness of KM initiatives. These metrics can be broadly
categorized as follows :
1. Quantitative Metrics
Usage Metrics : Track the frequency of knowledge resource access, the number of logins to KM platforms, and
the time spent using KM tools.
Content Metrics : Measure the volume of content added, edited, or deleted. Also includes the number of
knowledge repositories created.
Collaboration Metrics : Evaluate participation in forums, contributions to wikis, or team discussions.
2. Qualitative Metrics
Employee Feedback : Use surveys and interviews to gauge employee satisfaction with KM systems.
Knowledge Impact : Measure the effectiveness of shared knowledge in improving decision-making and innovation.
3. Business Impact Metrics
Efficiency Gains : Reduction in project cycle times or time spent locating information.
Financial Metrics : Cost savings due to reduced duplication of effort, faster onboarding, or better client
outcomes.
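The quantitative usage metrics above can be sketched as a small computation over an access log. The log below is entirely hypothetical (user names, dates, and minutes are invented); the sketch shows how sessions, distinct users, and time spent would be derived from such a record.

```python
from datetime import date

# Hypothetical access log for a KM platform: (user, day, minutes spent).
log = [
    ("asha", date(2024, 1, 1), 12),
    ("asha", date(2024, 1, 2), 8),
    ("ravi", date(2024, 1, 1), 20),
]

logins = len(log)                                 # usage metric: number of sessions
active_users = len({user for user, _, _ in log})  # usage metric: distinct users
total_minutes = sum(m for _, _, m in log)         # usage metric: time in KM tools

print(logins, active_users, total_minutes)
```

Tracked over successive periods, these three numbers already answer the basic usage questions: is the platform being opened, by how many people, and for how long.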
5.2.1 Organizational Culture - Types and Analysis
Organizational culture plays a crucial role in enabling or hindering knowledge-sharing practices. The following
are common cultural types based on the Competing Values Framework :
1. Clan Culture (Collaborative)
Characteristics : Team-oriented, empathetic, and focused on employee engagement.
KM Implication : Promotes open knowledge-sharing and collective problem-solving.
2. Adhocracy Culture (Innovative)
Characteristics : Creativity-focused, agile, and risk-tolerant.
KM Implication : Encourages innovation through shared ideas and experimentation.
3. Market Culture (Competitive)
Characteristics : Results-driven, competitive, and performance-focused.
KM Implication : Knowledge sharing is used as a strategic tool to outperform competitors.
4. Hierarchy Culture (Controlled)
Characteristics : Formal, process-oriented, and structured.
KM Implication : Knowledge sharing occurs in a top-down, formalized manner.
5.2.2 Culture Analysis for Knowledge Management
Surveys & Interviews : Identify knowledge-sharing behaviors and barriers.
Behavioral Patterns : Analyze how employees collaborate (informally or formally).
Gap Analysis : Compare the existing culture with the desired KM-enabled culture.
Organizations can bridge cultural gaps by encouraging trust, creating incentives for sharing knowledge, and
integrating KM into daily workflows.
5.2.3 Organizational Maturity Model

An organizational maturity model is a framework to assess and improve an organization's capabilities in BI and
data analytics. It describes stages of development and provides guidance for advancing capabilities.
Stages of the Maturity Model
1. Initial (Ad Hoc)
Features : Data use is informal and unstructured; tools are limited.
Challenges : Inefficiencies and inconsistencies in reporting.
2. Repeatable (Defined Processes)
Features : Basic BI tools are introduced, and some processes are standardized.
Benefits : Reliable data reporting for specific departments.
3. Defined (Integrated Systems)
Features : Organization-wide analytics systems and consistent data governance.
Benefits : Reliable and shared insights for decision-making.
4. Managed (Advanced Analytics)
Features : Use of predictive analytics, machine learning, and automation.
Benefits : Proactive decision-making supported by data insights.
5. Optimized (Transformative Analytics)
Features : Analytics drives innovation and strategic differentiation.
Benefits : Real-time insights, AI-driven decisions, and measurable business transformation.
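The five stages can be turned into a simple scoring sketch. The checklist approach here is an assumption for illustration, not a standard assessment instrument: each stage's capabilities are reduced to a single yes/no indicator, and the organization sits at the highest contiguous stage whose indicator is met.

```python
# Sketch: mapping capability indicators to the five maturity stages above.
# The one-boolean-per-stage checklist is a deliberate simplification.
STAGES = ["Initial", "Repeatable", "Defined", "Managed", "Optimized"]

def maturity_stage(indicators):
    """indicators: ordered booleans, one per stage's capabilities being met.
    Returns the highest contiguous stage reached (stage 1 at minimum)."""
    stage = 1
    for i, met in enumerate(indicators, start=1):
        if met:
            stage = i
        else:
            break  # a gap at an earlier stage caps the maturity level
    return stage, STAGES[stage - 1]

# e.g. standardized processes and integrated systems, but no predictive analytics yet
print(maturity_stage([True, True, True, False, False]))
```

Requiring the stages to be contiguous reflects the model's intent: advanced analytics built on ad hoc data practices does not make an organization "Managed".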

5.3 Knowledge Management Systems Implementation

Steps to Implementation
Implementing a knowledge management program is no easy feat. You will encounter many challenges along the
way including many of the following:
Inability to recognize or articulate knowledge; turning tacit knowledge into explicit knowledge.
Geographical distance and/or language barriers in an international company.
Limitations of information and communication technologies.
Loosely defined areas of expertise.
Internal conflicts (e.g., professional territoriality).
Lack of incentives or performance management goals.
Poor training or mentoring programs.
Cultural barriers (e.g. "this is how we've always done it" mentality).
The following eight-step approach will enable you to identify these challenges so you can plan for them, thus
minimizing the risks and maximizing the rewards. This approach was developed based on logical, tried-and-true
activities for implementing any new organizational program. The early steps involve strategy, planning, and
requirements gathering while the later steps focus on execution and continual improvement.

Step 1 : Establish Knowledge Management Program Objectives
Before selecting a tool, defining a process, and developing workflows, you should envision and articulate the end
state. In order to establish the appropriate program objectives, identify and document the business problems
that need resolution and the business drivers that will provide momentum and justification for the endeavor.
Provide both short-term and long-term objectives that address the business problems and support the business
drivers. Short-term objectives should seek to provide validation that the program is on the right path, while
long-term objectives will help to create and communicate the big picture.
Step 2: Prepare for Change
Knowledge management is more than just an application of technology. It involves cultural changes in the way
employees perceive and share the knowledge they develop or possess.
One common cultural hurdle to increasing the sharing of knowledge is that companies primarily reward
individual performance.
This practice promotes a "knowledge is power" behavior that contradicts the desired knowledge-driven,
knowledge-sharing culture end state you are after.
Successfully implementing a new knowledge management program may require changes within the
organization's norms and shared values; changes that some people might resist or even attempt to quash.
To minimize the negative impact of such changes, it's wise to follow an established approach for managing
cultural change.
Step 3: Define High-Level Process
To facilitate the effective management of your organization's knowledge assets, you should begin by laying out a
high-level knowledge management process.
The process can be progressively developed with detailed procedures and work instructions throughout steps
four, five, and six. However, it should be finalized and approved prior to step seven (implementation).
Organizations that overlook or loosely define the knowledge management process will not realize the full
potential of their knowledge management objectives.
How knowledge is identified, captured, categorized, and disseminated will be ad hoc at best. There are a number
of knowledge management best practices, all of which comprise similar activities.
In general, these activities include knowledge strategy, creation, identification, classification, capture, validation,
transfer, maintenance, archival, measurement, and reporting.
Step 4: Determine and Prioritize Technology Needs
Depending on the program objectives established in step one and the process controls and criteria defined in
step three, you can begin to determine and prioritize your knowledge management technology needs.
With such a variety of knowledge management solutions, it is imperative to understand the cost and benefit of
each type of technology and the primary technology providers in the marketplace.
Don't be too quick to purchase a new technology without first determining if your existing technologies can meet
your needs.
You can also wait to make costly technology decisions until after the knowledge management program is well
underway, if there is broad support and a need for enhanced computing and automation.

Step 5: Assess Current State
Now that you've established your program objectives to solve your business problem, prepared for change to
address cultural issues, defined a high-level process to enable the effective management of your knowledge
assets, and determined and prioritized the technology needs that will enhance and automate knowledge-management-related
activities, you are in a position to assess the current state of knowledge management in your organization.
The knowledge management assessment should cover all five core knowledge management components : people,
processes, technology, structure, and culture.
A typical assessment should provide an overview of the assessment, the gaps between current and desired
states, and the recommendations for attenuating identified gaps. The recommendations will become the
foundation for the roadmap in step six.
Step 6 : Build a Knowledge Management Implementation Roadmap
With the current-state assessment in hand, it is time to build the implementation roadmap for your knowledge
management program.
But before going too far, you should re-confirm senior leadership's support and commitment, as well as the
funding to implement and maintain the knowledge management program.
Without these prerequisites, your efforts will be futile. Having solid evidence of your organization's
shortcomings, via the assessment, should drive the urgency rate up.
Having a strategy on how to overcome the shortcomings will be critical in gaining leadership's support and
getting the funding you will need.
This strategy can be presented as a roadmap of related projects, each addressing specific gaps identified by the
assessment.

The roadmap can span months and years and illustrate key milestones and dependencies. A good roadmap will
yield some short-term wins in the first step of projects, which will bolster support for subsequent steps.
As time progresses, continue to review and evolve the roadmap based upon the changing economic conditions
and business drivers.

You will undoubtedly gain additional insight through the lessons learned from earlier projects that can be
applied to future projects as well.
Step 7: Implementation
Implementing a knowledge management program and maturing the overall effectiveness of your organization
will require significant personnel resources and funding.
Be prepared for the long haul, but at the same time, ensure that incremental advances are made and publicized.
As long as there are recognized value and benefits, especially in light of ongoing successes, there should be little
resistance to continued knowledge management investments.
With that said, it's time for the rubber to meet the road. You know what the objectives are. You have properly
mitigated all cultural issues.
You've got the processes and technologies that will enable and launch your knowledge management program.
You know what the gaps are and have a roadmap to tell you how to address them.
As you advance through each step of the roadmap, make sure you are realizing your short-term wins. Without
them, your program may lose momentum and the support of key stakeholders.


Step 8: Measure and Improve the Knowledge Management Program


How will you know your knowledge management investments are working? You will need a way of measuring
your actual effectiveness and comparing that to anticipated results.
If possible, establish some baseline measurements in order to capture the "before" shot of the organization's
performance prior to implementing the knowledge management program.
Then, after implementation, trend and compare the new results to the old results to see how performance has
improved.
Don't be disillusioned if the delta is not as large as you would have anticipated. It will take time for the
organization to become proficient with the new processes and improvements. Over time, the results should
follow suit.
When deciding upon the appropriate metrics to measure your organization's progress, establish a balanced
scorecard that provides metrics in the areas of performance, quality, compliance, and value.
The key point behind establishing a knowledge management balanced scorecard is that it provides valuable
insight into what's working and what's not.
You can then take the necessary actions to mitigate compliance, performance, quality, and value gaps, thus
improving overall efficacy of the knowledge management program.
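A balanced-scorecard comparison of this kind can be sketched as a baseline-versus-current calculation. The four areas come from the text above, but the scores are invented for illustration; areas with no improvement are flagged as gaps needing action.

```python
# Sketch of a KM balanced scorecard: baseline vs. current scores in the
# four areas named above. All numbers are hypothetical.
baseline = {"performance": 60, "quality": 70, "compliance": 80, "value": 50}
current  = {"performance": 75, "quality": 77, "compliance": 80, "value": 65}

deltas = {area: current[area] - baseline[area] for area in baseline}
gaps = [area for area, d in deltas.items() if d <= 0]  # areas needing action

print(deltas)
print("gaps:", gaps)
```

The deltas show where the program is working; the gap list is the input to the mitigation actions the text describes.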
5.4 Introduction to Artificial Intelligence
Since the invention of computers or machines, their capability to perform various tasks has grown
exponentially.
Humans have developed the power of computer systems in terms of their diverse working domains, their
increasing speed, and their reducing size with respect to time.
A branch of Computer Science named Artificial Intelligence pursues creating computers or machines that are as
intelligent as human beings.
According to the father of Artificial Intelligence, John McCarthy, it is "The science and engineering of making
intelligent machines, especially intelligent computer programs".
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a software think
intelligently, in a manner similar to how intelligent humans think.
AI is accomplished by studying how the human brain thinks, and how humans learn, decide, and work while trying
to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and
systems.

5.5 Differences between Artificial Intelligence and Human Intelligence


Intelligence can be defined as a general mental ability for reasoning, problem-solving, and learning. Because of
its general nature, intelligence integrates cognitive functions such as perception, attention, memory, language, or
planning.
On the basis of this definition, intelligence can be reliably measured by standardized tests, with obtained scores
predicting several broad social outcomes such as educational achievement, job performance, health, and
longevity. So let's study the differences between Artificial Intelligence and Human Intelligence in detail.

Artificial Intelligence

Artificial Intelligence is the study and design of intelligent agents. These intelligent agents have the ability to
analyze their environments and produce actions which maximize success.
AI research uses tools and insights from many fields, including computer science, psychology, philosophy,
neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability,
optimization and logic.
AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech
recognition, facial recognition and many others.
Human Intelligence
Human Intelligence is defined as the quality of the mind that is made up of capabilities to learn from past
experience, adapt to new situations, handle abstract ideas, and use the gained knowledge to change one's own
environment.
Human Intelligence can provide several kinds of information. It can provide observations during travel or other
events from travellers, refugees, escaped friendly POWs, etc.
It can provide data on things about which the subject has specific knowledge, which can be another human
subject, or, in the case of defectors and spies, sensitive information to which they had access. Finally, it can
provide information on interpersonal relationships and networks of interest.
Key Differences between Artificial Intelligence and Human Intelligence
Below is a list of points describing the key differences between Artificial Intelligence and Human Intelligence.
Fig. 5.5.1 : Key Differences between Artificial Intelligence and Human Intelligence (1. Nature of Existence, 2. Memory usage, 3. Mode of creation, 4. Learning process, 5. Dominance)
1. Nature of Existence

Human intelligence revolves around adapting to the environment using a combination of several cognitive
processes. The field of Artificial intelligence focuses on designing machines that can mimic human behaviour.
2. Memory usage

Humans use content memory and thinking, whereas robots use built-in instructions designed by scientists.

3. Mode of creation
Human intelligence is bigger because it is a creation of God, whereas artificial intelligence, as the name suggests,
is artificial and temporary, created by humans. Also, human intelligence is the real creator of artificial
intelligence, yet humans cannot create a human being with superior intelligence.
4. Learning process
Human intelligence is based on the variants people encounter in life and the responses they get, which may
result in millions of functions overall in their lives.
However, artificial intelligence is defined or developed for specific tasks only, and its applicability to
other tasks may not be easily possible.
5. Dominance
Artificial intelligence can beat human intelligence in some specific areas. For example, in Chess, a
supercomputer has beaten human players because it can store all the moves played by all human players so far
and can think 10 moves ahead, whereas human players who can think 10 steps ahead cannot store and retrieve
that number of moves in Chess.
Table 5.5.1
Sr. No. | Comparison Factor | Human Intelligence | Artificial Intelligence
1 | Energy efficiency | 25 watts (human brain) | 2 watts (modern machine learning machine)
2 | Universal | Humans usually learn how to manage hundreds of different skills during life. | While consuming kilowatts of energy, a machine is usually designed for only a few tasks.
3 | Multitasking | A human worker works on multiple responsibilities. | The time needed to teach the system each and every response is considerably high.
4 | Decision making | Humans have the ability to learn decision making from experienced scenarios. | Even the most advanced robots can hardly compete in mobility with a 6-year-old child, and this is the result after 60 years of research and development.
5 | State | Brains are analogue. | Computers are digital.

5.6 Basic Concepts of Expert Systems


Expert Systems (ES) are one of the prominent research domains of AI. They were introduced by researchers at
Stanford University's Computer Science Department.
Expert systems are computer applications developed to solve complex problems in a particular domain, at a
level of extraordinary human intelligence and expertise.
Characteristics of Expert Systems
High performance.
Understandable.
Reliable.

Highly responsive.

Capabilities of Expert Systems


The expert systems are capable of:
Advising.
Instructing and assisting humans in decision making.
Demonstrating.
Deriving a solution.
Diagnosing.
Explaining.
Interpreting input.
Predicting results.
Justifying the conclusion.
Suggesting alternative options to a problem.
Incapabilities of Expert Systems
They are incapable of :
Substituting human decision makers.
Possessing human capabilities.
Producing accurate output from an inadequate knowledge base.
Refining their own knowledge.

5.7 Components of Expert Systems

The components of ES include:


Knowledge Base.
Inference Engine.
User Interface.
Let us see them one by one briefly:

Fig. 5.7.1 : Components of an Expert System - the Knowledge Engineer acquires knowledge from the Human Expert and stores it in the Knowledge Base; the Inference Engine draws on the Knowledge Base; the User Interface serves the user (who may not be an expert)

5.7.1 Knowledge Base
It contains domain-specific and high-quality knowledge.
Knowledge is required to exhibit intelligence. The success of any ES majorly depends upon the collection of
highly accurate and precise knowledge.
Data is a collection of facts. Information is data organized as facts about the task domain. Data, information, and
past experience combined together are termed as knowledge.
5.7.1(A) Components of Knowledge Base
The knowledge base of an ES is a store of both, factual and heuristic knowledge.
Factual Knowledge : It is the information widely accepted by the Knowledge Engineers and scholars in the
task domain.
Heuristic Knowledge : It is about practice, accurate judgement, one's ability of evaluation, and guessing.
Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge base. It is in the form of
IF-THEN-ELSE rules.
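A single IF-THEN-ELSE rule of this form can be sketched in code. The loan-screening domain, fact names, and outcomes below are all hypothetical; the sketch only illustrates the shape of the representation.

```python
# Sketch of IF-THEN-ELSE knowledge representation: a rule tests known
# facts and yields a conclusion. Domain and fact names are hypothetical.
def loan_rule(facts):
    """IF income is high AND credit is good THEN approve, ELSE refer to an expert."""
    if facts.get("income") == "high" and facts.get("credit") == "good":
        return "approve"
    else:
        return "refer_to_expert"

print(loan_rule({"income": "high", "credit": "good"}))
print(loan_rule({"income": "low", "credit": "good"}))
```

A real knowledge base holds many such rules; the inference engine's job, described next, is deciding which rules to apply and in what order.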
Knowledge acquisition
The success of any expert system majorly depends on the quality, completeness, and accuracy of the information
stored in the knowledge base.
The knowledge base is formed by readings from various experts, scholars, and Knowledge Engineers. The
knowledge engineer is a person with the qualities of empathy, quick learning, and case-analyzing skills.
He acquires information from the subject expert by recording, interviewing, and observing him at work, etc. He then
categorizes and organizes the information in a meaningful way, in the form of IF-THEN-ELSE rules, to be used by
the inference machine. The knowledge engineer also monitors the development of the ES.
5.7.2 Inference Engine
The use of efficient procedures and rules by the Inference Engine is essential in deducing a correct, flawless
solution.
In the case of knowledge-based ES, the Inference Engine acquires and manipulates the knowledge from the
knowledge base to arrive at a particular solution.
In the case of rule-based ES, it :
Applies rules repeatedly to the facts, which are obtained from earlier rule application.
Adds new knowledge into the knowledge base if required.
Resolves rule conflicts when multiple rules are applicable to a particular case.
To recommend a solution, the Inference Engine uses the following strategies :
1. Forward Chaining
2. Backward Chaining

1. Forward Chaining
It is a strategy of an expert system to answer the question, "What can happen next?"
Here, the Inference Engine follows the chain of conditions and derivations and finally deduces the outcome.
It considers all the facts and rules, and sorts them before concluding to a solution.
This strategy is followed for working on conclusion, result, or effect. For example, prediction of share market
status as an effect of changes in interest rates.
Fig. 5.7.2 : Forward chaining - facts are combined through AND/OR rules into intermediate decisions, leading to a final decision

2. Backward Chaining
With this strategy, an expert system finds out the answer to the question, "Why did this happen?"
On the basis of what has already happened, the Inference Engine tries to find out which conditions could have happened in the past for this result. This strategy is followed for finding out cause or reason. For example, diagnosis of blood cancer in humans.
Fig. 5.7.3 : Backward Chaining (decisions are traced back through AND/OR rules to the underlying facts)
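The two chaining strategies above can be illustrated with a toy rule engine. The sketch below is not part of any real ES shell; the rule set (a simplistic share-market chain) and the function names are invented purely for this example.

```python
# Toy rule engine illustrating forward and backward chaining.
# Each rule maps a set of premise facts to a conclusion (IF premises THEN conclusion).
RULES = [
    ({"interest_rates_fall"}, "borrowing_increases"),
    ({"borrowing_increases"}, "company_profits_rise"),
    ({"company_profits_rise"}, "share_market_rises"),
]

def forward_chain(facts):
    """Start from known facts and repeatedly fire rules until nothing new is deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Work backwards from a goal: the goal holds if it is a known fact,
    or if some rule concludes it and all of that rule's premises can be proven."""
    if goal in facts:
        return True
    for premises, conclusion in RULES:
        if conclusion == goal and all(backward_chain(p, facts) for p in premises):
            return True
    return False

print(forward_chain({"interest_rates_fall"}))
print(backward_chain("share_market_rises", {"interest_rates_fall"}))
```

Forward chaining here answers "given falling interest rates, what happens next?", while backward chaining proves (or fails to prove) a hypothesized outcome from the available facts.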

5.7.3 User Interface

The user interface provides interaction between the user of the ES and the ES itself. It generally uses Natural Language Processing so that it can be used by a user who is well-versed in the task domain.

The user of the ES need not necessarily be an expert in Artificial Intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear in the following forms:

Natural language displayed on screen.
Verbal narrations in natural language.
Listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.
Requirements of an Efficient ES User Interface
It should help users to accomplish their goals in the shortest possible way.
It should be designed to work for the user's existing or desired work practices.
Its technology should be adaptable to the user's requirements; not the other way round.
It should make efficient use of user input.
Expert system limitations
No technology can offer an easy and complete solution. Large systems are costly and require significant development time and computer resources. ESs have their limitations, which include:
Limitations of the technology.
Difficult knowledge acquisition.
ES are difficult to maintain.
High development costs.
5.8 Applications of Expert System
Table 5.8.1 shows where an ES can be applied.
Table 5.8.1

Application             | Description
Design Domain           | Camera lens design, automobile design.
Medical Domain          | Diagnosis systems to deduce the cause of disease from observed data, conducting medical operations on humans.
Monitoring Systems      | Comparing data continuously with the observed system or with prescribed behaviour, such as leakage monitoring in a long petroleum pipeline.
Process Control Systems | Controlling a physical process based on monitoring.
Knowledge Domain        | Finding out faults in vehicles, computers.
Finance/Commerce        | Detection of possible fraud, suspicious transactions, stock market trading, airline scheduling, cargo scheduling.
5.8.1 Expert System Technology


There are several levels of ES technologies available. Expert system technologies include:
Levels of ES Technologies

1. Expert System Development Environment


2. Tools

3. Shells

Fig. 5.8.1: Levels of ES Technologies


1. Expert System Development Environment
The ES development
environment includes hardware and tools.
They are :

Workstations, minicomputers, mainframes.


High-level symbolic programming languages such as LISt Processing (LISP) and PROgrammation en LOGique (PROLOG).
Large databases.
2. Tools
They reduce the effort and cost involved in developing an expert system to a large extent:
Powerful editors and debugging tools with multi-windows.
They provide rapid prototyping.
Inbuilt definitions of model, knowledge representation, and inference design.
3. Shells
A shell is nothing but an expert system without a knowledge base. A shell provides the developers with knowledge acquisition, inference engine, user interface, and explanation facility. For example, a few shells are given below:
Java Expert System Shell (JESS), which provides a fully developed Java API for creating an expert system.
Vidwan, a shell developed at the National Centre for Software Technology, Mumbai, in 1993. It enables knowledge encoding in the form of IF-THEN rules.

5.9 Development of Expert Systems : General Steps


The process of ES development is iterative. Steps in developing the ES include :
Steps in developing the Expert Systems

Step 1 - Identify Problem Domain

Step 2 - Design the System

Step 3 - Develop the Prototype

Step 4 - Test and Refine the Prototype

Step 5 - Develop and Complete the ES

Step 6 - Maintain the System

Fig. 5.9.1: Steps in developing the Expert Systems


1. Identify Problem Domain
The problem must be suitable for an expert system to solve it.
Find the experts in the task domain for the ES project.
Establish cost-effectiveness of the system.
2. Design the System
Identify the ES Technology.
Know and establish the degree of integration with the other systems and databases.
Realize how the concepts can represent the domain knowledge best.
3. Develop the Prototype
Form the Knowledge Base: The knowledge engineer works to:
Acquire domain knowledge from the expert.
Represent it in the form of IF-THEN-ELSE rules.
4. Test and Refine the Prototype
The knowledge engineer uses sample cases to test the prototype for any deficiencies in performance.
End users test the prototypes of the ES.
5. Develop and Complete the ES
Test and ensure the interaction of the ES with all elements of its environment, including end users, databases, and other information systems.
Document the ES project well.
Train the user to use ES.
6. Maintain the System
Keep the knowledge base up-to-date by regular review and update.
Cater for new interfaces with other information systems, as those systems evolve.
Benefits of Expert Systems
Availability :They are easily available due to mass production of software.
Less Production Cost: Production cost is reasonable. This makes them affordable.
Speed: They offer great speed. They reduce the amount of work an individual puts in.
Less Error Rate: Error rate is low as compared to human errors.
Reducing Risk: They can work in the environment dangerous to humans.
Steady response : They work steadily without getting emotional, tensed, or fatigued.

5.10 Machine Learning


Machine Learning, often abbreviated as ML, is a branch of Artificial Intelligence (AI) that works on algorithm development and statistical models that allow computers to learn from data and make predictions or decisions without being explicitly programmed. Hence, in simpler terms, machine learning allows computers to learn from data and make decisions or predictions without being explicitly programmed to do so.
Essentially, machine learning algorithms learn patterns and relationships from data, allowing them to generalize from instances and make predictions or conclusions on new and unseen data.
How does Machine Learning Work?
Broadly, the Machine Learning process includes Project Setup, Data Preparation, Modeling, and Deployment. The following figure demonstrates the common working process of Machine Learning. It follows a set of steps; the sequential process of its workflow is as follows -

Fig. 5.10.1 : Machine Learning workflow - Project Setup (identify the goal, choose the solution), Data Preparation (data collection, data cleaning, feature engineering, split the data), Modeling (hyperparameter tuning, train model, make predictions, assess model performance), Deployment (deploy the model, monitor model performance, improve model)
Stages of Machine Learning
A detailed sequential process of Machine Learning includes a set of steps or phases, which are as follows -

Fig. 5.10.2 : Stages of Machine Learning - data collection, data pre-processing, choosing the right model, training the model, evaluating the model, hyperparameter tuning and optimization, predictions and deployment

1. Data collection: Data collection is an initial step in the process of machine learning. Data is a fundamental part
of machine learning, the quality and quantity of your data can have direct consequences for model performance.
Different sources such as databases, text files, pictures, sound files, or web scraping may be used for data
collection. Data needs to be prepared for machine learning once it has been collected. This involves organizing the data in an appropriate format, such as a CSV file or database, and making sure that it is useful for solving your problem.
2. Data pre-processing : Pre-processing of data is a key step in the process of machine learning. It involves deleting duplicate data, fixing errors, managing missing data either by eliminating or filling it in, and adjusting and formatting the data. Pre-processing improves the quality of your data and ensures that your machine learning model can read it correctly. The accuracy of your model may be significantly improved by this step.

3. Choosing the right model : The next step is to select a machine learning model. Once data is prepared, we apply it to ML models such as linear regression, decision trees, and neural networks. The selection of the model generally depends on what kind of data you're dealing with and your problem. The size and type of data, complexity, and computational resources should be taken into account when choosing a model to apply.
4. Training the model : After you have chosen a model, the next step is to train it with the data that has been prepared. Training is about feeding the data to the model and enabling it to adjust its parameters to predict output more accurately. Overfitting and underfitting must be avoided during training.
5. Evaluating the model : It is important to assess the model's performance before deployment as soon as a model has been trained. This means that the model has to be tested on new data that it has not seen during training. Accuracy for classification problems, precision and recall for binary classification problems, as well as mean squared error for regression problems, are common metrics to evaluate the performance of a model.
6. Hyperparameter tuning and optimization : You may need to adjust a model's hyperparameters to make it more efficient after you've evaluated it. Grid searches, where you try different combinations of parameters, and cross-validation, where you divide your data into subsets and train your model on each subset to ensure that it performs well on different data sets, are techniques for hyperparameter tuning.
7. Predictions and deployment : As soon as the model has been trained and optimized, it is ready to make predictions on new data. This is done by feeding new data to the model and using its output for decision-making or other analysis. The deployment of this model involves its integration into a production environment where it is capable of processing real-world data and providing timely information.
Machine learning models fall into the following categories:
1. Supervised Machine Learning : Supervised machine learning uses labeled datasets to train algorithms to classify data or predict outcomes. As input data is fed into the model, its weights are adjusted until the model fits well; this process is known as cross-validation, which ensures the model is not overfitted or underfitted. Supervised learning helps organizations solve real-world challenges at scale, such as classifying spam into a separate folder from your inbox. Methods for supervised learning include neural networks, naive Bayes, linear regression, logistic regression, random forest, and support vector machines (SVM).

Fig. 5.10.3 : Supervised Machine Learning (labelled data - Tomato, Carrot, Bell Pepper - is used for model training; the trained model then predicts the label of new data)
2. Unsupervised Machine Learning : Unsupervised machine learning analyses and clusters unlabelled datasets using machine learning methods. The algorithms find hidden patterns or data groupings without human interaction. This method is useful for exploratory data analysis, cross-selling, consumer segmentation, and image and pattern recognition. It also reduces model features through dimensionality reduction using prominent methods such as principal component analysis (PCA) and singular value decomposition (SVD). Neural networks, k-means clustering, and probabilistic clustering are some popular methods of unsupervised learning.
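The dimensionality reduction mentioned above can be sketched in a few lines of NumPy. This is a minimal PCA-via-SVD illustration on synthetic data; the dataset construction and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 samples of correlated 3-D data: the third feature is nearly a linear
# combination of the first two, so ~2 principal components should suffice.
base = rng.normal(size=(100, 2))
X = np.hstack([base, (base[:, :1] + base[:, 1:2]) + 0.01 * rng.normal(size=(100, 1))])

# Centre the data, then apply SVD: the principal directions are the rows of Vt
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Explained variance ratio per component
explained = S**2 / np.sum(S**2)
print("explained variance ratios:", np.round(explained, 4))

# Project onto the top 2 principal components (dimensionality reduction)
X_reduced = Xc @ Vt[:2].T
print("reduced shape:", X_reduced.shape)
```

Because the third feature is redundant, the first two components capture essentially all of the variance, so the 3-D data can be reduced to 2-D with almost no information loss.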
Fig. 5.10.4 : Unsupervised Machine Learning
3. Semi-supervised learning : As its name implies, semi-supervised learning is an integration of supervised and unsupervised learning. This method uses both labeled and unlabelled data to train ML models for classification and regression tasks. Semi-supervised learning is a best practice to utilize where a user doesn't have enough labeled data for a supervised learning algorithm. Hence, it's an appropriate method to solve problems where data is partially labeled or unlabelled. Self-training, co-training, and graph-based labeling are some of the popular semi-supervised learning methods.
Fig. 5.10.5 : Semi-supervised learning (input data with partial labels and unlabelled data trains the model, which then predicts, e.g., "It's a Tomato")
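The self-training method mentioned above can be sketched on toy data. This is an invented illustration: the synthetic 1-D dataset and the simple midpoint-of-means classifier are not a production algorithm, just enough machinery to show pseudo-labeling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated 1-D classes: class 0 near -2, class 1 near +2
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])
y = np.array([0] * 100 + [1] * 100)

# Only 5 points per class carry labels; the rest are treated as unlabelled
labelled = np.concatenate([np.arange(5), np.arange(100, 105)])
unlabelled = np.setdiff1d(np.arange(200), labelled)

def fit_threshold(xs, ys):
    # Midpoint-of-means classifier: predict class 1 if x is above the threshold
    return (xs[ys == 0].mean() + xs[ys == 1].mean()) / 2

# Self-training: fit on the labelled data, pseudo-label confident unlabelled
# points (those far from the threshold), then refit on the enlarged set.
thr = fit_threshold(x[labelled], y[labelled])
confident = unlabelled[np.abs(x[unlabelled] - thr) > 1.0]
pseudo_y = (x[confident] > thr).astype(int)

xs = np.concatenate([x[labelled], x[confident]])
ys = np.concatenate([y[labelled], pseudo_y])
thr = fit_threshold(xs, ys)

accuracy = np.mean((x > thr).astype(int) == y)
print(f"threshold={thr:.2f}, accuracy={accuracy:.2%}")
```

With only ten labels, the pseudo-labeled points pull the decision threshold toward the true class boundary, which is the core idea behind self-training.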

4. Reinforcement Machine Learning : Reinforcement machine learning is a type of machine learning model that is similar to supervised learning but does not use sample data to train the algorithm. This model learns by trial and error.

Fig. 5.10.6 : Reinforcement Machine Learning (the agent takes actions on the environment and receives state and reward signals in return)
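The trial-and-error loop in the figure can be sketched with tabular Q-learning on a toy problem. The corridor environment and the hyperparameter values below are invented for illustration; they are not tuned for any real task.

```python
import random

random.seed(0)

# Tiny 1-D corridor: states 0..4, reward +1 only on reaching state 4.
# Actions: 0 = left, 1 = right. The agent learns by trial and error.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: explore sometimes, exploit otherwise
        a = random.randint(0, 1) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update rule
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# After training, "right" should score higher than "left" in every state
policy = ["right" if q[1] > q[0] else "left" for q in Q[:GOAL]]
print(policy)
```

No labelled examples are ever provided: the agent discovers the optimal "always move right" policy purely from the reward signal, which is what distinguishes reinforcement learning from supervised learning.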

5.10.1 TensorFlow
TensorFlow is one of the most well-known software libraries, developed by Google, to implement machine learning and deep learning tasks. It makes the creation of computational graphs and their efficient execution on various hardware platforms easier. It is widely used for the development of tasks like natural language processing, image recognition, and handwriting recognition.
Installation and Execution
For the CPU platform on the Windows operating system, you can use the following command to install TensorFlow using pip -
pip install tensorflow
You can refer to the following link for installation of TensorFlow with more options:
https://www.tensorflow.org/install/pip
To import TensorFlow, use the following -
import tensorflow as tf
After installing TensorFlow, you can import it into your Python script as done above.
Example
Following is an example of creating a tensor object using TensorFlow -
import tensorflow as tf
data = tf.constant([[2, 1], [4, 6]])
print(data)
Output
The above example code will produce the following result -
tf.Tensor(
[[2 1]
 [4 6]], shape=(2, 2), dtype=int32)
Keras

Keras is a high-level neural network library that creates deep learning models. It runs on top of TensorFlow, CNTK, or Theano. It provides a simple and intuitive API for building and training deep learning models, making it an excellent choice for beginners and researchers. Keras is one of the popular libraries as it allows for easy and fast prototyping.
Installation and Execution

For the CPU platform on the Windows operating system, use the following to install Keras using pip -
pip install keras
To import Keras, use the following -
import keras
After installing Keras, you can import it into your Python script as we did above.

Example

In the example below, we are importing the CIFAR-10 dataset from Keras and printing the shape of the training data and test data -

import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()

print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)

5.10.2 Machine Learning - Data Distribution

In machine learning, data distribution refers to the way in which data points are distributed or spread out across a dataset. It is important to understand the distribution of data in a dataset, as it can have a significant impact on the performance of machine learning algorithms.
Data distribution can be characterized by several statistical measures, including mean, median, mode, standard deviation, and variance. These measures help to describe the central tendency, spread, and shape of the data.
Some common types of data distribution in machine learning are given below -
1. Normal Distribution
Normal distribution, also known as Gaussian distribution, is a continuous probability distribution that is widely used in machine learning and statistics. It is a bell-shaped curve that describes the probability distribution of a random variable that is symmetric around the mean. The normal distribution has two parameters, the mean (μ) and the standard deviation (σ).
In machine learning, normal distribution is often used to model the distribution of error terms in linear regression and other statistical models. It is also used as a basis for various hypothesis tests and confidence intervals.
One important property of normal distribution is the empirical rule, also known as the 68-95-99.7 rule. This rule states that approximately 68% of the observations fall within one standard deviation of the mean, 95% of the observations fall within two standard deviations of the mean, and 99.7% of the observations fall within three standard deviations of the mean.
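The empirical rule can be checked numerically by sampling from a standard normal distribution with NumPy. This is a quick sanity check, not a proof; the sample size is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.normal(loc=0, scale=1, size=1_000_000)

# Fraction of observations within 1, 2, and 3 standard deviations of the mean
fracs = [np.mean(np.abs(sample) < k) for k in (1, 2, 3)]
for k, frac in zip((1, 2, 3), fracs):
    print(f"within {k} sigma: {frac:.4f}")
```

With a million samples the three fractions land very close to 0.6827, 0.9545, and 0.9973.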
Python provides various libraries that can be used to work with normal distributions. One such library is scipy.stats, which provides functions for calculating the probability density function (PDF), cumulative distribution function (CDF), percent point function (PPF), and random variables for the normal distribution.
Example
Here is an example of using scipy.stats to generate and visualize a normal distribution -

import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

# Generate a random sample of 1000 values from a normal distribution
mu = 0       # Mean
sigma = 1    # Standard deviation
sample = np.random.normal(mu, sigma, 1000)

# Calculate the PDF for the normal distribution
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
pdf = norm.pdf(x, mu, sigma)

# Plot the histogram of the random sample and the PDF of the normal distribution
plt.figure(figsize=(7.5, 3.5))
plt.hist(sample, bins=30, density=True, alpha=0.5)
plt.plot(x, pdf)
plt.show()

In this example, we first generate a random sample of 1000 values from a normal distribution with mean 0 and standard deviation 1 using np.random.normal. We then use norm.pdf to calculate the PDF for the normal distribution and np.linspace to generate an array of 100 evenly spaced values between μ - 3σ and μ + 3σ.
Finally, we plot the histogram of the random sample using plt.hist and overlay the PDF of the normal distribution using plt.plot.
Output
The resulting plot shows the bell-shaped curve of the normal distribution and the histogram of the random sample that approximates the normal distribution.

Fig. 5.10.7 : Normal Distribution
2. Skewed Distribution
A skewed distribution in machine learning refers to a dataset that is not evenly distributed around its mean, or average value. In a skewed distribution, the majority of the data points tend to cluster towards one end of the distribution, with a smaller number of data points at the other end.
There are two types of skewed distributions: left-skewed and right-skewed. A left-skewed distribution, also known as a negative-skewed distribution, has a long tail towards the left side of the distribution, with the majority of data points towards the right side. In contrast, a right-skewed distribution, also known as a positive-skewed distribution, has a long tail towards the right side of the distribution, with the majority of data points towards the left side.
Skewed distributions can occur in many different types of datasets, such as financial data, social media metrics, or health care records. In machine learning, it is important to identify and handle skewed distributions appropriately, as they can affect the performance of certain algorithms and models. For example, skewed data can lead to biased predictions and inaccurate results in some cases and may require preprocessing techniques such as normalization or data transformation to improve the performance of the model.
Example
Here is an example of generating and plotting a skewed distribution using Python's NumPy and Matplotlib libraries -

import numpy as np
import matplotlib.pyplot as plt

# Generate a skewed distribution using NumPy's random function
data = np.random.gamma(2, 1, 1000)

# Plot a histogram of the data to visualize the distribution
plt.figure(figsize=(7.5, 3.5))
plt.hist(data, bins=30)

# Add labels and title to the plot
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Skewed Distribution')

# Show the plot
plt.show()
Output
On executing this code, you will get the following plot as the output -

Fig. 5.10.8 : Skewed Distribution
3. Uniform Distribution
A unifornm distribution in machine learning refers to a probability distribution in which all possible
outcomes are equally likely to occur. In other words, each value in a dataset has the same probability of
being observed, and there is noclustering of data points around a particular value.
The unlform distribution is often used as abaseline for comparison with other distributions, as it represents
a random and unbiased samplingof the data. It can also be useful in certain types of applications, such as
generating random numbers or selecting items from a set without bjas.

TechKnowledge
Pub|C atlonS
Business Intelligence and Data Analytics Knowledge Mgmt. &Al &Expert
5-24
tSystems
In probability theory, the probability density function of a continuous uniform distribution is defined as -

f(x) = 1 / (b - a)   for a ≤ x ≤ b
f(x) = 0             otherwise

where a and b are the minimum and maximum values of the distribution, respectively. The mean of a uniform distribution is (a + b) / 2 and the variance is (b - a)² / 12.
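The closed-form mean and variance above can be verified numerically with NumPy. The values of a and b below are arbitrary choices for the check.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b = 2.0, 10.0  # minimum and maximum of the distribution

sample = rng.uniform(low=a, high=b, size=1_000_000)

# Compare the sample moments against the closed-form results
print("sample mean:", sample.mean(), "theory:", (a + b) / 2)           # (a+b)/2 = 6.0
print("sample variance:", sample.var(), "theory:", (b - a) ** 2 / 12)  # (b-a)^2/12 ≈ 5.333
```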
Example
In Python, the NumPy library provides functions for generating random numbers from a uniform distribution, such as numpy.random.uniform(). These functions take as arguments the minimum and maximum values of the distribution and can be used to generate datasets with a uniform distribution.
Here is an example of generating a uniform distribution using Python's NumPy library -

import numpy as np
import matplotlib.pyplot as plt

# Generate 10,000 random numbers from a uniform distribution between 0 and 1
uniform_data = np.random.uniform(low=0, high=1, size=10000)

# Plot the histogram of the uniform data
plt.figure(figsize=(7.5, 3.5))
plt.hist(uniform_data, bins=50, density=True)

# Add labels and title to the plot
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Uniform Distribution')

# Show the plot
plt.show()
Output : It will produce the following plot as the output -

Fig. 5.10.9 : Uniform Distribution

4. Bimodal Distribution
In machine learning, a bimodal distribution is a probability distribution that has two distinct modes or peaks. In other words, the distribution has two regions where the data values are most likely to occur, separated by a valley or trough where the data is less likely to occur.
Bimodal distributions can arise in various types of data, such as biometric measurements, economic indicators, or social media metrics. They can represent different subpopulations within the dataset, or different modes of behavior or trends over time.
Bimodal distributions can be identified and analyzed using various statistical methods, such as histograms, kernel density estimations, or hypothesis testing. In some cases, bimodal distributions can be fitted with specific probability distributions, such as the Gaussian mixture model, which allows for modeling the underlying subpopulations separately.
Example
In Python, libraries such as NumPy, SciPy, and Matplotlib provide functions for generating and visualizing bimodal distributions.
For example, the following code generates and plots a bimodal distribution -

import numpy as np
import matplotlib.pyplot as plt

# Generate 10,000 random numbers from a bimodal distribution
bimodal_data = np.concatenate([np.random.normal(loc=-2, scale=1, size=5000),
                               np.random.normal(loc=2, scale=1, size=5000)])

# Plot the histogram of the bimodal data
plt.figure(figsize=(7.5, 3.5))
plt.hist(bimodal_data, bins=50, density=True)

# Add labels and title to the plot
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Bimodal Distribution')

# Show the plot
plt.show()
Output
On executing this code, you will get the following plot as the output -

Fig. 5.10.10 : Bimodal Distribution
