KM Secc
Features of GDSS (Group Decision Support System)
1. Communication Support:
○ GDSS offers features such as chat, video conferencing, and shared
virtual workspaces, allowing group members to interact and
communicate effectively in real time.
2. Anonymity:
○ Many GDSS tools provide anonymous input options, which reduce
the risk of dominance by more vocal members and allow quieter
participants to contribute more openly.
3. Idea Generation and Brainstorming:
○ GDSS includes tools for idea generation, such as brainstorming
platforms, mind mapping, and voting, enabling a free flow of ideas
and collaborative problem-solving.
4. Data Organization and Analysis:
○ GDSS supports organizing and analyzing large amounts of
information, helping groups evaluate options, consider risks, and
make evidence-based decisions.
5. Decision Models and Scenarios:
○ It allows the use of predefined decision models or scenarios (e.g.,
SWOT analysis, cost-benefit analysis, decision trees) to
systematically evaluate alternatives and outcomes.
6. Document and File Sharing:
○ GDSS typically includes mechanisms for sharing files, documents,
and other media, ensuring all team members have access to
necessary resources and information.
7. Voting and Ranking Tools:
○ Some GDSS tools incorporate voting mechanisms or ranking
systems, allowing the group to prioritize options and make decisions
based on group consensus.
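One common way a ranking tool can combine individual rankings into a group decision is a Borda count. The sketch below is illustrative only (real GDSS products use various voting schemes), and the proposal names and member rankings are invented:

```python
# Sketch of a GDSS-style ranking aggregator using a Borda count
# (illustrative; actual GDSS tools may use other voting schemes).

def borda_count(rankings):
    """Each ranking is a list of options, best first.
    The option in position i of an n-item ranking earns n-1-i points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - 1 - i)
    # Highest total score first; ties broken alphabetically for determinism.
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))

# Three hypothetical members rank three hypothetical proposals:
member_rankings = [
    ["Expand", "Automate", "Outsource"],
    ["Automate", "Expand", "Outsource"],
    ["Expand", "Outsource", "Automate"],
]
print(borda_count(member_rankings))
# [('Expand', 5), ('Automate', 3), ('Outsource', 1)]
```

The sorted result gives the group's consensus priority order without any one member's vote dominating.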
Components of GDSS
1. Hardware:
○ This includes the computers, servers, and communication devices
that facilitate the exchange of information within the group. It may
also involve special equipment for video conferencing or virtual
collaboration.
2. Software:
○ GDSS software consists of tools and applications that support the
decision-making process. This could include:
■ Communication tools (chat, video conferencing, forums)
■ Collaborative tools (document sharing, whiteboards, surveys)
■ Decision support tools (decision trees, optimization models)
■ Analytical tools (data mining, simulations, and modeling
software)
3. People:
○ The participants or decision-makers who use the GDSS. This group
can be a team within a business, external consultants, or
stakeholders from different departments or organizations.
4. Procedures:
○ GDSS incorporates structured procedures or processes that guide
the group through decision-making steps. These procedures ensure
that the group follows a logical, systematic approach and adheres to
the best practices for problem-solving and decision-making.
5. Data and Information:
○ The raw data and information that the group will analyze to make
decisions. GDSS helps organize, filter, and present this data in ways
that are useful for the decision-making process.
6. Facilitator:
○ Often, a facilitator or moderator is involved in GDSS sessions to
guide the process, keep the group focused, ensure participation, and
help resolve conflicts.
Differences Between OLAP and OLTP
1. Purpose
● OLAP:
○ Purpose: OLAP is designed for data analysis and decision-making. It is
used for querying, reporting, and analyzing large volumes of data from
different perspectives.
○ Usage: It is primarily used in data warehousing and business intelligence
for complex querying, trend analysis, and reporting.
● OLTP:
○ Purpose: OLTP is designed for transaction processing. It handles
day-to-day operations of an organization by supporting routine transaction
processing such as order entry, inventory management, and customer
transactions.
○ Usage: It is used in systems where real-time transaction data needs to be
captured, such as in banks, online retail, and reservation systems.
2. Data Volume
● OLAP:
○ Volume: OLAP systems typically handle large volumes of historical data
stored in data warehouses or data marts for analysis.
○ Data Type: It involves aggregated data, such as sales performance over
the last year, and is often summarized or multidimensional.
● OLTP:
○ Volume: OLTP systems deal with a high volume of individual
transactions. They are optimized for speed and real-time operations,
dealing with millions of small, transactional data points.
○ Data Type: It involves current, real-time transactional data, such as
customer orders, payments, and inventory updates.
3. Query Complexity
● OLAP:
○ Complexity: OLAP queries tend to be complex and involve aggregations,
filtering, sorting, and multi-dimensional analysis.
○ Examples: A query like "What were the total sales for each region in the
last five years, broken down by product category?"
● OLTP:
○ Complexity: OLTP queries are generally simple, focused on inserting,
updating, or retrieving individual records from a database.
○ Examples: A query like "What is the current stock level of product X?" or
"Retrieve the last order placed by customer Y."
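The contrast between the two query styles can be sketched with an in-memory SQLite table; the schema and rows below are made up for illustration:

```python
# Contrasting an OLTP-style single-row operation with an OLAP-style
# aggregate query, using an in-memory SQLite table (illustrative data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales (
    region TEXT, category TEXT, year INTEGER, amount REAL)""")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?, ?)",
    [("North", "Toys", 2023, 100.0),
     ("North", "Toys", 2024, 150.0),
     ("South", "Books", 2024, 80.0)])

# OLTP-style: touch one row, e.g. record a single new transaction.
conn.execute("INSERT INTO sales VALUES ('South', 'Toys', 2024, 60.0)")

# OLAP-style: aggregate across many rows and dimensions.
rows = conn.execute("""
    SELECT region, category, SUM(amount)
    FROM sales
    GROUP BY region, category
    ORDER BY region, category""").fetchall()
print(rows)
# [('North', 'Toys', 250.0), ('South', 'Books', 80.0), ('South', 'Toys', 60.0)]
```

The OLTP statement reads or writes one record; the OLAP query scans and summarizes the whole table across two dimensions.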
4. Transaction Frequency
● OLAP:
○ Frequency: OLAP systems have low transaction frequency, as they
perform fewer, but more complex, queries. They are not focused on
real-time updates but rather on historical data analysis.
● OLTP:
○ Frequency: OLTP systems handle high-frequency transactions, such
as thousands or millions of real-time updates, inserts, and deletes to
maintain up-to-date transactional data.
5. Database Design
● OLAP:
○ Design: OLAP databases are typically designed with dimensional
models, such as star schemas or snowflake schemas, to facilitate fast
querying and data analysis.
○ Data Structure: Data is organized in cubes, with dimensions and
measures (e.g., time, location, sales) for efficient aggregation.
● OLTP:
○ Design: OLTP databases are designed with relational models, focusing
on ensuring data integrity and supporting efficient insert, update, and
delete operations.
○ Data Structure: Data is structured in normalized tables to minimize
redundancy and maintain consistency in transactional operations.
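A minimal star schema can be sketched as one fact table referencing dimension tables; the table and column names below are illustrative, not a standard:

```python
# Minimal star-schema sketch in SQLite: one fact table keyed to two
# dimension tables (names and data are invented for illustration).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER, quarter TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    time_id INTEGER REFERENCES dim_time(time_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    amount REAL);
INSERT INTO dim_time VALUES (1, 2024, 'Q1'), (2, 2024, 'Q2');
INSERT INTO dim_product VALUES (10, 'Widget', 'Hardware'), (11, 'Manual', 'Books');
INSERT INTO fact_sales VALUES (1, 10, 500.0), (2, 10, 700.0), (2, 11, 90.0);
""")

# A typical OLAP query joins the fact table to its dimensions and aggregates
# a measure (amount) over two dimensions (quarter, category):
rows = conn.execute("""
    SELECT t.quarter, p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_time t ON f.time_id = t.time_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY t.quarter, p.category
    ORDER BY t.quarter, p.category""").fetchall()
print(rows)
# [('Q1', 'Hardware', 500.0), ('Q2', 'Books', 90.0), ('Q2', 'Hardware', 700.0)]
```

Each fact row holds only keys and measures, while descriptive attributes live once in the dimension tables, which is what makes aggregation over dimensions fast.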
6. Performance Optimization
● OLAP:
○ Optimization: OLAP systems are optimized for read-heavy operations,
focusing on the performance of complex queries and multi-dimensional
analysis. The data is pre-aggregated and indexed for faster querying.
● OLTP:
○ Optimization: OLTP systems are optimized for write-heavy operations,
prioritizing fast transaction processing, ensuring ACID (Atomicity,
Consistency, Isolation, Durability) properties, and maintaining data
integrity.
7. Example Systems
● OLAP:
○ Examples: Business Intelligence tools like Microsoft Power BI, Tableau,
SAP BusinessObjects, and data warehouses that support analytical
queries.
● OLTP:
○ Examples: Online transaction systems like banking applications,
e-commerce platforms, and point-of-sale (POS) systems.
Summary of Differences

Feature | OLAP (Online Analytical Processing) | OLTP (Online Transaction Processing)
Purpose | Data analysis and decision support | Transaction processing for daily operations
Expert Systems
An Expert System is a type of artificial intelligence (AI) system designed to mimic the
decision-making ability of a human expert in a specific domain. It is a software
application that uses knowledge and inference rules to solve complex problems by
reasoning through a body of knowledge, much like a human expert would.
Expert systems are often used in situations where specialized expertise is required, but
human experts may not always be available. They provide solutions to problems based
on the knowledge encoded in the system, enabling decision-making, advice, or
recommendations in a wide range of fields.
Components of an Expert System
1. Knowledge Base:
○ The knowledge base is the core of an expert system, containing all the
relevant information, facts, rules, heuristics, and relationships specific to
the problem domain.
○ It can be built by knowledge engineers or domain experts and is
constantly updated as new information becomes available.
2. Inference Engine:
○ The inference engine processes the knowledge base to draw conclusions
and make decisions. It applies logical rules to the facts in the knowledge
base to derive new information or solve problems.
○ It can use forward chaining (data-driven reasoning) or backward
chaining (goal-driven reasoning) to find solutions.
3. User Interface:
○ The user interface allows users to interact with the expert system,
inputting data, asking questions, and receiving results or
recommendations. It is designed to be user-friendly so that non-experts
can use the system effectively.
4. Explanation Component:
○ The explanation component provides users with explanations of how the
system arrived at its conclusions, helping to build trust in the system and
allowing users to understand the reasoning process.
5. Knowledge Acquisition Module:
○ This component is responsible for updating and refining the knowledge
base by allowing new information to be added or existing knowledge to be
modified.
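The forward chaining mentioned for the inference engine can be sketched in a few lines: the engine repeatedly fires any rule whose conditions are all known facts, until no new facts appear. The rules and facts below are a toy example, not a real medical knowledge base:

```python
# Toy forward-chaining inference engine: fire rules whose conditions are
# satisfied by known facts until no new facts can be derived.
# Rules and facts are invented for illustration.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive a new fact from the rule
                changed = True
    return facts

rules = [
    (["has_fever", "has_cough"], "possible_flu"),
    (["possible_flu", "short_of_breath"], "recommend_doctor_visit"),
]
derived = forward_chain(["has_fever", "has_cough", "short_of_breath"], rules)
print("recommend_doctor_visit" in derived)  # True
```

Note how the second rule only fires after the first has added "possible_flu", which is the data-driven character of forward chaining; backward chaining would instead start from the goal and work back to the facts.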
Expert systems are valuable in many contexts due to their ability to replicate the
decision-making processes of human experts. The following are key reasons why
expert systems are important:
● Accessibility to Expertise
● Cost Efficiency
● Fast Responses: Expert systems can process information and make decisions
much faster than human experts. This is crucial in situations that require rapid
decision-making, such as emergency response, diagnostics, or financial trading.
Data Mining is the process of discovering patterns, correlations, trends, and useful
information from large datasets using statistical, mathematical, and computational
techniques. It is a part of the broader field of data analysis and is used to extract
valuable insights from data, often to aid in decision-making processes, predictions, or to
find hidden patterns in the data. The goal of data mining is to uncover relationships and
patterns in data that are not immediately apparent, allowing businesses and
organizations to make more informed decisions.
The implementation process of data mining typically follows a structured approach that
involves several stages:
1. Problem Definition:
○ The first step is to define the problem or objective clearly. What is the goal
of the data mining project? This could be anything from improving sales,
predicting customer behavior, to detecting fraud.
2. Data Collection:
○ Data is gathered from different sources, which could include databases,
data warehouses, spreadsheets, or external sources. This data should be
relevant to the problem and contain enough historical information to
provide meaningful insights.
3. Data Preparation (Data Cleaning):
○ The collected data is cleaned and preprocessed to ensure it is accurate,
consistent, and in a usable format. This involves handling missing data,
removing duplicates, correcting errors, and transforming data into a
suitable form for analysis.
4. Data Exploration and Transformation:
○ This stage involves exploring the data to understand its characteristics
and structure. Descriptive statistics and data visualization tools may be
used to identify patterns, outliers, and trends. Data may also be
transformed into a different format if necessary, such as normalizing
values or aggregating data.
5. Modeling:
○ In this step, data mining algorithms are applied to the prepared data to
build models that can predict or classify future outcomes. Various
techniques like decision trees, clustering, regression analysis, or neural
networks can be used depending on the objective.
6. Evaluation:
○ Once the model is built, its performance is evaluated using a set of metrics
such as accuracy, precision, recall, and F1 score. The model is assessed
to ensure it meets the defined business objectives and provides
meaningful results.
7. Deployment:
○ After the model is evaluated and validated, it is deployed into the
production environment where it can make real-time decisions or
predictions. The insights gained from the model are then used to make
informed business decisions.
8. Monitoring and Maintenance:
○ Data mining models need to be continuously monitored to ensure their
effectiveness over time. If the model’s performance deteriorates due to
changes in data or the environment, it may need to be retrained or
updated.
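The evaluation metrics named in step 6 (accuracy, precision, recall, F1) can be computed directly from predicted versus actual labels; the label lists below are fabricated for illustration:

```python
# Computing the standard evaluation metrics from step 6 (accuracy,
# precision, recall, F1) for a binary classifier; labels are made up.

def evaluate(actual, predicted, positive=1):
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    accuracy = (tp + tn) / len(actual)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

actual    = [1, 0, 1, 1, 0, 1]
predicted = [1, 0, 0, 1, 1, 1]
print(evaluate(actual, predicted))
```

Here the model makes one false positive and one false negative, so precision and recall are both 0.75 while accuracy is 4/6; comparing such metrics against the business objective decides whether the model is fit to deploy.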
Data Mining Techniques
1. Classification:
○ Classification is a supervised learning technique where the goal is to
predict the categorical label of an item based on its features or attributes.
It involves building a model that maps input data to predefined categories
or classes.
○ Example: Predicting whether an email is spam or not based on its content
(spam or not spam being the classes).
○ Popular Algorithms: Decision Trees, Naive Bayes, k-Nearest Neighbors
(k-NN), Support Vector Machines (SVM), and Neural Networks.
2. Clustering:
○ Clustering is an unsupervised learning technique that groups a set of
objects in such a way that objects in the same group (or cluster) are more
similar to each other than to those in other groups. Unlike classification,
there are no predefined labels, and the algorithm tries to identify inherent
structures in the data.
○ Example: Grouping customers into segments based on purchasing
behavior without prior knowledge of the customer categories.
○ Popular Algorithms: k-Means, DBSCAN, Hierarchical Clustering, and
Gaussian Mixture Models (GMM).
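As a concrete instance of the classification technique above, here is a minimal k-nearest-neighbours classifier in plain Python; the two-feature points and spam/ham labels are fabricated for illustration:

```python
# Minimal k-nearest-neighbours (k-NN) classifier: predict the majority
# label among the k training points closest to the query point.
# Training data is invented for illustration.
from collections import Counter
import math

def knn_predict(train, point, k=3):
    """train: list of ((x, y), label); returns majority label of k nearest."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], point))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

train = [((1, 1), "spam"), ((1, 2), "spam"), ((2, 1), "spam"),
         ((8, 8), "ham"), ((9, 8), "ham"), ((8, 9), "ham")]
print(knn_predict(train, (1.5, 1.5)))  # spam
print(knn_predict(train, (8.5, 8.5)))  # ham
```

Because k-NN is supervised, every training point carries a predefined label; a clustering algorithm such as k-Means would instead discover the two groups from the coordinates alone.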
Business Intelligence (BI) involves the use of data analytics, data mining, querying, reporting, dashboards, and
visualization tools to gain insights that support strategic, tactical, and operational
decisions.
Process Used in BI
1. Data Collection:
○ Data is gathered from various sources such as transactional databases,
data warehouses, external data providers, spreadsheets, and even social
media or IoT devices.
○ The data can be structured (e.g., databases), semi-structured (e.g., XML
files), or unstructured (e.g., text documents).
2. Data Integration:
○ The collected data from different sources is integrated into a central
repository, typically a data warehouse or data mart. This stage involves
cleaning, transforming, and consolidating data to ensure consistency and
accuracy. Tools such as ETL (Extract, Transform, Load) are commonly
used to prepare the data for analysis.
3. Data Storage:
○ The integrated data is stored in a secure and optimized environment, like
a data warehouse, database, or cloud storage, where it can be accessed
easily for analysis.
○ Data warehouses store historical data, while data marts typically contain
data relevant to a specific business area (e.g., sales, marketing).
4. Data Analysis:
○ Data analysis is performed using various tools and techniques, such as
querying, reporting, and advanced analytics (e.g., predictive analytics,
data mining). BI tools help identify trends, correlations, patterns, and
outliers in the data.
5. Data Visualization:
○ Data insights are communicated through visualization tools like
dashboards, charts, graphs, and reports. Visualization makes it easier for
business users to understand complex data and identify key trends or
issues at a glance.
○ Common BI tools like Tableau, Power BI, and QlikView allow users to
create interactive reports and visuals.
6. Decision-Making:
○ Based on the analysis and insights generated, decision-makers use the
information to make informed business decisions. This step could involve
operational decisions (e.g., optimizing production schedules) or strategic
decisions (e.g., entering a new market).
7. Performance Management and Reporting:
○ After decisions are made, BI systems help monitor the outcomes and track
performance using key performance indicators (KPIs) and other metrics.
Regular reports and dashboards help to measure whether the goals are
being met and identify areas for improvement.
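The Extract-Transform-Load step in stage 2 above can be sketched in plain Python: pull raw records, clean and normalise them, and load the result into a summary store. The source records and field names are invented for illustration:

```python
# Sketch of an Extract-Transform-Load (ETL) pass as described in the
# Data Integration stage; records and field names are illustrative.

raw_records = [                     # Extract: rows from a hypothetical source
    {"region": "North", "sales": "100"},
    {"region": "north", "sales": "150"},     # inconsistent casing
    {"region": "South", "sales": None},      # missing value
    {"region": "South", "sales": "80"},
]

def transform(records):
    """Clean: drop rows with missing sales, normalise region names,
    and convert sales figures from strings to numbers."""
    cleaned = []
    for r in records:
        if r["sales"] is None:
            continue
        cleaned.append({"region": r["region"].title(),
                        "sales": float(r["sales"])})
    return cleaned

def load(cleaned):
    """Load: aggregate into a summary table keyed by region."""
    warehouse = {}
    for r in cleaned:
        warehouse[r["region"]] = warehouse.get(r["region"], 0.0) + r["sales"]
    return warehouse

print(load(transform(raw_records)))  # {'North': 250.0, 'South': 80.0}
```

In practice dedicated ETL tools handle this at scale, but the shape is the same: inconsistent raw inputs become a consistent, aggregated store that downstream analysis and dashboards can query.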
Types of Users in BI
Business Intelligence supports various types of users, each with different roles, levels of
technical expertise, and business objectives:
1. Operational Users:
○ These users are typically involved in day-to-day business operations and
require real-time data to make operational decisions. They may use
dashboards and reports for tasks like monitoring sales, inventory, or
customer service performance.
○ Example: Customer service representatives or store managers who need
daily performance metrics.
2. Analytical Users:
○ These users are responsible for analyzing trends, patterns, and
performing complex data analysis. They typically use OLAP tools, data
mining techniques, and advanced statistical models to explore the data.
○ Example: Business analysts, data scientists, and marketing analysts who
generate insights from historical data to forecast trends.
3. Executive Users:
○ Executives and senior managers need high-level summaries of business
performance to make strategic decisions. They often use KPIs,
scorecards, and dashboards to monitor organizational performance.
○ Example: CEOs, CFOs, or senior executives who need an overview of the
company's financial status, market trends, and performance metrics.
4. Power Users:
○ These users are skilled in using BI tools and may have a deeper
understanding of the data and its structures. They are capable of creating
custom reports and dashboards for various business needs.
○ Example: IT professionals or BI specialists who configure and customize
BI systems to meet specific business needs.
Types of Business Decisions Supported by BI
1. Strategic Business:
○ BI tools help in making long-term, high-level decisions, such as setting
organizational goals, developing new strategies, or entering new markets.
○ Example: Analyzing market trends and customer behavior to decide on
product innovation or expansion into new geographical areas.
2. Tactical Business:
○ Tactical BI helps with mid-level decision-making, typically involving
planning and resource allocation to achieve specific goals.
○ Example: Reviewing sales and marketing performance to plan
promotional campaigns or allocate budget for regional advertising.
3. Operational Business:
○ Operational BI focuses on short-term decisions and day-to-day operations,
enabling employees to act based on real-time data.
○ Example: Monitoring supply chain performance to ensure products are
delivered on time or adjusting inventory levels based on current sales
trends.
4. Customer-Facing Business:
○ BI supports customer relationship management (CRM) by providing data
on customer preferences, behavior, and satisfaction to improve
engagement and retention.
○ Example: Analyzing customer feedback and purchase history to
recommend products and personalize marketing messages.