BIA Model Test Paper - 2014
Ques1: Differentiate between Decision Support System and Group Decision Support System.
GDSS and DSS are computer-based information systems that assist decision-making processes
within a group, company or office. By using a GDSS or DSS, a company can speed up the decision-making
process, allowing more time for employees to focus on particular issues. Learning and
training can also be promoted through these systems.
Group Decision Support System: - GDSS, or Group Decision Support System, is a subclass or
subcategory of DSS. It is defined as a computer-based information system built to support and
promote effective group decision making. A GDSS has three important components: the software, which
consists of the database with management capabilities for group decision making; the hardware; and
the people, who include the decision-making participants.
A group decision support system is a decision support system that facilitates decision making by a
team of decision makers working as a group. The importance of collective decisions is being felt
today. To sort out major issues, brainstorming sessions are carried out, and the collective pool
of ideas and opinions gives a final shape to the decision.
A GDSS is an interactive, computer-based system that facilitates the solution of unstructured problems
by a set of decision makers working together as a group. A GDSS extends the DSS concept in that
decisions are taken collectively by a group of decision makers rather than by a single individual.
Decision Support System: - Meanwhile, a DSS, or Decision Support System, is meant to support
how an individual decides or processes decision making. Through the use of a DSS, both human
capabilities and computer capacities are combined to arrive at a better decision. The
system provides assistance to the human element; it is not the sole decision maker. A DSS also
allows customization of its programs, particularly the decision-making capabilities, to better suit
individual needs.
In other words, a Decision Support System (DSS) is a computer-based information system that
supports business or organizational decision-making activities. DSSs serve the management,
operations, and planning levels of an organization (usually mid and higher management) and help
people make decisions that may be rapidly changing and not easily specified in advance
(unstructured and semi-structured decision problems). Decision support systems can be
fully computerized, human-driven, or a combination of both.
Groupware: - The term "groupware" refers to specialized software applications that enable group
members to share and sync information and communicate with each other more easily. It is a class
of software that helps groups of colleagues (workgroups) attached to a local-area network organize
their activities. Typically, groupware supports the operations, features, and system categories listed below.
Groupware can allow both geographically dispersed team members and a company's on-site
workers to collaborate with each other through the use of computer networking technologies (i.e.,
via the Internet or over an internal network/intranet). As such, groupware is especially important for
remote workers and professionals on the go, since they can collaborate with other team members
virtually.
One key benefit is increased productivity. Typical features include:
A centralized repository for documents and files that users can access and save to
Document version management and change management
Shared calendars and task management
Web conferencing, instant messaging, message boards, and/or whiteboards
Current market leaders include Lotus Notes and Domino, Microsoft Exchange, Novell GroupWise, and
Oracle Office. Individual tools inside such a suite include a meeting manager (Lotus Sametime) and
a message exchange (Lotus Notes Mail). Common categories of groupware systems include:
1. Messaging systems
2. Conferencing systems
3. Collaborative authoring systems, e.g., Google Docs, MediaWiki
4. Group DSS
5. Coordination systems
Groupware exists to facilitate the movement of messages or documents so as to enhance the quality
of communication among individuals in remote locations. It provides access to shared databases,
document handling, electronic messaging, work flow management, and conferencing. In fact,
groupware can be thought of as a development environment in which cooperative applications,
including decision-making applications, can be built. Groupware achieves this through the integration of eight distinct
technologies: messaging, conferencing, group document handling, work flow, utilities/development
tools, frameworks, services, and vertical market applications. Hence, it provides the foundation for
the easy exchange of data and information among individuals located far apart, although no
currently available product has an integrated and complete set of these capabilities.
Ques3. Which information system is suitable for top level management? Why?
Ans. The information systems suitable for top level management are the expert system and the
executive information system.
An expert system is a computer system that emulates the decision-making ability of a human
expert. Expert systems are designed to solve complex problems by reasoning about knowledge,
represented primarily as IF-THEN rules rather than through conventional procedural code. The first
expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were
among the first truly successful forms of AI software.
It is an artificial intelligence based system that converts the knowledge of an expert in a specific
subject into a software code. This code can be merged with other such codes (based on the
knowledge of other experts) and used for answering questions (queries) submitted through a
computer. Expert systems typically consist of three parts: (1) a knowledge base which contains the
information acquired by interviewing experts, and logic rules that govern how that information is
applied; (2) an Inference engine that interprets the submitted problem against the rules and logic of
information stored in the knowledge base; and (3) an interface that allows the user to express the
problem in a human language such as English.
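As a minimal sketch (with hypothetical rules and inputs, not any particular expert-system shell), the three parts described above, a knowledge base of IF-THEN rules, an inference engine, and a user interface, could be arranged like this:

    # Minimal expert-system sketch; the rules and symptoms are hypothetical.
    # (1) Knowledge base: IF-THEN rules acquired from a domain expert.
    knowledge_base = [
        ({"fever", "cough"}, "Possible flu: recommend rest and fluids"),
        ({"no_power", "fan_not_spinning"}, "Check the power supply unit"),
    ]

    # (2) Inference engine: match the user's facts against each rule's conditions.
    def infer(observed_facts):
        return [advice for conditions, advice in knowledge_base
                if conditions <= observed_facts]

    # (3) Interface: accept the problem in plain words and report conclusions.
    user_input = "fever cough"                      # e.g., typed by the user
    conclusions = infer(set(user_input.split()))
    print(conclusions or ["No rule matched; consult a human expert"])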
Expert systems are computer applications which embody some non-algorithmic expertise for
solving certain types of problems. For example, expert systems are used in diagnostic applications
servicing both people and machinery. They also play chess, make financial planning decisions,
configure computers, monitor real time systems, underwrite insurance policies, and perform many
other services which previously required human expertise. Expert systems generate decisions that
an expert would make: they can recommend solutions to nursing problems which mimic the clinical
judgment of a nurse expert. These systems are developed to facilitate and enhance the clinical
judgment of nurses, not to replace them. Like decision support systems, expert systems provide
information to help health professionals to make informed judgments when assessing the validity of
data, information, diagnoses, and choices for treatment and care.
The spectrum of applications of expert systems technology to industrial and commercial problems
is so wide as to defy easy characterization. The applications find their way into most areas of
knowledge work. They are as varied as helping salespersons sell modular factory-built homes to
helping NASA plan the maintenance of a space shuttle in preparation for its next flight.
Diagnosis and Troubleshooting
This class comprises systems that deduce faults and suggest corrective actions for a malfunctioning
device or process. Medical diagnosis was one of the first knowledge areas to which ES technology
was applied (for example, see Shortliffe 1976), but diagnosis of engineered systems quickly
surpassed medical diagnosis. There are probably more diagnostic applications of ES than any other
type. The diagnostic problem can be stated in the abstract as: given the evidence presenting itself,
what is the underlying problem/reason/cause?
Configuration
Configuration, whereby a solution to a problem is synthesized from a given set of elements related
by a set of constraints, is historically one of the most important expert system applications.
Configuration applications were pioneered by computer companies as a means of facilitating the
manufacture of semi-custom minicomputers (McDermott 1981). The technique has found its way
into use in many different industries, for example, modular home building, manufacturing, and
other problems involving complex engineering design and manufacturing.
Financial Decision Making
The financial services industry has been a vigorous user of expert system techniques. Advisory
programs have been created to assist bankers in determining whether to make loans to businesses
and individuals. Insurance companies have used expert systems to assess the risk presented by the
customer and to determine a price for the insurance. A typical application in the financial markets is
in foreign exchange trading.
Knowledge Publishing
This is a relatively new, but also potentially explosive area. The primary function of the expert
system is to deliver knowledge that is relevant to the user's problem, in the context of the user's
problem. The two most widely distributed expert systems in the world are in this category. The first
is an advisor which counsels a user on appropriate grammatical usage in a text. The second is a tax
advisor that accompanies a tax preparation program and advises the user on tax strategy, tactics, and
individual tax policy.
Design and Manufacturing
These systems assist in the design of physical devices and processes, ranging from high-level
conceptual design of abstract entities all the way to factory floor configuration of manufacturing
processes.
Ans. Database
A database is an organized collection of data. The data are typically organized to model relevant
aspects of reality in a way that supports processes requiring this information, for example,
modeling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.
Knowledge Base
A knowledge base (KB) is a technology used to store complex structured and unstructured
information used by a computer system. The initial use of the term was in connection with expert
systems which were the first knowledge-based systems.
The original use of the term knowledge-base was to describe one of the two sub-systems of a
knowledge-based system. A knowledge-based system consists of a knowledge-base that represents
facts about the world and an inference engine that can reason about those facts and use rules and
other forms of logic to deduce new facts or highlight inconsistencies.
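As a minimal sketch (with hypothetical facts and rules), the inference engine described above can repeatedly apply IF-THEN rules to the facts in the knowledge base until no new facts can be deduced:

    # Minimal forward-chaining sketch; the facts and rules are hypothetical.
    facts = {"customer_is_student", "order_total_over_1000"}
    rules = [
        ({"customer_is_student"}, "eligible_for_student_discount"),
        ({"order_total_over_1000"}, "eligible_for_bulk_discount"),
        ({"eligible_for_student_discount", "eligible_for_bulk_discount"},
         "route_to_manual_approval"),
    ]

    # Inference engine: keep applying rules until no new fact can be deduced.
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # original facts plus the newly deduced ones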
1. A database stores data, for example, personnel data, sales data, etc. As they stand, raw
data are not of much practical value unless they can be transformed into information. For
example, you may be able to analyze the sales data and arrive at purchase patterns, so your
company can leverage that information into profit. Now that data has become information.
Now, the question to ask is: what kind of "expertise" did you apply in transforming the raw data
into useful information? Can you store that "expertise", that "how to", in some place, so that
somebody else or perhaps some automated process can use that "stored knowledge" to do future
analysis? There, you have your knowledge base.
2. A knowledge base is a collection of information based on experience or test cases, compiled
into documents. It relates more to a functional domain and consists of more subjective data.
3. A database is a related collection of information compiled into rows and columns. The data stored is
objective in nature and is used in IT-related applications. Some of the well-known database systems are
Oracle, MS SQL Server, and MS Access.
4. A knowledge base is a compilation of information that is based on facts, research, and life
experiences.
A database is a compilation of information that is just that: data, information about
anything at all; pick a subject and compile information about it, whether true or false.
Knowledge = if one puts his or her hand in fire, it will burn (you have gained knowledge).
Data = there are 100 ways to build a fire, and the list is compiled of data collected from others'
experience.
Example: If we collect the last 10 years of data about flight reservations, the data can give us much
meaningful information, such as trends in reservations. This may give useful information like the
peak time of travel and what kinds of people are traveling in the various classes available
(Economy/Business).
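As a minimal sketch (using a hypothetical list of reservation records with a travel date and class), trend information such as the peak month of travel can be derived from raw reservation data like this:

    from collections import Counter
    from datetime import date

    # Hypothetical raw reservation data (operational records)
    reservations = [
        {"travel_date": date(2013, 12, 21), "travel_class": "Economy"},
        {"travel_date": date(2013, 12, 24), "travel_class": "Business"},
        {"travel_date": date(2014, 6, 2),   "travel_class": "Economy"},
    ]

    # Transform raw data into information: count reservations per month
    per_month = Counter(r["travel_date"].month for r in reservations)
    peak_month, bookings = per_month.most_common(1)[0]
    print("Peak travel month:", peak_month, "with", bookings, "bookings")

    # Count reservations per class to see who travels in which class
    per_class = Counter(r["travel_class"] for r in reservations)
    print(per_class)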
OLTP (On-line Transaction Processing) deals with operational data, which is the data involved in the
operation of a particular system, and it is characterized by a large number of short on-line
transactions (INSERT, UPDATE, and DELETE). The main emphasis for OLTP systems is on
very fast query processing, maintaining data integrity in multi-access environments, and
effectiveness measured by the number of transactions per second. In addition, in an OLTP system the
data is frequently updated and queried; to prevent data redundancy and update
anomalies, the database tables are normalized, which makes write operations on the database
tables more efficient.
Example: In a banking system, you withdraw an amount through an ATM. The account number,
ATM PIN, the amount you are withdrawing, and the balance remaining in the account are operational
data elements.
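As a minimal sketch of an OLTP-style transaction (assuming a hypothetical accounts table, here kept in an in-memory SQLite database), the short, fast updates initiated by an end user at an ATM might look like this:

    import sqlite3

    # Hypothetical accounts table for the ATM withdrawal example.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (acct_no TEXT PRIMARY KEY, balance REAL)")
    conn.execute("INSERT INTO accounts VALUES ('ACC-1001', 5000.0)")

    def withdraw(acct_no, amount):
        with conn:  # wraps the statements in a transaction (commit or rollback)
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE acct_no = ?", (acct_no,)
            ).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE acct_no = ?",
                (amount, acct_no),
            )

    withdraw("ACC-1001", 1500.0)
    print(conn.execute("SELECT balance FROM accounts").fetchone())  # (3500.0,)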
The following summarizes the major differences between OLTP and OLAP system design.
Purpose of data: In an OLTP system, data is used to control and run fundamental business tasks; in an OLAP system, data is used to help with planning, problem solving, and decision support.
Inserts and updates: In an OLTP system, short and fast inserts and updates are initiated by end users; in an OLAP system, periodic long-running batch jobs refresh the data.
The process starts with determining the KDD goals and ends with the implementation of the
discovered knowledge. Then the loop is closed and the active Data Mining part starts (which is
beyond the scope of the process defined here). As a result, changes would have to be made in the
application domain (such as offering different features to mobile phone users in order to reduce
churning). This closes the loop, the effects are then measured on the new data repositories, and the
KDD process is launched again.
Following is a brief description of the nine-step KDD process, starting with
a managerial step:
1. Developing an understanding of the application domain: This is the initial preparatory step. It
prepares the scene for understanding what should be done with the many decisions to follow (about
transformation, algorithms, representation, etc.). The people who are in charge of a KDD project
need to understand and define the goals of the end-user and the environment in which the
knowledge discovery process will take place (including relevant prior knowledge). As the KDD
process proceeds, there may even be a revision of this step.
Having understood the KDD goals, the preprocessing of the data starts, as defined in the next three
steps (note that some of the methods here are similar to Data Mining algorithms, but are used in the
preprocessing context):
2. Selecting and creating a data set on which discovery will be performed. Having defined the
goals, the data that will be used for the knowledge discovery should be determined. This includes
finding out what data is available, obtaining additional data where necessary, and integrating all of
it into one data set, including the attributes that will be considered for the process.
3. Preprocessing and cleansing. In this stage, data reliability is enhanced. It includes data cleaning,
such as handling missing values and removing noise or outliers. There are many methods for this,
ranging from doing nothing to cleansing becoming the major part (in terms of time
consumed) of a KDD project. It may involve complex statistical methods or
using a Data Mining algorithm in this context. For example, if one suspects that a certain attribute is
of insufficient reliability or has many missing values, then this attribute could become the goal of a
supervised data mining algorithm: a prediction model for this attribute is developed, and the
missing values can then be predicted. The extent to which one pays attention to this stage depends on
many factors.
In any case, studying these aspects is important and often revealing in itself regarding enterprise
information systems.
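As a minimal sketch of the cleansing step (using a hypothetical in-memory data set and hypothetical thresholds), handling missing values and removing an obvious outlier could look like this:

    # Minimal preprocessing sketch: handle missing values and drop outliers.
    # The records and thresholds below are hypothetical illustrations.
    records = [
        {"age": 34, "monthly_bill": 58.0},
        {"age": None, "monthly_bill": 61.5},     # missing age
        {"age": 29, "monthly_bill": 9999.0},     # obvious outlier
    ]

    known_ages = [r["age"] for r in records if r["age"] is not None]
    default_age = sum(known_ages) / len(known_ages)   # simple mean imputation

    cleaned = []
    for r in records:
        if r["monthly_bill"] > 1000:     # crude outlier-removal rule
            continue
        cleaned.append({"age": r["age"] if r["age"] is not None else default_age,
                        "monthly_bill": r["monthly_bill"]})

    print(cleaned)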
4. Data transformation. In this stage, the generation of better data for the data mining is prepared
and developed. Methods here include dimension reduction (such as feature selection and extraction
and record sampling), and attribute transformation (such as discretization of numerical attributes
and functional transformation). This step can be crucial for the success of the entire KDD project,
and it is usually very project-specific.
For example, in medical examinations, the quotient of attributes may often be the most important
factor, and not each one by itself. In marketing, we may need to consider effects beyond our control
as well as efforts and temporal issues (such as studying the effect of advertising accumulation).
However, even if we do not use the right transformation at the beginning, we may obtain a
surprising effect that hints to us about the transformation needed (in the next iteration). Thus the
KDD process reflects upon itself and leads to an understanding of the transformation needed.
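As a minimal sketch of the transformation step (with hypothetical values and bin boundaries), discretizing a numerical attribute and deriving a quotient attribute, as mentioned for medical data above, could look like this:

    # Minimal transformation sketch: discretize a numerical attribute and
    # derive a quotient attribute. Values and bin edges are hypothetical.
    def discretize_age(age):
        if age < 30:
            return "young"
        elif age < 60:
            return "middle"
        return "senior"

    patients = [
        {"age": 25, "weight_kg": 70.0, "height_m": 1.80},
        {"age": 47, "weight_kg": 90.0, "height_m": 1.65},
    ]

    for p in patients:
        p["age_group"] = discretize_age(p["age"])
        # A quotient of attributes (here, a BMI-like ratio) may matter more
        # than either attribute alone.
        p["weight_height_ratio"] = p["weight_kg"] / (p["height_m"] ** 2)

    print(patients)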
Having completed the above four steps, the following four steps are related to the Data Mining part,
where the focus is on the algorithmic aspects employed for each project:
5. Choosing the appropriate Data Mining task. We are now ready to decide on which type of
Data Mining to use, for example, classification, regression, or clustering. This mostly depends on
the KDD goals, and also on the previous steps. There are two major goals in Data Mining:
prediction and description. Prediction is often referred to as supervised Data Mining, while
descriptive Data Mining includes the unsupervised and visualization aspects of Data Mining. Most
data mining techniques are based on inductive learning, where a model is constructed explicitly or
implicitly by generalizing from a sufficient number of training examples.
The underlying assumption of the inductive approach is that the trained model is applicable to
future cases. The strategy also takes into account the level of meta-learning for the particular set of
available data.
6. Choosing the Data Mining algorithm. Having the strategy, we now decide on the tactics. This
stage includes selecting the specific method to be used for searching patterns (including multiple
inducers). For example, in considering precision versus understandability, the former is better with
neural networks, while the latter is better with decision trees.
For each strategy of meta-learning there are several possibilities of how it can be accomplished.
Meta-learning focuses on explaining what causes a Data Mining algorithm to be successful or not in
a particular problem.
Thus, this approach attempts to understand the conditions under which a
Data Mining algorithm is most appropriate. Each algorithm has parameters and tactics of learning
(such as ten-fold cross-validation or another division for training and testing).
7. Employing the Data Mining algorithm. Finally the implementation of the Data Mining
algorithm is reached. In this step we might need to employ the algorithm several times until a
satisfactory result is obtained, for instance by tuning the algorithm's control parameters, such as the
minimum number of instances in a single leaf of a decision tree.
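As a minimal sketch of steps 6 and 7 (assuming scikit-learn is installed and substituting a toy data set for real project data), choosing a decision tree, running ten-fold cross-validation, and tuning the minimum number of instances per leaf might look like this:

    # Minimal sketch, assuming scikit-learn is available; the iris data set
    # stands in for the project's own data.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # Tune a control parameter: minimum number of instances in a single leaf.
    for min_leaf in (1, 5, 10):
        clf = DecisionTreeClassifier(min_samples_leaf=min_leaf, random_state=0)
        scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross-validation
        print(min_leaf, round(scores.mean(), 3))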
8. Evaluation. In this stage we evaluate and interpret the mined patterns (rules, reliability etc.),
with respect to the goals defined in the first step. Here we consider the preprocessing steps with
respect to their effect on the Data Mining algorithm results (for example, adding features in Step 4
and repeating from there). This step focuses on the comprehensibility and usefulness of the induced
model. In this step the discovered knowledge is also documented for further usage.
The last step is the usage and overall feedback on the patterns and discovery results obtained by the
Data Mining:
9. Using the discovered knowledge. We are now ready to incorporate the knowledge into another
system for further action. The knowledge becomes active in the sense that we may make changes to
the system and measure the effects. Actually the success of this step determines the effectiveness of
the entire KDD process. There are many challenges in this step, such as losing the laboratory
conditions under which we have operated. For instance, the knowledge was discovered from a
certain static snapshot (usually sample) of the data, but now the data becomes dynamic. Data
structures may change (certain attributes become unavailable), and the data domain may be
modified (such as, an attribute may have a value that was not assumed before).
iii) Display the names of the employees who were hired in the year 2007.
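The surrounding question text and table definition are not included here, but as a minimal sketch, assuming a hypothetical employees table with ename and hire_date columns, this sub-part could be answered with a query like the following (shown via Python's sqlite3):

    import sqlite3

    # Hypothetical employees table; the real schema for this question is not shown.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (ename TEXT, hire_date TEXT)")
    conn.executemany("INSERT INTO employees VALUES (?, ?)",
                     [("Asha", "2007-03-12"), ("Ravi", "2006-11-02")])

    # Display the employee names hired in the year 2007
    rows = conn.execute(
        "SELECT ename FROM employees WHERE strftime('%Y', hire_date) = '2007'"
    ).fetchall()
    print(rows)   # [('Asha',)]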
A data warehouse (DW, DWH), or an enterprise data warehouse (EDW), is a database used
for reporting and data analysis. Integrating data from one or more disparate sources creates a
central repository of data, the data warehouse. Data warehouses store current and historical
data and are used for creating trending reports for senior management, such as annual and
quarterly comparisons.
Subject Oriented: In the data warehouse, data is stored by business subjects, not by applications.
Business subjects differ from enterprise to enterprise; these are the subjects critical for the
enterprise. For a manufacturing company, sales, shipment, and inventory are critical
business subjects.
Integrated: Integration is closely related to subject orientation. Data warehouses must put
data from disparate sources into a consistent format. They must resolve such problems as
naming conflicts and inconsistencies among units of measure. When they achieve this, they
are said to be integrated.
Nonvolatile: Nonvolatile means that, once entered into the warehouse, data should not
change. This is logical because the purpose of a warehouse is to enable you to analyze what
has occurred.
Time Variant: In order to discover trends in business, analysts need large amounts of data.
This is very much in contrast to online transaction processing (OLTP) systems, where
performance requirements demand that historical data be moved to an archive. A data
warehouse's focus on change over time is what is meant by the term time variant.
Data Extraction: This function has to deal with numerous data sources, and one has to employ the
appropriate technique for each data source. Source data may come from different source machines in
diverse data formats. Part of the source data may be in relational database systems, some data may
be on legacy network and hierarchical data models, and many data sources may still be in flat
files. Thus data extraction may become quite complex, so organizations develop in-house
programs to extract data or use outside tools. Most frequently, data warehouse implementation teams
extract the source data into a separate physical environment from which moving the data into the data
warehouse is easier. In this separate environment, one may extract the source data into a
group of flat files, a data-staging relational database, or a combination of both.
Data Transformation: One may perform a number of individual tasks as part of data transformation.
First, the data is cleaned. Cleaning may just be correction of misspellings, or it may deal with providing
default values for missing data elements or eliminating duplicates. Standardization also forms a
large part of data transformation.
Data Loading: Two distinct groups of tasks form the data loading function. When one completes
the design and construction of the data warehouse and goes live for the first time, one does the initial
loading of the data into the data warehouse storage. The initial load moves large volumes of data and uses
up a substantial amount of time. As the data warehouse starts functioning, one may continue to extract
the changes to the source data, transform the data revisions, and feed the incremental data revisions
on an ongoing basis.
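As a minimal sketch of the extract-transform-load flow described above (with a hypothetical flat-file extract and a hypothetical staging/warehouse table), the three functions could be strung together like this:

    import csv, io, sqlite3

    # Hypothetical flat-file extract from a source system (CSV kept in memory here).
    source_csv = io.StringIO(
        "cust_id,city,amount\n101,delhi,250\n102,,300\n101,delhi,250\n")

    # Extract: read the raw records from the flat file
    rows = list(csv.DictReader(source_csv))

    # Transform: default missing city, standardize case, eliminate duplicates
    seen, staged = set(), []
    for r in rows:
        key = (r["cust_id"], r["city"], r["amount"])
        if key in seen:
            continue
        seen.add(key)
        staged.append((int(r["cust_id"]),
                       (r["city"] or "UNKNOWN").title(),
                       float(r["amount"])))

    # Load: move the cleaned rows into a staging/warehouse table
    dw = sqlite3.connect(":memory:")
    dw.execute("CREATE TABLE sales_fact (cust_id INTEGER, city TEXT, amount REAL)")
    dw.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", staged)
    print(dw.execute("SELECT * FROM sales_fact").fetchall())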
3. Data Storage Component: The data storage for the data warehouse is a separate repository
   because the operational systems of the organization support the day-to-day operations only.
   When analysts use the data in the data warehouse for analysis, they need to know that the data
   is stable and that it represents snapshots at specified periods. As they are working with the
   data, the data storage must not be in a state of continual updating. For this reason, data
   warehouses are read-only data repositories. The data may be refreshed yearly, monthly, etc.
   Many data warehouses also employ multidimensional database management.
4. Information Delivery Component: This includes different methods of information delivery for
   the healthcare organization. Ad hoc reports are predefined reports primarily meant
   for novice and casual users. Provision for complex queries, multi-dimensional (MD)
   analysis, and statistical analysis caters to the needs of analysts and power users. EIS is
   meant for senior doctors and high-level managers. Data mining applications are knowledge
   discovery systems where the mining algorithms help discover trends and patterns from
   the usage of data. This helps healthcare organizations maintain the clinical inventory,
   such as surgical tools and medicines, and check the occupancy and vacant bed ratio.
5. Meta Data Component: Metadata in a data warehouse is similar to the data dictionary or the
   data catalog in database management. It keeps information about the logical data
   structures, about the files and addresses, about the indexes, and so on. For example, a data
   mart could be created for the number and names of patients in the general ward for the
   year 2012.
6. Management and Control Component: The management and control component coordinates
   the services and activities within the data warehouse. This component controls the data
   transformation and the data transfer into the data warehouse storage. On the other hand, it
   moderates the information delivery to the users. The management and control component
   interacts with the metadata component to perform its functions.
Ques8. How are data warehousing, data mining, expert system technologies associated with
knowledge management?
Knowledge management refers to the critical issues of organizational adaptation,
survival, and competence in the face of radical, discontinuous environmental change.
Essentially, it embodies organizational processes that seek a synergistic combination of
the data and information processing capacity of information technologies and the creative
and innovative capacity of human beings.
Knowledge Management (KM) refers to a multi-disciplined approach to achieving
organizational objectives by making the best use of knowledge. KM focuses on
processes such as acquiring, creating and sharing knowledge and the cultural and
technical foundations that support them.
Knowledge management is a surprising mix of strategies, tools, and techniques, some of
which are nothing new under the sun. Storytelling, peer-to-peer mentoring, and learning
from mistakes, for example, all have precedents in education, training, and artificial
intelligence practices. Knowledge management makes use of a mixture of techniques
from knowledge-based system design, such as structured knowledge acquisition
strategies from subject matter experts, and from educational technology (e.g., task and job
analysis) to design and develop task support systems. This makes it both easy and
difficult to define what KM is.
At one extreme, KM encompasses everything to do with knowledge. At the other
extreme, it is narrowly defined as an information technology system that dispenses
organizational know-how. KM is in fact both of these and many more. One of the few
areas of consensus in the field is that KM is a highly multidisciplinary field.
Management and coordination of the diverse technology architectures, data architectures,
and system architectures pose knowledge management challenges. Such challenges
result from the need to integrate diverse technologies, computer
programs, and data sources across internal business processes. These challenges are
compounded manifold by the concurrent need to adapt enterprise
architectures to keep up with changes in the external business environment. For most
high-risk and high-return strategic decisions, timely information is often unavailable, as
more and more of such information is external in nature. Also, internal information may
often be hopelessly out of date with respect to evolving strategic needs.
This is where data warehousing, data mining, and expert system technologies play a
role in the area of knowledge management.
A data warehouse is a relational database that is designed for query and analysis rather
than for transaction processing. It usually contains historical data derived from
transaction data, but it can include data from other sources. It separates analysis
workload from transaction workload and enables an organization to consolidate data
from several sources.
In addition to a relational database, a data warehouse environment includes an
extraction, transportation, transformation, and loading (ETL) solution, an online
analytical processing (OLAP) engine, client analysis tools, and other applications that
manage the process of gathering data and delivering it to business users.
Data mining, or knowledge discovery, is the computer-assisted process of digging
through and analyzing enormous sets of data and then extracting the meaning of the
data. Data mining tools predict behaviours and future trends, allowing businesses to
make proactive, knowledge-driven decisions. Data mining tools can answer business
questions that traditionally were too time consuming to resolve. They scour databases
for hidden patterns, finding predictive information that experts may miss because it lies
outside their expectations.
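As a minimal sketch (using a hypothetical list of market-basket transactions rather than a large database), the kind of hidden pattern a data mining tool looks for can be illustrated by counting which pairs of items are bought together most often:

    from collections import Counter
    from itertools import combinations

    # Hypothetical transactions; a real tool would scan a large database.
    transactions = [
        {"bread", "milk", "butter"},
        {"bread", "butter"},
        {"milk", "diapers", "beer"},
        {"bread", "milk", "butter", "diapers"},
    ]

    # Count co-occurring item pairs across all transactions
    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # The most frequent pairs are (very small-scale) "hidden patterns"
    print(pair_counts.most_common(3))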
Data mining derives its name from the similarities between searching for valuable
information in a large database and mining a mountain for a vein of valuable ore. Both
processes require either sifting through an immense amount of material, or intelligently
probing it to find where the value resides.
An expert system is an artificial intelligence based system that converts the
knowledge of an expert in a specific subject into software code. This code can be
merged with other such codes (based on the knowledge of other experts) and used for
answering questions (queries) submitted through a computer. Expert systems typically
consist of three parts: (1) a knowledge base which contains the information acquired by
interviewing experts, and logic rules that govern how that information is applied; (2) an
inference engine that interprets the submitted problem against the rules and logic of
information stored in the knowledge base; and (3) an interface that allows the user to
express the problem in a human language such as English. Expert systems technology
has found application only in areas where information can be reduced to a set of
computational rules, such as insurance underwriting.