Notes For Finals
1. flat-file model
2. database model
1. FLAT-FILE APPROACH
• are data files that contain records with no structured relationships to other files.
• is most often associated with so-called legacy systems.
2. DATABASE APPROACH
• special software system that is programmed to know which data elements each user is
authorized to access.
The traditional problems associated with the flat-file approach may be overcome through:
a. Elimination of Data Storage Problem
b. Elimination of Data Update Problem
c. Elimination of Currency Problem
d. Elimination of Task-Data Dependency Problem
TYPICAL FEATURES:
a. Program development - contains application development software used to create
applications that access the database.
b. Backup and recovery - makes backup copies of the physical database.
c. Database usage reporting - captures statistics on what data are being used.
d. Database access - permits authorized users, both formal and informal, to access the database.
FORMAL ACCESS:
1) Data Definition Language (DDL)
• programming language used to define the database to the DBMS.
• Internal view - lowest level of representation, which is one step removed from the physical
database; describes the structures of data records, the linkages between files, and the
physical arrangement and sequence of records in a file.
2) Data Manipulation Language (DML)
• proprietary programming language that a particular DBMS uses to retrieve, process, and
store data.
DBMS Operation - illustrates how the DBMS and user applications work together.
INFORMAL ACCESS:
3) Query Language
• A query is an ad hoc access methodology for extracting information from a database.
• Structured Query Language (SQL) - standard query language for both mainframe and
microcomputer DBMSs.
▪ a fourth-generation, nonprocedural language.
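As a minimal sketch of SQL's nonprocedural style, the following Python snippet uses the standard-library sqlite3 module as a stand-in DBMS; the customers table and its columns are illustrative, not from the notes.

```python
import sqlite3

# An in-memory SQLite database stands in for the DBMS; the "customers"
# table and its sample rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Alice", 1200.0), ("Bob", 350.0)])

# Nonprocedural: the query states WHAT is wanted, not HOW to retrieve it.
rows = conn.execute(
    "SELECT name FROM customers WHERE balance > 500"
).fetchall()
print(rows)  # [('Alice',)]
```

The user never specifies an access path or a loop over records; the DBMS decides how to satisfy the query.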
3. DATABASE ADMINISTRATOR
• is responsible for managing the database resource. The sharing of a common database
by multiple users requires organization, coordination, rules, and guidelines to protect the
integrity of the database.
Another important function of the DBA is the creation and maintenance of the data dictionary.
The data dictionary describes every data element in the database.
4. PHYSICAL DATABASE
• lowest level of the database and the only level that exists in physical form.
Data Structures - the bricks and mortar of the database.
- allow records to be located, stored, and retrieved, and enable movement from
one record to another.
1. Data Organization - the way records are physically arranged on the secondary storage
device. This may be either sequential or random.
➢ sequential files are stored in contiguous locations that occupy a specified area of
disk space.
➢ random files are stored without regard for their physical relationship to other
records of the same file.
2. Data Access Methods – the technique used to locate records and to navigate through the
database.
5. DBMS MODELS
Database Terminologies:
• Data Attribute/Field - a single item of data, such as customer’s name, account balance,
or address.
• Entity - a database representation of an individual resource, event, or agent about which
we choose to collect data.
• Record Type (Table or File) - a grouping of the data attributes that logically define an
entity.
• Database - set of record types that an organization needs to support its business
processes.
• Associations - Record types that constitute a database exist in relation to other record
types.
Three basic record associations are:
a. one-to-one
b. one-to-many
c. many-to-many
Hierarchical Model
▪ The Information Management System (IMS) is the most prevalent example of a
hierarchical database.
▪ constructed of sets that describe the relationship between two linked files. Each set
contains a parent and a child.
▪ Siblings - files at the same level with the same parent.
▪ This structure is also called a tree structure.
Relational Model
▪ Table - a normalized array of data that is similar, but not precisely equivalent, to a record
in a flat-file system.
CHAPTER 5
APPLICATION CONTROLS - are programmed procedures designed to deal with potential exposures that
threaten specific applications.
Three broad categories:
1. input controls
2. processing controls
3. output controls
1. Input controls - designed to ensure that transactions entering the system are valid,
accurate, and complete.
Data input procedures can either be:
A. source document-triggered (batch)
B. direct input (real time)
Source document input requires human involvement and is prone to clerical errors.
Control Procedures:
a. Use Pre-numbered Source Documents - come prenumbered from the printer with a unique
sequential number on each document.
b. Use Source Documents in Sequence - distributed to the users and used in sequence.
c. Periodically Audit Source Documents - identify missing source documents.
Data Coding Controls - are checks on the integrity of data codes used in processing.
Check-digit technique (steps):
1. Assign weights.
2. Sum the products.
3. Divide by the modulus.
4. Subtract the remainder from the modulus.
5. Add the check digit to the original code.
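The five steps above can be sketched in Python. Modulus 11 with weights (5, 4, 3, 2) follows the common textbook example; the code value 5372 is illustrative.

```python
def modulus11_check_digit(code: str, weights=(5, 4, 3, 2)) -> int:
    # Steps 1-2: assign a weight to each digit and sum the products.
    total = sum(int(d) * w for d, w in zip(code, weights))
    # Step 3: divide by the modulus (11) and keep the remainder.
    remainder = total % 11
    # Step 4: subtract the remainder from the modulus.
    # (The final % 11 maps a remainder of 0 to a check digit of 0.)
    return (11 - remainder) % 11

code = "5372"   # 5*5 + 3*4 + 7*3 + 2*2 = 62; 62 % 11 = 7; 11 - 7 = 4
coded = code + str(modulus11_check_digit(code))  # Step 5: append the digit
print(coded)  # 53724
```

Re-running the same computation on a keyed-in code and comparing check digits then catches most transcription and transposition errors.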
Batch controls - an effective method of managing high volumes of transaction data through a system.
Two documents:
1. batch transmittal sheet
2. batch control log
Hash totals - a simple control technique that uses nonfinancial data to keep track of the records in a
batch.
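A minimal sketch of a hash total, using hypothetical account numbers as the nonfinancial field:

```python
# The batch and its field names are illustrative. Account numbers carry no
# financial meaning, but summing them lets us detect lost, added, or
# altered records between two points in processing.
batch = [
    {"account": 1001, "amount": 250.00},
    {"account": 1002, "amount": 175.50},
    {"account": 1003, "amount": 90.25},
]

hash_total_before = sum(rec["account"] for rec in batch)  # 3006

# ... the batch moves through processing steps ...

hash_total_after = sum(rec["account"] for rec in batch)
assert hash_total_before == hash_total_after, "record(s) lost or changed in transit"
```

The total itself is meaningless; only a mismatch between the two totals matters.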
Validation Controls - are intended to detect errors in transaction data before the data are processed.
Three Levels of Input Validation Controls:
1. Field interrogation - programmed procedures that examine the characteristics of the data in the field.
types of field interrogation:
a. Missing data checks - examine the contents of a field for the presence of blank spaces.
b. Numeric-alphabetic data checks - determine whether the correct form of data is in a field.
c. Zero-value checks - used to verify that certain fields are filled with zeros.
d. Limit checks - determine if the value in the field exceeds an authorized limit.
e. Range checks - assign upper and lower limits to acceptable data values.
f. Validity checks - compare actual values in a field against known acceptable values
g. Check-digit controls - identify keystroke errors in key fields by testing the internal validity
of the code.
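Several of the field interrogation tests above can be sketched as one routine; the parameter names, limits, and error labels here are illustrative, not from the notes.

```python
def field_checks(field, low, high, valid_codes):
    """Return a list of error labels for a single input field."""
    errors = []
    if field.strip() == "":              # a. missing data check
        return ["missing"]
    if not field.isdigit():              # b. numeric-alphabetic data check
        return ["non-numeric"]
    value = int(field)
    if value > high:                     # d. limit check
        errors.append("over limit")
    if not (low <= value <= high):       # e. range check
        errors.append("out of range")
    if field not in valid_codes:         # f. validity check
        errors.append("invalid code")
    return errors
```

For example, `field_checks("120", 10, 100, {"50"})` trips the limit, range, and validity tests at once, while a clean field returns an empty list.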
2. Record interrogation - validates the entire record by examining the interrelationship of its field
values.
Typical Tests:
a. Reasonableness checks - determine if a value in one field, which has already passed a
limit check and a range check, is reasonable when considered along with other data fields
in the record.
b. Sign checks are tests to see if the sign of a field is correct for the type of record being
processed.
c. Sequence checks are used to determine if a record is out of order.
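A minimal sketch of the sign and sequence checks, assuming illustrative record layouts (a "CR" credit record must carry a negative amount, and batches are keyed on a hypothetical recno field):

```python
def sign_check(record):
    # For this hypothetical layout, credit ("CR") records must be negative
    # and all other record types non-negative.
    if record["type"] == "CR":
        return record["amount"] < 0
    return record["amount"] >= 0

def sequence_check(batch):
    # Returns False if any record in the batch is out of order.
    keys = [rec["recno"] for rec in batch]
    return keys == sorted(keys)
```

Both tests look across a record or a run of records, rather than at a single field in isolation.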
3. File interrogation - ensures that the correct file is being processed by the system.
*Master files - contain permanent records of the firm.
a. Internal label checks - verify that the file processed is the one the program is actually
calling for.
b. Version checks - verify that the version of the file being processed is correct.
c. Expiration date check - prevents a file from being deleted before it expires.
Input Error Correction - when errors are detected in a batch, they must be corrected and the records
resubmitted for reprocessing.
Three common error handling techniques:
(1) correct immediately
(2) create an error file
(3) reject the entire batch.
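Technique (2) can be sketched as splitting a batch into clean records and an error file that is held for correction and resubmission; the validation rule and field name here are illustrative.

```python
def split_batch(batch):
    # Clean records continue through processing; failed records are
    # diverted to an error file for correction and resubmission.
    clean, error_file = [], []
    for rec in batch:
        if rec.get("amount", 0) > 0:     # hypothetical validation rule
            clean.append(rec)
        else:
            error_file.append(rec)
    return clean, error_file
```

Unlike technique (3), this lets the valid portion of the batch proceed on schedule while only the bad records wait.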
OUTPUT CONTROLS - ensure that system output is not lost, misdirected, or corrupted and that privacy is
not violated.
Examples of privacy exposures:
o disclosure of trade secrets
o patents pending
o marketing research results
o patient medical records
Output Spooling – directs output to a magnetic disk file rather than directly to the printer.
Backlog - a buildup of output jobs that can cause a bottleneck, which adversely affects the throughput of the system.
Control testing techniques - provide information about the accuracy and completeness of an application’s
processes.
Two General Approaches:
(1) the black box (around the computer) approach
(2) the white box (through the computer) approach.
1. BLACK-BOX APPROACH - do not rely on a detailed knowledge of the application’s internal logic.
2. WHITE-BOX APPROACH - relies on an in-depth understanding of the internal logic of the
application being tested.
CHAPTER 11
ERP Core Applications - applications that operationally support the day-to-day activities of the business.
Core applications are also called online transaction processing (OLTP) applications.
Online analytical processing (OLAP) - includes decision support, modeling, information retrieval, ad hoc
reporting/analysis, and what-if analysis.
• data warehouse - database constructed for quick searching, retrieval, ad hoc queries, and ease
of use.
A. Big Bang Approach - more ambitious and riskier. It attempts to switch operations from the
old legacy systems to the new system in a single event that implements the ERP across the
entire company.
B. Phased-In Approach - a popular alternative. It is particularly suited to diversified organizations
whose units do not share common processes and data.