Midterm Audcis Reviewer

- Accountants are involved in all stages of the systems development life cycle (SDLC) as users, members of the development team, and auditors.
- As users, accountants must provide a clear picture of problems and needs. As members of the development team, they help ensure appropriate accounting and control considerations are included in system design. As auditors, they evaluate systems to ensure proper controls are incorporated.
- The quality of information systems depends on thorough SDLC activities and accounting input to conceptualize important controls and auditability early in the process.

CHAPTER 5
SYSTEMS DEVELOPMENT & PROGRAM CHANGE ACTIVITIES

SYSTEM DEVELOPMENT PROCESS
- constitutes a set of activities by which organizations obtain IT-based information systems.

PARTICIPANTS IN SYSTEMS DEVELOPMENT

SYSTEMS PROFESSIONALS
- gather and analyze facts about problems with the current system and formulate a solution.
- The product of their efforts is a new information system.
- include systems analysts, systems engineers, database designers, and programmers.

END USERS
- are those for whom the system is built.
- During systems development, systems professionals work with the primary users to obtain an understanding of the users' problems and a clear statement of their needs.
- These include managers, operations personnel from various functional areas (including accountants), and internal auditors.

STAKEHOLDERS
- are individuals who have an interest in the system but are not formal end users.
- These include the internal steering committee that oversees systems development, internal auditors (including IT auditors), and external auditors acting as consultants or serving in the role of internal auditor.
- Their role is to ensure that users' needs are met, that adequate internal controls are designed into the information systems under construction, and that the systems development process itself is properly implemented and controlled.

Why Are Accountants Involved with the SDLC?
- Creation/purchase of an IS consumes significant resources and has financial resource implications.
- The quality of AISs and their output rests directly on the SDLC activities that produce them.

How Are Accountants Involved with the SDLC?
- As users, they must provide a clear picture of their problems/needs.
- Accountants are members of the development team.
- Accountants are involved in systems development as auditors, to ensure the system is designed with appropriate computer audit techniques.

The Role of the Accountant
- Accountants are responsible for the conceptual system, and systems professionals are responsible for the physical system. If important accounting considerations are not conceptualized at this point, they may be overlooked and expose the organization to potential financial loss. Auditability of a system depends in part on its design characteristics.
- Systems Strategy – helps reduce the risk of creating unneeded, unwanted, inefficient, or ineffective systems.
- Conceptual Design – control implications; auditability of the system.
- Systems Selection – economic feasibility.

INFORMATION SYSTEMS ACQUISITION

IN-HOUSE DEVELOPMENT
- Many organizations require systems that are highly tuned to their unique operations. Such firms frequently design their own information systems through in-house systems development activities.
- requires maintaining a full-time systems staff of analysts and programmers who identify user information needs and create custom systems.

COMMERCIAL SYSTEMS
- A popular option is to purchase commercial systems from software vendors. Managements are thus confronted with many competing packages, some with features in common and others with unique features and attributes that they must choose between.

TRENDS IN COMMERCIAL SOFTWARE
- Four factors have contributed to the growth of the commercial software market:
1. LOW COST - the relatively low cost of general commercial software as compared to customized software
2. EMERGENCE OF THE SOFTWARE INDUSTRY - the emergence of industry-specific vendors who target their software to the needs of particular types of businesses
3. GROWING DEMAND FROM BUSINESSES - a growing demand from businesses that are too small to afford an in-house systems development staff
4. DOWNSIZING / DDP IT ENVIRONMENT - the trend toward downsizing organizational units and the move toward distributed data processing has made the commercial software option appealing to larger organizations.

TYPES OF COMMERCIAL SYSTEMS

TURNKEY SYSTEMS
- are completely finished and tested systems that are ready for implementation.
- are usually sold only as compiled program modules, and users have limited ability to customize them to their specific needs.
- These are often general-purpose systems or systems customized to a specific industry.

GENERAL ACCOUNTING SYSTEMS
- designed to serve a wide variety of user needs.
- By mass-producing a standard system, the vendor is able to reduce the unit cost of these systems to a fraction of in-house development costs.

SPECIAL-PURPOSE SYSTEMS
- Some software vendors create special-purpose systems that target selected segments of the economy.

OFFICE AUTOMATION SYSTEMS
- computer systems that improve the productivity of office workers.

BACKBONE SYSTEMS
- provide a basic system structure on which to build. Backbone systems come with all the primary processing modules programmed.

VENDOR-SUPPORTED SYSTEMS
- custom systems that the vendor develops and maintains for the client organization.

ADVANTAGES OF COMMERCIAL SOFTWARE

IMPLEMENTATION TIME
- commercial software can be implemented almost immediately once a need is recognized. The user does not have to wait.

COST
- Since the cost of commercial software is spread across many users, the unit cost is reduced to a fraction of the cost of an in-house developed system.

RELIABILITY
- Most reputable commercial software packages are thoroughly tested before their release to the consumer market. Although no system is certified as being free from errors, commercial software is less likely to have errors than an equivalent in-house system.

DISADVANTAGES OF COMMERCIAL SOFTWARE

INDEPENDENCE
- Purchasing a vendor-supported system makes the firm dependent on the vendor for maintenance.
- This is perhaps the greatest disadvantage of vendor-supported systems.

THE NEED FOR A CUSTOMIZED SYSTEM

- Sometimes, the user’s needs are unique and - allocation, processing, budgeting, informed
complex, and commercially available software is decisions by systems specialists
either too general or too inflexible.
Why Perform Strategic Systems Planning?

1. A plan that changes constantly is better than no


plan at all
2. Strategic planning reduces the crisis component
MAINTENANCE
in systems development
- Business information systems undergo frequent 3. Strategic systems planning provides
changes. If the user’s needs change, it may be authorization control for the SDLC
difficult or even impossible to modify 4. Cost management
commercial software.
PROJECT PLANNING

- allocate resources to individual applications


SYSTEMS DEVELOPMENT LIFE CYCLE within the framework of the strategic plan.
- This involves identifying areas of user needs,
preparing proposals, evaluating each proposal’s
feasibility and contribution to the business plan,
prioritizing individual projects, and scheduling
the work to be done.
- The basic purpose of project planning is to
allocate scarce resources to specific projects.

The product of this phase consists of two formal


documents: the project proposal and the project
schedule

PROJECT PROPOSAL

- provides management with a basis for deciding


whether to proceed with the project.
- The length of the systems development life 1. it summarizes the findings of the study
cycle will vary among business organizations conducted to this point into a general
depending on their industry, competition recommendation for a new or modified
pressure, the degree to which technological system
innovation impacts the company, and the scale 2. the proposal outlines the linkage between
of the project. the objectives of the proposed system and
PHASE I: SYSTEMS PLANNING the business objectives of the firm,
especially those outlined in the IT strategic
- Objective of Systems Planning is to link plan
individual projects or applications to the
strategic objectives of the firm. PROJECT SCHEDULE
- Most firms that take systems planning seriously - represents management’s commitment to the
establish a systems steering committee to project.
provide guidance and review the status of - a budget of the time and costs for all the phases
system projects. of the SDLC
Systems planning occurs at two levels: strategic ➢ AUDITOR’S ROLE: Adequate Systems Planning
systems planning and project planning. Takes Place

STRATEGIC SYSTEMS PLANNING PHASE II: SYSTEMS ANALYSIS

SYSTEMS ANALYSIS – a two-step process:
1. THE SURVEY STEP
2. ANALYSIS OF THE USER'S NEEDS

THE SURVEY STEP

DISADVANTAGES OF SURVEYING THE CURRENT SYSTEM
- Current Physical Tar Pit
• used to describe the tendency on the part of the analyst to be "sucked in" and then "bogged down" by the task of surveying the current dinosaur system.
- Thinking inside the box
• By studying and modeling the old system, the analyst may develop a constrained notion about how the new system should function.
• The result is an improved current system rather than a radically new approach.

ADVANTAGES OF SURVEYING THE CURRENT SYSTEM
- Identifying what aspects of the old system should be kept
- Forcing systems analysts to fully understand the system
- Isolating the root of problem symptoms

GATHERING FACTS
- The facts gathered by the analyst are pieces of data that describe key features, situations, and relationships of the system.

GATHERING FACTS IN THE SURVEY OF THE CURRENT SYSTEM
System facts fall into the following broad classes:
- DATA SOURCES
• These include external entities, such as customers or vendors, as well as internal sources from other departments.
- DATA STORES
• Data stores are the files, databases, accounts, and source documents used in the system.
- DATA PROCESSES
• Processing tasks are manual or computer operations that represent a decision or an action triggered by information.
- DATA FLOWS
• Data flows are represented by the movement of documents and reports between data sources, data stores, processing tasks, and users.
• Data flows can also be represented in Unified Modeling Language (UML) diagrams.
- CONTROLS
• These include both accounting and operational controls and may be manual procedures or computer controls.
- TRANSACTION VOLUMES
• Understanding the characteristics of a system's transaction volume and its rate of growth is an important element in assessing capacity requirements for the new system.
- ERROR RATES
• As a system reaches capacity, error rates increase to an intolerable level.
• Although no system is perfect, the analyst must determine the acceptable error tolerances for the new system.
- RESOURCE COSTS
• The resources used by the current system include the costs of labor, computer time, materials (such as invoices), and direct overhead.
- BOTTLENECK AND REDUNDANT OPERATIONS
• The analyst should note points where data flows come together to form a bottleneck.
• By identifying these problem areas during the survey phase, the analyst can avoid making the same mistakes in the design of the new system.

FACT-GATHERING TECHNIQUES
Systems analysts employ several techniques to gather the previously cited facts.

- OBSERVATION
• Observation involves passively watching the physical procedures of the system.
- TASK PARTICIPATION
• the analyst takes an active role in performing the user's work. This allows the analyst to experience first-hand the problems involved in the operation of the current system.
- PERSONAL INTERVIEWS
• Interviewing is a method of extracting facts about the current system and user perceptions about the requirements for the new system.
• OPEN-ENDED QUESTIONS - allow users to elaborate on the problem as they see it and offer suggestions and recommendations.
• QUESTIONNAIRES - used to ask more specific, detailed questions and to restrict the user's responses.
- REVIEWING KEY DOCUMENTS
• The organization's documents are another source of facts about the system being surveyed.

THE ANALYSIS STEP
- Systems analysis is an intellectual process that is commingled with fact gathering.

SYSTEMS ANALYSIS REPORT
- The event that marks the conclusion of the systems analysis phase is the preparation of a formal systems analysis report.
- This report presents to management or the steering committee the survey findings, the problems identified with the current system, the user's needs, and the requirements of the new system.
- The systems analysis report should establish in clear terms the data sources, users, data files, general processes, data flows, controls, and transaction volume capacity.
- The systems analysis report does not specify the detailed design of the proposed system.

AUDITOR'S ROLE
- The accountant is a stakeholder and therefore should be involved in the analysis of the needs of the proposed system for advanced features.

SYSTEMS DEVELOPMENT ACTIVITIES
- Authorizing development of new systems
- Addressing and documenting user needs
- Technical design phases
- Participation of internal auditors
- Testing program modules before implementing
• Testing individual modules by a team of users, internal audit staff, and systems professionals

PHASE III - CONCEPTUAL SYSTEM DESIGN
- To produce several alternative conceptual systems that satisfy the system requirements identified during systems analysis.

This section describes two approaches to conceptual systems design: the structured approach and the object-oriented approach.

STRUCTURED DESIGN APPROACH (uses DFDs)
- a disciplined way of designing systems from the top down.
- It consists of starting with the "big picture" of the proposed system that is gradually decomposed into more and more detail until it is fully understood.
- the business process under design is usually documented by data flow and structure diagrams.
- The designs should identify all the inputs, outputs, processes, and special features necessary to distinguish one alternative from another.
- Identify/compare all the distinguishing features (inputs, processes, outputs) from one design to another.

THE OBJECT-ORIENTED APPROACH (uses standard components; an iterative approach)
- to build information systems from reusable standard components or objects.
- The benefits of this approach include reduced time and cost for development, maintenance, and testing, and improved user support and flexibility in the development process.

➢ Auditor's Role: as a stakeholder, the auditor at least has an interest in the conceptual design because of its impact on the audit.

PHASE IV – SYSTEM EVALUATION & SELECTION
- a procedure for selecting the one system from the set of alternative conceptual designs that will go to the detailed design phase.
- an optimization process that seeks to identify the best system.
- serves to structure this decision-making process and thereby reduce both uncertainty and the risk of making a poor decision.

The evaluation and selection process involves two steps: perform a detailed feasibility study and perform a cost–benefit analysis.

PERFORM A DETAILED FEASIBILITY STUDY
- Technical Feasibility – whether the system can be developed with existing technology or whether new technology is needed
- Economic Feasibility – availability of funds to complete the project
- Legal Feasibility – identifies any conflicts between the conceptual system and the company's ability to discharge its legal responsibilities
- Operational Feasibility – shows the degree of compatibility between the firm's existing procedures and personnel skills and the operational requirements of the new system
- Schedule Feasibility – the firm's ability to implement the project within an acceptable time

ACRONYM: TELOS

PERFORM A COST-BENEFIT ANALYSIS
1. Identify Costs - one-time costs vs recurring costs
- One-time costs include the initial investment to develop and implement the system.
- Recurring costs include operating and maintenance costs that recur over the life of the system.
2. Identify Benefits - these may be both tangible and intangible.
- Tangible benefits fall into two categories: those that increase revenue and those that reduce costs.
- Intangible benefits are often of overriding importance in information system decisions, but they cannot be easily measured and quantified.
3. Compare Costs and Benefits
- Net Present Value Method: the present value of the costs is deducted from the present value of the benefits over the life of the system. When comparing competing projects, the optimal choice is the project with the greatest net present value. (see the sketch below)
- The Payback Period is a variation of break-even analysis. Payback speed is often a decisive factor. The length of the payback period often takes precedence over other considerations represented by intangible benefits.
- The Break-even Point is reached when total costs equal total benefits.

AUDITOR'S ROLE IN EVALUATION AND SELECTION
- The internal auditor is concerned that the economic feasibility of the proposed system is measured as accurately as possible, with attention to:
1. Escapable Costs
2. Interest Rates
3. One-time & Recurring Costs
4. Realistic Useful Lives
5. Intangible Values

PHASE V – DETAILED DESIGN
- to produce a detailed description of the proposed system that both satisfies the system requirements identified during systems analysis and is in accordance with the conceptual design.

PERFORM A SYSTEM DESIGN WALKTHROUGH
- to ensure that the design is free from conceptual errors that could become programmed into the final system.
- Many firms have formal, structured walkthroughs conducted by a quality assurance group.
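The following is a minimal Python sketch of the net present value and payback comparison described under "Compare Costs and Benefits" above. The discount rate and cash-flow figures are illustrative assumptions, not figures from the text.

# Illustrative cost-benefit comparison for one proposed system design (assumed figures).
def net_present_value(rate, cash_flows):
    """Discount a list of yearly net cash flows (benefits minus costs) back to year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_period(initial_outlay, yearly_net_benefits):
    """Return the year in which cumulative net benefits first cover the initial outlay."""
    cumulative = 0.0
    for year, benefit in enumerate(yearly_net_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_outlay:
            return year
    return None  # the project never breaks even over its useful life

# Year 0 holds the one-time development cost; years 1-5 hold recurring net benefits.
cash_flows = [-250_000, 70_000, 70_000, 70_000, 70_000, 70_000]
print(net_present_value(0.10, cash_flows))       # positive NPV: benefits exceed costs at 10%
print(payback_period(250_000, cash_flows[1:]))   # years until total benefits equal total costs

In practice the same comparison would be run for each competing conceptual design, and the design with the greatest net present value would normally be preferred.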

REVIEW SYSTEM DOCUMENTS
- Inputs & source documents
- Outputs, reports & operational documents
- Normalized data for database tables
- Updated data dictionary
- Processing logic or flowcharts

PHASE VI – APPLICATION PROGRAMMING & TESTING

PROGRAM THE APPLICATION SOFTWARE
1. PROCEDURAL LANGUAGES - often called third-generation languages, including COBOL, FORTRAN, C, and PL/1.
- require the programmer to specify the precise order in which the program logic is executed.
2. EVENT-DRIVEN LANGUAGES
- designed to respond to external actions or "events" that are initiated by the user.
- Microsoft's Visual Basic is an example of an event-driven language. It has a screen-painting feature that greatly facilitates the creation of sophisticated graphical user interfaces (GUIs).
3. OBJECT-ORIENTED LANGUAGES
- Central to achieving the benefits of the object-oriented approach, discussed previously, is developing software using an object-oriented programming (OOP) language such as C++ or Java.

PROGRAMMING THE SYSTEM
- follow a modular approach regardless of the language used
1. Programming efficiency
2. Maintenance efficiency
3. Control

TEST THE APPLICATION SOFTWARE
- All program modules must be thoroughly tested before they are implemented.
1. TESTING METHODOLOGY - identifying programming & logical errors
2. TESTING OFFLINE BEFORE DEPLOYING ONLINE - never underestimate the difference between the testing environment and the actual environment
3. TEST DATA
- Should be retained for reuse
- Serves as a frame of reference for the auditor in designing and evaluating future audit tests (i.e., that the system has not undergone any change)

PHASE VII - SYSTEM IMPLEMENTATION (GO LIVE)
- database structures are created and populated with data, equipment is purchased and installed, employees are trained, the system is documented, and the new system is installed.
- complete engagement of programmers, users, designers, database administrators, and accountants
- Activities in this phase entail extensive costs

TESTING THE ENTIRE SYSTEM

DOCUMENTING THE SYSTEM
- provides the auditor with essential information about how the system works.
1. DESIGNER AND PROGRAMMER DOCUMENTATION
- to debug errors and perform maintenance on the system.
2. OPERATOR DOCUMENTATION
- Computer operators use documentation called a run manual, which describes how to run the system.
3. USER DOCUMENTATION
- Users need documentation describing how to use the system.
- The nature of user documentation will depend on the user's degree of sophistication with computers and technology. Thus, before designing user documentation, the systems professional must assess and classify the user's skill level.
• NOVICES - have little or no experience with computers and are embarrassed to ask questions
• OCCASIONAL USERS - once understood the system but have forgotten some essential commands and procedures
• FREQUENT LIGHT USERS - are familiar with limited aspects of the system. Although functional, they tend not to explore beneath the surface and lack depth of knowledge

• FREQUENT POWER USERS - understand the existing system and will readily adapt to new systems. They are intolerant of detailed instructions that waste their time.
4. USER HANDBOOK
- user documentation often takes the form of a user handbook, as well as online documentation.
5. TUTORIALS
- can be used to train the novice or the occasional user. The success of this technique is based on the tutorial's degree of realism.
6. HELP FEATURES
- The help feature analyzes the context of what the user is doing at the time of the error and provides help with that specific function (or command).

CONVERTING DATABASES
- transfer of data from its current form to the format or medium required by the new system.
PRECAUTIONS:
1. Validation
2. Reconciliation
3. Backup

CONVERTING TO THE NEW SYSTEM
- The process of converting from the old system to the new one is called the cutover.
A system cutover will usually follow one of three approaches: cold turkey, phased, or parallel operation.

COLD TURKEY
- also called the "Big Bang" approach
- When implementing simple systems, this is often the easiest and least costly approach. With more complex systems, it is the riskiest.
- most risky; all at once

PHASED CUTOVER
- begins operating the new system in modules
- by modules; gradual

PARALLEL OPERATION CUTOVER
- involves running the old system and the new system simultaneously for a period of time
- The advantage of parallel cutover is the reduction in risk. By running two systems, the user can reconcile outputs to identify and debug errors before running the new system solo.
- simultaneous; reconciliation

THE AUDITOR'S ROLE IN SYSTEM IMPLEMENTATION
1. Provide Technical Expertise
2. Specify Documentation Standards
3. Verify Control Adequacy & Compliance with SOX

POST-IMPLEMENTATION REVIEW
4. Systems Design Adequacy
5. Accuracy of Time, Cost, and Benefit Estimates

PHASE VIII - SYSTEMS MAINTENANCE
- the formal process by which application programs undergo changes to accommodate changes in user needs.
- It could be extensive.
- Maintenance represents a significant outlay compared to initial development costs.

SYSTEM MAINTENANCE INTERNAL CONTROLS
- Last, longest, and most costly phase of the SDLC
• Up to 80-90% of the entire cost of a system

AUDIT PROCEDURES:
- All maintenance actions should require
• Technical specifications
• Testing
• Documentation updates
• Formal authorizations for changes

CONTROLLING NEW SYSTEMS DEVELOPMENT
- Systems Authorization Activities
- User Specifications Activities
- Technical Design Activities
- Internal Audit Participation
- User Test and Acceptance Procedures

SYSTEMS DEVELOPMENT
Auditing objectives: ensure that

- SDLC activities are applied consistently and in accordance with management's policies
- the system as originally implemented was free from material errors and fraud
- the system was judged to be necessary and justified at various checkpoints throughout the SDLC
- system documentation is sufficiently accurate and complete to facilitate audit and maintenance activities

SYSTEMS DEVELOPMENT INTERNAL CONTROLS

AUDIT PROCEDURES:
- New systems must be authorized.
- Feasibility studies conducted.
- User needs analyzed and addressed.
- Cost-benefit analysis completed.
- Proper documentation completed.
- All program modules thoroughly tested before implementation.
- Checklist of problems was kept.
- Systems documentation complies with organizational requirements.

CONTROLLING SYSTEMS MAINTENANCE
- Maintenance Authorization, Testing & Documentation
- Source Program Library (SPL) Controls – the SPL is where application program source code is stored
- SPL with no controls
- Controlled SPL Management System (SPLMS) environment:
1. Storing programs on the SPL
2. Retrieving programs for maintenance purposes
3. Deleting obsolete programs from the library
4. Documenting program changes to provide an audit trail of the changes
• Password Control
• Separate Test Libraries
• Audit Trail & Management Reports
• Program Version Numbers
• Controlling Access to Maintenance Commands

RECONCILE
- The auditor compares the current program version number in the documentation file with the current version number of the production program.
- The auditor reconciles program maintenance requests, program listings, and program changes to verify the need for and accuracy of program changes.

PROGRAM CHANGES – SYSTEM MAINTENANCE
Auditing objectives: detect any unauthorized program maintenance and determine that
- maintenance procedures protect applications from unauthorized changes
• Reconcile program version numbers
• Confirm maintenance authorization
- applications are free from material errors
• Reconcile the source code
• Review test results
• Retest the program
- program libraries (where programs are stored) are protected from unauthorized access
• Review programmer authority table
• Test authority table

CHAPTER 6
TRANSACTION PROCESSING & FINANCIAL REPORTING SYSTEMS OVERVIEW

FINANCIAL TRANSACTION
- an economic event that affects the assets and equities of the firm, is reflected in its accounts, and is measured in monetary terms
- similar types of transactions are grouped together into three transaction cycles:
1. the expenditure cycle,
2. the conversion cycle, and
3. the revenue cycle.

RELATIONSHIP BETWEEN TRANSACTION CYCLES

EXPENDITURE CYCLE
- time lag between the two due to credit relations with suppliers:
1. physical component (acquisition of goods)
2. financial component (cash disbursements to the supplier)

CONVERSION CYCLE
1. the production system (planning, scheduling, and control of the physical product through the manufacturing process)
2. the cost accounting system (monitors the flow of cost information related to production)

REVENUE CYCLE
- time lag between the two due to credit relations with customers:
1. physical component (sales order processing)
2. financial component (cash receipts)

MANUAL SYSTEM ACCOUNTING RECORDS

SOURCE DOCUMENTS
- used to capture and formalize transaction data needed for transaction processing

PRODUCT DOCUMENTS
- the result of transaction processing

TURNAROUND DOCUMENTS
- a product document of one system that becomes a source document for another system

JOURNALS
- a record of chronological entry
• SPECIAL JOURNALS - specific classes of transactions that occur in high frequency
• GENERAL JOURNAL - nonrecurring, infrequent, and dissimilar transactions

LEDGER
- a book of financial accounts
• GENERAL LEDGER - shows activity for each account listed on the chart of accounts
• SUBSIDIARY LEDGER - shows activity by detail for each account type

Flow of Economic Events into the General Ledger

ACCOUNTING RECORDS IN A COMPUTER-BASED SYSTEM

Example of Tracing an Audit Trail

EXPLANATION OF STEPS IN FIGURE:
1. Compare the AR balance in the balance sheet with the master file AR control account balance.
2. Reconcile the AR control figure with the AR subsidiary account total.
3. Select a sample of update entries made to accounts in the AR subsidiary ledger and trace these to transactions in the sales journal (archive file).
4. From these journal entries, identify source documents that can be pulled from their files and verified. If necessary, confirm these source documents by contacting the customers.

AUDIT TRAIL
- Accountants should be able to trace in both directions.
- Sampling and confirmation are two common techniques.

COMPUTER-BASED SYSTEMS
- The audit trail is less observable in computer-based systems than in traditional manual systems.
- The data entry and computer programs are the physical trail.
- The data are stored in magnetic files.

COMPUTER FILES
➢ Master File - generally contains account data (e.g., general ledger and subsidiary file)
➢ Transaction File - a temporary file containing transactions since the last update
➢ Reference File - contains relatively constant information used in processing (e.g., tax tables, customer addresses)
➢ Archive File - contains past transactions for reference purposes

DOCUMENTATION TECHNIQUES
- Documentation in a CB environment is necessary for many reasons.

Five common documentation techniques:
1. Entity Relationship Diagrams
2. Data Flow Diagrams
3. Document Flowcharts
4. System Flowcharts
5. Program Flowcharts

ENTITY RELATIONSHIP DIAGRAM (ERD)
- is a documentation technique to represent the relationship between entities in a system.

- The REA model version of the ERD is widely used in AIS. REA uses 3 types of entities:
• resources (cash, raw materials)
• events (release of raw materials into the production process)
• agents (inventory control clerk, vendor, production worker)

CARDINALITIES
- represent the numerical mapping between entities:
• one-to-one
• one-to-many
• many-to-many

DATA FLOW DIAGRAMS
- use symbols to represent the processes, data sources, data flows, and entities in a system
- represent the logical elements of the system
- do not represent the physical system

DOCUMENT FLOWCHARTS
- illustrate the relationship among processes and the documents that flow between them
- contain more details than data flow diagrams
- clearly depict the separation of functions in a system

SYMBOL SET FOR DOCUMENT FLOWCHARTS

SYSTEM FLOWCHARTS

- are used to represent the relationship between the key elements--input sources, programs, and output products--of computer systems
- depict the type of media being used (paper, magnetic tape, magnetic disks, and terminals)
- in practice, there is not much difference between document and system flowcharts

SYSTEMS FLOWCHART SYMBOLS

PROGRAM FLOWCHARTS
- illustrate the logic used in programs

PROGRAM FLOWCHART SYMBOLS

MODERN SYSTEMS VERSUS LEGACY SYSTEMS

MODERN SYSTEMS CHARACTERISTICS
- client-server based and process transactions in real time
- use relational database tables
- have a high degree of process integration and data sharing
- some are mainframe based and use batch processing

Some firms employ legacy systems for certain aspects of their data processing.
- Accountants need to understand legacy systems.

LEGACY SYSTEMS CHARACTERISTICS
- mainframe-based applications
- batch oriented
- early legacy systems use flat files for data storage
- later legacy systems use hierarchical and network databases
- data storage systems promote a single-user environment that discourages information integration

UPDATING MASTER FILES: PRIMARY KEYS (PK) AND SECONDARY KEYS (SK)
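The heading above originally pointed to a figure. As a stand-in, here is a minimal Python sketch of posting transactions to master file records located by their primary key; the file layout and field names (account_number, balance) are illustrative assumptions, not an example from the text.

# Illustrative master file keyed by primary key (PK); each transaction carries the PK
# of the record it updates plus the amount to post.
master_file = {
    "AR-1001": {"customer": "Acme Co.", "balance": 1500.00},
    "AR-1002": {"customer": "Beta Ltd.", "balance": 320.00},
}

transactions = [
    {"account_number": "AR-1001", "amount": 250.00},   # PK locates the master record
    {"account_number": "AR-1002", "amount": -120.00},
]

for txn in transactions:
    record = master_file[txn["account_number"]]   # direct lookup on the primary key
    record["balance"] += txn["amount"]             # change the value of the affected field

print(master_file)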

DATABASE BACKUP PROCEDURES
- Destructive updates leave no backup.
- To preserve adequate records, backup procedures must be implemented:
• The master file being updated is copied as a backup.
• A recovery program uses the backup to create a pre-update version of the master file.

COMPUTER-BASED ACCOUNTING SYSTEMS
Two broad classes of systems:
1. batch systems
2. real-time systems

BATCH PROCESSING
- A batch is a group of similar transactions that are accumulated over time and then processed together.
- The transactions must be independent of one another during the time period over which the transactions are accumulated in order for batch processing to be appropriate.
- A time lag exists between the event and the processing.

BATCH PROCESSING/SEQUENTIAL FILE

STEPS IN BATCH PROCESSING/SEQUENTIAL FILE
1. Keystroke - source documents are transcribed by clerks to magnetic tape for processing later
2. Edit Run - identifies clerical errors in the batch and places them into an error file
3. Sort Run - places the transaction file in the same order as the master file using a primary key
4. Update Run - changes the values of the appropriate fields in the master file to reflect the transactions
5. Backup Procedure - the original master file continues to exist and a new master file is created

ADVANTAGES OF BATCH PROCESSING
- Organizations can increase efficiency by grouping large numbers of transactions into batches rather than processing each event separately.
- Batch processing provides control over the transaction process via control figures.

REAL-TIME SYSTEMS
- process transactions individually at the moment the economic event occurs
- have no time lag between the economic event and the processing
- generally require greater resources than batch processing since they require dedicated processing capacity; however, these cost differentials are decreasing

- oftentimes have longer systems development times

CHARACTERISTIC DIFFERENCES BETWEEN BATCH AND REAL-TIME PROCESSING

DATA CODING SCHEMES
• SEQUENTIAL CODES
• BLOCK CODES
• GROUP CODES
• ALPHABETIC CODES
• MNEMONIC CODES

GENERAL LEDGER SYSTEMS
- The General Ledger System acts as a hub connected to other systems.
- It becomes a source of input for other systems.
- Data flow as feedback into the GLS.
- The GLS provides data to the MRS & FRS.

IS FUNCTIONS OF THE GLS
The general ledger system should:
- collect transaction data promptly and accurately.
- classify/code data and accounts.
- validate collected transactions / maintain accounting controls (e.g., equal debits and credits).
- process transaction data.
• post transactions to the proper accounts
• update general ledger accounts and transaction files
• record adjustments to accounts
- store transaction data.
- generate timely financial reports.

Why Do So Many AISs Use Batch Processing?
- AIS processing is characterized by high-volume, independent transactions, such as recording cash receipts from checks received in the mail.
- The processing of such high-volume checks can be done during off-peak computer time.
- This is one reason why batch processing may be done using real-time data collection.

RELATIONSHIP OF THE GLS TO OTHER INFORMATION SUBSYSTEMS

GLS DATABASE

➢ General ledger master file - the principal FRS file, based on the chart of accounts
➢ General ledger history file - used for comparative financial support
➢ Journal voucher file - all journal vouchers of the current period
➢ Journal voucher history file - journal vouchers of past periods, for the audit trail
➢ Responsibility center file - financial data by responsibility centers, for the MRS
➢ Budget master file - budget data by responsibility centers, for the MRS

JOURNAL VOUCHER LAYOUT FOR A GENERAL LEDGER MASTER FILE

FINANCIAL REPORTING PROCESS
1. Capture the transactions.
2. Record in the special journal.
3. Post to the subsidiary ledger.
4. Post to the general ledger.
5. Prepare the unadjusted trial balance.
6. Make adjusting entries.
7. Journalize & post adjusting entries.
8. Prepare the adjusted trial balance.
9. Prepare the financial statements.
10. Journalize & post closing entries.
11. Prepare the post-closing trial balance.

GLS REPORTS

GENERAL LEDGER ANALYSIS
- listing of transactions
- allocation of expenses to cost centers
- comparison of account balances from prior periods
- trial balances

FINANCIAL STATEMENTS
- balance sheet
- income statement
- statement of cash flows

MANAGERIAL REPORTS
- analysis of sales
- analysis of cash
- analysis of receivables

CHART OF ACCOUNTS
- coded listing of accounts

POTENTIAL RISKS IN THE GL/FRS
1. A defective audit trail.
2. Unauthorized access to the general ledger.
3. GL accounts that are out of balance with subsidiary accounts.
4. Incorrect GL account balances because of unauthorized or incorrect journal vouchers.

Other Potential Risks in the GL/FRS
- Improperly prepared journal entries
- Unposted journal entries
- Debits not equal to credits
- Subsidiary accounts not equal to G/L control accounts
- Inappropriate access to the G/L
- Poor audit trail
- Lost or damaged data
- Account balances that are wrong because of unauthorized or incorrect journal vouchers

GL/FRS CONTROL ISSUES
- journal vouchers must be authorized by a manager at the source department

SEGREGATION OF DUTIES - G/L clerks should not:

- have recordkeeping responsibility for special journals or subsidiary ledgers
- prepare journal vouchers
- have custody of physical assets

ACCESS CONTROLS
- Unauthorized access to the G/L can result in errors, fraud, and misrepresentations in the financial statements.
- Sarbanes-Oxley requires controls that limit database access to only authorized individuals.

ACCOUNTING RECORDS
- trace source documents from inception to the financial statements, and vice versa

INDEPENDENT VERIFICATION
- The G/L department reconciles journal vouchers and summaries.
Two important operational reports are used:
• journal voucher listing – details of each journal voucher posted to the G/L
• general ledger change report – the effects of journal voucher postings on G/L accounts

GL/FRS USING DATABASE TECHNOLOGY

ADVANTAGES:
- immediate update and reconciliation
- timely, if not real-time, information

Removes the separation of transaction authorization and processing
- Detailed journal voucher listings and account activity reports are a compensating control

Centralized Access to Accounting Records
- Passwords and authorization tables as controls

HTML: HYPERTEXT MARKUP LANGUAGE
- Format used to produce Web pages
• defines the page layout, fonts, and graphic elements
• used to lay out information for display in an appealing manner, like one sees in magazines and newspapers
• using both text and graphics (including pictures) appeals to users
- Hypertext links to other documents on the Web
• Even more pertinent is HTML's support for hypertext links in text and graphics that enable the reader to 'jump' to another document located anywhere on the World Wide Web.

XML: EXTENSIBLE MARKUP LANGUAGE
- XML is a meta-language for describing markup languages.
- Extensible means that any markup language can be created using XML.
• includes the creation of markup languages capable of storing data in relational form, where tags (formatting commands) are mapped to data values
• can be used to model the data structure of an organization's internal database

COMPARISON OF HTML AND XML DOCUMENTS

XBRL: eXtensible Business Reporting Language
- XBRL is an XML-based language for standardizing methods for preparing, publishing, and exchanging financial information, e.g., financial statements.
- XBRL taxonomies are classification schemes.
Advantages:
Advantages:

• Businesses can offer expanded financial information to all interested parties virtually instantaneously.
• Companies that use XBRL database technology can further speed the process of reporting.
• Consumers import XBRL documents into internal databases and analysis tools to greatly facilitate their decision-making processes.

IMPLICATIONS FOR ACCOUNTING
- AUDIT IMPLICATIONS OF XBRL
• taxonomy creation: an incorrect taxonomy results in invalid mapping that may cause material misrepresentation of financial data
• validation of instance documents: ensure that the appropriate taxonomy and tags have been applied
• audit scope and timeframe: impact on auditor responsibility as a consequence of real-time distribution of financial statements

CHAPTER 7
COMPUTER-ASSISTED AUDIT TOOLS & TECHNIQUES

INTRODUCTION TO INPUT CONTROLS
➢ Be familiar with the classes of transaction input controls used by accounting applications.
➢ Understand the objectives and techniques used to implement processing controls, including run-to-run, operator intervention, and audit trail controls.
➢ Understand the methods used to establish effective output controls for both batch and real-time systems.
➢ Know the difference between the black box and white box approaches to testing.
➢ Be familiar with the key features of the five CAATTs discussed in this chapter.

CLASSES OF INPUT CONTROLS
1. Source document controls
2. Data coding controls
3. Batch controls
4. Validation controls
5. Input error correction
6. Generalized data input systems

SOURCE DOCUMENT CONTROLS
In systems that use physical source documents to initiate transactions, careful control must be exercised over these instruments. Source document fraud can be used to remove assets from the organization. To control against this type of exposure, implement control procedures over source documents to account for each document.
➢ Controls in systems using physical source documents
➢ Source document fraud
➢ To control for the exposure, control procedures are needed over source documents to account for each one:
▪ Use pre-numbered source documents
▪ Use source documents in sequence
▪ Periodically audit source documents

Use Pre-numbered Source Documents
- Source documents should come pre-numbered from the printer with a unique sequential number on each document. This provides an audit trail for tracing transactions through the accounting records.

Use Source Documents in Sequence
- Source documents should be distributed to the users and used in sequence, requiring that adequate physical security be maintained over the source document inventory at the user site. Access to source documents should be limited to authorized persons.

Periodically Audit Source Documents
- The auditor should compare the number of documents used to date with those remaining in inventory plus those voided due to errors.
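A minimal Python sketch of that periodic reconciliation, using assumed document numbers; any document in the pre-numbered series that is neither used, voided, nor still in inventory is flagged as unaccounted for:

# Pre-numbered source documents 1001-1100 were issued to the user department (assumed).
issued = set(range(1001, 1101))

used_to_date = set(range(1001, 1061))            # captured in the accounting records
voided = {1061, 1062}                            # spoiled and cancelled
remaining_in_inventory = set(range(1063, 1101))  # counted at the user site

accounted_for = used_to_date | voided | remaining_in_inventory
missing = issued - accounted_for                 # a gap points to possible source document fraud
print(sorted(missing))                           # [] means every document is accounted for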

DATA CODING CONTROLS
Coding controls are checks on the integrity of data codes used in processing. Three types of errors can corrupt data codes and cause processing errors: transcription errors, single transposition errors, and multiple transposition errors.

Transcription errors:
• Addition errors occur when an extra digit or character is added to the code.
• Truncation errors occur when a digit or character is removed from the end of a code.
• Substitution errors are the replacement of one digit in a code with another.

Two types of transposition errors:
- Single transposition errors occur when two adjacent digits are reversed.
- Multiple transposition errors occur when nonadjacent digits are transposed.

A check digit is a control digit (or digits) added to the data code when it is originally assigned. (illustrated in the sketch below)

Batch controls
are an effective method of managing high volumes of transaction data through a system. Batch control reconciles output produced by the system with the input originally entered into the system.
➢ Method for handling high volumes of transaction data – especially in paper-fed IS
Controlling the batch continues throughout all phases of the system. It assures that:
1. All records in the batch are processed.
2. No records are processed more than once.
3. An audit trail of transactions is created from input through processing to output.
Achieving these objectives requires grouping similar types of input transactions together in batches and then controlling the batches throughout data processing.
- Two documents are used to accomplish this task: a batch transmittal sheet and a batch control log.

Batch Transmittal Sheet
- The transmittal sheet becomes the batch control record and is used to assess the integrity of the batch during processing. The batch transmittal sheet captures relevant information such as:
➢ A unique batch number (serial #)
➢ A batch date
➢ A transaction code
➢ The number of records in the batch
➢ The total dollar value of a financial field
➢ The sum of a unique non-financial field
• Hash total - a simple control technique that uses non-financial data to keep track of the records in a batch. Any key field may be used to calculate a hash total.
• E.g., customer number

Batch Control Log

VALIDATION CONTROLS
Validation controls are intended to detect errors in transaction data before the data are processed. They are most effective when performed as close to the source of the transaction as possible. Some validation procedures require making references against the current master file.
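A minimal Python sketch of the check digit mentioned under data coding controls above, using a common modulus-11 weighting; the weights and the sample code are illustrative assumptions, not a scheme prescribed in the text. A check digit of this kind catches the transcription and transposition errors described earlier:

# Illustrative modulus-11 check digit: weight each digit, sum, and derive one control digit.
def mod11_check_digit(code: str) -> str:
    weights = range(len(code) + 1, 1, -1)              # e.g. 5, 4, 3, 2 for a 4-digit code
    total = sum(int(d) * w for d, w in zip(code, weights))
    remainder = total % 11
    return {0: "0", 1: "X"}.get(remainder, str(11 - remainder))

def is_valid(coded_value: str) -> bool:
    """Recompute the check digit from the base code and compare it to the stored digit."""
    return mod11_check_digit(coded_value[:-1]) == coded_value[-1]

base = "5372"
coded = base + mod11_check_digit(base)                 # digit added when the code is assigned
print(coded, is_valid(coded))                          # original code passes the check
print(is_valid(coded[1] + coded[0] + coded[2:]))       # adjacent transposition -> False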

➢ Intended to detect errors in data before processing
➢ Most effective if performed close to the source of the transaction
➢ Some require referencing a master file

There are three levels of input validation controls:
1. Field Interrogation – involves programmed procedures that examine the characteristics of the data in the field.
• Missing Data Checks – used to examine the contents of a field for the presence of blank spaces.
• Numeric-Alphabetic Data Checks – determine whether the correct form of data is in a field.
• Zero-Value Checks – used to verify that certain fields are filled with zeros.
• Limit Checks – determine if the value in the field exceeds an authorized limit.
• Range Checks – assign upper and lower limits to acceptable data values.
• Validity Checks – compare actual values in a field against known acceptable values.
• Check Digit – identifies keystroke errors in key fields by testing the internal validity of the code.
2. Record Interrogation – procedures that validate the entire record by examining the interrelationship of its field values.
• Reasonableness Checks – determine if a value in one field, which has already passed a limit check and a range check, is reasonable when considered along with other data fields in the record.
• Sign Checks - test to see if the sign of a field is correct for the type of record being processed.
• Sequence Checks - determine if a record is out of order.
3. File Interrogation – its purpose is to ensure that the correct file is being processed by the system.
• Internal Label Checks (tape) - verify that the file processed is the one the program is actually calling for. The system matches the file name and serial number in the header label with the program's file requirements.
• Version Checks – verify that the version of the file processed is correct. The version check compares the version number of the file being processed with the program's requirements.
• Expiration Date Check - prevents a file from being deleted before it expires.

INPUT ERROR CORRECTION
When errors are detected in a batch, they must be corrected and the records resubmitted for reprocessing. This must be a controlled process to ensure that errors are dealt with completely and correctly.
➢ Batch – correct and resubmit
➢ Controls to make sure errors are dealt with completely and accurately

Three common error handling techniques are:
1. Immediate Correction – when a keystroke error or an illogical relationship is detected, the system should halt the data entry procedure until the user corrects the error.
2. Create an Error File – individual errors should be flagged to prevent them from being processed. At the end of the validation procedure, the records flagged as errors are removed from the batch and placed in a temporary error holding file until the errors can be investigated. At each validation point, the system automatically adjusts the batch control totals to reflect the removal of the error records from the batch. Errors detected during processing require careful handling, as these records may already be partially processed. There are two methods for dealing with this complexity:
• reverse the effects of the partially processed transactions and resubmit the corrected records to the data input stage.
• reinsert corrected records at the processing stage in which the error was detected.

3. Reject the Entire Batch – some forms of errors are associated with the entire batch and are not clearly attributable to individual records. The most effective solution in this case is to cease processing and return the entire batch to data control to evaluate, correct, and resubmit. Batch errors are one reason for keeping the size of a batch to a manageable number.

GENERALIZED DATA INPUT SYSTEMS (GDIS)
To achieve a high degree of control and standardization over input validation procedures, some organizations employ a generalized data input system (GDIS), which includes centralized procedures to manage the data input for all of the organization's transaction processing systems. A GDIS eliminates the need to recreate redundant routines for each new application.
➢ Centralized procedures to manage data input for all transaction processing systems
➢ Eliminates the need to create redundant routines for each new application

A GDIS has 3 advantages:
- Improves control by having one common system perform all data validation
- Ensures each AIS application applies a consistent standard of data validation
- Improves systems development efficiency

Major Components of a GDIS
1. Generalized Validation Module (GVM) - performs standard validation routines that are common to many different applications. These routines are customized to an individual application's needs through parameters that specify the program's specific requirements.
2. Validated Data File - the input data that are validated by the GVM are stored on a validated data file. This is a temporary holding file through which validated transactions flow to their respective applications.
3. Error File - error records detected during validation are stored in this file, corrected, and then resubmitted to the GVM.
4. Error Reports - standardized error reports are distributed to users to facilitate error correction.
5. Transaction Log - a permanent record of all validated transactions. It is an important element in the audit trail. However, only successful transactions (those completely processed) should be entered in the journal.

PROCESSING CONTROLS
- programmed procedures designed to ensure that an application's logic is functioning properly.

CLASSES OF PROCESSING CONTROLS
1. Run-to-Run Controls - use batch figures to monitor the batch as it moves from one programmed procedure (run) to another. They ensure that each run in the system processes the batch correctly and completely. Specific run-to-run control types are listed below:
• Recalculate Control Totals - after each major operation in the process and after each run, dollar amount fields, hash totals, and record counts are accumulated and compared to the corresponding values stored in the control record. (see the sketch below)
• Check Transaction Codes - the transaction code of each record in the batch is compared to the transaction code contained in the control record, ensuring that only the correct type of transaction is being processed.
• Sequence Checks - the order of the transaction records in the batch is critical to correct and complete processing. The sequence check control compares the sequence of each record in the batch with the previous record to ensure that proper sorting took place.
2. Operator Intervention Controls - operator intervention increases the potential for human error. Systems that limit operator intervention through operator intervention controls are thus less prone to processing errors. Parameter values and program start points should, to the extent possible, be derived logically or provided to the system through look-up tables.

➢ When the operator manually enters controls into the system
➢ The preference is to derive them by logic or provide them through the system
3. Audit Trail Controls - the preservation of an audit trail is an important objective of process control.
➢ Every transaction must be traceable through each stage of processing.
➢ Each major operation applied to a transaction should be thoroughly documented.
➢ The following are examples of techniques used to preserve audit trails:
• Transaction Logs – every transaction successfully processed by the system should be recorded on a transaction log. There are two reasons for creating a transaction log: it is a permanent record of transactions, and not all of the records in the validated transaction file may be successfully processed; some of these records fail tests in the subsequent processing stages. A transaction log should contain only successful transactions.
• Log of Automatic Transactions – all internally generated transactions must be placed in a transaction log.
• Listing of Automatic Transactions – the responsible end user should receive a detailed list of all internally generated transactions.
• Unique Transaction Identifiers – each transaction processed by the system must be uniquely identified with a transaction number.
• Error Listing – a listing of all error records should go to the appropriate user to support error correction and resubmission.

OUTPUT CONTROLS
➢ ensure that system output is not lost, misdirected, or corrupted and that privacy is not violated. The type of processing method in use influences the choice of controls employed to protect system output.
➢ Batch systems are more susceptible to exposure and require a greater degree of control than real-time systems.

Controlling Batch Systems Output - Batch systems usually produce output in the form of hard copy, which typically requires the involvement of intermediaries. The output is removed from the printer by the computer operator, separated into sheets, separated from other reports, reviewed for correctness by the data control clerk, and then sent through interoffice mail to the end user. Each stage is a point of potential exposure where the output could be reviewed, stolen, copied, or misdirected. When processing or printing goes wrong and produces output that is unacceptable to the end user, the corrupted or partially damaged reports are often discarded in waste cans. Computer criminals have successfully used such waste to achieve their illicit objectives. Techniques for controlling each phase in the output process are employed on a cost-benefit basis that is determined by the sensitivity of the data in the reports.
• Many steps from printer to end user
• Data control clerk check point
• Unacceptable printing should be shredded
• Cost/benefit basis for controls
• Sensitivity of data drives the level of controls

OUTPUT SPOOLING
- Applications are often designed to direct their output to a magnetic disk file rather than to the printer directly. The creation of an output file as an intermediate step in the printing process presents an added exposure. A computer criminal may use this opportunity to perform any of the following unauthorized acts:
RISKS
• Access the output file and change critical data values
• Access the file and change the number of copies to be printed
• Make a copy of the output file so illegal output can be generated
• Destroy the output file before printing takes place
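A minimal Python sketch of the "Recalculate Control Totals" run-to-run check listed under Classes of Processing Controls above; the control-record fields (record count, dollar total, hash total of customer numbers) are illustrative assumptions rather than the text's own layout:

# Control record established when the batch was created (assumed figures).
control_record = {"record_count": 3, "total_amount": 600.00, "hash_total": 3006}

batch = [
    {"customer_no": 1001, "amount": 100.00},
    {"customer_no": 1002, "amount": 250.00},
    {"customer_no": 1003, "amount": 250.00},
]

def recalculate_and_compare(records, control):
    """After each run, re-accumulate the batch figures and compare them to the control record."""
    recalculated = {
        "record_count": len(records),
        "total_amount": round(sum(r["amount"] for r in records), 2),
        "hash_total": sum(r["customer_no"] for r in records),   # non-financial hash total
    }
    return recalculated == control

print(recalculate_and_compare(batch, control_record))      # True: the run kept the batch intact
print(recalculate_and_compare(batch[:-1], control_record))  # False: a record was dropped in this run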

PRINT PROGRAMS
- The print run program produces hard copy output from the output file. Print programs are often complex systems that require operator intervention.
Types of operator intervention:
1. Pausing the print program to load output paper
2. Entering parameters needed by the print run
3. Restarting the print run at a prescribed checkpoint after a printer malfunction
4. Removing printer output from the printer for review and distribution

Print Program Controls
- designed to deal with two types of exposures: the production of unauthorized copies of output, and employee browsing of sensitive data.
• Unauthorized copies of output - one way to control this is to employ output document controls similar to source document controls. The number of copies specified by the output file can be reconciled with the actual number of output documents used.
• Unauthorized browsing of sensitive data by employees - to prevent operators from viewing sensitive output, special multi-part paper can be used, with the top copy colored black to prevent the print from being read.

Bursting - when output reports are removed from the printer, they go to the bursting stage to have their pages separated and collated. The clerk may make an unauthorized copy of the report, remove a page from the report, or read sensitive information. The primary control for this exposure is supervision.

Waste – computer output waste represents a potential exposure. Dispose properly of aborted reports and the carbon copies from multipart paper removed during bursting.

Data Control – the data control group is responsible for verifying the accuracy of computer output before it is distributed to the user. The clerk will review the batch control figures for balance, examine the report body for garbled, illegible, and missing data, and record the receipt of the report in data control's batch control log.

Report Distribution – the primary risks associated with report distribution include reports being lost, stolen, or misdirected in transit to the user. To minimize these risks, the name and address of the user should be printed on the report, an address file of authorized users should be consulted to identify each recipient of the report, and adequate access control should be maintained over the files.
- The reports may be placed in a secure mailbox to which only the user has the key.
- The user may be required to appear in person at the distribution center and sign for the report.
- A security officer or special courier may deliver the report to the user.

End User Controls – output reports should be re-examined for any errors that may have evaded the data control clerk's review. Errors detected by the user should be reported to the appropriate computer services management. A report should be stored in a secure location until its retention period has expired. Factors influencing the length of time a hard copy report is retained include:
- Statutory requirements specified by government agencies.
- The number of copies of the report in existence.
- The existence of magnetic or optical images of reports that can act as permanent backup.
Reports should be destroyed in a manner consistent with the sensitivity of their contents.

Controlling Real-Time Systems Output
• Eliminates intermediaries
• Threats:
- Interception
- Disruption
- Destruction
- Corruption
• Exposures:
- Equipment failure
- Subversive acts
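Returning to the Data Control step above, here is a minimal sketch of the clerk's batch reconciliation, assuming the batch control log supplies a record count and a dollar control total and that the report body carries an amount per line; the field names are illustrative.

    from decimal import Decimal
    from typing import Iterable, Mapping

    def reconcile_batch(report_lines: Iterable[Mapping[str, str]],
                        control_record_count: int,
                        control_total: Decimal) -> list[str]:
        """Compare the report body against the batch control figures.

        Returns a list of exceptions for the data control clerk;
        an empty list means the report balances with the control log.
        """
        exceptions = []
        count = 0
        total = Decimal("0")
        for line in report_lines:
            count += 1
            amount = line.get("amount", "")
            if not amount:  # garbled or missing data in the report body
                exceptions.append(f"line {count}: missing amount field")
                continue
            total += Decimal(amount)
        if count != control_record_count:
            exceptions.append(f"record count {count} != control count {control_record_count}")
        if total != control_total:
            exceptions.append(f"report total {total} != control total {control_total}")
        return exceptions

    # Example usage with made-up figures:
    # reconcile_batch([{"amount": "100.00"}, {"amount": "250.50"}], 2, Decimal("350.50"))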
TESTING COMPUTER APPLICATION CONTROLS

- control-testing techniques provide information about the accuracy and completeness of an application's processes. These tests follow two general approaches:
• Black Box: testing around the computer
• White Box: testing through the computer

Black Box (Around the Computer) Technique – auditors performing black box testing do not rely on a detailed knowledge of the application's internal logic.
➢ They seek to understand the functional characteristics of the application by analyzing flowcharts and interviewing knowledgeable personnel in the client's organization. The auditor tests the application by reconciling production input transactions processed by the application with output results.
➢ The advantage of the black box approach is that the application need not be removed from service and tested directly. This approach is feasible for testing applications that are relatively simple.
➢ Complex applications require a more focused testing approach to provide the auditor with evidence of application integrity.
➢ Appropriately applied to:
• Simple applications
• Relatively low level of risk

White Box (Through the Computer) Technique – relies on an in-depth understanding of the internal logic of the application being tested. Several techniques for testing application logic directly are included.
➢ Uses small volumes of carefully crafted, custom test transactions to verify specific aspects of logic and controls
➢ Allows auditors to conduct precise tests with known outcomes, which can be compared objectively to actual results

White Box Test Methods

Redundancy Tests – determine that an application processes each record only once.

Access Tests – ensure that the application prevents authorized users from unauthorized access to data.

Audit Trail Tests – ensure that the application creates an adequate audit trail, produces complete transaction listings, and generates error files and reports for all exceptions.

Rounding Error Tests – verify the correctness of rounding procedures. Failure to properly account for rounding differences can result in an imbalance between the total (control) interest amount and the sum of the individual interest calculations for each account. Rounding procedures are particularly susceptible to so-called salami frauds, which tend to affect a large number of victims, but the harm to each is immaterial: each victim absorbs one of the small amounts and is unaware of being defrauded. Operating system audit trails and audit software can detect excessive file activity; in the case of a salami fraud, there would be thousands of entries into the computer criminal's personal account that may be detected in this way. (A simple rounding reconciliation is sketched in the example below.)
➢ Monitor activities – excessive ones are serious exceptions; e.g., rounding differences and thousands of entries into a single account for $1 or 1¢
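A minimal Python sketch of the rounding reconciliation behind a rounding error test: interest is posted per account with normal rounding, the postings are summed, and the difference from the control total computed on the full balances is flagged if it exceeds the rounding that could legitimately occur. The tolerance rule, rate, and account data are illustrative assumptions.

    from decimal import Decimal, ROUND_HALF_UP

    def post_interest(balances: dict[str, Decimal], annual_rate: Decimal) -> dict[str, Decimal]:
        """Round each account's monthly interest to the cent, as the application would post it."""
        cents = Decimal("0.01")
        return {acct: (bal * annual_rate / 12).quantize(cents, rounding=ROUND_HALF_UP)
                for acct, bal in balances.items()}

    def rounding_exception(balances: dict[str, Decimal], annual_rate: Decimal):
        """Compare the control total with the sum of the individual rounded postings.

        A salami-style sweep of the rounding slices into one account shows up
        here as a difference larger than one cent per account.
        """
        postings = post_interest(balances, annual_rate)
        control_total = sum(balances.values()) * annual_rate / 12
        difference = abs(sum(postings.values()) - control_total)
        tolerance = Decimal("0.01") * len(balances)  # at most a cent of rounding per account
        return difference if difference > tolerance else None

    # Example usage with made-up balances and a 6% annual rate:
    # rounding_exception({"A-001": Decimal("1000.00"), "A-002": Decimal("2537.41")}, Decimal("0.06"))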
COMPUTER AIDED AUDIT TOOLS AND TECHNIQUES (CAATTs)
1. Test data method
2. Base case system evaluation
3. Tracing
4. Integrated Test Facility [ITF]
5. Parallel simulation
6. Generalized Audit Software (GAS)

TEST DATA

- used to establish application integrity by processing specially prepared sets of input data through production applications that are under review. The results of each test are compared to predetermined expectations to obtain an objective evaluation of application logic and control effectiveness.

Creating Test Data – when creating test data, auditors must prepare a complete set of both valid and invalid transactions. If test data are incomplete, auditors might fail to examine critical branches of application logic and error-checking routines. Test transactions should test every possible input error, logical process, and irregularity.
➢ Uses a "test deck"
• Valid data
• Purposefully selected invalid data
• Every possible: input error, logical process, irregularity
➢ Procedures:
• Predetermined results and expectations
• Run the test deck
• Compare actual results with expectations

TEST DATA: ADVANTAGES AND DISADVANTAGES

ADVANTAGES
1. They employ the white box approach, thus providing explicit evidence
2. Can be employed with minimal disruption to operations
3. They require minimal computer expertise on the part of the auditors

DISADVANTAGES
1. Auditors must rely on IS personnel to obtain a copy of the application for testing
2. Audit evidence is not entirely independent
3. Provides a static picture of application integrity
4. Relatively high cost to implement; auditing inefficiency

BASE CASE SYSTEM EVALUATION (BCSE)

- there are several variants of the test data technique. When the set of test data in use is comprehensive, the technique is called a base case system evaluation (BCSE). BCSE tests are conducted with a set of test transactions containing all possible transaction types. These results are the base case. When subsequent changes to the application occur during maintenance, their effects are evaluated by comparing current results with base case results.
➢ Variant of the test data method
➢ Comprehensive test data
➢ Repetitive testing throughout the SDLC
➢ When the application is modified, subsequent test (new) results can be compared with previous results (base)

TRACING

- performs an electronic walk-through of the application's internal logic. Implementing tracing requires a detailed understanding of the application's internal logic.
➢ Test data technique that takes a step-by-step walk through the application
1. The trace option must be enabled for the application
2. Specific data or types of transactions are created as test data
3. Test data are "traced" through all processing steps of the application, and a listing is produced of all lines of code as executed (variables, results, etc.)
➢ Excellent means of debugging a faulty program

INTEGRATED TEST FACILITY

➢ ITF is an automated technique that allows auditors to test logic and controls during normal operations
➢ Approach:
1. Set up a dummy entity within the application system
2. The system must be able to discriminate between ITF audit module transactions and routine transactions
3. The auditor analyzes ITF results against expected results

PARALLEL SIMULATION

➢ The auditor writes or obtains a copy of a program that simulates key features or processes to be reviewed / tested (a simplified sketch follows the steps below):
1. Auditor gains a thorough understanding of the application under review
2. Auditor identifies those processes and controls critical to the application
3. Auditor creates the simulation using a program or Generalized Audit Software (GAS)
4. Auditor runs the simulated program using selected data and files
5. Auditor evaluates results and reconciles differences
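A minimal sketch of the parallel simulation idea, assuming the process under review is a simple sales-invoice extension (quantity × price less a customer discount); the auditor's simplified version is run over selected production transactions and reconciled with the production output. The field names and discount rule are assumptions for illustration only.

    from decimal import Decimal, ROUND_HALF_UP

    def simulated_invoice_total(txn: dict) -> Decimal:
        """Auditor's simplified re-creation of the key calculation under review."""
        gross = Decimal(txn["quantity"]) * Decimal(txn["unit_price"])
        discount = gross * Decimal(txn.get("discount_rate", "0"))
        return (gross - discount).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

    def reconcile(selected_txns: list[dict], production_totals: dict[str, Decimal]) -> list[str]:
        """Run the simulation over selected production data and list differences to investigate."""
        differences = []
        for txn in selected_txns:
            expected = simulated_invoice_total(txn)
            actual = production_totals.get(txn["invoice_no"])
            if actual != expected:
                differences.append(f"invoice {txn['invoice_no']}: simulated {expected}, production {actual}")
        return differences

    # Example usage with made-up data:
    # reconcile([{"invoice_no": "1001", "quantity": "3", "unit_price": "19.99", "discount_rate": "0.10"}],
    #           {"1001": Decimal("53.97")})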
CHAPTER 8
CAATTs for Data Extraction and Analysis

DATA STRUCTURES

➢ Organization
➢ Access method

[Figure: data structures have two components – data organization (sequential or random) and access method (index methods: indexed sequential, ISAM, indexed random; non-index methods: hashing, pointers), supported by index files and data files]

FILE PROCESSING OPERATIONS
1. Retrieve a record by key
2. Insert a record
3. Update a record
4. Read a file
5. Find the next record
6. Scan a file
7. Delete a record

Flat-file structures

▪ Sequential structure [Figure 8-1]
➢ All records are stored in contiguous storage spaces in a specified sequence (key field)
➢ Sequential files are simple and easy to process
➢ The application reads from the beginning, in sequence
➢ If only a small portion of the file is being processed, this is an inefficient method
➢ Does not permit accessing a record directly
➢ Efficient: 4, 5 – sometimes 3
➢ Inefficient: 1, 2, 6, 7 – usually 3

▪ Indexed structure
• In addition to the data file, there is a separate index file
• The index contains the physical address in the data file of each indexed record

▪ Indexed random file [Figure 8-2]
• Records are created without regard to physical proximity to other related records
• The physical organization of the index file itself may be sequential or random
• Random indexes are easier to maintain; sequential indexes are more difficult
• Advantage over sequential: rapid searches
• Other advantages: processing individual records, efficient use of disk storage
• Efficient: 1, 2, 3, 7
• Inefficient: 4
• Random files are not efficient structures for operations that involve processing a large portion of a file (e.g., a payroll master file)

▪ Indexed Sequential Access Method (ISAM)
• Large files, routine batch processing
• Moderate degree of individual record processing
• Used for files that span cylinders
• Uses a number of indexes, with summarized content
• Access time for a single record is slower than indexed sequential or indexed random
• Disadvantage: does not perform record insertions efficiently – requires physical relocation of all records beyond that point
• Has 3 physical components: indexes, prime data storage area, overflow area [Figure 8-4]
• Might have to search the index, prime data area, and overflow area – slowing down access time
• Integrating overflow records into the prime data area and then reconstructing the indexes reorganizes ISAM files
• Very efficient: 4, 5, 6
• Moderately efficient: 1, 3
• Inefficient: 2, 7

(A small sketch contrasting sequential and indexed access follows this section.)
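A minimal sketch of the difference between sequential and indexed access over the same record file: a scan must read from the beginning, while a separate index of key → byte offset permits direct retrieval of a single record. The fixed-width record layout (8-byte key followed by 24 bytes of data) is an assumption for illustration.

    RECORD_LEN = 32  # fixed-length records: an 8-byte key followed by 24 bytes of data

    def build_index(path: str) -> dict[str, int]:
        """One pass over the data file to build an index of key -> byte offset."""
        index = {}
        with open(path, "rb") as f:
            offset = 0
            while chunk := f.read(RECORD_LEN):
                key = chunk[:8].decode().rstrip()
                index[key] = offset
                offset += RECORD_LEN
        return index

    def read_direct(path: str, index: dict[str, int], key: str) -> bytes:
        """Indexed (random) access: seek straight to the record's address."""
        with open(path, "rb") as f:
            f.seek(index[key])
            return f.read(RECORD_LEN)

    def read_sequential(path: str, key: str):
        """Sequential access: read records in stored order until the key is found."""
        with open(path, "rb") as f:
            while chunk := f.read(RECORD_LEN):
                if chunk[:8].decode().rstrip() == key:
                    return chunk
        return None

The index costs one full pass to build (and must be maintained on inserts and deletes), which is why indexed structures pay off for single-record retrieval but not for reading an entire file.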
[Figure: evolution of organization / access methods – legacy-system structures (sequential, ISAM, random) evolving toward DBMS approaches]

[Figure: which structures are efficient and inefficient for accessing single records versus accessing entire files]

HASHING STRUCTURE

➢ Employs an algorithm to convert the primary key into a physical record storage address [Figure 8-5]
• No separate index is necessary
• Advantage: access speed
• Disadvantages:
  ❑ Inefficient use of storage
  ❑ Different keys may create the same address (a collision – see the sketch after this section)
• Efficient: 1, 2, 3, 6
• Inefficient: 4, 5, 7

POINTER STRUCTURE

➢ Stores the address (pointer) of a related record in a field with each data record [Figure 8-6]
• Records are stored randomly
• Pointers provide connections b/w records
• Pointers may also provide links of records b/w files [Figure 8-7]
• Types of pointers [Figure 8-8]:
  ❑ Physical address – the actual disk storage location
    • Advantage: access speed
    • Disadvantage: if the related record moves, the pointer must be changed; and w/o a logical reference, a pointer could be lost, causing the referenced record to be lost
  ❑ Relative address – the relative position in the file (e.g., the 135th record)
    • Must be manipulated to convert it to a physical address
  ❑ Logical address – the primary key of the related record
    • The key value is converted by hashing to a physical address
• Efficient: 1, 2, 3, 6
• Inefficient: 4, 5, 7
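A minimal sketch of a hashing structure: an algorithm converts the primary key directly into a storage address, so no separate index is needed, but two different keys can hash to the same address (the collision noted above). The table size, hash choice, and overflow chaining are illustrative assumptions.

    TABLE_SLOTS = 997  # number of physical slots, chosen arbitrarily for the sketch

    def home_address(primary_key: str) -> int:
        """Convert the primary key to a physical slot number (the hashing algorithm)."""
        return sum(primary_key.encode()) % TABLE_SLOTS

    def store(table: dict, key: str, record: dict) -> None:
        """Place the record at its hashed address; colliding keys share a slot (overflow chain)."""
        table.setdefault(home_address(key), []).append((key, record))

    def fetch(table: dict, key: str):
        """Direct retrieval: hash the key, then search only that slot's short chain."""
        for stored_key, record in table.get(home_address(key), []):
            if stored_key == key:
                return record
        return None

    # Example: keys "1043" and "4031" contain the same characters, so this simple hash
    # sends them to the same address – a collision resolved by the overflow chain.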
DATABASE STRUCTURES

➢ Hierarchical & network structures [Figure 8-9]
  ▪ Use explicit linkages b/w records to establish relationships
  ▪ Figure 8-9 is an M:N example
➢ Relational structure
  ▪ Uses implicit linkages b/w records to establish relationships: foreign keys matched to primary keys

Record #3 of the INVOICE file has a "foreign key" for the related CUSTOMER record (i.e., for this transaction, to whom the merchandise was sold), which is the primary key in the CUSTOMER file. That same record (#3) also has a foreign key for the INVENTORY record (i.e., for this transaction, the item sold on that INVOICE to that CUSTOMER). Thus the foreign keys help to build a composite picture of the transaction or event. See Figure 8-10 for another example. (A small sketch of this linkage appears at the end of this section.)

NOTE: In this example, it is assumed that only one item of INVENTORY is sold on an INVOICE. Obviously, there are other scenarios, which would be represented differently than the one chosen here.

The indexed sequential file structure uses an index in conjunction with a sequential file organization, which allows both direct access to individual records and batch processing of the entire file. Multiple indexes can be used to create a cross-reference called an inverted list that allows even more flexible access to data [Figure 8-11].

The major difference between these two approaches is the degree of process integration and data sharing that can be achieved. Two-dimensional flat files exist as independent data structures that are not linked logically or physically to other files. Database models were designed to support flat-file systems already in place, while allowing the organization to move to new levels of data integration.

▪ User views
  • The data a particular user needs to achieve his/her assigned tasks
  • A single view, or a view defined without user input, leads to problems in meeting the diverse needs of the enterprise
  • Trend today: capture data in sufficient detail and diversity to sustain multiple user views
  • User views MUST be consolidated into a single "logical view" or schema
  • Data in the logical view MUST be normalized
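Before the design steps that follow, here is a minimal sketch of the relational linkage described above: the INVOICE row carries foreign keys for CUSTOMER and INVENTORY, and joining on those keys builds the composite picture of the sale. Table contents are made up, and (as in the note above) one inventory item per invoice is assumed.

    # Each "table" is a dict keyed on its primary key, as in a simple relational model.
    CUSTOMER = {"C3": {"name": "Acme Hardware", "city": "Quezon City"}}
    INVENTORY = {"I7": {"description": "Hammer", "unit_price": 150.00}}
    INVOICE = {3: {"customer_fk": "C3", "inventory_fk": "I7", "quantity": 2}}

    def composite_view(invoice_no: int) -> dict:
        """Join INVOICE to CUSTOMER and INVENTORY through the embedded foreign keys."""
        inv = INVOICE[invoice_no]
        customer = CUSTOMER[inv["customer_fk"]]   # to whom the merchandise was sold
        item = INVENTORY[inv["inventory_fk"]]     # the item sold on this invoice
        return {
            "invoice_no": invoice_no,
            "customer": customer["name"],
            "item": item["description"],
            "amount": inv["quantity"] * item["unit_price"],
        }

    # composite_view(3) -> {'invoice_no': 3, 'customer': 'Acme Hardware', 'item': 'Hammer', 'amount': 300.0}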
▪ Creating views
  • Designing the output reports, documents, and input screens needed by users or groups
  • Physical documents help the designer understand relationships among the data
  • 3 user views: Table 8-2, Figure 8-12, Table 8-3
  • Then apply normalization principles to the conceptual user views to design the database tables

▪ Importance of data normalization
  • Critical to the success of the DBMS
  • Effective design in grouping data
  • Several levels: 1NF, 2NF, 3NF, etc.
  • Un-normalized data suffers from:
    - Insertion anomalies
    - Deletion anomalies
    - Update anomalies
  • One or more of these anomalies will exist in tables below 3NF

▪ Normalization process (a small sketch follows this section)
  • Un-normalized data [Table 8-4]
  • Normalization eliminates the three anomalies if:
    - All non-key attributes are dependent on the primary key
    - There are no partial dependencies (on part of the primary key)
    - There are no transitive dependencies; non-key attributes are not dependent on other non-key attributes
  • "Split" tables are linked via embedded "foreign keys"
  • Normalized database table examples: Figures 8-13, 8-14

▪ Creating physical tables

▪ Query function

You might also like
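A minimal sketch of the normalization step described above: a repeating, un-normalized sales record is split into separate tables so that every non-key attribute depends only on its own primary key, and the split tables are re-linked through embedded foreign keys. The attributes are illustrative, not the ones in Table 8-4.

    # Un-normalized data: customer and item facts repeat on every sales line,
    # which is what causes insertion, deletion, and update anomalies.
    unnormalized = [
        {"invoice": 1, "cust_no": "C1", "cust_name": "Acme", "item_no": "I7", "item_desc": "Hammer", "qty": 2},
        {"invoice": 2, "cust_no": "C1", "cust_name": "Acme", "item_no": "I9", "item_desc": "Saw", "qty": 1},
    ]

    def normalize(rows: list) -> dict:
        """Split into CUSTOMER, ITEM, and SALES tables linked by embedded foreign keys."""
        customers, items, sales = {}, {}, {}
        for row in rows:
            customers[row["cust_no"]] = {"cust_name": row["cust_name"]}
            items[row["item_no"]] = {"item_desc": row["item_desc"]}
            sales[row["invoice"]] = {"cust_no_fk": row["cust_no"],   # embedded foreign keys
                                     "item_no_fk": row["item_no"],
                                     "qty": row["qty"]}
        return {"CUSTOMER": customers, "ITEM": items, "SALES": sales}

    # After the split, changing Acme's name is a single update to CUSTOMER,
    # and an item can be added to ITEM before it ever appears on an invoice.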