
Republic of the Philippines

UNIVERSITY OF EASTERN PHILIPPINES


COLLEGE OF BUSINESS ADMINISTRATION
Bachelor of Science in Accountancy

COMPUTER-ASSISTED AUDIT
TOOLS AND TECHNIQUES
(WRITTEN REPORT)

PREPARED BY:

ESTILLERO, DEVI P.
PALIS, CHARLOTTE
TAN, HONEY JAYSIERY A.

2024
IT APPLICATION CONTROLS
Management and auditors are required under SOX to consider IT application controls
relevant to financial reporting. Application controls are associated with specific applications
such as payroll, purchases, and cash disbursements systems, and fall into three broad
categories: input controls, processing controls, and output controls.

INPUT CONTROLS
Also known as edits or validation controls, input controls are programmed
procedures that perform tests on transaction data to ensure that they are free from errors
before they are processed.

• Validation in a Real-Time System - Edit controls in real-time systems are placed at
the data collection stage to monitor data as they are entered from terminals.
• Validation in a Batch-Direct Access System - Batch systems assemble data into
transaction files, where they are temporarily held for subsequent processing.
• Validation in Batch Sequential File System - This technique requires that the
transaction records being validated must first be sorted in the same order as the
master file being updated. Validating at the data input stage in this case may require
considerable additional processing. Therefore, as a practical matter, each processing
module performs validation procedures prior to updating the master file record.

Input controls fall into three categories:

1. Field Interrogation - involves programmed procedures that examine the
characteristics of the data in the field.

• Check Digit - A check digit is a control digit (or digits) that is added to the data
code when it is originally assigned. This allows the integrity of the code to be
established during subsequent processing. The check digit can be located
anywhere in the code, as a prefix, a suffix, or embedded someplace in the
middle.

TRANSCRIPTION ERRORS
• Addition errors occur when an extra digit or character is added to the code.
• Truncation errors occur when a digit or character is dropped from a code.
• Substitution errors are the replacement of one digit in a code with another.

TRANSPOSITION ERRORS
• Single transposition errors occur when two adjacent digits are reversed.
• Multiple transposition errors occur when nonadjacent digits are transposed.

• Missing Data Check - are used to examine the contents of a field for the
presence of blank spaces. When the validation program detects a blank where
it expects to see a data value, this will be interpreted as an error.
• Numeric – Alphabetic Check - This edit identifies when data in a particular
field are in the wrong form.
• Limit Check - Limit checks determine if the value in the field exceeds an
authorized limit.
• Range Check – tests whether the entered amount falls between a
predetermined lower and upper limit.
• Validity Check - A validity check compares actual field values against known
acceptable values. This control is used to verify such things as transaction
codes, state abbreviations, or employee job skill codes. If the value in the field
does not match one of the acceptable values, the record is flagged as an error.
This is a frequently used control in cash disbursement systems.
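To make the field-interrogation edits above concrete, here is a minimal Python sketch. The function names, the modulus-11 weighting scheme for the check digit, and all sample values are illustrative assumptions, not taken from any particular audit package:

```python
# Sketch of common field-interrogation edits (illustrative implementations).

def check_digit_mod11(code: str) -> bool:
    """Verify a code whose last character is a modulus-11 check digit.
    Weights 2, 3, 4, ... are applied right-to-left over the base digits;
    a computed value of 10 is conventionally written as 'X'."""
    base, check = code[:-1], code[-1]
    total = sum(int(d) * w for d, w in zip(reversed(base), range(2, 2 + len(base))))
    expected = (11 - total % 11) % 11
    return check == ("X" if expected == 10 else str(expected))

def missing_data_check(field: str) -> bool:
    return field.strip() != ""          # a blank where data is expected is an error

def numeric_check(field: str) -> bool:
    return field.isdigit()              # data in the wrong form fails

def limit_check(value: float, limit: float) -> bool:
    return value <= limit               # value must not exceed the authorized limit

def range_check(value: float, lo: float, hi: float) -> bool:
    return lo <= value <= hi            # value must fall between preset bounds

def validity_check(field: str, acceptable: set) -> bool:
    return field in acceptable          # e.g. valid state abbreviations or vendor codes
```

Note that the check digit catches the error classes listed earlier: for the valid code `62480`, transposing the first two digits to `26480` changes the weighted sum, so the check fails.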
2. Record Interrogation – validates the entire record by examining the
interrelationship of its field values. Some typical tests are discussed next.

• Reasonableness Check - This control determines if a value in one
field, which has already passed a limit check and a range check, is reasonable
when considered along with other data fields in the record.
• Sign Check - Sign checks verify that the sign of a field is correct for the type
of record being processed.
• Sequence Check - Sequence checks are used to determine if a record is out of order.
This control is used in batch systems that use sequential master files. In such
systems, the transaction file being processed must be sorted in the same order
(on the primary key) as the corresponding master file. This requirement is critical
to the processing logic of the update program. Therefore, before each transaction
record is processed, its sequence is verified relative to the previous record
processed.
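The sequence check described above can be sketched in a few lines of Python. The record layout and field name are hypothetical:

```python
# Minimal sequence check: before each record is processed, its primary key
# is compared with the key of the previously processed record.

def sequence_check(transaction_file, key="account_no"):
    """Return the records that are out of order on the primary key."""
    out_of_order, prev = [], None
    for rec in transaction_file:
        if prev is not None and rec[key] < prev:
            out_of_order.append(rec)
        prev = rec[key]
    return out_of_order

txns = [{"account_no": 100}, {"account_no": 102}, {"account_no": 101}]
flagged = sequence_check(txns)   # the third record follows a higher key
```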

3. File Interrogation - These controls are particularly important for protecting master
files, which contain permanent data such as accounts receivable, accounts payable,
and inventory. The purpose of file interrogation is to ensure that the correct file is being
processed by the system.

• Internal and External Label Checks – verify that the file being processed is
the one the program is actually calling for. Data files stored on magnetic tapes
are secured offline in a tape library (sometimes called a tape silo). This
storage device employs a tape drive and many storage slots that hold the
physical tape cartridges. When a tape is called for, a robotic device reads the
external barcode labels on the tape cartridges, selects the desired tape, and
automatically loads it into the tape reader. Magnetic tapes and disks also have
internal labels, also called header labels, that identify the physical storage
device as well as the data files it contains.
• Version Checks - are used to verify that the version of the file being processed
is the correct one. In a grandfather-father-son (GFS) backup system, many
versions of master files and transactions may exist. The version check
compares the version number of the file being processed with the program’s
requirements. An expiration date check prevents a file from being deleted
before it expires.

PROCESSING CONTROLS
Are programmed procedures designed to ensure that an application’s logic is
functioning properly. They are divided into three categories: run-to-run controls, operator
intervention controls, and audit trail controls.

1. Run-to-run Controls – are designed to monitor the batch as it moves from one run to
another. This is accomplished through batch control data that are used for reconciling
the output produced by each run with batch control data created at the data input stage.
Run-to-run controls ensure that:

• All records in the batch are processed.
• No records are processed more than once.
• A transaction audit trail is created from the input stage, throughout the processing
runs, and to the output stage of the system.
Documents used to capture and communicate data:

• Batch Transmittal Sheet
• Batch Control Log

Run-to-run Controls in a Revenue Cycle System:

1. Data Input
2. Accounts receivable update
3. Inventory update
4. Output
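The run-to-run reconciliation can be sketched as follows. The record layout, field names, and control figures are hypothetical, and the hash total here is simply a sum of account numbers kept purely for control purposes:

```python
# Illustrative run-to-run reconciliation: control figures captured at the
# data input stage are re-derived after each processing run and compared.

def batch_controls(records):
    """Record count, financial control total, and a hash total of account numbers."""
    return {
        "record_count": len(records),
        "financial_total": sum(r["amount"] for r in records),
        # A hash total is a sum with no financial meaning, used only for control.
        "hash_total": sum(r["account_no"] for r in records),
    }

def reconcile(input_controls, run_output):
    """True if a run's output still matches the controls created at data input."""
    return batch_controls(run_output) == input_controls

batch = [{"account_no": 1001, "amount": 250.00},
         {"account_no": 1002, "amount": 100.00}]
controls = batch_controls(batch)   # created at the data input stage
```

A dropped record changes the record count and both totals, and a record processed twice inflates them, so either deviation surfaces when each run's output is reconciled against the input-stage controls.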

2. Operator Intervention Controls - Operator intervention increases the potential for
human error. Systems that limit operator intervention through operator intervention
controls are thus less prone to processing errors. Although it may be impossible to
eliminate operator involvement completely, parameter values and program start points
should, to the extent possible, be derived logically or provided to the system through
look-up tables.
3. Audit Trail Controls - In an accounting system, every transaction must be traceable
through each stage of processing from its economic source to its presentation in
financial statements. In an automated environment, the audit trail can become
fragmented and difficult to follow. It thus becomes critical that each major operation
applied to a transaction be thoroughly documented.

Examples of techniques used to preserve audit trails in computer-based accounting
systems:

• Transaction Logs – serve as a journal for every transaction successfully processed
by the system. There are two reasons for creating a transaction log: (1) the transaction
log is a permanent record of transactions, and (2) not all of the records in the validated
transaction file may be successfully processed. Some of these records may fail tests
in the subsequent processing stages.
• Log of Automatic Transactions - All internally generated transactions must be placed
in a transaction log.
• Listing of Automatic Transactions - To maintain control over automatic transactions
processed by the system, the responsible end user should receive a detailed listing of
internally generated transactions.
• Unique Transaction Identifiers - Each transaction processed by the system must be
uniquely identified with a transaction number. This is the only practical means of tracing
a particular transaction through a database of thousands or even millions of records.
• Error Listing - A listing of all error records should go to the appropriate user to support
error correction and resubmission.
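The audit trail techniques above can be sketched together: each transaction receives a unique identifier, successful transactions go to the transaction log, and failures go to an error listing for correction and resubmission. The class and field names are illustrative:

```python
# Sketch of audit trail controls: unique transaction identifiers, a
# transaction log (journal), and an error listing.
import itertools

class TransactionLog:
    def __init__(self):
        self._ids = itertools.count(1)   # unique transaction numbers
        self.log = []                    # journal of successfully processed transactions
        self.errors = []                 # error listing for correction and resubmission

    def post(self, record, validate):
        txn_id = next(self._ids)
        entry = {"txn_id": txn_id, **record}
        (self.log if validate(record) else self.errors).append(entry)
        return txn_id

log = TransactionLog()
log.post({"amount": 100}, validate=lambda r: r["amount"] > 0)  # logged
log.post({"amount": -5},  validate=lambda r: r["amount"] > 0)  # error listing
```

Because every record carries its transaction number whether it succeeds or fails, a particular transaction can be traced through the system even among millions of records.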

OUTPUT CONTROLS
Output controls ensure that system output is not lost, misdirected, or corrupted and that privacy is not
violated. Exposures of this sort can cause serious disruptions to operations and may result in
financial losses to a firm.

Controlling Batch Systems - Batch systems usually produce output in the form of hard copy,
which typically requires the involvement of intermediaries in its production and distribution.
• Output Spooling. In large-scale data-processing operations, output devices such as
line printers can become backlogged with many programs simultaneously demanding
these limited resources. This backlog can cause a bottleneck, which adversely affects
the throughput of the system. To ease this burden, applications are often designed to
direct their output to a magnetic disk file rather than to the printer directly. This is called
output spooling.
• Print Programs. When the printer becomes available, the print run program produces
hard copy output from the output file. Print programs are often complex systems that
require operator intervention. Print program controls are designed to deal with two
types of exposures presented by this environment: (1) the production of
unauthorized copies of output and (2) employee browsing of sensitive data. To
prevent operators from viewing sensitive output, special multipart paper can be used,
with the top copy colored black to prevent the print from being read.

Four common types of operator actions follow:

o Pausing the print program to load the correct type of output documents (check stocks,
invoices, or other special forms).
o Entering parameters needed by the print run, such as the number of copies to be
printed.
o Restarting the print run at a prescribed checkpoint after a printer malfunction.
o Removing printed output from the printer for review and distribution.
• Bursting. When output reports are removed from the printer, they go to the bursting
stage to have their pages separated and collated. The concern here is that the bursting
clerk may make an unauthorized copy of the report, remove a page from the report, or
read sensitive information.
• Waste. Computer output waste represents a potential risk. It is important to properly
dispose of aborted reports and the carbon copies from multipart paper removed during
bursting. Computer waste is also a source of technical data, such as passwords and
authority tables, which a perpetrator may use to access the firm’s data files.
• Data Control. In some organizations, the data control group is responsible for verifying
the accuracy of computer output before it is distributed to the user. Normally, the data
control clerk will review the batch control figures for balance; examine the report body
for garbled, illegible, and missing data; and record the receipt of the report in data
control’s batch control log.
• Report Distribution. The primary risks associated with report distribution include
reports being lost, stolen, or misdirected in transit to the user. A number of control
measures can minimize these exposures.
• End User Controls. Once in the hands of the user, output reports should be
reexamined for any errors that may have evaded the data control clerk’s review. Users
are in a far better position to identify subtle errors in reports that are not disclosed by
an imbalance in control totals. Once a report has served its purpose, it should be stored
in a secure location until its retention period has expired. When the retention date has
passed, reports should be destroyed in a manner consistent with the sensitivity of their
contents. Highly sensitive reports should be shredded.

Controlling Real-Time Systems Output - Real-time systems direct their output to the user’s
computer screen, terminal, or printer. This method of distribution eliminates the various
intermediaries in the journey from the computer center to the user and thus reduces many of
the exposures previously discussed. The primary threat to real-time output is the interception,
disruption, destruction, or corruption of the output message as it passes along the
communications link. This threat comes from two types of exposures:

• exposures from equipment failure; and
• exposures from subversive acts, whereby a computer criminal intercepts the output
message transmitted between the sender and the receiver.

TESTING COMPUTER APPLICATION CONTROLS
This section examines several techniques for auditing computer applications. Control
testing techniques provide information about the accuracy and completeness of an
application’s processes.
• Black-Box Approach
The black-box approach (also called auditing around the computer) does not require
the auditor to obtain a detailed knowledge of the application’s internal logic. Instead,
auditors analyze flowcharts and interview knowledgeable personnel in the client’s
organization to understand the functional characteristics of the application. One
advantage of this technique is that the application need not be removed from service
and tested directly. Black-box testing is feasible for applications that are relatively
simple, with inputs and outputs that are easily reconciled.

• White-Box Approach
The white-box approach (also called auditing through the computer) requires the
auditor to obtain an in-depth understanding of the internal logic of the application being
tested so that he or she may test internal controls directly. White-box techniques use
small numbers of specially created test transactions to verify specific aspects of an
application’s logic and controls. In this way, auditors are able to conduct precise tests,
with known variables, and obtain results that they can compare against objectively
calculated results.

Some of the more common types of tests of controls include the following:
• Access tests verify that individuals, programmed procedures, or messages (such as
electronic data interchange [EDI] transmissions) attempting to access a system are
authentic and valid. Access tests include verifications of user IDs, passwords, valid
vendor codes, and user authority tables.
• Validity tests ensure that the system processes only data values that conform to
specified tolerances. Audit tests would include designing data for range tests, field
tests, limit tests, and reasonableness tests. Validity tests also apply to transaction
approvals, such as verifying that credit checks and AP three-way-matches are properly
performed by the application.
• Accuracy tests ensure that mathematical calculations are accurate and posted to
the correct accounts. Examples include recalculations of control totals and
reconciliations of transaction postings to subsidiary ledgers.
• Completeness tests identify missing data within a single record and/or entire
records missing from a batch. The types of tests performed are field tests, record
sequence tests, and recalculation of hash totals and financial control totals.
• Redundancy tests determine that an application processes each record only once.
Redundancy tests include reviewing record counts and recalculating hash totals and
financial control totals.
• Audit trail tests ensure that the application creates an adequate audit trail. Tests
include obtaining evidence that the application records all transactions in a transaction
log (journal), posts data values to the appropriate accounts, produces complete
transaction listings, and generates error files and reports for all exceptions.
• Rounding error tests verify the correctness of rounding procedures. Financial
systems that calculate interest payments on bank accounts or charges on mortgages
and other loans employ special rounding error applications. Rounding programs are
particularly susceptible to salami frauds. Salami frauds tend to affect a large number
of victims, but the harm to each is immaterial. This type of fraud takes its name from
the analogy of slicing a large salami (the fraud objective) into many thin pieces.
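A rounding error test checks that fractions of a cent are carried forward and accounted for rather than silently dropped, since a dropped remainder is exactly the leak a salami fraud diverts. The sketch below is a minimal illustration; the balances, the rate, and the function name are assumptions:

```python
# Sketch of the rounding control a salami fraud subverts: interest is computed
# at full precision, each posting is rounded to the cent, and the accumulated
# rounding residue is posted to a designated account, not pocketed.
from decimal import Decimal, ROUND_HALF_UP

def post_interest(balances, rate):
    exact = [b * rate for b in balances]
    posted = [e.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) for e in exact]
    residue = sum(exact) - sum(posted)   # must be accounted for somewhere
    return posted, residue

balances = [Decimal("100.333"), Decimal("200.333"), Decimal("300.333")]
posted, residue = post_interest(balances, Decimal("0.05"))
```

An audit test recomputes the exact total and verifies that the posted amounts plus the residue reconcile to it; any shortfall indicates the slices are going somewhere else.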

COMPUTER-AIDED AUDIT TOOLS AND TECHNIQUES (CAATT) FOR TESTING CONTROLS
To illustrate how application controls are tested, this section describes five CAATT
approaches: the test data method, base case system evaluation (BCSE), tracing, the
integrated test facility (ITF), and parallel simulation.

1. Test Data Method is used to establish application integrity by processing specially
prepared sets of input data through production applications that are under review. The
results of each test are compared to predetermined expectations to obtain an objective
evaluation of application logic and control effectiveness.

Creating Test Data
When creating test data, auditors must prepare a complete set of both valid and invalid
transactions. If test data are incomplete, auditors might fail to examine critical branches of
application logic and error-checking routines. Test transactions should test every possible
input error, logical process, and irregularity.
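A test deck along these lines can be sketched in Python. The edit routine here is a stand-in for the production application under review, and the field names, limits, and messages are all illustrative assumptions:

```python
# Hypothetical test deck for the test data method: valid and invalid
# transactions paired with predetermined expected results.

def payroll_edit(txn):
    """Stand-in for the production input-validation routine under review."""
    if not txn["emp_id"].isdigit():
        return "REJECT: numeric-alphabetic check"
    if not (0 < txn["hours"] <= 80):
        return "REJECT: limit check"
    return "ACCEPT"

test_deck = [
    ({"emp_id": "1234", "hours": 40},  "ACCEPT"),                            # valid record
    ({"emp_id": "12A4", "hours": 40},  "REJECT: numeric-alphabetic check"),  # bad field form
    ({"emp_id": "1234", "hours": 120}, "REJECT: limit check"),               # exceeds limit
]

# Run the deck and compare actual results to predetermined expectations.
results = [(payroll_edit(txn), expected) for txn, expected in test_deck]
```

Any mismatch between an actual and an expected result points to a logic or control defect in the routine being tested.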

2. Base Case System Evaluation (BCSE) is a variant of the test data method in which
comprehensive test data are processed repeatedly until a consistent and valid base
case (result) is obtained. When the application is modified, subsequent (new) test
results can be compared with previous (base case) results.
3. Tracing is another type of test data technique that performs a step-by-step electronic
walk-through of the application’s internal logic. The tracing procedure involves three steps:

1. The application under review must undergo a special compilation to activate the trace
option.
2. Specific transactions or types of transactions are created as test data.
3. The test data transactions are traced through all processing stages of the program,
and a listing is produced of all programmed instructions that were executed during the
test.
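The three steps above can be mimicked in Python, where `sys.settrace` plays the role of the special trace-option compilation: it records every program line executed while a test transaction is processed. The `approve` routine and its parameters are illustrative:

```python
# Rough analogue of tracing: instrument the program so that a listing of
# executed instructions is produced for each test data transaction.
import sys

executed_lines = []

def tracer(frame, event, arg):
    if event == "line":
        executed_lines.append(frame.f_lineno)   # record each executed line
    return tracer

def approve(amount, limit):
    if amount > limit:       # which branch runs depends on the test transaction
        return "REJECT"
    return "ACCEPT"

sys.settrace(tracer)                 # step 1: activate the "trace option"
result = approve(500, 1000)          # steps 2-3: process a test transaction
sys.settrace(None)
# executed_lines now lists the lines this transaction exercised, showing
# that the REJECT branch was never reached.
```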

Advantages of Test Data Techniques
• They provide the auditor with explicit evidence concerning application functions.
• Test data runs can be employed with only minimal disruption to the organization’s
operations.
• They require only minimal computer expertise on the part of auditors.

Disadvantages of Test Data Techniques
• Auditors must rely on computer services personnel to obtain a copy of the application
for test purposes.
• They provide a static picture of application integrity at a single point in time. They do
not provide a convenient means of gathering evidence about ongoing application
functionality.
• Relatively high cost of implementation, which results in audit inefficiency.

4. Integrated Test Facility (ITF)

• An automated technique that enables the auditor to test an application’s logic and
controls during its normal operation by setting up a dummy entity within the application
system.
• ITF consists of one or more audit modules designed into the application during the
systems development process; these modules discriminate (differentiate) between ITF
transactions and routine transactions.
• The auditor analyzes ITF results against expected results.
Advantages of Integrated Test Facility
• Supports ongoing monitoring of controls as specified by the COSO control framework.
• Applications can be economically tested without disrupting the user’s operations and
without the intervention of computer services personnel.

Disadvantages of Integrated Test Facility
• The primary disadvantage of ITF is the potential for corrupting the organization’s data
files with test data.

This problem is remedied in two ways:
• Adjusting entries may be processed to remove the effects of ITF transactions from
the general ledger.
• Account balances or data files can be scanned by special software that removes the
ITF transactions.
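The discriminating audit module at the heart of ITF can be sketched as follows. The dummy entity code and record layout are illustrative conventions, not from any specific system:

```python
# Sketch of ITF transaction screening: an embedded audit module separates
# transactions posted to the auditor's dummy entity from routine production
# transactions, so production files are not corrupted by test data.

ITF_ENTITY = "TEST-99"   # dummy company/customer set up by the auditor

def route(txn, production_file, itf_file):
    """Audit module: discriminate between ITF and routine transactions."""
    (itf_file if txn["entity"] == ITF_ENTITY else production_file).append(txn)

production, itf = [], []
for txn in [{"entity": "CUST-01", "amount": 500},     # routine transaction
            {"entity": ITF_ENTITY, "amount": 123}]:   # auditor's test transaction
    route(txn, production, itf)
# ITF results are held apart for comparison against expected results.
```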

5. Parallel Simulation
• The auditor uses auditor-controlled software (a simulation program) to perform
operations parallel to the client’s software, using the same data files.
• The results obtained from the simulation are reconciled with the results of the original
production run to establish a basis for making inferences about the quality of
application processes and controls.
• It is the best technique for verifying calculations (depreciation, interest, taxes,
payroll, etc.).
Steps in performing Parallel Simulation:
1. The auditor must first gain a thorough understanding of the application under review.
2. The auditor must then identify those processes and controls in the application that are
critical to the audit. These are the processes to be simulated.
3. The auditor creates the simulation using a 4GL or generalized audit software (GAS).
4. The auditor runs the simulation program using selected production transactions and
master files to produce a set of results.
5. Finally, the auditor evaluates and reconciles the test results with the production results
produced in a previous run.
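The five steps above can be sketched for a straight-line depreciation calculation. The master-file records and the production figures are hypothetical, and a real simulation would be written in a 4GL or generalized audit software rather than Python:

```python
# Minimal parallel-simulation sketch: an auditor-written routine recomputes
# straight-line depreciation from the same master-file data and is reconciled
# against the production run's figures.

def simulate_depreciation(asset):
    """Auditor-controlled reimplementation of the calculation under review."""
    return round((asset["cost"] - asset["salvage"]) / asset["life_years"], 2)

master_file = [
    {"asset_id": "A1", "cost": 12000, "salvage": 2000, "life_years": 5},
    {"asset_id": "A2", "cost": 9000,  "salvage": 0,    "life_years": 3},
]
production_output = {"A1": 2000.00, "A2": 3000.00}   # figures from the client's run

# Reconcile: any asset where simulation and production disagree is an exception.
exceptions = {
    a["asset_id"]: (production_output[a["asset_id"]], simulate_depreciation(a))
    for a in master_file
    if simulate_depreciation(a) != production_output[a["asset_id"]]
}
```

An empty exceptions report supports the integrity of the production calculation; non-empty results must be evaluated, since they may reflect either the crudeness of the simulation or a real deficiency.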

Simulation programs are usually less complex than the production applications they represent.
Because simulations contain only the application processes, calculations, and controls
relevant to specific audit objectives, the auditor must carefully evaluate differences between
test results and production results.

Differences in output results occur for two reasons:
• the inherent crudeness of the simulation program
• real deficiencies in the application’s processes or controls, which are made apparent
by the simulation program.
