Ojo Podo
<Iteration/ Master> Test Plan
Version <1.0>
[Note: The following template is provided for use with the Rational Unified Process. Text enclosed in square
brackets and displayed in blue italics (style=InfoBlue) is included to provide guidance to the author and should
be deleted before publishing the document. A paragraph entered following this style will automatically be set to
normal (style=Body Text).]
[To customize automatic fields in Microsoft Word (which display a gray background when selected), select
File>Properties and replace the Title, Subject and Company fields with the appropriate information for this
document. After closing the dialog, automatic fields may be updated throughout the document by selecting
Edit>Select All (or Ctrl-A) and pressing F9, or simply click on the field and press F9. This must be done
separately for Headers and Footers. Alt-F9 will toggle between displaying the field names and the field
contents. See Word help for more information on working with fields.]
Revision History
Date Version Description Author
24/3/2012 1.0 First draft Christian C.
Table of Contents
1. Introduction
1.1 Purpose
1.2 Scope
1.3 Intended Audience
1.4 Document Terminology and Acronyms
1.5 References
1.6 Document Structure
5. Test Approach
5.1 Initial Test-Idea Catalogs and other reference sources
5.2 Testing Techniques and Types
5.2.1 Data and Database Integrity Testing
5.2.2 Function Testing
5.2.3 Business Cycle Testing
5.2.4 User Interface Testing
5.2.5 Performance Profiling
5.2.6 Load Testing
5.2.7 Stress Testing
5.2.8 Volume Testing
5.2.9 Security and Access Control Testing
5.2.10 Failover and Recovery Testing
5.2.11 Configuration Testing
5.2.12 Installation Testing
7. Deliverables
7.1 Test Evaluation Summaries
8. Testing Workflow
9. Environmental Needs
9.1 Base System Hardware
9.2 Base Software Elements in the Test Environment
9.3 Productivity and Support Tools
9.4 Test Environment Configurations
1. Introduction
1.1 Purpose
Ojo Podo is an application that will be used to check the similarity of academic paper documents, helping with the plagiarism-checking process. The software test department will carry out the testing of this software.
The system will do the following (an illustrative automated check of the upload-and-check flow is sketched after this list):
Provide users with menus, prompts, and error messages offering choices
Handle the upload of documents to be checked
Handle categorization
Store files
Handle adding or removing search categories
Direct users to download links
Back up data regularly
Run on IIS
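As an illustration of how the test team might exercise these functions, the sketch below uploads a document and requests a similarity check over HTTP. The endpoint paths, field names, and response shape are assumptions for the sketch only, since the Ojo Podo interface is not specified in this plan.

```python
# Illustrative only: endpoint paths, field names, and the JSON shape below are
# assumptions for this sketch, not the documented Ojo Podo API.
import requests

BASE_URL = "http://localhost/ojopodo"   # assumed local IIS deployment

def upload_and_check(path, category):
    """Upload a document and request a similarity check (hypothetical API)."""
    with open(path, "rb") as doc:
        resp = requests.post(
            f"{BASE_URL}/upload",                  # assumed upload endpoint
            files={"document": doc},
            data={"category": category},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()                             # assumed JSON result

if __name__ == "__main__":
    result = upload_and_check("sample_paper.docx", "computer-science")
    # Expected (assumed) result shape: {"similarity": <0..100>, "matches": [...]}
    assert 0 <= result["similarity"] <= 100, "similarity score out of range"
    print("similarity:", result["similarity"])
```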
1.2 Scope
This system can handle user input from an input box as well as via single-file or bundled-file upload. The system will provide menu management and error messages to guide the user.
1.3 Intended Audience
[Provide a brief description of the audience for whom you are writing the Test Plan. This helps readers of your
document identify whether it is a document intended for their use, and helps prevent the document from being
used inappropriately.
Note: Document style and content often alters in relation to the intended audience.
This section should only be about three to five paragraphs in length.]
1.4 Document Terminology and Acronyms
Acronyms and terms used in this document are defined in the project glossary.
[This subsection provides the definitions of any terms, acronyms, and abbreviations required to properly
interpret the Test Plan. Avoid listing items that are generally applicable to the project as a whole and that are
already defined in the project’s Glossary. Include a reference to the project’s Glossary in the References
section.]
1.5 References
[This subsection provides a list of the documents referenced elsewhere within the Test Plan. Identify each
document by title, version (or report number if applicable), date, and publishing organization or original
author. Avoid listing documents that are influential but not directly referenced. Specify the sources from which
the “official versions” of the references can be obtained, such as intranet UNC names or document reference
codes. This information may be provided by reference to an appendix or to another document.]
1.6 Document Structure
[This subsection outlines what the rest of the Test Plan contains and gives an introduction to how the rest of the
document is organized. This section may be eliminated if a Table of Contents is used.]
2. Evaluation Mission and Test Motivation
[Provide an overview of the mission and motivation for the testing that will be conducted in this iteration.]
2.1 Background
[Provide a brief description of the background surrounding why the test effort defined by this Test Plan will be
undertaken. Include information such as the key problem being solved, the major benefits of the solution, the
planned architecture of the solution, and a brief history of the project. Where this information is defined in
other documents, you can include references to those other more detailed documents if appropriate. This
section should only be about three to five paragraphs in length.]
2.2 Evaluation Mission
[Provide a brief statement that defines the mission for the evaluation effort in the current iteration. This
statement might incorporate one or more concerns including:
find as many bugs as possible
find important problems, assess perceived quality risks
advise about perceived project risks
certify to a standard
verify a specification (requirements, design or claims)
advise about product quality, satisfy stakeholders
advise about testing
fulfill process mandates
and so forth
Each mission provides a different context to the test effort and alters the way in which testing should be
approached.]
2.3 Test Motivators
[Provide an outline of the key elements that will motivate the testing effort in this iteration. Testing will be
motivated by many things: quality risks, technical risks, project risks, use cases, functional requirements, non-
functional requirements, design elements, suspected failures or faults, change requests, and so forth.]
3. Target Test Items
The listing below identifies those test items (software, hardware, and supporting product elements) that have
been identified as targets for testing. This list represents what items will be tested.
[Provide a high level list of the major target test items. This list should include both items produced directly by
the project development team, and items that those products rely on; for example, basic processor hardware,
peripheral devices, operating systems, third-party products or components, and so forth. Consider grouping the
list by category and assigning relative importance to each motivator.]
4. Outline of Planned Tests
[This section provides a high-level outline of the testing that will be performed. The outline in this section
represents a high level overview of both the tests that will be performed and those that will not.]
4.1 Outline of Test Inclusions
[Provide a high level outline of the major testing planned for the current iteration. Note what will be included
in the plan and record what will explicitly not be included in the section titled Outline of Test Exclusions.]
4.2 Outline of Other Candidates for Potential Inclusion
[Separately outline test areas you suspect might be useful to investigate and evaluate, but that have not been
sufficiently researched to know if they are important to pursue.]
4.3 Outline of Test Exclusions
[Provide a high level outline of the potential tests that might have been conducted but that have been explicitly
excluded from this plan. If a type of test will not be implemented and executed, indicate this in a sentence
stating the test will not be implemented or executed and stating the justification, such as:
“These tests do not help achieve the evaluation mission.”
“There are insufficient resources to conduct these tests.”
“These tests are unnecessary due to the testing conducted by xxxx.”
As a heuristic, if you think it would be reasonable for one of your audience members to expect a certain aspect
of testing to be included that you will not or cannot address, you should note its exclusion. If the team agrees
the exclusion is obvious, you probably don’t need to list it.]
5. Test Approach
[The Test Approach presents the recommended strategy for designing and implementing the required tests.
Sections 3, Target Test Items, and 4, Outline of Planned Tests, identified what items will be tested and what
types of tests would be performed. This section describes how the tests will be realized.
One aspect to consider for the test approach is the techniques to be used. This should include an outline of how
each technique can be implemented, both from a manual and/or an automated perspective, and the criterion for
knowing that the technique is useful and successful. For each technique, provide a description of the technique
and define why it is an important part of the test approach by briefly outlining how it helps achieve the
Evaluation Mission or addresses the Test Motivators.
Another aspect to discuss in this section is the Fault or Failure models that are applicable and ways to
approach evaluating them.
As you define each aspect of the approach, you should update Section 10, Responsibilities, Staffing, and
Training Needs, to document the test environment configuration and other resources that will be needed to
implement each aspect.]
5.1 Initial Test-Idea Catalogs and Other Reference Sources
[Provide a listing of existing resources that will be referenced to stimulate the identification and selection of
specific tests to be conducted. An example Test-Ideas Catalog is provided in the examples section of RUP.]
5.2 Testing Techniques and Types
5.2.1 Data and Database Integrity Testing
[The databases and the database processes should be tested as an independent subsystem. This testing should
test the subsystems without the target-of-test’s User Interface as the interface to the data. Additional research
into the Database Management System (DBMS) needs to be performed to identify the tools and techniques that
may exist to support the testing identified in the following table.]
Technique Objective: [Exercise database access methods and processes independent of the UI so
you can observe and log incorrect functioning target behavior or data
corruption.]
Technique: [Invoke each database access method and process, seeding each with
valid and invalid data or requests for data.
Inspect the database to ensure the data has been populated as
intended and all database events have occurred properly, or review the
returned data to ensure that the correct data was retrieved for the
correct reasons.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the following tools:
Test Script Automation Tool
base configuration imager and restorer
backup and recovery tools
installation-monitoring tools (registry, hard disk, CPU, memory, and
so forth)
database SQL utilities and tools
Data-generation tools]
Success Criteria: [The technique supports the testing of all key database access methods and
processes.]
Special Considerations: [Testing may require a DBMS development environment or drivers to
enter or modify data directly in the databases.
Processes should be invoked manually.
Small or minimally sized databases (limited number of records) should
be used to increase the visibility of any non-acceptable events.]
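For illustration, a minimal data-integrity sketch is shown below. It bypasses the user interface and drives the database directly, seeding one valid and one invalid record and inspecting the result. It uses sqlite3 as a stand-in DBMS, and the table and column names are assumptions, since the Ojo Podo schema is not specified in this plan.

```python
# Sketch of a data-integrity check that bypasses the UI, using sqlite3 as a
# stand-in DBMS; the table and column names are assumptions for illustration.
import sqlite3

def seed_and_verify(conn):
    cur = conn.cursor()
    cur.execute(
        "CREATE TABLE documents ("
        " id INTEGER PRIMARY KEY,"
        " title TEXT NOT NULL,"
        " category TEXT NOT NULL CHECK (length(category) > 0))"
    )
    # Valid record: must be accepted and read back unchanged.
    cur.execute("INSERT INTO documents (title, category) VALUES (?, ?)",
                ("paper-1.docx", "engineering"))
    # Invalid record: an empty category must be rejected by the constraint.
    try:
        cur.execute("INSERT INTO documents (title, category) VALUES (?, ?)",
                    ("paper-2.docx", ""))
        raise AssertionError("invalid row was accepted")
    except sqlite3.IntegrityError:
        pass  # expected: the database rejected the bad data
    conn.commit()
    rows = cur.execute("SELECT title, category FROM documents").fetchall()
    assert rows == [("paper-1.docx", "engineering")], rows

if __name__ == "__main__":
    seed_and_verify(sqlite3.connect(":memory:"))
    print("database integrity checks passed")
```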
5.2.4 User Interface Testing
Technique Objective: [Exercise the following to observe and log standards conformance and
target behavior:
Navigation through the target-of-test reflecting business functions and
requirements, including window-to-window, field-to-field, and use of
access methods (tab keys, mouse movements, accelerator keys).
Window objects and characteristics can be exercised, such as menus,
size, position, state, and focus.]
Technique: [Create or modify tests for each window to verify proper navigation and
object states for each application window and object.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the Test Script Automation Tool.]
Success Criteria: [The technique supports the testing of each major screen or window that
will be used extensively by the end user.]
Special Considerations: [Not all properties for custom and third-party objects can be accessed.]
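A minimal sketch of an automated navigation check follows, written with Selenium WebDriver as one possible Test Script Automation Tool. The page URL and element IDs are assumptions, since the actual Ojo Podo screens are not specified in this plan.

```python
# Sketch of a window/navigation check with Selenium WebDriver; the URL and
# element IDs below are assumptions, not the real Ojo Podo screen definitions.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost/ojopodo")                   # assumed start page
    # Navigation: the upload form should be reachable from the menu.
    driver.find_element(By.ID, "menu-upload").click()        # assumed menu item
    upload_field = driver.find_element(By.ID, "document-file")   # assumed field
    assert upload_field.is_displayed() and upload_field.is_enabled()
    # Object state: the submit button should start disabled until a file is chosen.
    submit = driver.find_element(By.ID, "submit-check")      # assumed button
    assert not submit.is_enabled(), "submit should be disabled with no file selected"
finally:
    driver.quit()
```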
5.2.8 Volume Testing
Technique Objective: [Exercise the target-of-test under the following high volume scenarios to
observe and log target behavior:
Maximum (actual or physically-capable) number of clients connected,
or simulated, all performing the same, worst case (performance)
business function for an extended period.
Maximum database size has been reached (actual or scaled) and
multiple queries or report transactions are executed simultaneously.]
Technique: [Use tests developed for Performance Profiling or Load Testing.
Multiple clients should be used, either running the same tests or
complementary tests to produce the worst-case transaction volume or
mix (see Stress Testing) for an extended period.
Maximum database size is created (actual, scaled, or filled with
representative data) and multiple clients are used to run queries and
report transactions simultaneously for extended periods.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the following tools:
Test Script Automation Tool
Transaction Load Scheduling and control tool
installation-monitoring tools (registry, hard disk, CPU, memory, and
so on)
resource-constraining tools (for example, Canned Heat)
Data-generation tools]
Success Criteria: [The technique supports the testing of Volume Emulation. Large quantities
of users, data, transactions, or other aspects of the system use under
volume can be successfully emulated and an observation of the system
state changes over the duration of the volume test can be captured.]
Special Considerations: [What period of time would be considered an acceptable time for high
volume conditions, as noted above?]
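Below is a minimal sketch of how a volume run could be driven: many simulated clients repeating the same worst-case transaction for an extended period while failures are counted. The endpoint, client count, and duration are placeholder assumptions, not measured project values.

```python
# Sketch of a volume run: many simulated clients repeat the same worst-case
# transaction for a fixed period; the endpoint and volume profile are assumptions.
import threading, time, requests

CLIENTS, DURATION_S = 50, 600            # assumed volume profile
URL = "http://localhost/ojopodo/check"   # assumed worst-case transaction
errors = []

def client(stop_at):
    while time.time() < stop_at:
        try:
            r = requests.get(URL, timeout=60)
            if r.status_code != 200:
                errors.append(r.status_code)
        except requests.RequestException as exc:
            errors.append(str(exc))

if __name__ == "__main__":
    stop_at = time.time() + DURATION_S
    threads = [threading.Thread(target=client, args=(stop_at,)) for _ in range(CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{len(errors)} failed transactions during the volume run")
```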
5.2.9 Security and Access Control Testing
Special Considerations: [Access to the system must be reviewed or discussed with the appropriate
network or systems administrator. This testing may not be required as it
may be a function of network or systems administration.]
5.2.10 Failover and Recovery Testing
Technique Objective: [Simulate the failure conditions and exercise the recovery processes
(manual and automated) to restore the database, applications, and system
to a desired, known, state. The following types of conditions are included in
the testing to observe and log target behavior after recovery:
power interruption to the client
power interruption to the server
communication interruption via network servers
interruption, communication, or power loss to DASD (Direct Access
Storage Devices) and DASD controllers
incomplete cycles (data filter processes interrupted, data
synchronization processes interrupted)
invalid database pointers or keys
invalid or corrupted data elements in database]
Technique: [The tests already created for Function and Business Cycle testing can be
used as a basis for creating a series of transactions to support failover and
recovery testing, primarily to define the tests to be run to test that recovery
was successful.
Power interruption to the client: power the PC down.
Power interruption to the server: simulate or initiate power down
procedures for the server.
Interruption via network servers: simulate or initiate
communication loss with the network (physically disconnect
communication wires or power down network servers or routers).
Interruption, communication, or power loss to DASD and DASD
controllers: simulate or physically eliminate communication with one or
more DASDs or controllers.
Once the above conditions or simulated conditions are achieved, additional
transactions should be executed and, upon reaching this second test point
state, recovery procedures should be invoked.
Testing for incomplete cycles uses the same technique as described above
except that the database processes themselves should be aborted or
prematurely terminated.
Testing for the following conditions requires that a known database state
be achieved.
Several database fields, pointers, and keys should be corrupted manually
and directly within the database (via database tools). Additional
transactions should be executed using the tests from Application Function
and Business Cycle Testing and full cycles executed.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the following tools:
base configuration imager and restorer
installation monitoring tools (registry, hard disk, CPU, memory, and
so on)
backup and recovery tools]
Success Criteria: [The technique supports the testing of:
One or more simulated disasters involving one or more combinations
of the application, database, and system.
One or more simulated recoveries involving one or more combinations
of the application, database, and system to a known desired state.]
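A minimal recovery-verification sketch follows: it captures a snapshot of key tables in the known desired state, assumes a restore procedure is invoked after the simulated failure, and checks that the restored state matches the snapshot. The database file, table names, and restore command are illustrative assumptions, not actual project artifacts.

```python
# Sketch of a recovery check: capture a known-good snapshot of key tables,
# run the (assumed) restore procedure, then verify the restored state matches.
# The database file name, tables, and restore command are illustrative only.
import sqlite3, subprocess

def snapshot(db_path, tables=("documents", "categories")):
    conn = sqlite3.connect(db_path)
    try:
        return {t: conn.execute(f"SELECT * FROM {t} ORDER BY 1").fetchall()
                for t in tables}
    finally:
        conn.close()

if __name__ == "__main__":
    before = snapshot("ojopodo.db")                     # known desired state
    # ... simulated failure happens here (power loss, aborted process, etc.) ...
    subprocess.run(["restore_backup.cmd"], check=True)  # assumed recovery script
    after = snapshot("ojopodo.db")
    assert before == after, "restored data does not match the known state"
    print("recovery restored the database to the known state")
```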
5.2.11 Configuration Testing
Technique Objective: [Exercise the target-of-test on the required hardware and software
configurations to observe and log target behavior under different
configurations and identify changes in configuration state.]
Technique: [Use Function Test scripts.
Open and close various non-target-of-test related software, such as
Microsoft Excel and Word applications, either as part of the test or
prior to the start of the test.
Execute selected transactions to simulate actors interacting with the
target-of-test and the non-target-of-test software.
Repeat the above process, minimizing the available conventional
memory on the client workstation.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the following tools:
base configuration imager and restore
installation monitoring tools (registry, hard disk, CPU, memory, and
so on)]
Success Criteria: [The technique supports the testing of one or more combinations of the
target test items running in expected, supported deployment environments.]
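A minimal sketch of a configuration-test driver is shown below; it re-runs the existing Function Test scripts once per client configuration and records the outcome. The configuration names and setup commands are assumptions, not actual project scripts.

```python
# Sketch of a configuration-test driver: the same function-test command is run
# once per configuration; configuration names and commands are assumptions.
import subprocess

CONFIGURATIONS = {
    "baseline": [],
    "office-apps-open": ["start_office_apps.cmd"],   # assumed setup script
    "low-memory": ["consume_memory.cmd"],            # assumed resource constrainer
}

def run_suite():
    # Assumed entry point for the existing Function Test scripts.
    return subprocess.run(["run_function_tests.cmd"]).returncode

if __name__ == "__main__":
    results = {}
    for name, setup in CONFIGURATIONS.items():
        for cmd in setup:
            subprocess.run([cmd], check=True)        # put the client in this state
        results[name] = run_suite()
    for name, rc in results.items():
        print(f"{name}: {'pass' if rc == 0 else 'fail'}")
```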
5.2.12 Installation Testing
Technique Objective: [Exercise the installation of the target-of-test onto each required hardware
configuration under the following conditions to observe and log installation
behavior and configuration state changes:
new installation: a new machine, never installed previously with
<Project Name>
update: a machine previously installed with <Project Name>, same version
update: a machine previously installed with <Project Name>, older
version]
Technique: [Develop automated or manual scripts to validate the condition of the
target machine.
new: never installed
same or older version already installed
Launch or perform installation.
Using a predetermined subset of Function Test scripts, run the
transactions.]
Oracles: [Outline one or more strategies that can be used by the technique to
accurately observe the outcomes of the test. The oracle combines elements
of both the method by which the observation can be made and the
characteristics of specific outcome that indicate probable success or
failure. Ideally, oracles will be self-verifying, allowing automated tests to
make an initial assessment of test pass or failure, however, be careful to
mitigate the risks inherent in automated results determination.]
Required Tools: [The technique requires the following tools:
base configuration imager and restorer
installation monitoring tools (registry, hard disk, CPU, memory, and
so on)]
Success Criteria: [The technique supports the testing of the installation of the developed
product in one or more installation configurations.]
Special Considerations: [What <Project Name> transactions should be selected to comprise a
confidence test that the <Project Name> application has been successfully
installed and no major software components are missing?]
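For illustration, the sketch below records the pre-install state of the target machine and verifies the expected installation footprint afterwards, before the confidence transactions are run. The install path and file list are assumptions, since the actual Ojo Podo components are not listed in this plan.

```python
# Sketch of an installation check: record whether the target is a new machine
# or already has the product, then verify the expected footprint after install.
# The install path and file list are assumptions for illustration.
import os

INSTALL_DIR = r"C:\inetpub\wwwroot\ojopodo"             # assumed IIS deployment path
EXPECTED = ["web.config", "upload.aspx", "check.aspx"]  # assumed key files

def installation_state():
    return "already installed" if os.path.isdir(INSTALL_DIR) else "new machine"

def verify_footprint():
    return [f for f in EXPECTED
            if not os.path.isfile(os.path.join(INSTALL_DIR, f))]

if __name__ == "__main__":
    print("pre-install state:", installation_state())
    # ... launch or perform the installation here ...
    missing = verify_footprint()
    assert not missing, f"missing components after install: {missing}"
    print("installation footprint verified; run the confidence transactions next")
```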
8. Testing Workflow
[Outline the testing workflow to be followed. List the main phases and the number of iterations, and give an indication of what types of testing are generally planned for
each Phase or Iteration.
Note: Where process and detailed planning information is recorded centrally and separately from this Test
Plan, you will have to manage the issues that will arise from having duplicate copies of the same information.
To avoid team members referencing out-of-date information, we suggest that in this situation you place the
minimum amount of process and planning information within the Test Plan to make ongoing maintenance
easier and simply reference the "Master" source material.]
9. Environmental Needs
[This section presents the non-human resources required for the Test Plan.]
9.1 Base System Hardware
The following table sets forth the system resources for the test effort presented in this Test Plan.
[The specific elements of the test system may not be fully understood in early iterations, so expect this section to
be completed over time. We recommend that the system simulates the production environment, scaling down the
concurrent access and database size, and so forth, if and where appropriate.]
[Note: Add or delete items as appropriate.]
System Resources
Resource Quantity Name and Type
Database Server
—Network or Subnet TBD
—Server Name TBD
—Database Name TBD
Client Test PCs
—Include special configuration requirements TBD
Test Repository
—Network or Subnet TBD
—Server Name TBD
Test Development PCs TBD
10. Responsibilities, Staffing, and Training Needs
Human Resources
Role    Minimum Resources Recommended (number of full-time roles allocated)    Specific Responsibilities or Comments
[The test team often requires the support and skills of other team members not directly part of the test team.
Make sure you arrange in your plan for appropriate availability of System Administrators, Database
Administrators, and Developers who are required to enable the test effort.]
11. Iteration Milestones
[Identify the key schedule milestones that set the context for the Testing effort. Avoid repeating too much detail
that is documented elsewhere in plans that address the entire project.]
Risk: Test data proves to be inadequate.
Mitigation Strategy: <Customer> will ensure a full set of suitable and protected test data is available. <Tester> will indicate what is required and will verify the suitability of test data.
Contingency (Risk is realized): Redefine test data. Review Test Plan and modify components (that is, scripts). Consider Load Test Failure.

Risk: Database requires refresh.
Mitigation Strategy: <System Admin> will endeavor to ensure the Database is regularly refreshed as required by <Tester>.
Contingency (Risk is realized): Restore data and restart. Clear Database.
[List any dependencies identified during the development of this Test Plan that may affect its successful
execution if those dependencies are not honored. Typically these dependencies relate to activities on the critical
path that are prerequisites or post-requisites to one or more preceding (or subsequent) activities. You should
consider responsibilities you are relying on other teams or staff members external to the test effort to complete,
the timing and dependencies of other planned tasks, and the reliance on certain work products being produced.]
Dependency between Potential Impact of Dependency Owners
[List any assumptions made during the development of this Test Plan that may affect its successful execution if
those assumptions are proven incorrect. Assumptions might relate to work you assume other teams are doing,
expectations that certain aspects of the product or environment are stable, and so forth.]
Assumption to be proven    Impact of Assumption being incorrect    Owners
[List any constraints placed on the test effort that have had a negative effect on the way in which this Test Plan
has been approached.]
Constraint on Impact Constraint has on test effort Owners