qTest Specialist Level 1
Full Lesson Transcript
▪ Version 2019_09
▪ Designed to be used with qTest Manager and qTest Explorer
Lesson Transcript
This Lesson Transcript provides the scripts used during the lesson videos for the qTest Specialist
Level 1 training.
Legal Notice
Tricentis GmbH
Leonard-Bernstein-Straße 10
1220 Vienna
Austria
Information in this document is subject to change without notice. No part of this document may be
reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose
without the express written permission of Tricentis GmbH.
© 2019 by Tricentis GmbH
CONTENTS
Survey
Preface
PREFACE
About this workbook
This transcript is specifically designed to supplement training of the qTest Specialist Level 1 course.
This transcript is divided into 7 Lesson sections and supplies the script used for the lesson videos.
LESSONS
Lesson 01 Test Plan
Let’s first look at our example. As outlined in the Introduction, we are testers working for a web services
company and have developed a mobile app for online shopping. Our company is constantly adding new
features and customizations to our online products.
Our test manager comes to us requesting two requirements to be added to the mobile phone product.
Customers should be able to choose phone covers between 2 model types: iPhone and Samsung.
Additionally, the mobile phone covers were previously only available in black. Now, there will be 4 colors
to choose from (black, white, blue and yellow).
How are we going to plan out our test? What do we need to define first before we can start working on
these two functionalities? Finally, how will we track our progress or how can we use qTest in conjunction
with an ALM, if our team is using one, to keep track of the project?
In this lesson, you will gain an overview of the Test Plan section in qTest Manager as well as how to
organize your Test Plan. You will also learn how to create Releases, as well as link your Requirements to
the Release scope once they have been created. Additionally, you will look at integrating your existing
Test Plan from an ALM to qTest Manager, saving time in Test Plan creation.
Test planning and developing Release criteria are the first steps in test management. Teams gather to
define the scheduled timeframe of Releases, the Requirements and the scope of the Release. Most
testing teams use ALMs to structure their Releases and track their workflow.
qTest allows you to do just that: structure your Releases, link them to Requirements and Test Execution
and also integrate directly with your ALM, saving you time in your test planning.
The Test Plan section in qTest Manager provides you with an overview of the overall project structure
and test objectives as well as all the necessary information needed to perform the tests.
Each project within qTest Manager has its own Test Plan, in which you can define the high-level milestones and testing objectives. Inside the Test Plan, you can define the start and end dates for each Release and Build. You also have the option of creating Builds to define in further detail what testing goals may be involved.
Once you create and define your Releases, you can link your specific Requirements to the Release scope.
If your Requirements are already linked to your Releases in Jira, they will automatically stay linked when
you import them into qTest Manager.
Let’s go back to our scenario: Our first step is to define the Releases and subsequent Builds in order to
incorporate these two features into the mobile app. We also need to specify our scheduled timeframes
around the testing.
As we need to test new functionality that has been added to the WebShop app (namely, color and model types for mobile phone covers), we will create one new Release.
Changing the status of a Release is important, as it helps users find Releases easily once the number of objects in the tree view grows – this is best practice.
Let’s first define what a Release is in qTest Manager. Releases are high-level milestones that specify testing goals. A Release also represents an internal or external distribution of software once a development loop within a product’s lifecycle has been completed. In qTest, we will learn the importance of Release notes and how to incorporate them. The Status of a Release is an additional field that needs to be regularly updated: as the Release progresses, it is advisable to update the status for a clear overview.
Let’s go on and add a Release to our testing project. You may be wondering: what about Builds? Do we need them? Builds are optional and are used depending on your testing project and how many Requirements are involved. Moreover, the use of Builds depends on the team’s preferred way of working - different software development teams use different methodologies for their development.
Before we go any further, let’s first define what Builds are in qTest Manager. Builds function like an
organizational structure for Requirements in qTest. A Build is another milestone within the Test Plan,
nested within a Release. A Release can consist of multiple builds. As mentioned previously, you can
configure qTest to fit your team’s preferred method of working.
Throughout this course, we will not be using any builds as the scope and size of the testing that we will
cover does not require them. However, for the sake of demonstration, if you wanted to use Builds in
your project, it’s as easy as this:
As you are working, the team may delay or not meet the deadline of a Release. Perhaps you had team
absences or certain issues raised during the building phase of the two functionalities.
Suddenly, you notice an error message occurring on one of your Releases as it has surpassed the
deadline.
During your project creation phase, it is important that no Releases, Builds, Test Suites or Test Runs go past the end date of the project. If you see this issue, it can be fixed very easily by removing or adjusting the end date of the project.
This is how it’s done. As you know, Releases go hand in hand with Requirements.
We won’t cover Requirements until the next section. However, keep in mind that Requirements can be linked to Releases in qTest very easily. Doing so is important for reporting and tracking purposes, as well as for ensuring that the test objectives are covered in the Release scope.
What if we are using an ALM like Jira and have already linked our Releases to our Requirements there? In that case, you can easily integrate your Releases into qTest, and linked objects will automatically stay linked – you will not need to relink them manually.
After Requirements have been created in qTest Manager, they can easily be linked to the scope of the
Release. This is how it’s done.
Now, imagine you already have the outline of your entire test plan in your ALM. After all, most testing
teams work with different ALM tools to predefine their testing plans for organizational purposes. To save
you time in having to reconstruct your test overviews in qTest Manager, you can easily integrate your
existing test plan from your ALM. Usually this integration is set up by a company’s IT or Admin team.
The result of the integration into qTest Manager also depends on the type of ALM used. We won’t be covering this topic in this course. If you would like further information on how it’s done and the setup procedures involved, please visit the QA Symphony support platform: https://support.qasymphony.com/hc/en-us
Finally, please note that whatever you construct in Test Plan is automatically linked to Test Execution. These two sections mirror each other with respect to Releases. However, Test Cycles, Test Suites and Test Runs must be created manually in the Test Execution section; Lesson 4 will cover this in more depth. Nonetheless, this linkage saves you time and streamlines your process, so you can easily execute the tests that are linked to these Releases and Builds.
Additionally, if objects are brought in from Jira and are linked, they will automatically be linked in qTest.
The integration results vary depending on the ALM used, the integration level chosen and the type of project your team is currently working on. For further details, please visit the QA Symphony support platform: https://support.qasymphony.com/hc/en-us
Lesson 02 Requirements
In the previous lesson, we created our first Release. However, we have not yet defined our test objectives.
It would be impossible for us to start testing without a clear set of defined Requirements. Requirements help us define which functionality of the application we should test, and what expected results and behaviors the SUT should produce.
By the end of this lesson, you will develop a better understanding of the use and functionality of Requirements in qTest Manager. You will learn how to create and import Requirements, how to edit them, and how to link them to your existing Releases and Test Cases for better traceability.
In qTest, Requirements are used to define in detail what objectives the System Under Test (SUT) should
meet for the Release to be successful. Requirements help us focus on the main goals of the Software’s
functionality. Once they are identified, this functionality can be tested.
In our example, 2 new options have been added when purchasing a phone cover. We can now choose
between 2 phone brands and 4 different colors. However, it is not enough to know whether the functionality works. There are several possible combinations, and we need to specify the expected outcome of each.
Consider two examples. If I choose a yellow phone cover for an Apple model, I would expect the same
product to be present in my shopping cart. Additionally, I would expect the app to work properly
throughout the entire ordering process, without freezing or crashing.
So, there are different levels of the application’s functionality that need to be considered and defined.
Requirements can also act as a kind of reference point for our tests. By knowing the priority of certain
Requirements and the risks involved should those functionalities fail, we can easily structure the priority
of our test runs too.
In qTest, Requirements can be organized to effectively manage our Test Runs, set expectations for the test and provide a basis against which we can weigh the results of our Test Runs. Setting Requirements is therefore one of the major activities of testing, and Requirements also constitute the highest level for reporting.
Let’s recap our objectives for this test. We need to record these goals in a clear way for all testers to be
well-informed on the testing project.
In our case, we will create 2 separate requirements. The first will cover the ability for a user to successfully
select and order 2 phone case types, Samsung and Apple. The second will cover choosing and ordering
one of the four available color options, Black, White, Blue and Yellow.
In qTest, Requirements can be created either from scratch or by importing them through an ALM. The
latter option is used when the Requirements have already been defined within an ALM.
In qTest Manager, navigate to the Requirements tab in the menu. First, select a Module or, in this case, the Root folder. Select the “New Requirement” button from the menu above the Requirements tree. We will now have to enter the Requirement’s Name, Status, Priority and Type.
Finally, we will assign the Requirement to ourselves and save our work. We will repeat this process for
the Requirement covering phone case colors.
As mentioned before, Requirements can also be imported from an ALM. This would only be possible
once integration between your ALM and qTest has been set up. Managing this integration is usually an
Admin task and will not be covered in this course.
However, it is important to note that the structure and type of Requirements imported will vary
depending on the ALM and type of integration put in place. In some cases, the full test project will be
imported, whilst in others, only Releases can be imported and not the Requirements.
Some testing teams also may use Excel for managing their Requirements. Should this be the case,
Requirements can easily be imported into qTest Manager through Excel. To do this, navigate to the
Requirement tree, and click on “Import Excel”.
In the pop-up window that appears, you will also see a “Sample Import template”. This provides you with
a mandatory predefined structure for organizing your Requirements so that qTest can import them. We
can now either browse to our Excel file or simply drag and drop the file over to the pop-up.
It is important to remember the size limit, 10MB, as any file exceeding this size will not be accepted by
the system. And that’s it! Your Requirements have now been successfully created.
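Since oversized files are rejected outright, it can be worth checking the size before uploading. Below is a minimal sketch of that pre-check; the `can_import` helper is our own illustration, not part of qTest.

```python
import os

# qTest rejects Excel imports larger than 10 MB.
MAX_IMPORT_BYTES = 10 * 1024 * 1024

def can_import(path: str) -> bool:
    """Pre-check a file against the 10 MB import limit
    (hypothetical helper, not a qTest API)."""
    return os.path.getsize(path) <= MAX_IMPORT_BYTES
```

Running this against your Requirements spreadsheet before opening the import dialog saves a round trip when the file is too large.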
Often an application will undergo changes and will evolve over time, with new features or specifications
being added. Requirements can easily be edited in qTest Manager to reflect the new goals of the
software. To edit manually created Requirements in qTest, you would first need to have full access and
entitlement to edit Requirements.
If Requirements have been imported from an ALM, their properties will be read-only and must therefore
be edited in the ALM itself. The changes will then be automatically synchronized to qTest. For our
example, we will demonstrate how to edit our previously manually created Requirements. In our “Phone
Make” Requirement, we will specify what phone models we can choose from, and in the Case Color
Requirement, we will add what colors are available. It is useful to know that any changes made to
requirements can easily be seen in qTest Manager from the history section.
Creating Requirements is an important task; however, results will not be reported correctly if we don’t link the Requirements back to a Release. Leaving them unlinked would make it very hard to track our testing goals and see whether they have been met.
To link a Requirement to a Release, we will need to navigate to the Test Plan section of qTest and select
the relevant Release from the Project tree. In the Resources menu, under the Release Scope section, the
list of linked Requirements can be found. To add a new one, we simply need to click on “Add” and search
for our newly created Requirements. Remember to save once they are linked.
It is also important to link Requirements to the relevant Test Cases. We need to know for example which
goals specified in the Requirements were met, or if a Requirement had associated failed Tests.
Linking Requirements to both Releases and Test Cases ensures a proper reporting chain of Bugs or
Defects found. This provides developers with a clear overview and enables them to easily tackle the
problems still present in the SUT.
To link a Test Case to a Requirement, select the relevant Requirement. Then, navigate to the Resources
section and under Linked Test Cases, we will find the list of Test Cases linked to this specific Requirement.
We can add new ones by clicking on Add and selecting the relevant Test Cases from the list.
To save you from having to link Test Cases to Requirements manually, there is also the option of creating Test Cases directly from within the Requirement. This automatically links the Test Case to the Requirement it was created from. To do that, select the relevant Requirement, then navigate to the Create Associated Test Cases section. Fill in the details (Name, Description, Type and Precondition) and click on Create. The new Test Case will appear in the Linked Test Cases section.
Different teams will use different development methodologies to work on their project, for example Agile, waterfall, etc. Organizing your Requirements according to the methodology used in your project is crucial. In qTest, the structure and naming of your folders and Requirements can be managed to fit the development methodology you use.
This is possible thanks to qTest’s flexible organizational options, meant to fit as many use cases as
possible. This is an example of a project structured according to an Agile development methodology.
Here is an example using a waterfall methodology.
Lesson 03a Test Cases
We have just finished creating our Release and our Requirements. This means we have a good idea of
what the testing objectives are for our software. However, we still have not defined how we will proceed
and test our application. For this, we need to create Test Cases.
Test Cases define how to test a specific area of an application in order to meet the quality objectives and
detail the exact steps involved.
To cover our previously created Requirements we will need to create 2 Test Cases in total: one for our
phone cover models and one for our phone cover colors.
By the end of this lesson, you will gain a better overview of how Test Cases are used in qTest.
More specifically, you will learn how to create, organize and edit Test Cases as well as understand how
to add Test Steps. We will also see how and when to call on previously created Test Cases and approve
Test Cases. Additionally, we will cover the importance of Test Case versioning and managing history and
comments when working in a team environment. Last but not least, we will touch on how to import Test
Cases from Excel.
There are two ways of creating Test Cases in qTest Manager: directly from the Requirements, or from the
Test Design section in qTest Manager. We’ve already covered in the previous lesson creating Test Cases
directly from the Requirements section. This method gives you the benefit of automatically linking both
elements.
As seen already, to create a Test Case from a Requirement we simply need to navigate to the Requirement and fill in the details under the “Create Associated Test Cases” section.
We will focus on the first way as this is the recommended way of working in qTest. Essentially, creating
Test Cases from requirements provides easier maintenance as Requirements are likely to change as a
project develops.
When creating a Test Case from a Requirement, you will only be able to fill in a limited number of fields, namely Name, Description, Type and Precondition. In the Description field we describe the Test Case itself: what is covered and how. For example, as we are focusing on the phone model cases, we will add the description: Samsung or iPhone. One of the model cases should be selected, not both at the same time. This field is not mandatory and depends on your project specifications. However, in a team setting, adding descriptions creates better visibility and a good overview of our test specifications.
The type of Test Case can be chosen from: Manual, Automation, Performance or Scenario. This field
indicates the type of test you are about to design. In our example we will select Manual. The Precondition
states all the necessary pre-steps or existing prerequisites to be satisfied before starting the test. In this
case the Precondition would be navigating to the right WebShop page to initiate the ordering process.
To add all the other needed details, we will have to access the Test Case itself. This can be done by clicking on it from the Requirement, under the “Linked Test Cases” section. It is also possible to search for each Test Case in the Test Design section, from the module tree on the side of the page. For both Test Cases we will set the Priority to Medium and the Status to Active, and we will assign them to ourselves.
We now need to detail the Test Steps for our Test Case to make it fully complete.
In qTest, Test Steps describe the execution steps to take. These steps are then compared against the
expected results documented immediately next to them.
Once a Test Run is executed, each Test Step is marked with a pass or fail based on this comparison. Steps
therefore provide a clear overview and document the results.
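Conceptually, each step pairs an action with a documented expected result, and the run records a verdict per step. The sketch below is our own illustration of that idea; in qTest Manager the tester judges the comparison and sets the verdict manually, so the exact string match here is only a simplification.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    description: str       # the action to perform
    expected_result: str   # the documented expected outcome
    status: str = "Unexecuted"

def mark_step(step: TestStep, actual_result: str) -> TestStep:
    """Record a per-step verdict by comparing actual vs. expected
    (simplified: real manual testing relies on the tester's judgment)."""
    step.status = "Passed" if actual_result == step.expected_result else "Failed"
    return step
```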
As our test objective for this Test Case is to be able to select the appropriate phone cover type we will
have to set up the Test Steps accordingly. The same will be done with the “Phone Cover Color” Test Case.
The process of creating Test Cases directly from the Test Design section itself does not differ too much.
Firstly, we will have to create the Module folder that will contain our new Test Cases. Then we will click
on “New Test Case”. We now fill in the details of the Test Case as we did before.
It is important to remember that Test Cases created in this way won’t be linked to any Requirement. Linking can be done from the Test Case itself, by clicking on the Add button in the “Linked Requirements” section of the Resources menu, or, as described in the previous lesson, from the Requirement. After we have added the phone cover model to our cart, it’s only natural that we would also verify that it has been added correctly.
As mentioned earlier, this testing project is not new to us: from previous testing iterations, we already have pre-built Test Cases covering the entire ordering process in our online shopping app.
Therefore, instead of creating these Test Steps from scratch each time, we can recycle and reuse pre-existing ones. qTest enables you to do just that: we can call Test Cases to reduce the manual effort of creating them afresh and enhance efficiency.
Calling Test Cases also enables you to use them in numerous other tests within the same project, which
saves time. Let’s see how we can call Test Cases in qTest Manager.
Navigate to Test Design and to your relevant Test Case, in our case, the Phone Cover Model Test Case.
In the last column of the Test Case, click on the button ‘Call Test Case’. From the list, we will select the
Test Case “Verify the item in shopping cart”. We can then add the Description, the Expected Results and any Attachments to both Test Cases. These details are important because later, when we report findings, we can easily compare our expected results against our actual results once the Test Runs have been executed.
Imagine you have a team of 5 testers working on the same project. It is commonly the case that various
changes will be made to Test Cases by different people. Keeping track of this could be a major
impediment without having a system in place to easily view those changes and know which version of a
Test Case you are working on.
Versioning in qTest solves this problem as it enables you to quickly check how many times a Test Case
has been updated, when it was done and by whom. Importantly, it creates versions of the Test Case by
increasing the version number by 0.1 every time an update is made. This allows you to instantly identify
how many updates have been made on a Test Case.
Additionally, the version of an item is rounded up to the next whole number (1.0, 2.0, 3.0…) whenever that item is approved. Test Case versioning is also useful because it lets you access previous versions of a Test Case that are still relevant for your current testing conditions. The version number of a Test Case can be seen in the Details column within the Test Case.
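The numbering rule described above can be sketched in a few lines. This is our illustration of the scheme (+0.1 per update, round up to the next whole number on approval), not qTest’s actual implementation, and it assumes a new, unapproved item starts at version 0.1:

```python
class VersionCounter:
    """Sketch of qTest-style versioning: each update adds 0.1, and
    approval rounds up to the next whole number. Simplified: the
    display breaks down after nine consecutive unapproved updates,
    which qTest itself handles differently."""

    def __init__(self):
        self.major, self.tenths = 0, 1  # assumption: new items start at 0.1

    def update(self):
        self.tenths += 1                # each saved edit adds 0.1

    def approve(self):
        if self.tenths:                 # e.g. 2.3 -> 3.0
            self.major += 1
            self.tenths = 0

    def __str__(self):
        return f"{self.major}.{self.tenths}"
```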
You may be wondering: how is Test Case versioning different from viewing the History section of Test Cases? Versioning tells you how many times an artifact was changed and whether it has been approved or not. History gives you access to the details of those changes.
You can view the History of a Test Case by navigating to the Sub Toolbar, which also contains other tabs like Details and Comments. By clicking on the History button, you will be able to see the changes as well as the version.
Please be aware that the History functionality in qTest is also available for Requirements, Test Cases, Test Suites, Test Runs and Defects, providing a clear overview throughout your project.
Sometimes you may want a ‘clean slate’ and not want to see the full detailed History of a previously created Test Case. To achieve this, simply use the copy and paste functionality for Test Cases in qTest; this creates a copy of the Test Case without carrying over the version number or the history.
The Comments section is an additional piece of functionality in qTest that enables testers to add to a Test Case any information that the whole team should be made aware of. Comments can be seen within the specific Test Case in the Sub Toolbar.
One crucial activity in managing Test Cases is the approval process. Without approving a Test Case, it would not be possible to create a Test Run. If your Test Case is ready for a Test Run, it can be approved by clicking on Approve. Approving a Test Case requires higher rights and is usually a task reserved for the Test Manager. As you can see, the version has changed to the next whole number.
Lastly, it could be the case that testers have already constructed their Test Cases in Excel with full details
included. qTest allows you to easily import Test Cases from Excel to save you time and manual effort.
Bear in mind, there is a Sample Excel Template already available for download which you can use as a
standard structure.
It is not mandatory to use this Sample Template, as long as the Test Cases and Test Steps each occupy individual rows within the spreadsheet. In qTest, the import wizard enables importing Test Cases directly into specific Modules. This is done by altering the Excel file's individual sheet names to align with the Modules within qTest Manager.
When creating your own Excel sheet, certain fields should also be filled in like the Test Case name,
Precondition, Description, Expected Results, Test Case Type, Status, Priority, Assigned to, and custom
fields (if any).
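As a sketch of the one-row-per-item layout, the snippet below builds a CSV with one row per Test Case followed by one row per Test Step. The column names here are assumptions loosely modeled on the fields listed above; for a real import, start from qTest's own Sample Excel Template.

```python
import csv
import io

# Assumed column layout; qTest's actual sample template may differ.
COLUMNS = ["Test Case Name", "Description", "Precondition",
           "Step Description", "Step Expected Result", "Type",
           "Status", "Priority", "Assigned To"]

def build_import_sheet(test_cases):
    """Emit one row per Test Case and one row per Test Step,
    since the importer expects each on its own row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(COLUMNS)
    for tc in test_cases:
        # Test Case row: case-level fields filled, step columns blank.
        writer.writerow([tc["name"], tc["description"], tc["precondition"],
                         "", "", tc["type"], tc["status"], tc["priority"],
                         tc["assigned_to"]])
        # Step rows: only the step columns filled.
        for step, expected in tc["steps"]:
            writer.writerow(["", "", "", step, expected, "", "", "", ""])
    return buf.getvalue()
```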
After those fields have been filled in, qTest automatically maps the same fields, that is, it imports them as columns. Once you have checked that they are correct, click on Import and refresh the page. A new Test Case Module folder will now be created with the name “Module Name”; you can easily rename it. Expand the folder and you will see the Test Cases there.
Lesson 03b Parameters
We have created our two Test Cases for the Phone Cover Model and Phone Cover Colors. However, as
per our Requirements, we have two phone models and four different colors to choose from. This means
we would have to create 8 Test Cases to ensure the functionalities’ combinations are properly tested and
working as expected.
It is often the case that when new features are added into an application, it will create multiple variables
for our testing conditions. Creating separate Test Cases for each of these variables would be time
consuming and repetitive. In qTest, Parameters are used precisely for this reason: they help save time and reduce the number of Test Cases by centralizing all the variables that need to be considered for the test.
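To see why parameterization pays off here, the combination count can be enumerated directly. A quick sketch of the arithmetic using Python's standard library (our illustration, not a qTest feature):

```python
from itertools import product

models = ["iPhone", "Samsung"]
colors = ["Black", "White", "Blue", "Yellow"]

# One parameterized Test Case expands into one Test Run per combination,
# instead of 2 x 4 = 8 hand-written Test Cases.
combinations = list(product(models, colors))
print(len(combinations))  # 8
```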
By the end of this lesson, you will be able to create Parameters in qTest. We will also learn how to manage Parameters and assign them to your projects, as well as how to update and delete existing ones. Finally, we will see how to call Parameters from Test Steps in the Test Cases we created earlier.
Parameters are the variables defined in Test Steps that help manage Test Data. They can be thought of as placeholders for Values. Parameters are included in Test Cases, and the Parameter Values are defined in order to create Test Runs.
This will essentially generate multiple Test Runs from a single Test Case in one go, lowering the number of Test Cases that have to be created manually. qTest also offers you the additional ability to edit, archive and delete Parameters. This is important as it allows Parameters to remain flexible and adapt to changes in the application.
Firstly, we will demonstrate how to create a Parameter in qTest, focusing on the Test Case Phone Cover Colors. To create a Parameter in qTest, navigate to the nine-box icon in the upper right-hand corner and select Parameter. In the dialog that opens, click on the “+CREATE” button to create a new Parameter.
There are two important fields that should be defined when creating Parameters in qTest for a clear test overview: Identifier and Description. An Identifier is the unique name assigned to a Parameter so that we can easily identify it while using it in a Test Case. For our Identifier we will use ‘Colors’. The Description field provides information about the Parameter itself. Writing a detailed Description is important, as Parameters are likely to be reused in different projects; a detailed Description will inform future testers and teams of the purpose behind the Parameter and what its Values relate to: ‘These are the four options available for the colors of the mobile phone cover.’
Our Parameter is not finished yet. We still need to add Values. To do that, navigate to the Parameter and you will see the “+VALUE” button. Click on it to add the Parameter Values. In our example we will add the 4 available colors, separated by new lines, and click on ‘OK’. Please note, Values cannot be separated by the following symbols: To add multiple Values, new ones must be separated by a new line or created one at a time by clicking the “+VALUE” button.
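The newline-separated input behaves like a simple split. A tiny sketch of that parsing rule (a hypothetical helper for illustration, not qTest code):

```python
def parse_values(raw: str) -> list[str]:
    """Split newline-separated Parameter values, trimming whitespace
    and dropping blank lines (mirrors the '+VALUE' multi-line input)."""
    return [value.strip() for value in raw.splitlines() if value.strip()]
```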
It may be that some of our Parameters are not required for the time being but may be useful at some later point for an upcoming project. The Status function in qTest is useful as it enables you to temporarily make Parameters inactive should they not be needed right now. In qTest, the Status of a Parameter is set to Active by default when it is created, and you will not be able to change it before saving.
The Status of a Parameter can be updated by selecting the Parameter from the list and expanding the
drop-down menu from the button “Actions”. Here we can choose to deactivate or activate the Parameter.
Please note that before deactivating a Parameter we must make sure it is not in use in any Test Case.
It is very likely that you will be working on multiple projects in qTest or have several in the pipeline. It is therefore crucial to assign Parameters to specific projects once they have been created; if Parameters are not assigned to any project, they will be hard to trace for later use. To assign a Parameter, click the Projects tab next to the Values tab, then click on the ‘+PROJECT’ button to assign the Parameter to a project. Select All from the list and click ‘OK’.
Once you have finished assigning and creating the parameter, close the Pop-up window. Note that, if not
specified otherwise, a Parameter will be assigned to all Projects by default.
Some testing teams may have already created and structured their Parameters in Excel. In this case, re-creating them from scratch in qTest would be time-consuming and redundant. You can therefore directly import Parameters from Excel to save manual effort and time.
To do this, navigate to the Parameters page; in the upper right corner, below the user name, you will see the IMPORT button. Click on it, and the Import Parameters dialog will pop up. Here, select the project into which you want to import the Parameters. You can either browse for the Excel file, drag and drop it inside the blue dotted box, or click on the link to choose a file from a folder. Once you select the file and click Open, all the Parameters will be imported. A sample template is also available; it gives you an example of the structure Parameters need in order to be imported correctly.
As we’ve highlighted, Requirements frequently change throughout a testing project, and additional functionality may be added. For instance, our Test Manager comes to us requesting that an additional color (Yellow) be added as a Phone Cover color type. To manage such changes and requests, it is important to know how to modify Parameters and their Values.
In qTest, you can update the information of Parameters, such as Identifier, Status, and Description. It is also possible to edit the Parameter's project association. Navigate to the Parameters page and select the Parameter 'Colors' from the grid. Here, click on the '+VALUE' button to add the color Value 'Yellow'. Write the Value Yellow and click on OK. Close the Parameter detail dialog.
Sometimes a Parameter may have been created in error, or we simply no longer require it. There are 2 available options: You can archive Parameters should you want to store them for future use. To do so, select the Parameter you want to archive by checking its checkbox. Next to the Import button at the top right corner, you will see an ACTIONS drop-down button enabled. Select "Archive". Note that only unused Parameters can be archived. If the Parameter you are trying to archive is in use, you will receive an error message asking you to first remove the Parameter from the Test Cases and Data Sets that utilize it.
In qTest it is also possible to delete Parameters. Note that a Parameter can be deleted only after it has first been archived. If you try to delete it directly, you will get a message saying that no Parameter could be deleted. The process of deleting a Parameter is very similar to the one for archiving it. Select the Parameter from the list and choose "Delete" from the drop-down Actions menu.
We still have not defined one crucial process: it is not enough to simply create Parameters, we must
define how we will use them and link them to our Test Cases. In qTest, from your Test Steps, you can call
on Parameters.
As mentioned before, this will generate multiple Test Runs. Bear in mind, multiple Parameters can be used in a single Test Case. However, for the scope of this example we will use only one Parameter in our Test Case. The number of resulting Test Runs will depend on how many Values are contained in the Parameter and how many Parameters are used in the Test Case. For our example, we will use the Parameter in our Phone Cover Color Test Case: for the Test Step "Select phone case color", we will change the value color to Colors and save it.
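The multiplication described above is a simple Cartesian product of the Parameter Values. Purely as an illustrative sketch (this is not qTest code; the Value lists come from our example):

```python
from itertools import product

# Values of the "Colors" Parameter from our example.
colors = ["Black", "White", "Blue", "Yellow"]

# With a single Parameter, one Test Run is generated per Value.
runs = [{"color": c} for c in colors]
print(len(runs))  # 4 Test Runs

# If a second Parameter (e.g. phone model) were also used in the Test Case,
# qTest would combine them, multiplying the number of Test Runs.
models = ["iPhone", "Samsung"]
combined = list(product(models, colors))
print(len(combined))  # 2 x 4 = 8 Test Runs
```

This is why adding Values or Parameters to a Test Case can grow the resulting Test Run count quickly.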
Please be aware that, as we have made changes to our Test Case and saved it, a new version of the Test Case will be generated. This version will need to be approved before we can use it to generate Test Runs.
Lesson 03c Data Query
In the previous lesson, we created and used Parameters to reduce effort and time, as well as to lower the number of Test Cases. In particular, we used Parameters in the Test Step of the Phone Cover Color Test Case. However, neither of our Test Cases is approved yet. To confirm whether our Test Cases match the new Requirements, they will need to be approved.
The first step in approving a Test Case is finding it. When dealing with projects containing a high number of Test Cases, this can be challenging. We have to search for the Test Cases we created and then approve them, to then be able to generate Test Runs. To do this efficiently, we will use Data Queries.
By the end of this lesson, you will be able to understand and use Data Queries in qTest. You will learn how to create a custom Data Query and see how to use its results. You will also see how to save and delete existing Data Queries. Finally, we will cover how to export Test Cases using the results of Data Queries.
In order to reduce the time and effort of performing operations like Approve, Edit, Delete, and Export on Test Cases, we'll perform a search and do these operations in bulk. In qTest, the Data Query is the most robust search feature available. With Data Queries you can search for Test Cases based on your own criteria. Bear in mind that Data Queries are used not only to search for Test Cases. Data Queries can be performed on different Objects within qTest Manager, such as Requirements, Test Design and Test Execution elements, or Reports.
Let's see how we can create a customized Query to find our not-yet-approved Test Cases. To write a robust Query we first have to write optimized criteria and Group By clauses. We also need to name the Query if we plan to save it for reuse.
To create a Query, navigate to the Test Design section of qTest. Here, in the navigation pane, we can see the Data Query icon above the navigation tree. Click on the icon in the toolbar to open a New Query window. We want to save our Query for reuse in the folder My Queries. First, we name our Query 'Search for not Approved Test Cases'. We then proceed to the Clause section. Each clause can be linked to other clauses by logical operators or grouping options in the Group column, including "AND", "OR", "(", and ")". As we don't want to group clauses in this specific Query, we keep the Group option empty in the Group column. In the Criteria column we select 'Is Approved'. We select an Operator to add relationships between the clause lines. The lists of selectable Criteria and Values depend on the object you are querying. In our example, a Requirement Query would contain the "Type" Criteria and the Value "Functional", but a Test Case Query wouldn't. For this specific Query we select the Value "No". To execute the Query and get the result, click on the Run Query button.
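qTest evaluates these clauses server-side, but conceptually the Query behaves like a filter over Test Case records. As a hypothetical sketch (the records and the "approved" flag below are invented for illustration, mirroring the "Is Approved" Criteria):

```python
# Hypothetical in-memory Test Case records; the "approved" flag stands in
# for the "Is Approved" Criteria that qTest evaluates on the server.
test_cases = [
    {"name": "Phone Cover Model", "approved": False},
    {"name": "Phone Cover Color", "approved": False},
    {"name": "Checkout",          "approved": True},
]

# Clause: Criteria "Is Approved", Value "No" -- i.e. approved is False.
not_approved = [tc["name"] for tc in test_cases if not tc["approved"]]
print(not_approved)  # ['Phone Cover Model', 'Phone Cover Color']
```

Adding more clauses with "AND"/"OR" and parentheses simply composes more such conditions in the filter.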
Now that we have received the result of our Query, that is, the Test Cases which have not yet been approved, we will demonstrate how to perform operations on them. qTest allows you to perform bulk operations such as Batch Edit and Batch Approve. You are also able to update the Status, Type, and Assigned To fields, and edit the Description, of multiple Test Cases at once. On top of this, it is also possible for you to export the Test Cases to an Excel sheet. As an additional benefit, you can send notification emails to all applicable users. However, this is optional.
For our example, we will do a batch approval of both the "Phone Cover Model" and "Phone Cover Color" Test Cases. We just need to select both Test Cases and click on the 'Edit' drop-down button. We then choose Batch Approve from the drop-down menu.
This time we keep the option 'Send out Notification emails' unchecked, as we don't want to send notification emails. To finalize, we click on "Confirm" and the Test Cases are approved.
It's always a good idea to save our work and have it available in a repository for everyone to access and view. This allows other team members to reuse our work, saving the time of recreating those artifacts from scratch. We will now see how to save and then share our custom Queries. First, we save the Data Query we just created. In qTest we can save our Data Query in either the "My Queries" or "Team Queries" folder. When you save your Query in the "My Queries" folder, it will only be visible and accessible to you. When you save your Query in the "Team Queries" folder, it will be visible and accessible to all other team members.
Bear in mind, you can also edit and save the same Query in both folders, essentially creating a copy of it. Going back to our example, once we have approved the Test Cases we will save the Query in the "Team Queries" folder. To do so, simply check the box for Team Queries. After clicking on Save, the Query will appear in the "Team Queries" folder. Now we will also save the same Query in the "My Queries" folder. To do so, we select the "Search for not Approved Test Cases" Query from the "Team Queries" folder. To save it into "My Queries", we select the "Save As" button. The Query can be renamed at any point in the process if needed.
Some Queries might become obsolete and no longer be relevant for us. In this case it's a good idea to delete them, to keep the workspace as clean and up to date as possible. qTest allows you to delete Queries from your "My Queries" and "Team Queries" folders.
It's important to remember that Queries stored in the "Team Queries" folder can be deleted only by users who have the Project Admin profile. After you delete a Team Query, the Query will no longer be visible to other users. In our example, after we've copied our Query to the "My Queries" folder, we proceed to delete it from the "Team Queries" folder. To do so, we navigate to the folder, select the 'Search for not Approved Test Cases' Query from the "Team Queries" folder, and click on the "Delete Query" button to delete this Query.
Once we have approved our Test Cases, we may want to use them elsewhere or automate the tests in other automation tools. Instead of copy-pasting each and every Test Case separately, it is possible to export them in bulk to an Excel sheet. For this, we will search for the approved Test Cases and export them to an Excel sheet. To do so, we will select the "My Queries" folder and write a new Query.
We'll name the Query 'Search for Approved Test Cases'. Next, we keep the Group value empty because we are not using any grouping in our search criteria. Now that the Query is completed, we are ready to run it. This is the list of all approved Test Cases resulting from our Query. To export them, select all the Test Cases, click on the "Export XLS" button, and then click "Save" to save the Excel sheet.
Lesson 04a Test Execution
So far, we have created our Releases, linked our Requirements, and created Test Cases, and our Test Cases have been approved by the Test Manager. However, we have not yet executed our Test Runs. The Test Cases we have created are not useful if we don't create and execute the related Test Runs.
By the end of this lesson you will understand the purpose of Test Runs within a testing project as well as the use of Test Cycles and Suites. You will learn how to generate Test Runs in qTest Manager as well as how to execute them using both the TestPad and Quick Run features. Finally, we will cover how to view the statistical data generated by the Test Runs via Reports.
There are multiple ways to create Test Runs in qTest. For our example, we will create Test Runs directly from the Test Design section of qTest Manager. To recap: in the earlier lessons we created 2 Test Cases that will test the new WebShop mobile app functionality. One of them, namely the "Phone Cover Color" Test Case, contains a Parameter which will create multiple Test Runs once triggered.
To Create the Test Runs, first, we will navigate to the Test Design section of qTest Manager. Then, select
the “Phone Cover Model” Test Case. From here, simply click on the “Create Test Run” button.
Save the Test Runs in the Sprint 4 Folder. Now, we will repeat the same process for the “Phone Cover
Color” Test Case.
As a Parameter is present in this Test Case, we are prompted to generate the data from it. Here we select Randomize Data and add the number of combinations we want to generate. Finally, we click on the "Add" button. To view and edit the Test Runs we have just generated, navigate to the Test Execution section of qTest Manager. Bear in mind that in qTest Manager, all Test Runs are added and housed in the Test Execution section regardless of where you create them.
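The Randomize Data option picks the requested number of data rows from the Parameter's possible combinations. As a conceptual sketch only (this illustrates the idea, not qTest's internal behavior; the Values are from our example):

```python
import random

# Values of the "Colors" Parameter used by the Test Case.
colors = ["Black", "White", "Blue", "Yellow"]

# All possible data combinations (trivial here, with a single Parameter).
all_combinations = [{"color": c} for c in colors]

# "Randomize Data" with a requested count behaves like sampling without
# replacement from those combinations.
requested = 3
sample = random.sample(all_combinations, requested)
print(len(sample))  # 3 randomly chosen data rows, one per Test Run
```

With more Parameters in the Test Case, the pool being sampled from would be the combined set of their Values.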
At this point, we can specify all the Test Runs’ properties like Execution Type, Planned Start and End dates,
Assigned To, Priority and most importantly Status. First, we assign the Test Run to ourselves. Then, we
need to make sure that the Execution Type is set to “Manual”, the Version is “0.1” and the Status is “Ready
for Baseline”. Finally, let’s add an appropriate description for all Test Runs we generated before.
Test Runs can exist independently, but best practice is to house them within Test Cycles and Test Suites. Test Cycles and Test Suites are used for organizational purposes. They can be used to organize your tests across different types of testing: for example, Functional and Regression testing.
It is important to have a clear overview of the qTest hierarchy for Execution objects: at the top we have our Release. The Release is followed by Cycles. Cycles are formed by different Suites. Finally, Test Runs are the base element of the pyramid. A Test Cycle is a container that shows a high-level summary of its underlying Test Suites and Test Runs, including the execution results of these tests and any Defects found. To create a Test Cycle, we navigate to the Test Execution section of qTest Manager. Here we select our Project and click on the "New Test Cycle Folder" button. We then name it and add a Description to it. Remember to link the Test Cycle to the Release we want it to fall under.
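The pyramid of Execution objects can be pictured as nested containers. As a purely hypothetical sketch (the cycle and suite names below are invented for illustration):

```python
# Release > Test Cycles > Test Suites > Test Runs (top to bottom of the pyramid).
# All names here are invented for illustration.
release = {
    "name": "Release 2",
    "cycles": [
        {
            "name": "Sprint 4",
            "suites": [
                {
                    "name": "Phone Cover Functional Suite",
                    "runs": ["Phone Cover Model", "Phone Cover Color"],
                },
            ],
        },
    ],
}

# A Test Cycle summarizes everything beneath it, e.g. the total number of runs.
total_runs = sum(
    len(suite["runs"])
    for cycle in release["cycles"]
    for suite in cycle["suites"]
)
print(total_runs)  # 2
```

This nesting is what lets a Cycle roll up execution results and Defects from all the Suites and Runs it contains.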
A Test Suite can be regarded as the lowest-level container to organize Test Runs. Test Suites are meant for a more granular grouping of Test Runs. To create a new Test Suite, we need to select our Test Cycle folder from the Project tree in the Test Execution section of qTest. Click on the "New Test Suite" button. We now name it and add an appropriate Description.
Afterwards, we set our scheduled execution dates and assign the Suite to a tester. For Test Suites it is important to specify which type of testing will be covered. Once a Test Suite is created, we can select it and add all its relevant Test Runs.
The Testing hierarchy is important as it enables users to easily find what they are testing and use the
containers to accomplish any configuration needed. This means that it is best practice to organize your
Test Cycles and Test Suites in a clear, readable way, according to Test Stage or Testing Environment.
We will not cover the specifics of how to organize Test Suites and Cycles according to the type of testing
used in this course. However, an example of Cycles and Suites use can be seen when approaching a
transition between functional and regression testing. In this case, we would have to schedule a different
set of Test Runs and change the ‘Type’ field in the Test Cycle, specifying what type of test is to be
conducted. Please visit our support page for more detailed examples.
Now that we have generated and organized our Test Runs, we are ready to execute them. There are several ways to execute Test Runs in qTest Manager. We can use the TestPad. This is the recommended approach, as it allows the tester to view and submit as much detail as possible. To execute a Test Run via the TestPad, select the Test Run from the list and click on "Run", or expand the drop-down menu and select "TestPad Only".
The TestPad will display in a separate window. Here, you have separate tabs where you can see the steps to execute, add details to your Test Run, and access the details of the original Test Case. Additionally, you can add attachments, notes, or log Defects. The TestPad can be used in conjunction with both Web Explorer and Desktop Explorer; we will explore these 2 options in later lessons.
Sometimes we will need to change the Status of a Test Run or log specific Defects. In these cases, we can
choose to avoid running the whole Test Run from scratch via the TestPad. This is possible using the Quick
Run option. Quick Run is a fast way to mark the overall execution of a Test Run without filling in further
details. It allows you to set the Status, submit time-tracking details and submit Defects in your Test Run.
To execute a Test Run via the Quick Run option select the Test Run from the list and click on “Quick Run”.
A new Window will open and here we can set the quick Result Status, enter a new Defect or link an
existing one.
We will cover how to submit Defects, in both TestPad and Quick Run, in later lessons.
You might have noticed that in neither of the methods used do we execute the tests through qTest itself. During these runs, both the TestPad and the Quick Run interfaces act as "notepads", allowing us to note down details and problems encountered while executing our tests outside of qTest Manager. Defects are identified through the independent activity of running the tests. In qTest we simply report on or log those Defects after they have been identified. Both the TestPad and Quick Run functionalities provide a structure and environment to log our notes, keep track of our Defects, and set our priorities.
After executing our tests, we want to be able to immediately view our test results. In qTest, we can
immediately view the data on our runs and Defects. We can see which Test Runs were executed
successfully, and in case of failed execution, which Test Cases, and Requirements were affected by the
logged Defects.
To generate the report, navigate to the Test Execution section of qTest and select the relevant Sprint or Test Cycle. At the top you will see four tabs: Statistics, Properties, Defect Summary, and Execution Summary.
If we click on the Defect Summary tab, we will see our logged Defects, the Test Runs that generated them
as well as the Requirements affected.
Lesson 04b Submitting Defects
In the previous lesson we learnt how to Execute our Test Runs. What if, during one of the executions we
were to encounter an issue? In this case, it is important to take note of the nature of the issue we have
encountered, its impact and the steps to follow to re-create it. This information will help the development
team to find and fix the defects of the application we are testing.
By the end of this lesson, you will understand and be able to use the Defects functionality within qTest. In particular, we will see how to submit a Defect during and after test execution, how to link existing Defects to new ones, and how to manage our existing Defects. We will also see how to create Early Defects using the Defects tab in qTest and perform batch operations on existing Defects. For the scope of this lesson we will cover these objectives in a non-integrated qTest environment, meaning that the instance of qTest we will be showing has no integration in place with any ALM. We will cover how to manage your Defects in an integrated environment in the next lesson.
First, let’s highlight where we are in our example. In the previous lesson we have been executing our Test
Runs. During the execution of one of the tests we have run into an issue. The mobile application freezes
every time we select the color “Blue” for our phone cover. We will need to log a Defect in qTest to notify
the development team of this problem.
We have 2 options when logging defects: The first option is to log Defects during test execution. This
option is recommended, as it allows for better documentation and detailing of the steps that were taken
in real time. The second option is to log Defects after Test Execution. In qTest you can submit Defects
from the appropriate Test Run’s log.
First, we will cover the recommended method: logging Defects during test execution. This can be done during both types of test execution, TestPad or Quick Run. Let's see how to log Defects via the TestPad first. From the TestPad you can submit Defects from an individual Test Step log or from an entire Test Log. In this case we will submit a Defect from a Test Step. To do this, we select the Test Step that triggered the bug: the selection of the color. We then click on "Submit Defect".
We have to specify that we want to create a brand-new Defect (choose "New" on the screen). Here we make sure that the Project selected by default is correct. Now we fill in the details. It's important to be as detailed as possible when filling in the Summary and Description of Defects. The details listed here will be crucial for the developer who will then fix the issue. As this issue happens with only one out of 4 colors, we set its Severity to "Average" and Priority to "Medium". We will assign this Defect to a developer. Lastly, we need to select the Defect's Status and Type, in this case, "Bug". Once saved, the system will generate a unique Defect ID.
To submit a Defect during a Quick Run, click on the Defect column, in the cell corresponding to the failed Test Run. The window that appears is very similar to the one that opened from the TestPad. Here we can choose whether to log a new Defect or link an existing one. The interface that opens when clicking on "New" is the same one that opened before. Here we can fill in the details like we did when logging a Defect from the TestPad. Remember to save your work at the end.
To log a Defect after the test execution has finished, we navigate to the Test Execution section. We select the affected Test Run and go to the "Execution Summary" tab. Here we open the Test Log related to the execution where we encountered the issue. From here the process of submitting a Defect is the same as covered before.
If the Defect we found were linked to an existing one, we would have to report this information in qTest.
Linking Defects can be very helpful for the development team, as it helps them pinpoint the problem. This can be done from the first window that opens when logging new Defects, both during and after test execution. From this window we click on the "Link" button and search for the Defect we want to link our new one with. To do this, we require the Defect ID.
So far, we've covered how to log Defects during and after test execution. However, sometimes we are aware of pre-existing issues with the application even before starting the test. In such cases, qTest allows you to submit an Early Defect, even if you don't have the Test Cases to execute. This is particularly needed when the pre-existing Defects are expected to affect the test execution.
Early Defects can be submitted only through the Defects tab. This tab appears only if no integration is in place with any ALM. You can see the Defects tab here. To create a Defect, simply click on "New". The available fields are the same as the ones we would fill in when submitting a Defect during or after test execution.
The Defects that we have been logging in the previous lessons had their Priority set to "Average". However, priorities can shift based on the impact the Defect has on the overall application and functionality. In our example, we've been asked to take the previously logged Defects and upgrade their Priority to "High". This can be done quickly by hand when working on 1 or 2 Defects, but it can be incredibly time-consuming when dealing with dozens or hundreds of Defects. To avoid the repetition of manual work, qTest allows you to perform Queries and batch actions on Defects as well.
However, this is only possible from the Defects tab. For example, we could filter for all our Defects with Priority set to "Average" and then upgrade all of them to "High" at once. Obviously, this is only an example, and basically any of the Defects' fields can be used to build a Query.
The results of your search and work can also be shared directly with developers. This works in a similar way to the Data Queries seen in previous lessons. In this case we navigate to the Defects tab and click on the "Mail" button. Here we can add the recipient and text of the email and send it.
Lesson 04c Logging Defects with ALM
In the previous lesson we saw how to log Defects during and after test execution. However, so far we have worked using a qTest instance not integrated with any ALM. How do things change if this integration is put in place?
By the end of this lesson you will be able to: use the Integration Browser Plugin to submit Defects directly to your ALM; submit Defects during and after test execution in an integrated qTest Project; and use the Auto-Populating function for Defect data during Defect submission.
After the integration has been put in place by the Admin team, it's best practice to install the Integration Browser Plugin on your personal machine. With the Plugin, Defects can be integrated with the ALM. The browser plugin allows for a native submission experience to the ALM, so you are not continuously required to map system fields. Overall, the plugin enhances the Defect submission process to different ALMs.
This plugin requires no settings and works immediately upon installation. It can be downloaded from the resources page. Bear in mind that plugins vary based on which ALM you use for your project. Additionally, the entire ALM setup will change based on what limitations, features, and rights are decided during the integration phase.
In the previous lesson you became familiar with logging Defects during and after test execution. When an ALM is integrated with qTest, the processes and interface change slightly. When logging a Defect using the TestPad, we click on the "Search" button to search for the Defect in the ALM catalog. Once found, we fill in data like the Reporter, Summary, Description, and Assignee.
The same additional button will appear when adding a Defect from the Quick Run option. In both TestPad and Quick Run we search for the previously created Jira Defect and add the details to it in qTest. The Integration Browser Plugin then synchronizes the 2 systems.
Thanks to this plugin, it is also possible to add new Defects to the ALM directly from qTest Manager. The process for this is very similar to the one seen in the previous lesson. However, in this case, the Defects will be imported into the ALM in real time via the Integration Plugin.
When logging Defects before, you might have noticed that some of the fields of a new Defect are populated automatically. This is a qTest Manager feature, and it is possible to configure it. When used efficiently, Auto-Populating can help save time and ensure the correctness of the data reported. By default, your Defect's Description field will auto-fill with the Test Steps' Description only when you submit a Defect linked to an individual Test Step. To configure more fields with the Auto-Populating function, access the Fields settings under the Defect artifact.
If using an ALM, for example Jira, this can be set up in the project's Integration Settings area. When creating a new Defect, it's important to remember to check the "Include all Test Steps Details" checkbox. All of the steps leading up to where the test failed will be copied from the Test Run to the Description area of the New Defect page. It's recommended to select this checkbox so that when a developer reviews this Defect, they will understand the steps to reproduce it.
Lesson 05 Scenario
So far, we have learnt how to set up a testing project from scratch in qTest and have gained a broader
understanding of how we can manage our tests most effectively. In this lesson, we will apply the
knowledge gained until now to modify our project, from start to finish.
By the end of this lesson you will be able to apply the knowledge from previous lessons to a more
complex use case. We will create a new Release, set the Requirements, Create the Test Cases to cover
them, generate the new Test Runs and finally log the new Defects. We will also introduce the concept of
Data Sets in qTest and how to use them to combine different Parameters.
For this new example we are asked to test a login functionality, before proceeding to order our phone
cover. We will need to test the functionality as a whole because the login process, happening before the
ordering of the phone cover, might affect the final outcome of the order. As before, the test will be
performed on the mobile app. However, before navigating to the phone cover or selecting any option,
we have to log into the application, using a valid username and password.
Let’s start by creating a brand-new Release, making sure to select the appropriate Status, before moving
forward. As the login process is new to us, we have to set a new Requirement too. Our new Requirement
covers the ability of users to log into the application when using a valid set of credentials. As mentioned
already, it is important to be detailed when describing Requirements as our final test results will be
compared against them. Now let’s link our new Requirement to the Release we created before.
At the same time, we must link the previous Requirements we created in lesson 2 to our new Release in
order to test the Phone cover functionality. Once the new Requirement has been created and linked, we
can proceed and create the relevant Test Cases. To save time, we create the new Test Case directly from
our Requirement to ensure they are automatically linked.
The Test Case we have created is still empty. Let’s now add the relevant Test Steps to make sure the Login
functionality is properly tested.
So far, we have created our Test Steps, however we have still not defined any data, namely, the relevant
usernames or passwords required for our test. We have seen before how Parameters help us reduce the
number of Test Cases we write. However, until now, we have been working with a simpler case. Now we
need 2 Parameters, one for usernames and another for passwords.
Developers have sent us a set of 4 unique usernames and passwords to test the application. All usernames and passwords are different from each other. Additionally, a given username will work only with the password associated with it. This means that we have to combine our 2 Parameters and ensure we generate only valid pairs. This is possible thanks to Data Sets, which allow users to quickly view, create, and edit data for large projects.
Additionally, Data Sets allow combinations of Parameters to be created. To create a new Data Set, navigate to the Parameters section from the 9-box menu in qTest Manager. Select the "Datasets" tab and create a new Data Set. From the Data Set, call the 2 Parameters we created before. Now, to combine them, we can either write the combinations ourselves, manually, or select "Generate" and then "Unique Values".
In the latter case, we would have to verify that the automatically generated combinations are correct. In a Data Set, it is also possible to Generate Unique Combinations. This generates all mathematically possible combinations between the Values of the 2 Parameters.
If we choose this option with our 2 Parameters, the result would be 16 unique combinations (4 usernames × 4 passwords). This function can be useful when our goal is to combine different Parameters within the same Test Case and cover all possible combinations, although it generates a high volume of resulting Test Runs. Doing this manually would, of course, be time-consuming as well as inaccurate due to data dependencies. However, this method does not fit our use case, because of the 16 combinations generated, only 4 are the ones we need, as only those 4 contain a valid pair of username and password.
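The difference between all mathematically possible combinations and the valid pairs we actually need can be sketched as follows (the credential values are placeholders, not the real test data):

```python
from itertools import product

# Placeholder credentials standing in for the 4 pairs sent by the developers.
usernames = ["user1", "user2", "user3", "user4"]
passwords = ["pass1", "pass2", "pass3", "pass4"]

# "Generate Unique Combinations": every username with every password.
all_combinations = list(product(usernames, passwords))
print(len(all_combinations))  # 16 combinations

# What our Data Set actually needs: only the matching username/password pairs.
valid_pairs = list(zip(usernames, passwords))
print(len(valid_pairs))  # 4 valid pairs
```

Pairing the Values positionally, rather than taking the full cross product, is what keeps the Data Set down to the 4 valid credential pairs.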
It is possible to import Data Sets directly from Excel files. This function is particularly useful given the potential complexity of Data Sets. Additionally, when working in distributed environments with regular Data Set updates, using Excel is strongly advised, as it reduces working time and the possibility of human error. To import Data Sets from Excel, just click on the "Import" button. A template is also available to guide you on how to structure the Excel file so that it is readable by the application.
As the login process always occurs before ordering any product, we will have to position the new Test Case accordingly. To do so, we navigate to the Phone Cover Make Test Case. Here we create a new Test Step and position it at the top of the list. From the Test Step we created, we can now call our Login Test Case. The next step is to save our Test Cases and proceed to approve them.
We are now ready to generate our Test Runs. When generating Test Runs from Test Cases using Data
Sets, we need to select the option to “create run data” from a “Data Set”. We then select the
appropriate Data Set and the range of data to use from it. Before finalizing, we can access a preview of
the final data to make sure we have selected the right entries. This time, the number of Test Runs
generated increases significantly. This is because our original 8 Test Runs generated in the previous
lesson are now multiplied by the 4 valid combinations of usernames and passwords.
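The jump in Test Run count follows directly from multiplication: each of the original 8 runs (2 makes × 4 colors) is repeated once per Data Set row, giving 8 × 4 = 32 runs. A minimal sketch of that expansion, with illustrative values:

```python
from itertools import product

makes = ["iPhone", "Samsung"]                  # 2 Values
colors = ["black", "white", "blue", "yellow"]  # 4 Values
# 4 Data Set rows (illustrative credential pairs)
credentials = [(f"username{i}", f"password{i}") for i in range(1, 5)]

# Original combinations from the previous lesson: 2 x 4 = 8 Test Runs
base_runs = list(product(makes, colors))
print(len(base_runs))  # 8

# Each base run is expanded once per Data Set row: 8 x 4 = 32 Test Runs
expanded_runs = [(u, p, make, color)
                 for (u, p) in credentials
                 for (make, color) in base_runs]
print(len(expanded_runs))  # 32
```

The first expanded run corresponds to the example in the narration: log in with username1/password1, then select the iPhone make and the color black.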
An example of one of our new Test Runs is: we log into the application with username1 and
password1, navigate to the phone cover, select the make Samsung and the color black. The next
Test Run would again use username1 and password1, navigate to the phone cover, select Samsung and,
in the end, choose the color white.
After double-checking that all Test Runs use the correct set of Values from the Parameter combinations,
we are now ready to start testing.
Lesson 06 qTest Explorer Overview
So far, every test we have run was done without a recording, meaning that we could only log our written
notes into qTest, attaching manually taken screenshots one by one. Even though this is enough for less
complex applications, like our WebShop, it might not be as detailed as needed when complexity
increases.
Now it is time for us to test the Web version of the application. We will have to test whether we can
reproduce the same Defect found in the mobile application here as well. Additionally, we want to test
whether the added features work on the website version as well.
By the end of this Lesson, you will understand the process of downloading the Desktop Explorer
application and Web Explorer extensions.
We will first cover the Desktop Explorer application. Desktop Explorer is a lightweight application that
houses an intelligent capturing engine. It automatically captures all screens and user actions across your
web or client applications to create detailed, step-by-step Test Case documentation and Defect reports.
In our scenario, we will use our machine with Windows OS and record each step of our test.
eXplorer is especially useful when our aim is to: create Test Cases by recording a session, submit Defects
found through exploratory testing, create documents from our exploratory testing scenarios, or create
automated scripts for our testing.
To be able to install and effectively use Desktop Explorer on our machine, we must first meet certain
system requirements. System requirements may change as newer versions of the application are
released. You can check the required specifications here.
Let’s start by installing Desktop Explorer on our machine. We start from qTest Manager: navigate to the
Resources section and select the Desktop Explorer link. We are now redirected to the installation guide.
After downloading the installer, we save it in our Downloads folder and start the installation. Once
finished, it is best to restart the system.
As for our scenario, we will test the web application using Google Chrome. We also need to download
the browser extension. Browser extensions are available for Google Chrome and Mozilla Firefox. It is
highly recommended that you install these extensions to your Chrome and/or Firefox browser in order
for Desktop Explorer to capture these browsers properly. You can download the Browser extension from
the installation guide. Once downloaded, we need to click on “Add the extension” in Chrome.
As mentioned before, Desktop Explorer is only available for Windows OS. What if our testers use a
non-Windows OS, such as macOS or Linux? In this case, we would want to use Web Explorer. qTest Web
Explorer is a simple browser plugin that brings all the powerful capture abilities of Desktop Explorer to
different OS. The plugin is available for the most popular browsers, including Google Chrome, Safari and
Firefox. It is important to remember that Web Explorer is only available for web-based applications. This
means that we cannot use Web Explorer to record what happens in applications installed on our machine.
To record a session with Web Explorer, we first need to install it. For our example, we will install the
Google Chrome plugin from the Resources section in qTest Manager. Guides on how to install the plugin
for different browsers can be found on our support page. Once the installation is completed, the
Web Explorer icon will appear in the browser toolbar.
Lesson 07a qTest Web Explorer
As mentioned in the previous lessons, our goal now is to test the web version of our WebShop
application.
We will test the same added features as before; however, this time we will not do so from a mobile device.
In the previous lessons, we tested the mobile version of our online shopping application. In this lesson,
our goal is to test the web version and identify whether the same issues are present here as well. To
perform the test and generate the report, we will use the Web Explorer we installed previously. Now we
will see how to do Session-based testing with Web Explorer.
By the end of this Lesson, you will be able to understand and use the functionalities of Web Explorer. You
will learn how to record a Test Session and Submit Defects from Web Explorer.
eXplorer can be used to perform various functions, such as: creating Test Cases by recording a session;
logging notes and Defects found through Exploratory Testing Sessions; and generating the report
document, which can then be used as an attachment in any of the components in qTest Manager.
Additionally, eXplorer contains the automated script generator functionality. This is a useful feature
used by teams to speed up their transition to test automation from Exploratory and manual Test Cases.
Teams can use the Sessions module to automatically generate Web automation test scripts after their
qTest Web Explorer or Desktop Explorer sessions have been completed. The Automated Script
functionality will not be covered in this course.
Web eXplorer itself is particularly suited to testers who are not operating a machine with Windows OS
installed, or who have no direct admin rights on their PCs. As testing web applications across different
OS is very important, it is crucial for the tester to have an effective tool to perform manual and/or
Exploratory testing from macOS or Linux machines. qTest Web Explorer brings all the powerful capture
abilities of Desktop Explorer to different OS types through a browser plugin. However, it is important to
remember that Web Explorer is suited only for web application testing.
Before starting Web Explorer, we need to make sure that the web app we want to test is open in the
same browser as Web eXplorer. To start Web eXplorer, we click the plugin icon in the browser.
To record the Session for our scenario, we first need to log into qTest Web Explorer. We can log in
directly with our qTest account or use the Single Sign On functionality. To log in with the qTest account,
we need a valid URL and credentials. The URL field should contain the active URL used to access your
qTest site from a browser. Credentials for Web eXplorer can be granted by the site administrator.
The Single Sign On option becomes especially useful when working in restricted environments. This
option only becomes available after logging into qTest for the first time and agreeing to store your
credentials for SSO purposes. In the “SSO token” field, we need to enter the token that can be
generated from the qTest Manager Resources tab.
eXplorer will not log out automatically, so it is important to remember to log out once the testing session
has been completed.
Before starting to test, we need to select which Test Run we are going to execute. Here, we can either
select planned Runs or create a new one.
If we choose to create a new Session, we must first fill in all the necessary fields, such as Title,
Description, Planned Duration, etc. If we choose an already planned Run, those fields will be
automatically filled with the data entered during the planning session. We are now ready to start
recording.
After the recording of a Session has started, the original Web Explorer icon changes to a flashing
Recording icon. This signifies that we are now in recording mode.
As we proceed with our test, Web eXplorer automatically records each user action and screen change.
From this page, you can also access many useful tools: make notes, record your actions as a video, or
add audio commentary on what’s happening on screen.
The “Capture Visible Screen” button allows you to instantly capture what is currently on screen, in
real time.
The “Annotate Last Screen” button allows you to add notes or call out Defects on the last screenshot
taken. This function works well together with the “Capture Visible Screen” button: we can make sure
that a page containing a defect or issue is captured and then, in the editor window, add comments on
the affected part.
Sometimes, the procedures and steps on screen might be too complex for a screenshot to represent
them correctly. You might also need to capture dynamic events, such as the playback of audio/video
sources, or report on performance issues. eXplorer offers the possibility to capture your test as a video
and to record audio as well. Under the “Capture Visible Screen” menu, you can choose to record either
video or audio.
When testing an application, it’s important to know how much time was spent. With eXplorer, we can
log the time spent on the different phases of testing so that the data can be used in later reports and
session scheduling. Time logging is particularly important when our goal is to estimate how much time
a potential user will spend in our application performing the same tasks we are testing now. With Web
Explorer, you can log your time and notes at the same time. By adding Notes, you can also report Bugs
(if any), notes, questions, concerns or feature requests.
To complete the testing session, we choose the “Complete Session” button. Once the test session is
completed, we are instantly sent to the Session Module, where we can review, edit, and share our web
test session with others.
Lesson 07b qTest Desktop Explorer
In the previous lesson, we explained how to run an Exploratory or manual test session using Web
eXplorer. However, not all applications are web-based, and for those, we would need a program capable
of capturing what happens on our PC, not only on a browser. qTest Desktop eXplorer is available for PCs
running Windows.
By the end of this Lesson, you will understand and use the functionalities of Desktop eXplorer, as well as
record Sessions and submit Defects through it. We will also learn how to create Test Cases from the
execution results and save the result of our Test sessions.
In this lesson, we will use Desktop Explorer to record the testing sessions and generate the report
document. qTest Desktop eXplorer is a rapid test execution recorder that supports exploratory and
scripted testing while providing seamless integration with your agile development tools and Defect
trackers. Desktop Explorer is used mostly for desktop applications. Unlike Web eXplorer, Desktop
eXplorer allows you to record across any application open on your PC, not only your browser.
When using Desktop eXplorer, the first step is to log into the application, to ensure that the recording of
the session is added to the correct domain. As with Web eXplorer, Desktop eXplorer has 2 options for
logging in: using the qTest account or via Single Sign On. Both methods work the same way as for
Web eXplorer. For the qTest account login, you will need the qTest environment URL and a valid set of
credentials. For the Single Sign On option, you will need the qTest environment URL and the SSO token
generated from the Resources tab within qTest Manager.
After login, we have different ways to launch Desktop eXplorer. We can launch it directly from the
desktop as a normal program, from the TestPad directly in qTest Manager, or from the Sessions Module.
After launching Desktop eXplorer, we choose what type of testing session to perform. If we choose to
create a new Session, we first have to fill in all the necessary fields, such as the Project we want to save
the session to, the Title and Description of the Session, and the Executor. The Planned Duration will be
filled automatically with a standard value if you do not enter anything. If we choose an already planned
Session, those fields will be automatically filled with the data entered during the planning session.
In the “Application” field we need to specify what application we will be recording during the session.
Multiple applications can be selected at once. Make sure whichever app you want to record is open on
your system. If we are using multiple monitors at once, it is best to select the option to capture all
monitors.
As a final step, select the recording mode. “Auto” will capture a screenshot automatically after every
mouse click or keystroke. “Manual” will not capture any screenshots automatically, meaning you will
have to take screenshots manually each time. “Time Interval” allows you to set a time interval between
automatically taken screenshots; the minimum interval available is 5 seconds. By default, qTest
Explorer’s recording mode is set to Auto.
During test execution, it is common to find issues or to realize that some feature is missing. In such cases,
we have to share them with our team as a bug, opportunity, or feature suggestion. Desktop Explorer
allows you to make a note of these, detailing your submission. When we click the “Add Note” button,
the recording pauses. We can now enter annotation mode to start detailing what’s happening on screen.
You can add notes using tools like Select, which lets you draw basic shapes like rectangles or ellipses
and add basic text.
The shortcut to add Notes is Alt+Ctrl+N.
The Audio and Video recording functionalities for Desktop eXplorer work very similarly to the Web
eXplorer ones. Desktop eXplorer offers the possibility to capture video and record the audio as well.
Under the “Capture Visible Screen” menu you can choose to record either the video or audio.
When testing an application, it’s important to know how much time was spent, and on what. For
example, we might log more time than expected investigating opportunities and not have enough time
left to test for Bugs and Defects.
Desktop Explorer allows you to log which activity you are currently spending time on. Activities like
Setup, Test, Report, and Opportunity are available, and you can switch between them. With Desktop
eXplorer, you can log your time and notes at the same time. By adding Notes, you can also report Bugs
(if any), notes, questions, concerns or feature requests.
The possibility to capture screenshots manually is very useful when we are dealing with dynamic screens
or similar content. Desktop Explorer’s forced screen capture mode allows you to select either part of or
the entire screen and take a screenshot. Different capture types are available: Full Screen, Active
Window, Custom Region, Selected Region and Scrolling Window.
To complete the testing session, we choose the “Stop” button. After stopping the session, a confirmation
popup appears. This asks us to either complete the session or continue adjusting the content using the
editor. When selecting the “Edit session” button, we can review our Session on Desktop. When selecting
the “Complete Session” button, we can review our session from the Session Module.
We will now see how to review our session from the desktop. After stopping the test session, the
editor mode opens on the desktop. From here, we can edit and review all data recorded during the test
execution. While reviewing our test session, there are several actions available. “View Screen” allows us
to view and analyze whether all required screens were captured.
The “Environment Details” tab allows us to check the details of the environment on which the session was
recorded, such as OS, browser, project name, etc. We can also “Annotate” while reviewing the session’s
screens if anything incorrect, like a bug or a wrong message, is found. “Submit Defect” allows us to submit
any identified Defect. The “Complete Session” button will close the Session in the desktop view and
open it in the Session Module.
We can also create a Test Case out of the test session we recorded. Once the review of the session is
complete, we can create the Test Case from here. It will appear in the Test Design section of the
respective project in qTest Manager. All taken screenshots will be converted into Test Steps. Finally, it is
possible to save our test session as a document in formats like Trace, Text, Word, PDF and JPG.
An additional way to review our Test Session is to do so from the Session Module. We can access the
Session Module by clicking on “Complete Session” from the dialog window. The Session Module will open
from the browser. Here you can manage and edit your test Sessions. It is also possible to export the
Session in PDF, MS Word, JPEG, HTML format. Additionally, we can generate Scripts for our Session. Bear
in mind you can only generate scripts for Test Sessions conducted for web applications.
It is possible to log Defects during both of the review processes we have demonstrated so far: the review
from the desktop as well as the review from the Session Module. Additionally, Desktop eXplorer allows
you to log Defects quickly even after completing the test session.
It is recommended to submit Defects as soon as possible after Test Execution, via the desktop reviewer.
When submitting a new Defect, we need to fill in the mandatory fields like Title, Description, Severity etc.
Additionally, we will have to link the Defect to the screenshot showcasing it. We can also update existing
Defects by linking them to one of the screenshots taken and updating its information.
Once finished, we can use the Session that we just logged, and create a Test Case from it. Desktop
Explorer allows you to create manual or Automated Test scripts for the Test sessions you recorded.
Manual test scripts will then be stored in the Test Design module of qTest Manager. Additionally, we can
save our Test Cases in an Excel sheet, so that they can be imported at any later stage.
It is best practice to save our work so that we can refer to it at a later point. Additionally, we can share it
with our team members if needed. Desktop Explorer allows you to save a Session’s results in multiple
formats and to share it with your team. These results can then be attached to Test Runs within qTest
Manager or saved to another central location for your team. You can save the results in Trace, Text, Word,
PDF, JPG format.
Lesson 07c qTest Explorer within Test Execution
So far, we have used Web and Desktop eXplorer from outside the qTest Manager environment. It is,
however, possible to execute Test Runs with eXplorer directly from the TestPad.
By the end of this Lesson, you will be able to execute Test Runs in Test Pad using Web Explorer and
Desktop Explorer. We will also learn about the Execution History & Session Tabs within qTest Manager.
Our Test Runs are ready to be executed. The scenario to test is the same as in the previous lessons. First,
we will execute the Test Runs using Web Explorer. Before starting, it’s important to remember that to
use Web eXplorer, we need to have the browser plugin installed. If the plugin is missing, you will get an
error message upon starting Web eXplorer. Additionally, the web application we want to test has to be
open in the same browser as the one we start Web eXplorer from.
Once set up, we are ready to start.
To launch Web eXplorer from qTest Manager, we simply need to expand the “Run” menu and select the
“TestPad + Web Explorer” option. The Web eXplorer recording window will open together with the
TestPad window. As we are starting the session through qTest Manager, we notice that both the Project
and the Session Title fields are automatically filled in. This is because, having opened eXplorer directly
from qTest Manager, it is able to connect immediately with our data.
If we were to open it from the plugin icon, these fields would not have been filled automatically. The
information we see can be changed as we proceed. Now we just have to provide the Description for this
particular Session, and we are ready to start. To stop the session recording, select “Stop”. A new tab
automatically opens, showing the screenshots captured during the recorded Session. We can now save
the recording and close the Session page.
The last step is to fill all relevant information in the TestPad, including whether the Test execution passed
or failed. Remember to save your work before closing.
As mentioned before, it is also possible to start Desktop eXplorer from TestPad. As a reminder, Web
eXplorer is suitable only for Web Applications while Desktop eXplorer is most suited for Desktop
Applications. Additionally, remember that Desktop eXplorer is compatible only with Windows OS.
Before starting the Session, we need to make sure that Desktop eXplorer and the appropriate browser
extension are installed on the PC.
To launch Desktop eXplorer from qTest Manager, we simply need to expand the “Run” menu and select
the “TestPad + Desktop Explorer” option. When prompted for security permission, select “Allow”. If
either the application or the browser extension is missing, you will get an error message upon selecting
this option.
The Desktop eXplorer application will open together with the TestPad window. In the Desktop Explorer
pop up window, we can modify the Session name and Description before starting the recording. To stop
the session recording, select “Stop”. A new tab automatically opens, showing the screenshots captured
during the session. Save the recording and close the Session page.
As we did for Web eXplorer, we now need to fill in all relevant information in the TestPad, including
whether the test execution passed or failed. Remember to save the work before closing.
In a project, it is particularly important to keep track of the changes and executions of all Test Runs. This
can be done via the Execution History Tab in qTest Manager.
All test log details from the TestPad, including the Status of the Test Run and Test Steps, are stored here.
To access the sessions recorded with Web eXplorer as well as Desktop Explorer, we navigate to the
Sessions tab in qTest Manager.
The Session ID hyperlink directs us to the Session page. Here we can see the recorded steps, screenshots,
and environment.
From this page, navigating to the Linked Objects section, we can see the Test Run, as well as the
associated Test Case, any linked Requirements and Defects.