RusosAutomationBible Java
Test Runners
    Picking the right test runner
    Data Driven Tests
    TestNG
        Execution diagram
        Data Driven Testing
    JUnit
        Execution diagram
        Data Driven Testing
Selenium Grid
    What is Selenium Grid?
    When to use it
    How It Works
    Required hardware per node
        Master Node
        Slave Nodes
        Number of nodes
        Other optimizations to be done
    Frequently Asked Questions
    Building a laboratory for testing with Selenium Grid
        Grid’s Hub config file
        Grid’s Node config file
        Grid’s Hub Batch file (Windows)
        Grid’s Node Batch file (Windows)
    Gathered Metrics
        Hub Specific Metrics
        Node 1 Specific Metrics
        Node 2 Specific Metrics
API/Service Testing
    API testing coverage
    HTTP Protocol
    Extract string literals and magic numbers to a constants class
    Choose the right library for the job
    Use POJOs to communicate with APIs
        Entities (de)serialization
        Introduce DSL object factories
    Test Scripts
    Test Reports
    Authorization & Authentication
    Database testing - Beyond the API endpoint
        Simple Hibernate stand alone usage
    Service Testing with Spring
        Externalize configuration values and constants
        Access databases via Spring JPA
        About HATEOAS
Appendix
Glossary
This document will focus on providing a little technical background on the Java programming language, but the root concepts detailed here can be extrapolated to different software development areas and, possibly, other languages.
The main goal of this document is to define certain “rules” that make up what I consider good practices. This should enable Quality Engineers to:
If you find yourself lacking in any of these areas (or if you have trouble understanding any terminology used in this document), I encourage you to first read the document about Java and OOP best practices.
ϫ Do not just work on “creating test scripts”; create or employ whatever tool you can get your hands on to make testing easier for the whole team:
ʊ Write tasks/targets for the build system (Maven, Gradle, Grunt, etc.) that your team is using and instruct teammates on how to use them.
ʊ Propose ideas to your team, like adding pre-commit hooks to your VCS to run tests before pushing new code on certain branches.
It is pointless and very frustrating when you are the only one in your team that knows about “testing stuff”. Leaving feelings aside, it is your job to change that.
✓ Include the possibility for developers to run automated tests on their boxes before pushing new changes to the version control system.
✓ Be fluent with Java generics, lambdas, varargs, and method references. For test framework development, you will probably also need to know about annotations and the reflection API.
Documentation
✓ Include a brief documentation for every test script. Mention its purpose and reference the
corresponding manual test case.
Logging
✓ Prefer the Simple Logging Facade for Java (SLF4J), which serves as a simple facade or abstraction for various logging frameworks (e.g., Logback, Log4j), allowing you to plug in the desired logging framework at deployment time. Also, with a logging framework you can have a single line of code (e.g., log.info(something);) and have it output to the console, a file, a network location, etc., all at once.
✓ Use a private static Logger field, defined in the base POM class and in the testing framework itself, to log to a file, console, etc.
✓ If you are using Java 8, an alternative may be to create an interface with a default method that provides the Logger. It “feels” like a single-method abstract class, but it is different: it does not force you to extend from a class (given that Java does not support multiple inheritance) and you can easily add it to any class that requires logging without breaking existing code. Here is an example:
package ruso.testing.common.logging;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
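// A minimal sketch of how such an interface could look
// (the name "Logging" is illustrative, not from the original):
public interface Logging {

    // Any class implementing this interface gets a ready-to-use SLF4J logger.
    default Logger log() {
        return LoggerFactory.getLogger(getClass());
    }
}

// Usage: implement the interface and call log() anywhere in the class.
class SomeTest implements Logging {
    void doSomething() {
        log().info("something happened");
    }
}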
✓ When you define assertions to check for expected conditions, be very descriptive in the error message you will log. This will save you a lot of time during the analysis of failures.
ϫ Do not log actions manually during test scripts; this renders tests longer and unreadable.
ϫ Do not use System.out. You cannot easily change log levels, turn it off, customize it, etc. Also, it is an IO operation and, therefore, time consuming. The problem with using it is that your program will wait until the println has finished. This may not be a problem with small programs, but as soon as you get load or many iterations, you'll feel the pain.
ϫ Mind the log level. If it is not really an error, do not log it as such.
Configuration
Externalize configuration
ϫ Do not have a public class with public static fields manually mapped to properties. This implies that the more properties you have, the more this class needs to be manually changed.
✓ Use external files to define properties that can change the behaviour of your code. The
format can be, in order of preference:
○ YAML
○ JSON
○ Simple Java properties file
○ XML
✓ Model your configuration as a class, and deserialize the file into it. Preferably use the Jackson library, which is clean, well known and supports both JSON and YAML marshalling, or any other library you like for the same purpose.
Auto-detect configuration when possible
ϫ For settings that can be auto-detected within Java code, do not request them as properties. If you need your code to behave differently depending on whether you are on Windows or Linux, have a helper class figure it out. Do not request users to put another property in your configuration file; no one likes having to deal with a config file with 1000 parameters. Keep the helper non-instantiable with a private constructor, as in the sketch below.
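A minimal sketch of such a helper, assuming the class name Environment (the detection logic via the os.name system property is an assumption, not the document's original code):

public final class Environment {

    // os.name is a standard JVM system property; detect once.
    private static final String OS = System.getProperty("os.name").toLowerCase();

    // Non-instantiable helper.
    private Environment() {}

    public static boolean isWindows() {
        return OS.contains("win");
    }

    public static boolean isLinux() {
        return OS.contains("nux") || OS.contains("nix");
    }
}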
Java Proxy Settings
Many times, a Java app needs to connect to the Internet. If you are having connectivity troubles
because you are behind a corporate proxy, set the following JVM flags accordingly when
starting your JVM on the command line. This is usually done in a shell script (in Unix) or bat file
(in Windows), but you can also define them in your IDE settings for each test run configuration.
JVM variables are defined by prepending -D when calling the java command from your system’s terminal; the sketch below shows the usual proxy flags. Alternatively, if you have your system proxy settings already configured, you can try:
System.setProperty("java.net.useSystemProxies", "true");
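For reference, the standard JVM proxy properties look like this on the command line (host, port and jar name are placeholders):

java -Dhttp.proxyHost=proxy.mycompany.com -Dhttp.proxyPort=8080 \
     -Dhttps.proxyHost=proxy.mycompany.com -Dhttps.proxyPort=8080 \
     -Dhttp.nonProxyHosts="localhost|127.0.0.1" \
     -jar my-tests.jar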
Test Scripts
ϫ Do not hardcode strings, numbers or any other kind of literals (known as “magic values”)
inside the tests. Use constants and/or external files/sources to hold them.
ϫ Do not hard code dependencies on external accounts or data. Development and testing
environments can change significantly in the time between the writing of your test scripts
and when they run, especially if you have a standard set of tests that you run as part of
your overall testing cycle. Instead, use API requests to dynamically provide the external
inputs you need for your tests.
ϫ Avoid dependencies between tests. Dependencies between tests prevent tests from
being able to run in parallel. Running tests in parallel is by far the best way to speed up
the execution of your entire test suite. It's much easier to add a virtual machine than to
try to figure out how to squeeze out another second of performance from a single test.
✓ Use Setup and Teardown. If there are "prerequisite" tasks that need to be taken care of before your test runs, you should include a setup section in your script that executes them before the actual testing begins. For example, you may need to log in to the application, or dismiss an introductory dialog that pops up before getting into the application functionality that you want to test. Similarly, if there are "post-requisite" tasks that need to occur, like logging out or terminating the remote session, you should have a teardown section that takes care of them for you.
✓ Have the ability to tag tests and/or suites to be able to run a subset of the tests you have. Most current test runners support this feature. Nobody likes to spend an hour waiting for a whole suite of unrelated tests to finish running just because they changed an error message.
✓ There is no rule saying that there should be a 1:1 relation between the manual test case steps and its automated version; if it keeps the tests cleaner / more readable, split a single manual test case into many automated tests.
✓ Tests should be synchronous and deterministic. This means that:
○ Each and every step performed in the test script should translate to a reaction on the SUT.
If this cannot be done, you will have to find a way to make it happen. This may be achievable by adjusting SUT configuration on demand and/or via mock objects. You can’t wait until midnight for the SUT to actually deliver that email to your test account because there was network congestion or the scheduled job wasn’t triggered until later on.
✓ Organize/structure your tests in a coherent manner. For example, group same web page
related tests into the same class, so each class represents a suite for a particular web
page.
⚠ You can use retry rules in your tests. If waiting patiently doesn’t resolve your test flakiness, you can use JUnit’s TestRule class or TestNG’s IRetryAnalyzer interface. These will rerun tests that have failed without interrupting your test flow. However, you should use these only as a last resort, and very carefully, as rerunning failed tests can mask both flaky tests and flaky product features. Examples can be found here and here, along with their respective tests: here and here.
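For the TestNG side, a minimal sketch of an IRetryAnalyzer (the class name and retry limit are illustrative):

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryUpToThreeTimes implements IRetryAnalyzer {

    private int attempts = 0;

    // TestNG calls this after every failure; returning true re-runs the test.
    @Override
    public boolean retry(ITestResult result) {
        return ++attempts <= 3;
    }
}

// Attach it per test: @Test(retryAnalyzer = RetryUpToThreeTimes.class)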
⚠ If you are developing a new test script and you start mumbling about “how difficult it is to do this!” or “every time I create a new test script I have to do this, or declare that, or copy and paste this block of code”, stop right there, because you are probably right and there is something more to be done for the sake of simplifying script development. Ask a co-worker for advice and/or insights on simplifying usage; different people think about/see things differently.
Use assertions correctly
✓ Try to use Hamcrest, which is a framework for writing matcher objects, allowing 'match' rules to be defined declaratively. It works with both JUnit and TestNG. You can include it in your Maven project by simply adding its dependency in your pom.xml file:
<dependency>
<groupId>org.hamcrest</groupId>
<artifactId>hamcrest-all</artifactId>
<version>1.3</version>
</dependency>
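Once it is on the classpath, assertions read almost like prose. For illustration (the asserted variables response, user and errorMessage are made up):

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;

assertThat(response.getStatusCode(), is(equalTo(200)));
assertThat(user.getRoles(), hasItem("ADMIN"));
assertThat(errorMessage, allOf(startsWith("Error"), containsString("timeout")));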
ϫ Do not evaluate/assert everything with the same assertion method. There are more methods than just assertTrue, assertNull and assertNotNull:
assertTrue(service.id > 0, "Service id should be assigned");
ϫ Do not repeat the content of the assertions over and over again. Cache the response
once and then use it many times.
ϫ Do not perform assertions in methods that perform the API call or inside the framework. When dealing with APIs you want to know if the response code is the appropriate one for different queries; not everything will return a 200/201 code. Leave validations and assertions to the tests.
Test Runners
One very important thing to take into account at the beginning of an automation project initiative
is the test runner itself. Pick one that supports parallel test executions.
● TestNG, which makes parallel runs and data-driven tests easy to set up (covered in depth below).
● JUnit, which is awesome, especially in its latest version. But I’d advise you to stick to version 4.x until version 5 gets full IDE support.
Obviously, there are many more out there, but these two are the most well known. If you are reading this document with the intention of designing your framework, POMs and tests in a parallel way for a language other than Java, you will have to invest some time researching what’s on the market for your platform.
It is also important to know how these test runners run your tests. The behaviour between them is quite different, even though the end result for both will be “oh good, yeah, my tests are being run”.
Data Driven Tests
Data Driven Testing, referred to as DDT, is a term that describes testing done using different sets of inputs along with their verifiable outputs for a particular test logic. This way, tests can be parameterized with one or more arguments, avoiding having to write the same test for different values or hardcoding values inside it.
Test runners are the ones responsible for running your tests in this fashion, when told to do so. Both JUnit and TestNG support this feature, although they differ in naming and usage.
There are several sources of test data but, in the end, the choice will depend on the size and amount of data your tests need. Ideal sources are:
✓ Use an in-memory database such as HSQLDB so that you can pre-populate it with a "production-style" set of data from a .sql file, as in the sketch after this list.
✓ JPA can be used to access real databases to retrieve values in real time. Try to rely on well-known frameworks, such as Spring, to ease these operations.
ϫ If you opt for database storage, do not perform SQL queries manually.
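A minimal sketch of the in-memory idea using plain JDBC (requires the hsqldb jar on the classpath; the table and rows are placeholders standing in for a production-style .sql load):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InMemoryTestData {
    public static void main(String[] args) throws SQLException {
        // "mem:" keeps the database in memory; it disappears when the JVM exits.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdata", "SA", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE users (name VARCHAR(50))");
            st.execute("INSERT INTO users VALUES ('Ruso')");
            try (ResultSet rs = st.executeQuery("SELECT name FROM users")) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        }
    }
}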
TestNG
I think of it as the wizard’s apprentice of test runners. I mean, everybody loves (and advises to use) TestNG when first writing their tests. Maybe it is due to the fact that it looks simpler, or maybe because it came out after JUnit, learning from its initial failures and, taking advantage of that, bringing to the table annotations, data providers, factories and much more that made everything look nicer and cleaner. At that time, JUnit had some nasty ways of declaring test methods and lacked many useful features such as parameterization. So I think TestNG’s popularity is due to the bad fame that JUnit had back then. Now, things have changed.
TestNG makes running tests in parallel a breeze: just change a word here and a number there in an XML config file (an XML... really?), and BAM! your tests run in parallel. But ease of use comes with a hidden price: it makes it difficult to write tests and frameworks that work correctly (as expected), due to the way TestNG runs the tests (as in, test methods) in parallel.
Specifically, you can set TestNG XML suite parallel attribute to:
methods: TestNG will run all your test methods in separate threads. Dependent methods will also run in separate threads, but they will respect the order that you specified.
    ● This is the only real way of achieving fully parallel test runs in TestNG.
    ● It does not spawn a different instance of the test class for each thread.
    ● You had better have a thread-safe class when you run your tests, or everything will fail.
classes: TestNG will run all the methods in the same class in the same thread, but each class will be run in a separate thread.
    ● Will use the same thread for the whole class.
    ● Parallelism happens at "suite level", not at "test level". If you have a large amount of tests per suite (class), it won't help to speed up test execution that much.
    ● No thread-safety issues at class level are to be expected here.
tests: TestNG will run all the methods in the same <test> tag in the same thread, but each <test> tag will be in a separate thread. Will use as many threads as possible to run the tests.
    ● Good luck maintaining your XML.
    ● You actually lose running tests (methods) in parallel.
    ● Non-thread-safe suite classes will run fine.
instances: TestNG will run all the methods in the same instance in the same thread, but two methods on two different instances will be running in different threads.
    ● No one has ever seen more than one test instance.
    ● You actually lose running tests (methods) in parallel.
    ● Non-thread-safe suite classes will run fine.
Similarly, the thread-count attribute allows you to specify a positive integer to define how many threads you want to spawn.
Below are examples of a typical TestNG XML file, in both single-threaded and multi-threaded (per test method) versions.
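A minimal sketch of both variants (suite/test/class names are placeholders):

<!-- Single-threaded -->
<suite name="MySuite">
  <test name="MyTest">
    <classes>
      <class name="ruso.tests.LoginTests"/>
    </classes>
  </test>
</suite>

<!-- Multi-threaded: one thread per test method -->
<suite name="MySuite" parallel="methods" thread-count="4">
  <test name="MyTest">
    <classes>
      <class name="ruso.tests.LoginTests"/>
    </classes>
  </test>
</suite>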
This is what really happens when you run your test methods in parallel with TestNG:
1. It creates a single instance of the test class in one thread (let's call it T1).
2. According to a numeric value in its XML config (thread-count), it creates a thread pool of X threads: T2 to Tn.
3. It runs each method marked with @Test within the test class using the defined threads, against the test class instance created in step 1.
Execution diagram
This means that any variable defined outside methods annotated with @Test (that is, any non-local variable) must comply with one of the following:
● Be thread-safe.
● Be made thread-safe. For example, hold it inside a ThreadLocal.
● Not exist. Find a way to not define the variable there, and achieve the same expected behaviour by doing things in a different way.
If you fail to meet these requirements, you will run into weird and unpredictable issues, typical problems in the realm of multi-threading.
In short, the problem with TestNG is not the number of threads it can create or how it creates them; the problem resides in the number of class instances that the created threads run against: it is always many threads against a single class instance.
You can see some official mentions of this issue (Cédric Beust, author of TestNG, provides the answers) in the following pages:
It is also worth mentioning that TestNG does not always run @AfterXXXX and @BeforeXXXX annotated methods when those are defined higher in the test class hierarchy. It is not wise to assume that the code you put in there will be executed. You might want to consider declaring the attribute alwaysRun = true in your annotations if your setup/teardown methods are not being called.
Data Driven Testing
TestNG implements DDT with Data Providers, which allow you to supply the values you need to test. A Data Provider is simply a method annotated with @DataProvider. This method can reside in the same class as the tests that use it, or in a different one.
The Data Provider method can return one of the following two types:
● An array of arrays of objects (Object[][]), where the first dimension's size is the number of times the test method will be invoked and the second dimension contains an array of objects that must be compatible with the parameter types of the test method:
@DataProvider(name = "UsernameProvider")
public Object[][] createUsernames() {
return new Object[][]{
{"Ruso"},
{"I'm gonna fail with this one"},
};
}
@Test(dataProvider = "UsernameProvider")
public void parameterizedTest(String username) {
assertEquals(username, "Ruso");
}
● An Iterator over object arrays (Iterator<Object[]>), handy when data comes from an external source and you want it produced lazily:
@DataProvider(name = "UsernameProvider")
public Iterator<Object[]> createData() {
    // getDataFromSomewhere() is a placeholder for your actual data source.
    return getDataFromSomewhere()
            .stream()
            .map(s -> new Object[]{s})
            .collect(toList()) // static import: java.util.stream.Collectors.toList
            .iterator();
}
ϫ Do not use TestNG’s feature for providing arguments from its testng.xml config file. You might end up having several places where data comes from and, either way, that XML cannot represent complex structures or objects that your tests may need.
JUnit
My personal favourite. Although associated mostly with unit testing in Java, it is a great tool to write your everyday integration tests with.
Execution diagram
JUnit favors running tests in isolation, making it a little bit more resilient when dealing with multiple threads. This does not mean that I’m telling you to write crappy code that does not care about thread safety.
In its later versions, JUnit has included watchers, categories, rules, theories and many other really powerful notions for testing that will be the pillars of next-generation testing tools.
˿ Link: Take a look at this simple class implementation of a test runner in JUnit that provides parallel test execution as well as parameterized tests.
Data Driven Testing
Here is where a clever and modularized design in the test runner shines bright. At first look, JUnit does not seem to have such a fancy DDT feature. The truth is that not only does it have one, it also allows people to create and use different implementations of it.
Before moving forward, I must clarify that JUnit uses the term “test runners” for sub-modules that extend the Runner abstract class provided by the framework. You can think of JUnit as an engine that runs runners, which in turn run tests. With this in mind, the following paragraphs will mention different test runners that you can use in JUnit by marking your base test class with the @RunWith(TestRunnerOfChoice.class) annotation.
You can use the Parameterized test runner to create data-driven tests. It's not terribly well documented, but the basic idea is to create a public static method (annotated with @Parameters) that returns a Collection of Object arrays. Each of these arrays is used as the arguments for the test class constructor (not the tests), and then the usual test methods can be run using fields set in the constructor. It is also possible to inject data values directly into public fields, without needing a constructor in the test class, by using the @Parameter annotation.
@Test
public void testSimple() throws Exception {
out.println(format("Running test using %s / %s", browser, version));
}
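A minimal self-contained version of the idea, assuming the browser/version fields used in the snippet above are injected via @Parameter (class name and data values are placeholders):

import static java.lang.String.format;
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameter;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BrowserMatrixTest {

    // Each Object[] becomes one run of every @Test method in this class.
    @Parameters(name = "{0} {1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][]{
                {"chrome", "59"},
                {"firefox", "54"},
        });
    }

    @Parameter(0)
    public String browser;

    @Parameter(1)
    public String version;

    @Test
    public void testSimple() {
        System.out.println(format("Running test using %s / %s", browser, version));
    }
}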
For completeness’ sake, you can see the complete test sample here.
Just in case you do not like this particular test runner, you can try out any of the following ones:
● JUnit-Dataprovider
● JUnitParams
● Zohhak
● Keep Googling...
● Build your own Ͱ
With both test runners, you should assume that there is a “main” thread which sets up/tears down tests and launches listeners, and then spawns other threads that will be the ones actually executing the test code... or not; it actually depends on which other frameworks/extensions/listeners you may be using together with your test runner. But it’s good to always be suspicious.
Web UI Testing - Selenium/WebDriver
What is WebDriver?
WebDriver is a remote control interface that enables introspection and control of browsers. It provides a set of interfaces to discover and manipulate DOM elements in web documents and to control the behaviour of a user agent. It is primarily intended to allow us to write tests that automate a user agent (browser) from a separate controlling process (the tests).
It is platform and language neutral, meaning that you can find implementations for all major platforms (Windows, Linux, OS X, etc.) and programming languages (Java, C#, Ruby, JS, etc.).
In simpler, practical terms, WebDriver is a library for remotely controlling browser applications such as Chrome, Firefox and Internet Explorer, among others.
Throughout this document, you will see terms that map to a corresponding term in the WebDriver API. Some of them are:
Driver → WebDriver
Plural → List<WebElement>
Locator → By (and its subclasses)
WebDriver Architecture Overview
WebDriver has a server/client architecture: the client is what we call the driver instance (just a RemoteWebDriver subclass / WebDriver implementation) in our Java code, and the server is an external binary process that controls the browser.
Client (WebDriver implementation) → Server
EdgeDriver → MicrosoftWebDriver.exe
AndroidDriver → embedded into the Appium server
IOSDriver → embedded into the Appium server
Communication between clients and servers happens in a stateful session using the HTTP protocol carrying JSON payloads. A session begins when a client is instantiated (and connects to the server) and is closed by the server upon one of the following conditions:
Not all server implementations support every WebDriver feature. Therefore, the client and server use a particular JSON object with properties describing which features a user requests a session to support. This JSON object is known as Capabilities, although people also call it DesiredCapabilities; that is actually the name of the implementation of this object in Java. It simply describes a series of key/value pairs.
A list of the commonly used capabilities that a session may support can be found here.
Take into account that every server implementation can support capabilities that are not in the
WebDriver standard. Here you have some more capabilities for Chrome driver, as an example.
If a session cannot support a capability that is requested in the desired capabilities, no error is
thrown; a read-only capabilities object is returned that indicates the capabilities the session
actually supports.
As you can see, there are multiple “servers”. So, how does Selenium know which one to use when you have tests running against multiple browsers? It takes a look at the values present in the Capabilities object (specifically browserName, platform and version) being sent as part of the JSON payload in the communication. This way, a server knows whether it should answer.
Although there are many servers, one of them (especially) is not like the others. Most of these servers are, as I call them, “leaf servers” because once they start running they:
That’s it. If you request a capability that they are not supposed to handle, they just report an error; they are always the “leaf node” in the call tree.
Let me try to come up with a diagram that may show this a little bit better:
On the other hand, selenium-server not only does the same for the Firefox and Safari browsers, but it also acts as a “router”. Meaning that, if a desired capability is not within itself to handle, it will try to launch the process of the server that can handle such a request and, if successful, it will route the connections to it.
Recommended way of using WebDriver
Understanding WebDriver architecture allows you to greatly simplify the way your framework creates instances of WebDriver. I propose implementing things in a way similar to this one:
According to the diagram above, the only client that we need to take care of is RemoteWebDriver! Now we can focus only on the Capabilities we want it to have, not on particular implementations.
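A minimal sketch of that idea: a single factory that always builds a RemoteWebDriver pointed at a Selenium server, varying only the capabilities (the hub URL and browser names are placeholders):

import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public final class DriverFactory {

    private DriverFactory() {}

    // One entry point; no browser-specific client classes anywhere else.
    public static WebDriver create(String browserName) throws MalformedURLException {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browserName); // e.g. "chrome", "firefox"
        return new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), caps);
    }
}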
Each browser-specific server binary is located through a system property:
GeckoDriver → webdriver.gecko.driver
ChromeDriver → webdriver.chrome.driver
InternetExplorerDriver → webdriver.ie.driver
EdgeDriver → webdriver.edge.driver
OperaDriver → webdriver.opera.driver
PhantomJsDriver → phantomjs.binary.path
This means that you can set these values via JVM arguments (-D flags) when launching the Selenium Server, or programmatically:
System.setProperty("webdriver.chrome.driver", "/absolute/path/to/chromedriver");
System.setProperty("webdriver.opera.driver", "/absolute/path/to/operadriver");
System.setProperty("webdriver.ie.driver", "C:/absolute/path/to/IEDriverServer.exe");
System.setProperty("webdriver.edge.driver", "C:/absolute/path/to/MicrosoftWebDriver.exe");
System.setProperty("phantomjs.binary.path", "/absolute/path/to/phantomjs");
System.setProperty("webdriver.gecko.driver", "/absolute/path/to/geckodriver");
Locators
A locator is a statement or definition that allows you to identify/locate a single (or many) web element(s) in a web page, hence the name “locator”.
○ Id
○ Name
○ CSS
○ XPath
○ ClassName
○ TagName
○ LinkText
○ PartialLinkText
Each strategy represents a way of finding elements in a web page. Their names derive from the actual pattern they look for in a web element to consider it found.
The Id strategy will try to find the element by looking at the id attribute of a node in the DOM tree of the tested web page, as in the example below.
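For illustration, a hypothetical node and the corresponding Java call:

// Given a node like <input id="username" type="text">,
// the Id strategy matches it directly:
WebElement usernameField = driver.findElement(By.id("username"));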
The Name, ClassName, TagName, LinkText and PartialLinkText strategies behave the same way as Id does, but attempt to match elements by their name attribute, class, tag and link text (fully or partially), respectively.
CSS and XPath strategies are the only ones that find elements by matching an expression (not
just a literal string that you can find in the actual DOM) against nodes of the DOM tree.
ʼ The preferred selector order should be: id > name > css > xpath
There are a few things you should consider when deciding the locator to use:
ϫ Do not use brittle locators. It's easy and tempting to use complex XPath expressions like
//body/div/div/*[@class="someClass"] or CSS selectors like #content .wrapper
.main but while these might work when you are developing your tests, they will almost
certainly break when changes to the web application’s HTML output are made.
⚠ name attributes are not guaranteed to be unique, but they usually tend to be.
⚠ Be smart with XPath usage. Avoid using full paths. This type of locator should be as
short, simple and straightforward as possible.
⚠ Keep track of flaky locators and, whenever possible, talk to the development team and
have them include a unique Id attribute for the troublesome WebElements.
Why did the locator fail?
Before you define a better locator for the failed web element, it is important to first understand
why the locator failed. In some cases, the problem might not even be with the locator, so
changing the locator would not help.
Sometimes, although we are using a really specific and good locator to find our web element, the test keeps failing because the “element is not visible” or “another element would receive the click”. This is due to the way the HTML is structured, causing “invisible layers” to sit on top of our target element.
Consider this real life example of a simple checkbox for a “Remember Me” functionality in a
login page:
d="remember-li">
<li i
<div class="checker" id="uniform-remember1">
<span class="">
<input id="remember1" name="remember" type="checkbox" value="true">
</span>
</div>
</li>
The “logical” assumption would be to choose the input’s id attribute as the selector for our checkbox. The thing is, there is a div wrapping it with a kind of suspicious class that seems to give the checkbox some “format” (the “checker” class); therefore, something is rendering on top of the checkbox to make it more “visually appealing”, while rendering the original input invisible. You actually click on a div, which, in turn, forwards the event to the input underneath. This screws up our direct interaction with the input tag from WebDriver tests, with exceptions and error messages similar to “Element is not visible” or “Another element would receive the click”.
In short, sometimes things are not what they look like and, as in this case, you need to fall back to using another locator (here, the wrapping div’s id="uniform-remember1") rather than the most obvious one (the input’s id="remember1").
Model your pages as objects - Page Object pattern
Page Object is the name given to a pattern typically used in web front-end automated testing. Despite the “cool” name, it is nothing more than saying “let’s write all our web automated tests in an OOP way, not procedurally”. This means modelling the web pages that we are automating as objects: hence Page Object Model (POM, for short). A Page Object:
✓ Holds private references to the web elements it interacts with. Those private references may have accessor methods of protected scope to let subclasses of the POM access them.
✓ Will expose public methods for tests to deal with those private web element references. Under no circumstances are the web elements allowed to leave the Page Object scope.
If tomorrow you want to replace WebDriver with the “next new cool technology”, you will need to rewrite every test, even though the test cases did not change at all.
The end goal situation should be:
For the purposes of this document, the underlying technology is WebDriver, but that’s
something that the tests should never be aware of.
The reasons for doing all this should be obvious, but just in case, I’ll state them:
✓ Reduces the amount of duplicated code (100 tests going through the same page would not define the same elements/actions 100 times)
✓ Centralizes related code; whatever needs fixing, the fixing will (and should) take place in a single place
✓ Encapsulates the mechanics required to find and manipulate the elements/data in the UI
✓ Provides an interface that's easy to program against and hides the underlying widgetry in the window
Structure
✓ A Page Object models System Under Test (SUT) pages and their actions and behaviours. That’s it. Anything else should be done by a test framework (preferably) and/or the tests.
✓ Prefer composition over inheritance; Java does not support multiple inheritance anyway. This way you can model a module (a section of a page) of the SUT in a separate class, and if such a module appears in different pages you can still compose it into all your POMs.
✓ Move functionality (methods) and state (class fields) common to all POMs to an abstract
superclass. Make fields private and create accessors for them with protected
visibility.
✓ Each POM should be responsible for initializing its own internals, even WebElements. It is also acceptable to delegate initialization to an abstract superclass.
ϫ Do not have constructor arguments for Page Object classes. At least, not constructor arguments of WebDriver-related types. I know that you will probably argue something like “If I don’t have a constructor that accepts at least one argument for my POMs, how am I supposed to provide the WebDriver instance to it?”. My answer would be: “You don’t... or you shouldn’t”.
Why? Because doing that would imply that tests (which usually are the ones that create POM instances) would also have to create an instance of WebDriver to pass to the POM. As stated before, this contradicts the holy commandment of “Do not expose WebDriver nor WebElement instances to tests.”
Take a look at this section of the document to learn more about possible workarounds.
✓ Page navigation operations such as: back(), forward(), refresh(), stop() and
maybe even deleteCookies() should be present in the abstract superclass common to
all Page Object classes. Again, composed via appropriate holders like: Navigation and
Cookies.
✓ Always try to model your objects using interfaces. Not only because they allow you to
really design your framework in an OOP way, but also because interfaces in Java
provide a great deal of flexibility. You can easily create decorators and dynamic
proxies for their instances. You could do things like “whenever accessing a web element,
do this before or do that after”. You can think of it as Aspect Oriented Programming for
free!
✓ Expose operations (methods) over web elements, not the web elements themselves.
Tests should never handle web elements by themselves. Tests should not be aware of
existence of WebDriver behind the scene. Public getter methods return primitives and
objects but not WebElements.
✓ If an operation/action method in a POM causes a transition from the current web page to another, it must return an instance of the destination page’s POM. If the operation does not cause a page transition, return the instance (this) of the same POM. This will allow you to chain calls to the POM. Although you can do this, do not invoke long chained operations just because you can. Keep the test readable!
✓ WebElements should be in a correct state before the POM interacts with them. Wait explicitly for conditions to be fulfilled before operating on a web element. Example: a POM exposes a method bookRoom() that (internally) clicks on a button named “Book”. That method is then also responsible for making sure that such a button exists and that it is in a “clickable” state.
✓ Since tests must not deal with WebElements or any other WebDriver-specific class, make sure your framework/POM classes have the appropriate scope. That is, take some time to guarantee that methods dealing with WebDriver-related stuff have package-private, protected or private scope. They should not be reachable from tests, only from POMs. The narrower the scope, the better.
✓ Use a Domain Specific Language (DSL) when possible; if really needed, pass around locators as By objects instead of Strings.
ϫ Avoid defining many static methods (so called helpers), try to see and design your tests,
POMs and frameworks with your OOP hat on.
✓ Name public methods after meaningful business-oriented actions or events. Meaning that, instead of having public methods like:
θ typeUsername(String username)
θ typePassword(String password)
θ clickLoginButton()
you would expose a single business-oriented method such as loginAs(String username, String password), as in the sketch at the end of this section.
ϫ Do not perform assertions in POMs. Tests should decide what, when and where
something should fail, not the POMs.
✓ A POM does not necessarily need to represent an entire page. This means that you can create a POM for a portion of a page (a reusable module, like a navigation bar or a menu) and then compose it inside other POMs.
✓ It is always better not to have huge Page Object classes. Again, you should refactor them into several smaller POMs and then use composition inside other POMs.
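Pulling several of these rules together, a minimal sketch of a POM (the page, element ids and the BasePage/HomePage types are illustrative; a BasePage sketch appears in the PageFactory section below):

import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;

public class LoginPage extends BasePage {

    // Private references; tests never see these.
    @FindBy(id = "username")
    private WebElement username;

    @FindBy(id = "password")
    private WebElement password;

    @FindBy(id = "login")
    private WebElement loginButton;

    // One business-oriented action instead of type/type/click.
    public HomePage loginAs(String user, String pass) {
        username.sendKeys(user);
        password.sendKeys(pass);
        loginButton.click();
        return new HomePage(); // page transition -> return the destination POM
    }
}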
How do I define a locator in a POM?
I said that locators are a statement/definition because, in Java, locators are mostly defined using annotations, which are a form of metadata (complementary information, a mark or decoration for a type, field or method definition). Specifically, the annotation name is @FindBy:
@FindBy(how = How.XPATH, using = "//div[@someAttr='value']")
private List<WebElement> severalElements;
You can also use a shorter version of @FindBy that skips the definition of the how and using
arguments like this: @FindBy(id = "anId").
In case of need, know that you can group several @FindBy annotations inside a single @FindBys or @FindAll annotation. If possible, avoid using this feature, as it will make the POM brittle, less readable and slower.
● @FindBys will find all DOM elements that match each of the locators, in sequence.
● @FindAll will search for all elements that match any of the @FindBy criteria. Note that elements are not guaranteed to be in document order.
@FindBys({@FindBy(name = "name"), @FindBy(css = ".another")})
private List<WebElement> elements;
✓ Use @CacheLookup on elements that you know for sure will always be there.
Sometimes, when you have locators that change depending on some known circumstance it
may be helpful to define a class like:
public final class DynamicLocators {
    public static final class Login {
        // Message that varies the text depending on who is logging in.
        public static By WelcomeMessage(String user) {
            return new By.ByLinkText(format("Welcome back %s!", user));
        }
    }
}

class POM_X {
    public POM_X doStuff() {
        findElement(DynamicLocators.Login.WelcomeMessage("Ruso"));
        ...
    }
}
Again, use this only in situations that require it. Do not store every locator this way!
As a final word for this section, you should know that you can create your own location strategy if you really have a need to do so.
How do I initialize all elements defined in my POM?
Once you have a POM class defined, you can initialize all WebElement and
List<WebElement> declarations by “processing” your POM instance or class as:
PageFactory.initElements(webDriverInstance, aPomInstance);
PageFactory.initElements(webDriverInstance, MyPOM.class);
When creating a web automation framework you will most likely want to define a “base” PageObject superclass that contains common reusable functionality, with every POM extending from it. Its constructor is, therefore, a good place for the POM initialization code.
This way, every time you create a POM instance, your elements will be initialized from the
beginning.
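A minimal sketch of such a base class, assuming a hypothetical framework-internal DriverProvider (e.g. backed by a ThreadLocal) so that tests never handle WebDriver directly:

import org.openqa.selenium.support.PageFactory;

public abstract class BasePage {

    protected BasePage() {
        // DriverProvider is a hypothetical framework-internal holder; it keeps
        // WebDriver out of test code and out of POM constructor arguments.
        PageFactory.initElements(DriverProvider.get(), this);
    }
}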
⚠ Warning
PageFactory always initializes fields whose type is WebElement or List<WebElement>. Do not assume that, because you have not annotated them with @FindBy, @FindBys, or @FindAll, it won’t process them. If no such annotations are present on a field, PageFactory will set a Java proxy with a locator of type “ID or NAME” equal to the field name. The field will never be null.
Documentation
✓ Documentation is tough but necessary. At least, document every public method and, if
possible, provide a really brief example of how to use it.
Interactions between non public types
Several times I’ve mentioned that the scope (visibility) of a type should be as reduced as possible, especially when dealing with WebDriver instances. The whole idea is for your POMs and test superclasses to have access to the WebDriver factory, but not the client (test) code.
The goal here is to design the package and type layout within your framework in a way that makes it impossible to gain access to critical types from anywhere other than where intended.
For this goal, you can consider not only the design of the package structure but also some of
the following ideas:
ʊ Use an Inversion of Control (IoC) / Dependency Injection (DI) container, such as:
○ PicoContainer
○ Petite Jodd
○ Dagger
○ As a last resort, Spring (would be an overkill to use it just for this purpose)
Some words on element staleness
A StaleElementReferenceException is thrown when the element you were interacting with gets destroyed and then recreated. Most complex web pages these days will move things about on the fly as the user interacts with them, and this requires elements in the DOM to be destroyed and recreated. When this happens, the reference to the element in the DOM that you previously had becomes stale and you are no longer able to use it to interact with the element. You will need to refresh your reference or, in real-world terms, find the element again.
If there is a similar DOM element on the page, you can obtain it with findElement, and Selenium will create a new WebElement object (another Java dynamic proxy) for this new DOM element. Your old WebElement object will remain stale because the underlying DOM object was destroyed and can never be restored. Similarity is not equality.
Handling synchronization
WebDriver performs some pauses after receiving instructions like “navigate to this site” [.get()], “click this element” [.click()] and “find this element in the page” [.findElement()]. Those waits are known as implicit waits; they happen behind the scenes and we do not invoke them ourselves, although we can (and should) control their timeouts:
driver.manage().timeouts().implicitlyWait(10, SECONDS);
driver.manage().timeouts().setScriptTimeout(5, SECONDS);
driver.manage().timeouts().pageLoadTimeout(30, SECONDS);
Some may look similar but they are not the same! Be sure to set them up in your test framework:
● implicitlyWait: Time to wait when locating elements before throwing an error
● setScriptTimeout: Time to wait for an asynchronous script to finish execution before throwing an error
● pageLoadTimeout: Time to wait for a page load to complete before throwing an error
When these waiting times are not set, are not enough, or need to vary from one POM method to another because an action takes longer to fulfil than another, I’ve seen people do a lot of mediocre stuff to cause the program to “wait” or “be delayed” using statements like Thread.sleep() or loops with conditions and counters that ultimately rely on it.
This is wrong! Do not use Thread.sleep(), ever... just... don’t.
We refer to synchronization as the act of actively waiting for something to happen before or
after performing an action. It is not limited to only WebElement interactions. It is also called
explicit wait.
What’s the difference you say? Well, here is a quick rundown on the differences between
explicit and implicit waits:
Explicit wait
Implicit wait
So, basically, implicit waits suck, and I would even suggest you simply stop using them.
How? Set the implicit wait timeouts to zero and never touch them again.
Why? Because it will force you to define and use explicit waits before and/or after operating on elements. Your tests will fail so fast that you will notice you forgot to add synchronization in some of your POMs before you have time to say “what the hell?”.
For quick reference, here is how you define a basic instance of WebDriver explicit wait:
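A minimal sketch (the locator is a placeholder, and the driver instance is assumed to come from your framework):

import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// 10 is the timeout in seconds.
WebDriverWait wait = new WebDriverWait(driver, 10);
WebElement bookButton = wait.until(ExpectedConditions.elementToBeClickable(By.id("book")));
bookButton.click();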
ϫ You do not need to create an instance of WebDriverWait each time you want to wait.
Create it once in your base POM and reuse it.
ϫ I’ve hardcoded the timeout value for this example. You are not allowed to do that.
✓ ExpectedConditions is a “conditions” provider class; you will find pretty much whatever condition you need in it, operating over the driver, element(s) and/or By locators. The complete list of conditions, at the time of writing this document, is:
titleIs, titleContains, urlToBe, urlContains, urlMatches, presenceOfElementLocated, visibilityOfElementLocated, visibilityOfAllElementsLocatedBy, visibilityOfAllElements, visibilityOf, elementIfVisible, presenceOfAllElementsLocatedBy, textToBePresentInElement, textToBePresentInElementLocated, textToBePresentInElementValue, frameToBeAvailableAndSwitchToIt, invisibilityOfElementLocated, invisibilityOfElementWithText, elementToBeClickable, stalenessOf, refreshed, elementToBeSelected, elementSelectionStateToBe, numberOfElementsToBeMoreThan, numberOfElementsToBeLessThan, numberOfElementsToBe, attributeToBe, attributeContains, attributeToBeNotEmpty, visibilityOfNestedElementsLocatedBy, presenceOfNestedElementLocatedBy, presenceOfNestedElementsLocatedBy, invisibilityOfAllElements, or, and, not, javaScriptThrowsNoExceptions, jsReturnsValue, alertIsPresent, numberOfWindowsToBe, textToBe, textMatches, findElement, findElements
(several of these names have multiple overloads, e.g. frameToBeAvailableAndSwitchToIt, elementToBeClickable and textToBePresentInElementValue)
Do not mix WebDriverWait with another wait implementation. Pick one and stick to it.
If you choose to implement your own custom wait, do not mix Selenium’s ExpectedConditions methods with the ones you are going to write for your own wait. Just use the ones you define, so you have control over them.
Take advantage of Java 8 syntax, with lambdas and streams, to define conditions almost inline.
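For example, in Selenium 3 until() accepts any function from WebDriver to a value, so a one-off condition can be a lambda (the locator and text are placeholders):

// Waits until the status element's text contains "Done".
wait.until(d -> d.findElement(By.id("status")).getText().contains("Done"));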
Logging
✓ Another way to extend your logging is to add a listener to the test harness framework that you may already be using (TestNG/JUnit). You’ll gain access to events from test suites & methods. It is usually a good idea to attach a listener that grabs screenshots and/or HTML source dumps from the driver upon test failures.
Test Scripts
ϫ Do not have a single web browser instance run all your test suites, as it may run out of memory due to factors external to your design and implementation.
✓ Use the ErrorCollector rule (JUnit) or SoftAssert (TestNG) when writing your tests; see the sketch after this list. Whenever a regular assertion is not fulfilled, the test is aborted and the rest of the test won’t run. If you have a long functional UI test with several assertions and there is an error during the first check, then you won’t know what happened with the remaining verifications.
✓ Use the latest version of Selenium client bindings. The Selenium Project is always
working to improve the functionality and performance of its client drivers for supported
languages, so you should always be using the latest stable version.
✓ Create your web browser instance once per test class (suite), and destroy it after the
suite is finished.
✓ Opening and closing a browser instance per test is fine too. Slower, but fine.
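A minimal sketch of the TestNG SoftAssert mentioned above (the page methods and expected values are placeholders):

import org.testng.asserts.SoftAssert;

SoftAssert soft = new SoftAssert();
soft.assertEquals(page.title(), "Dashboard", "Wrong page title");
soft.assertTrue(page.hasWelcomeBanner(), "Welcome banner missing");
// Collects every failure above and only now fails the test, reporting all of them.
soft.assertAll();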
Documentation
✓ Include a brief documentation for every test script. Mention its purpose and reference the
corresponding manual test case.
Things to avoid when testing front end
ϫ Do not test for images, videos or any other ‘multimedia’ content. You can guarantee that
such element’s container is there, has the correct size, tooltip, and maybe even that the
src/href/url is the correct one. That’s it. If you really need to check for something else
(that would mean, if you have a requirement to do so), talk about it with your designated
TL & client.
ϫ Do not test for CAPTCHAs. Third party components like CAPTCHA or TinyMCE can be
extraordinarily difficult to automate. There’s no sense in writing tests around third party
components or services that have already been tested by that tool’s creators.
ϫ Do not perform environment setup and teardown through the UI. If your tests need a certain user or permission to exist before or after running, then set those conditions directly against the backend. I don’t care if your system has a nice UI console for user management. Create a backing infrastructure of helper classes and methods that call into appropriate web services, internal APIs or even directly into the database. Do not have slow and error-prone UI-based fixtures; if something goes wrong there, all dependent tests will fail or be skipped, and it’ll just take more time for someone to notice.
ϫ I’m on “common sense” ground here but, do not take the manual Test Case specification as if it were the Holy Bible. People can mess up, including QC Analysts. If the Test Case does not make sense and/or you figure there is a better way to test the same thing, let your team know about it. Have the QC update it.
ϫ Avoid creating automated tests around low-value features or features that are relatively stable. Having 1000 automated tests is not a good thing just because. We’d rather have 100 good automated tests than 1000 tests that serve little purpose and take 90% of your time in maintenance.
On prioritizing tests candidates for automation
✓ Focus your UI automation efforts on high-value features. Talk with your stakeholders and
product owners. Find out what keeps them awake at night. I’ll bet it’s not whether or not
the right shade of grey is applied to your contact form. I’ll bet it’s whether customer’s
orders are being billed correctly. Automate tests around critical business value
cases, not around shiny look-and-feel aspects of your application.
✓ Manual tests that are mandatory to exercise and are run often (known as smoke tests)
are the best candidates during the first stages of every project. Next in our priority list
would be the sanity tests suite.
✓ Decide what test cases to automate based on Return On Investment (ROI). Formally, it is often described as ROI = (Gains − Investment Cost) / Investment Cost, but extrapolating this into a more practical meaning, you can read it as something between one or all of the following sentences:
? “You have to spend a lot of time and effort to develop a non-flaky test script, when running it manually takes almost no time (example: validating that an image is correctly displayed).”
? “The testing framework needs major changes to support what the test would need to do, and making those changes is out of scope right now.”
? “The automation can be done quickly, but its maintenance will have to keep going on due to ‘X’ circumstance.”
If any of these reasons applies to your current test case, just don’t automate it, as its ROI will not be convenient.
Handling Authentication
If you haven’t encountered this issue before, then you have not been doing automated testing long enough.
If you’ve never seen these dialogs before, then you have not been using a computer long enough... or you still live in the 80’s.
There are several types of authentication mechanisms to secure web sites (Basic, Digest, NTLM
and other types of hash based challenges). They all usually require the user to type in their
credentials but the way in which those credentials are sent to the server differs. Then the
browser takes care of re-sending them every time you request something from the same site,
until you close the session or it expires. Credentials are not sent in plain text. Generally, they
are sent as part of the HTTP headers. Since WebDriver does not know about HTTP headers,
mangling them is impossible from WebDriver itself. Also, authentication dialogs are not the
same as JS alerts, so Selenium can’t handle/dismiss them...and even if you dismiss them, you
won’t gain access to the site you are supposed to test.
About usage of AutoIt to bypass browser modal dialogs
You will never hear good things from me regarding the usage of AutoIt in professional automation practice. That’s it. I do not consider this a solution, and I don’t care if it already saved you once in a previous automation project. Also, its usage is restricted to Windows only.
Bypassing Authentication
You can try to send the credentials in the URL to provide a standard HTTP "Authorization" header. You supply a username and password in the URL, such as http://test:[email protected]/password-ok.php. And pray for your browser to support this feature. :)
Chrome → Supported
Firefox → Supported, but Firefox will display a prompt asking you to confirm
Safari → Unsupported
Because browser support for basic HTTP authentication is limited to less than 50% of major browsers, maybe you get lucky, but chances are this method is not going to help you bypass the Basic authentication of your test site.
There are workarounds for some of the unsupported browsers, but they are so convoluted and flaky that it is kind of pointless to even suggest them as a real solution.
Cookies Injection
Another solution is to have WebDriver inject cookies that let you bypass authentication, by setting an authenticated state beforehand for the application or site you're testing.
If the site accepts authentication via cookies, you will see at least one new cookie in the list. Its name is something the developers usually assign, so there is no single cookie name I can tell you about.
Having this info, now you can try to inject the same cookie using WebDriver:
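A minimal sketch (the domain, cookie name and value are placeholders):

import org.openqa.selenium.Cookie;

// You must already be on the domain the cookie is valid for (see the notes below).
driver.get("http://my-test-site.com/404");
driver.manage().addCookie(new Cookie("SESSIONID", "value-captured-from-a-real-login"));
driver.get("http://my-test-site.com/");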
This can also be tricky, since you may need to make a change in the source code of the
application so that the cookie is acknowledged, but your tests will be able to run without the
need for user authentication credentials.
ʼ You must be on the same domain that the cookie is valid for in order for this to work
ʼ You can try accessing a smaller page, like the 404 page, for the purpose of injecting the
cookie before you access the home page
Authentication Proxy
So, this would be the preferred way. When implemented correctly, this solution works in 100% of cases, on every platform. The main idea here is to set up an authentication proxy that will perform the credentials exchange on behalf of WebDriver, which has no way of doing so by itself, given that the HTTP client is embedded inside the Selenium bindings and we operate on them at a much higher level.
As stated before, there are different types of authentication mechanisms (Basic, Digest, NTLM,
etc) so you will have to look for a proxy capable of handling your authentication scheme.
˿ NTLMAps
˿ Cntlm
˿ Java NTLM Proxy
˿ Building you own ;)
Once you have decided on your proxy solution, the following steps are quite straightforward:
1. Run your proxy on the machine that will run the tests.
2. Specify the WebDriver capabilities for proxy usage, pointing them to the proxy you deployed (see the sketch below).
3. Run your tests and change/tune proxy settings until everything works as expected.
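For step 2, a minimal sketch (host and port are placeholders):

import org.openqa.selenium.Proxy;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

Proxy proxy = new Proxy();
// Route HTTP and HTTPS traffic through the local authentication proxy.
proxy.setHttpProxy("localhost:3128");
proxy.setSslProxy("localhost:3128");

DesiredCapabilities caps = new DesiredCapabilities();
caps.setCapability(CapabilityType.PROXY, proxy);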
After you have all this working, you can make it as fancy as you want:
ʊ If the proxy is written in the same programming language as your testing solution, you may want to embed it in your framework (if the proxy code is Open Source, of course) and launch it in a separate thread.
ʊ You can keep it in binary form as part of your project structure and execute it as an external process from your test runner (before any test).
ʊ Simply have it always deployed and running on all host machines where the WebDriver tests run.
Multithreading and parallelism
Before getting our hands dirty with code, designs and ideas, you need to know that there is a
difference between “running tests against multiple browsers” and “running tests in parallel
against multiple browsers”.
Sounds silly, but know this: just because you’ve finished deploying a Selenium Grid with 50 top-notch nodes, your tests won’t run any faster nor in a shorter period of time than before. Selenium Grid takes care of distributing (and balancing) test executions to different nodes that may be running different browsers/versions but, if test executions are sent one after another (sequentially), then there is absolutely no gain in terms of speed. Typical reasons are:
● The test runner does not support parallel test runs, or it is not configured to do so.
● The test runner supports parallel test runs, but the framework/POMs/test code did not contemplate the idea of multiple threads accessing it, so it randomly fails with weird exceptions followed by the classic “It works on my machine” phrase.
When building a test framework from scratch, always design it with the ability to run tests in parallel in mind. Nowadays it is a must. Fast feedback is the cornerstone of automated testing, and parallel test execution means reduced test run times.
Of course, the more shared state you include in your framework classes and POMs the more
you will have to look out for thread safety. Take a look at the document “Java OOP & Good
Practices” to learn more about that.
In practice, this means that you should never define a simple variable or a singleton to contain the WebDriver instance. As I said, you should consider having a WebDriver instance per thread. How many threads, you may ask? The optimal number of threads lurks around 1.5 x the number of CPU cores (see the performance notes below).
One of the problems that arises when using the ThreadLocal pattern is that you cannot view nor access (not even to dispose of) all the objects that the ThreadLocal instance may be holding. There is no way to clean up ThreadLocal values except from within the thread that put them there in the first place (or when the thread is garbage collected, which is not the case with worker threads). This means you must take care to clean up your ThreadLocal when a test is finished, because after that point you may never get a chance to enter that specific worker thread again and, hence, will leak memory.
So, in practical terms, to set and remove instances in a ThreadLocal you will usually have to take advantage of the test runner you are using. Specifically, of its before and after test/suite annotations.
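A minimal sketch of a per-thread WebDriver holder wired to TestNG annotations (class names are illustrative; DriverFactory refers to the hypothetical factory sketched earlier):

import org.openqa.selenium.WebDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public abstract class BaseTest {

    // One WebDriver instance per worker thread.
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    protected static WebDriver driver() {
        return DRIVER.get();
    }

    @BeforeMethod
    public void startDriver() throws Exception {
        DRIVER.set(DriverFactory.create("chrome"));
    }

    @AfterMethod(alwaysRun = true)
    public void stopDriver() {
        WebDriver d = DRIVER.get();
        if (d != null) {
            d.quit();
        }
        DRIVER.remove(); // avoid leaking the instance in the worker thread
    }
}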
Issues related to parallel test executions
Symptoms
ϑ Tests “randomly” pass and fail each run, or throw transient exceptions
ϑ The number of unexpected failures is related to the number of concurrent threads
running
ϑ When run in a single thread (not parallel), the tests pass or show expected behavior
Possible Fixes
̏ Repeat the process with increasing parallelism (increasing the number of concurrent
threads) until you see consistent failures
̏ Do a thorough review of static components in your classes and avoid using the static
keyword where possible, and use ThreadLocal class members when needed
̏ Look for resource contention and race conditions by printing resource references to
standard output along with thread IDs: Thread.currentThread().getId();
̏ Separate test framework from test content by reducing tests to simple actions with
randomized data, and check for test thread isolation. For example, run a Selenium test
where you enter randomized text input into a field, delay, and read back to confirm. If
there’s interference from other tests the test will fail.
̏ Add exception handling code around critical sections that present the errors. Going
through your stack trace will greatly help with this.
Variable Test Performance and Performance Related Failures
Symptoms
ϑ Test execution times vary drastically from test run to test run
ϑ Unexpected failures are observed, followed by poor test performance
ϑ Performance improves with reduced thread counts
ϑ Tests fail due to timeouts
ϑ Test runner freezes and becomes unresponsive
Possible Fixes
̏ Run tests with increasing parallelism (increasing the number of concurrent threads) and
monitor resources like memory, CPU utilization, network utilization, and disk space (if
applicable) on the test host. If any of these resources reach critical levels, adjust test
volume accordingly or add resources.
✓ For your thread count, you can start with “1.5 x number of CPU cores” as a
starting point, and experiment to find the number that works for your setup.
✓ Augment the memory allocation for your build process, depending on your needs.
For example: MAVEN_OPTS="-Xms256m -Xmx1024m" mvn <your build command>
̏ Monitor resources like memory, CPU utilization, network utilization, and disk space (if
applicable) on the test target to make sure that the test target can support the traffic
generated, and scale as needed.
̏ Do a thorough review of static components in your classes and avoid using the static
keyword where possible. Static fields are never garbage collected while their class is
loaded, so they can hold on to significant amounts of memory even when no longer in use.
Selenium Grid
Organizations are adopting virtualization and cloud-based technologies to reduce costs and
scale test automation. Using these technologies, web applications can be tested on a variety of
browser and operating system combinations. Selenium WebDriver has unmatched support for
testing applications in different environments and executing tests in parallel, reducing costs
while increasing speed and coverage. This section of the document will cover recipes to
configure and execute Selenium WebDriver tests for parallel or distributed execution.
Selenium-Grid allows you to run your tests on different machines against different browsers in
parallel. That is, running multiple tests at the same time against different machines running
different browsers and operating systems. Essentially, Selenium-Grid supports distributed test
execution.
When to use it
Generally speaking, there are two reasons why you might want to use Selenium-Grid.
● To run your tests against multiple browsers, multiple versions of browser, and browsers
running on different operating systems.
● To reduce the time it takes for the test suite to complete a test pass.
Selenium-Grid is used to speed up the execution of a test pass by using multiple machines to
run tests in parallel. For example, if you have a suite of 100 tests, but you set up Selenium-Grid
to support 4 different machines (VMs or separate physical machines) to run those tests, your
test suite will complete in (roughly) one-fourth the time as it would if you ran your tests
sequentially on a single machine. For large test suites, and long-running test suite such as
those performing large amounts of data-validation, this can be a significant time-saver. Some
test suites can take hours to run.
Another reason to reduce the time spent running the suite is to shorten the turnaround time for
test results after developers check in code for the AUT. Increasingly, software teams practicing
Agile software development want test feedback as immediately as possible, as opposed to
waiting overnight for a nightly test pass.
Selenium-Grid is also used to support running tests against multiple runtime environments,
specifically, against different browsers at the same time.
For example, a ‘grid’ of virtual machines can be setup with each supporting a different browser
that the application to be tested must support. So, machine 1 has Internet Explorer 8, machine
2, Internet Explorer 9, machine 3 the latest Chrome, and machine 4 the latest Firefox. When the
test suite is run, Selenium-Grid receives each test-browser combination and assigns each test
to run against its required browser. In addition, one can have a grid of nodes all of the same
browser type and version.
For instance, one could have a grid of 4 machines each running 3 instances of Firefox, allowing
for a ‘server-farm’ (in a sense) of available Firefox instances. When the suite runs, each test is
passed to Selenium-Grid which assigns the test to the next available Firefox instance. In this
manner one gets a test pass where conceivably 12 tests are all running at the same time in
parallel, significantly reducing the time required to complete a test pass.
Selenium-Grid is very flexible. These two examples can be combined to allow multiple instances
of each browser type and version. A configuration such as this would provide both parallel
execution for fast test pass completion and support for multiple browser types and versions
simultaneously.
How It Works
Selenium Grid builds on the traditional Selenium setup, taking advantage of the following
properties:
● The Selenium test, the application under test, and the remote control/browser pair do not
have to be co-located. They communicate through HTTP, so they can all live on different
machines.
● The Selenium tests and the web application under test are obviously specific to a
particular project. Nevertheless, neither the Selenium remote control nor the browser is
tied to a specific application. As a matter of fact, they provide a capacity that can easily
be shared by multiple applications and multiple projects.
Consequently, if only we could build a distributed grid of Selenium Remote Controls, we could
easily share it across builds, applications, projects - even potentially across organizations. Of
course, we would also need to address the scalability issues described earlier when covering
the traditional Selenium setup. This is why we need a component in charge of allocating and
routing these resources: the Hub.
The Hub exposes an external interface that is exactly the same as the one of a traditional
Remote Control. This means that a test suite can transparently target a regular Remote Control
or a Selenium Hub with no code change. It just needs to target a different IP address. This is
important as it shields the tests from the grid infrastructure (which you can scale transparently).
This also makes the developer’s life easier. The same test can be run locally on a developer
machine, or run on a heavy duty distributed grid as part of a build – without ever changing a line
of code.
The Hub allocates Selenium Remote Controls to each test. The Hub is also in charge of routing
the Selenese requests from the tests to the appropriate Remote Control as well as keeping
track of testing sessions.
When a new test starts, the Hub puts its first request on hold if there is no available Remote
Control in the grid providing the appropriate capabilities. As soon as a suitable Remote Control
becomes available, the Hub will serve the request. For the whole time, the tests do not have to
be aware of what is happening within the grid; they are just waiting for an HTTP response to
come back.
Below you can find a diagram of the Selenium Grid architecture, with capabilities to run
tests on (but not limited to) Linux, Windows and Mac environments. Mobile platforms, such as
iOS and Android, are supported as well, using Appium as the integration framework.
Required hardware per node
There are a couple of notions to take into account before giving hardware requirements and
node counts for a Selenium Grid, since it heavily depends on the type of tests being
run and the type of usage you are going to give to the Grid.
As I’ve mentioned before, there is a difference between running tests on multiple runtime
environments and browsers at the same time (to test for homogeneity) and using the
Grid to minimize test run time while focusing on a single browser.
Sadly, the IE browser does not scale as well vertically as it does horizontally. Meaning that if
many instances of IE are stacked up on a single machine/node, instabilities will be noticed
sooner rather than later. In IE’s case it is better to keep a low number of instances per node,
and to have as many nodes as possible.
For Firefox and (especially) Chrome browsers that is not the case. They scale well vertically, to
the point of being able to have up to 10 instances of a browser running at the same time on
low-end hardware with 512 MB of RAM.
Master Node
The requirements for the Hub node are minimal after its initialization (almost no CPU usage and
about 20 MB of RAM). Upon client requests, the load grows a little bit, mostly due to creation
and processing of network handlers by the servlets. Running test suites with 10 threads (10
simultaneous users/consumers) against the hub during the experiments did not raise the RAM
usage above 55 MB. CPU usage did not rise above 5%, and only during the
connection exchange periods.
The test runner, which is downloaded from the VCS along with the tests or as a dependency by
the build system, generally consumes very little memory (around 20 MB) but to run tests
concurrently it spawns a lot of threads on which it runs the tests. It can consume all the CPU
cycles available if the number of threads is high enough. For large suite runs, take into account that the
Test Runner listener is gathering the results of the test runs in an XML, dumped at the end of
the run. This can make it consume more memory than it usually does.
To summarize:
● Any single core CPU would suffice for it. If the CPU can be a dual-core it would be
better, to allow Hub and Test Runner to run independently from each other
● 1 GB of RAM should suffice to cover the requirements expressed by Hub + Test Runner.
If large suites are going to be run, I would raise the RAM amount to 2 GB.
Slave Nodes
They are only supposed to have desired browsers installed and the Selenium Grid slave node
up and running.
The Grid node consumes about 10% of CPU and around 75 MB of RAM when under load of 10
concurrent users.
Know that, although Firefox consumes a little bit more memory than Chrome, Firefox does not
need an extra driver server process to handle it as IE and Chrome do, since its driver is
embedded in Selenium.
With a single-core CPU and 512 MB of RAM a node can spawn up to 2 IE instances, or up to 10
Chrome instances, or up to 10 Firefox instances before crashing or behaving erratically.
Heavy swapping took place during these test runs, meaning that this extreme case should be
avoided and that extra memory should be considered.
Having a dual-core CPU makes the tests run more “freely”, given that each IE instance can
benefit from a dedicated core, and having extra RAM does not force the system into a swapping
state so easily. To summarize, per node aim for:
● A dual-core CPU
● More than 512 MB of RAM, to avoid swapping
Number of nodes
As explained earlier in this chapter, IE browser does not scale as well vertically as it does
horizontally:
● 1 slave running 2 IE instances would cut the run time to (roughly) one half.
● 2 slaves running 2 IE instances each would cut the run time to (roughly) one quarter.
● 3 slaves running 2 IE instances each would cut the run time to (roughly) one sixth.
T1 ≈ T0 / (b × n)
Where:
● T0 is total suite time not parallelized
● b is number of browsers instances
● n number of nodes (slaves) in Grid
● T1 is new total time (parallelized)
We need to take into account also that some tests are not possible to be run in parallel with
others. Consider, for example, a test that needs special permissions to be granted/modified for
the logged in user in order to test the required scenario. A different test (running in parallel)
could log into the application only to find that the section on which it needs to work on is
disabled/hidden because of its current permissions (defined by the other test).
Consider having a “category” or group for these tests to force running those types of tests
sequentially after all other tests are done running in parallel (see the sketch below). Further
optimization could be done since, as the minimalistic example above shows, tests need to be
grouped together by functionality, permission levels, access to resources, or simply by not
reusing the same user in every test.
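A sketch of such a grouping with TestNG (group, suite and package names are illustrative):

// Tag tests that mutate shared state (e.g. user permissions) so they can be
// excluded from the parallel run and executed afterwards
@Test(groups = "sequential")
public void updateUserPermissionsTest() { ... }

<suite name="Regression">
  <test name="Parallel part" parallel="methods" thread-count="4">
    <groups><run><exclude name="sequential"/></run></groups>
    <packages><package name="com.example.tests"/></packages>
  </test>
  <test name="Sequential part">
    <groups><run><include name="sequential"/></run></groups>
    <packages><package name="com.example.tests"/></packages>
  </test>
</suite>

TestNG runs the <test> tags in order, so the “sequential” group only starts once the parallel part is done.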
Frequently Asked Questions
I need a Selenium Grid right now, I don’t care about all this technical crap
Just download and execute Selenium Grid Extras. Follow the onscreen instructions. Hakuna
matata...
How do I setup Selenium Hub and Nodes to start at boot time on a Windows machine?
Use a simple program called NSSM (The Non-Sucking Service Manager, actual name, not my
fault) to install (register) and remove (unregister) programs as Windows services.
When creating a new service, I suggest using a single word name (without spaces) such as
“SeleniumHub” so it is easy to invoke nssm.exe remove SeleniumHub in case you screw
something up while defining the service settings.
You can, for example, use the Selenium Hub/Node batch files (defined below) as starting
commands.
⚠ Warning
Take into account that programs/scripts linked to Windows Services must not exit immediately
(under 1500 ms) or NSSM will think the Service startup failed and you won’t be able to start
the Service.
This means, remove the START /B (move to background) command if you are planning on
using the batch scripts defined below as a Windows Service. Windows Services run in
background either way so it really does not matter.
How do we kill orphan processes left over before any new runs in Selenium Grid?
Short answer is: you shouldn’t. Selenium Grid includes a “browser reaper feature” and tries its
best to keep each node as tidy as possible. The hub will cycle between the slave nodes on a
configurable timely basis and attempt to close any browser not linked to a session.
If even then you find issues with leftover browsers, there are still plenty of solutions for this:
● First thing would be to implement a more resilient configuration for the hub/nodes that
make up the Grid. Avoid nodes dying and not coming back to life due to OutOfMemory
exceptions. Use mechanisms to respawn the Java hub/client node. One option is
JARSpawner, a standalone Java application that can continuously monitor
another JVM and, if that JVM crashes or exits, re-spawn it.
● Avoid OutOfMemory exceptions from really long runs (after days) by recycling your
nodes periodically. Selenium Grid is highly extensible, so one or more proxies/servlets
can be added to it upon initialization to deal with any kind of issue that we may be
having. Such a proxy, when injected into the Grid, starts counting unique test
sessions. After “n” test sessions, the proxy unhooks the node from the grid
and terminates it gracefully.
When we test the application with Selenium Grid, we get nondeterministic results
Most likely some tests are timing out in a non-deterministic manner because the CPU or
Network is over-utilized. Monitor CPU and Network activity on all the machines involved. Once
you find the bottleneck, launch fewer processes. For instance, if the load average is way higher
than the number of CPUs on the machine running the remote controls, halve the number of
remote controls you launch until you get to a sustainable machine load.
Make sure you spend some time figuring out the optimal number of concurrent test runners and
remote controls to run in parallel on each machine, before deploying a Selenium Grid
infrastructure your organization is going to depend on.
Machines/VMs given by the organization run Windows Server 2012 OS. What's the
version of IE there? Can it be downgraded? Updated?
Windows Server 2012 comes with Internet Explorer 10 and does not support earlier versions.
You can only install IE9 in the OS versions mentioned in the list below.
In order to have an earlier version of IE running on a Windows Server 2012 machine you would
need to virtualize a guest Windows 7 with it.
Making a “portable” Internet Explorer version is not a solution given that it is illegal to
re-package versions of IE.
What do the MaxInstances and MaxSession settings mean?
MaxInstances represents how many instances of the same version of a browser can run on a
given node. For example, given the following node capabilities:
{
"browserName": "firefox",
"maxInstances": 5,
"seleniumProtocol": "WebDriver"
},
{
"browserName": "internet explorer",
"maxInstances": 5,
"seleniumProtocol": "WebDriver"
}
Then I can run 5 instances of Firefox as well as 5 instances of IE at the same time in a remote
machine. So a user can run a total of 10 instances of different browsers (FF & IE) in parallel.
MaxSession represents how many browsers (any browser and any version) can run in
parallel at a time on the node. This overrides the MaxInstances settings and can restrict the
number of browser instances that can run in parallel.
For the above example, maxSession=1 forces the Grid to never have more than 1 browser
running at any given moment.
With maxSession=2 you can have 2 Firefox tests at the same time, or 1 Internet Explorer and 1
Firefox test, irrespective of what MaxInstances you have defined.
Please refer to the following table for the error condition and its possible solution:
● TIMEOUT: The session timed out because the client did not access it within the
timeout. If the client has been somehow suspended, this may happen when it wakes up.
● BROWSER_TIMEOUT: The node timed out the browser because it was hanging for too
long (parameter browserTimeout).
● CREATIONFAILED: The node failed to create the browser. This can typically happen
when there are environmental/configuration problems on the node. Try using the node
directly to track the problem.
● PROXY_REREGISTRATION: The session has been discarded because the node has
re-registered on the grid (in mid-test).
Can you give some tips for running tests against a Selenium Grid?
● If your tests are running in parallel, make sure that each thread deallocates its webdriver
resource independently of any other tests running on other threads.
● Starting 1 browser per thread at the start of the test-run and deallocating all browsers at
the end is not a good idea. If one test-case decides to consume abnormal amounts of
time you may get timeouts on all the other tests because they're waiting for the slow test.
Also, a browser consumes more RAM the longer it is traversing a web application.
Building a laboratory for testing with Selenium Grid
In order to back up any suggestions made in this document, some demo exercises were run in a
sample laboratory that I improvised. The idea was to run real tests under the same conditions
that we will face in reality; the data gathered from this experiment is all I could collect at the
time of writing this document.
In order to create the laboratory on which the tests were made, the following hardware was
used:
This single machine was the host for another 3 VirtualBox based Windows 7 guest VMs, freely
distributed for 30 days by Microsoft from its virtualization site.
This VM comes with a Windows 7 Enterprise edition and Internet Explorer version 9. Other
combinations of domestic Windows versions/IE can be found there.
This downloaded VM was imported into VirtualBox with its default settings. Required software to
carry on with the tests was installed on it.
Once the VM was as desired, I made a snapshot of it to be able to reproduce the laboratory in
the future (without limiting its usage to 30 days) and cloned it two more times, to create a total of
3 VMs running on a single host machine.
Hardware specifications were as follows:
Grid’s Hub config file
{
"host": null,
"port": 4444,
"newSessionWaitTimeout": -1,
"servlets" : [],
"prioritizer": null,
"capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
"throwOnCapabilityNotPresent": true,
"nodePolling": 5000,
"cleanUpCycle": 5000,
"timeout": 30000,
"browserTimeout": 90000,
"maxSession": 10,
"jettyMaxThreads":-1
}
Grid’s Node config file
{
"capabilities":
[
{
"browserName": "firefox",
"maxInstances": 5,
"seleniumProtocol": "WebDriver"
},
{
"browserName": "chrome",
"maxInstances": 5,
"seleniumProtocol": "WebDriver"
},
{
"browserName": "internet explorer",
"maxInstances": 1,
"seleniumProtocol": "WebDriver"
}
],
"configuration":
{
"role": "node",
"port": 5555,
"hubPort": 4444,
"hubHost": "<MASTER HUB NODE IP>",
"proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"maxSession": 10,
"register": true,
"registerCycle": 5000,
"cleanUpCycle": 2000,
"timeout": 30000
}
}
Grid’s Hub Batch file (Windows)
@ECHO OFF
REM Assumed launch command - adjust the JAR file name to your downloaded version
START /B java -jar selenium-server-standalone.jar -role hub -hubConfig hubConfig.json
Grid’s Node Batch file (Windows)
@ECHO OFF
REM Assumed launch command - adjust the JAR file name and config path to your setup
START /B java -jar selenium-server-standalone.jar -role node -nodeConfig nodeConfig.json
Gathered Metrics
Hub Specific Metrics
CPU - Single Core VM
Memory - 1024 MB RAM
Network - Bridged VM NIC
Swap - VFS
Node 1 Specific Metrics
Swap - VFS
Node 2 Specific Metrics
Swap - VFS
Mobile Testing - Appium
Appium enables iOS and Android automation using Selenium WebDriver. The same WebDriver
bindings can be used across web and mobile.
⚠ Warning
At the moment, Appium does not support/handle multiple sessions with a single server. If you
open multiple sessions against a single Appium server you will get an error message similar
to this one:
info: [debug] Responding to client with error: {"status":33,"value":{"message":"A new session could not
be created. (Original error: Requested a new session but one was in progress)","origValue":"Requested a
new session but one was in progress"},"sessionId":"904d4e59-4c1e-4519-ad09-7a3d2b1edfaa"}
Take this into consideration especially if you are planning on using Appium as part of a
Selenium Grid. If you want to run two mobile tests concurrently, then you will need two
Appium nodes.
From now on, you will need an Android or iOS device to play around with. In case you do not
have a physical device, you can use an emulator. For iOS there is only one, as far as I know: the
simulator included with the Xcode IDE on a Mac and, since I do not have (nor want) a Mac,
that’s the end of the explanation, sorry.
You can use the Android Virtual Device (AVD) Manager (included in Android SDK Tools) to
create as many different emulators for as many platform versions as needed.
● Download Appium for your platform and run the installer. Or, if you have npm already
installed in your machine, you can just do:
$ npm install -g appium
$ appium
● Download the Android SDK command line tools (not Android Studio) for your platform.
○ After installing the Android SDK package you will have access to a tool named
Android SDK Manager.
○ Use it to install the Android build tools and any specific API that you may require. If
you do not have a specific requirement, just leave the default selections and
click the “Install X packages” button.
● You may need to close all consoles/terminals and start them again for changes to
environment variables to take effect. Alternatively, you can also restart your computer.
Differences with WebDriver
The Appium implementation for Java behaves similarly to the Selenium one, but it also brings to
the table several types to ease the pain of dealing with mobile elements. There are some things
that need to be taken into account exclusively while creating Appium based tests:
● If it is necessary to use the same Page Object in desktop browser as well as cross
platform mobile apps then it is possible to combine different annotations:
@FindBy(css = "someBrowserCss")
@iOSFindBy(uiAutomator = ".elements()[0]")
@AndroidFindBy(className = "android.widget.TextView")
public List<WebElement> androidOrIosTextViews;
● In the “How do I initialize all elements defined in my POM?” section we saw how to
initialize WebElements inside Page Object instances with Selenium. Appium provides a new
AppiumFieldDecorator class that takes care of initializing its own MobileElements and
other fields decorated by its own annotations. In this new context, POMs should now be
initialized with a statement like:
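// Assumed form, based on the Appium Java client API; "this" is the page object being built
PageFactory.initElements(new AppiumFieldDecorator(driver), this);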
Optionally, you can provide time out related arguments to its constructor.
Appium in Selenium Grid
Appium Node configuration
Appium has its own way of contacting the Selenium Grid's Hub node, so you don't have to start
another selenium-server.jar process as a client node.
You can launch Appium server from command line in different ways:
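The most common form is to pass a node config file on the command line (the path is illustrative):

$ appium --nodeconfig /path/to/nodeconfig.json

Such a node config file looks like the following: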
{
"capabilities":[
{
"browserName":"Safari",
"version":"7.1",
"maxInstances":1,
"platform":"MAC"
},
{
"browserName":"Browser",
"version":"4.4",
"maxInstances":1,
"platform":"ANDROID"
}
],
"configuration":{
"cleanUpCycle":2000,
"timeout":30000,
"proxy":"org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
"url":"http://<ip_or_hostname_of_machine_running_appium_server>:4723/wd/hub",
"host":"ip_or_hostname_of_machine_running_appium_server",
"port":4723,
"maxSession":2,
"register":true,
"registerCycle":5000,
"hubPort":4444,
"hubHost":"10.136.104.99"
}
}
If you are launching Appium from GUI, you can define the nodeconfig.json usage and location
from the "General Settings" button (a Gear icon).
Known Issues
⚠ Warning
Appium is an open-source tool and, as such, it has limitations and risks. For example, upon
a failure during an iOS test run, Appium can screw up the MobileSafari application present in
the iOS Simulator (comes with Xcode), literally deleting it.
Subsequent test runs against mobile iOS will then throw an error.
This is because of the way Appium instruments the application, moving it to a temp folder and
restoring it after test runs if there was no fatal error.
Tips & Tricks
How to enable Developer Mode in Android devices
On most devices, open Settings > About phone and tap “Build number” seven times; a new
“Developer options” menu appears, where you can enable USB debugging.
If everything went well, you should get your device ID by typing the command ‘adb devices’ in
your console/shell:
Using Chrome Remote Debugging to inspect web pages on devices
Once your device is recognized by ADB, in your desktop Chrome browser navigate to
chrome://inspect, you should see it under the Devices section.
Now open the default browser in your device. You should see Chrome’s Inspector update with
the device’s current tab info. Click on Inspect to see/debug it in real-time.
Listing and downloading APK files from devices
1. Use adb shell pm list packages to determine the package name of the app, e.g.
"com.example.someapp". Skip this step if you already know the package name.
Look through the list of package names and try to find a match between the app in
question and the package name. This is usually easy, but note that the package
name can be completely unrelated to the app name. If you can't recognize the app
from the list of package names, try finding the app in Google Play using a browser.
The URL for an app in Google Play contains the package name.
2. Get the full path name of the APK file for the desired package with adb shell pm
path com.example.someapp. The output will look something like:
package:/data/app/com.example.someapp.apk.
3. Know that there is also a “shortcut” to list the packages installed on the device along
with their app names and paths. There is even a filter to locate only 3rd party (non
Android OS related) packages: adb shell pm list packages -f -3.
4. Pull the APK file from the Android device to your machine with adb pull
/data/app/com.example.someapp.apk.
Listing activities and other useful information from APK files
Use aapt dump badging <file.apk> to determine the activity name (and many other
properties) of the application to test. The aapt command can be found inside the
<ANDROID_HOME>\build-tools\<api version>\ folder.
API/Service Testing
Also referred to as API testing, its purpose is to determine if services meet expectations on
functionality, reliability, performance, and security. APIs do not have a GUI, so testing has to be
performed at the message layer.
Nowadays, API testing is considered a critical part of automated testing because APIs serve as
the primary interface to application logic and because GUI tests are difficult to maintain due to
short release cycles and frequent changes commonly introduced by Agile software development
methodologies.
Most of the time, we will be dealing with HTTP based APIs, but know that there are lots of
different ways to communicate data between endpoints and they are also considered API
testing.
API testing coverage
Must cover:
✓ Negative tests: Invalid (bad) requests should be tested to check that they are handled
properly and won’t crash your application.
Could cover:
ɤ Security tests: Check that requests from different clients don't influence each other.
HTTP Protocol
For now, let’s focus on HTTP. As you may already know, an HTTP request message is basically
composed of:
Ȳ A request line: the HTTP method (GET, POST, etc.) plus the target URI and protocol version.
Ȳ Headers: key/value pairs carrying metadata about the request (content type, authorization, etc.).
Ȳ A payload: the actual data to send/receive. Its format can vary. Usually it is JSON, XML
or YAML, but it can be anything.
In HTTP responses, the first line is called the status line and includes a numeric 3-digit status
code (such as “404”) and a textual reason phrase (such as “Not Found”):
ȯ 1xx Informational
ȶ 2xx Success
ȱ 3xx Redirection
4xx Client Error
ȳ 5xx Server Error
Aside from the HTTP code and reason status line, the response also has headers and a
payload.
Extract string literals and magic numbers to a constants class
When writing automated API tests, it is pretty common to have to deal with numbers, strings and
some other values to check for the expected response values.
✓ Have a public class that groups nested static classes with public static members
representing the constants in order to have a single place to maintain the test values.
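A minimal sketch of such a class (names and values are illustrative):

public final class Constants {

    private Constants() {
        // Not meant to be instantiated
    }

    public static final class Http {
        public static final int InternalServerError = 500;
    }

    public static final class Users {
        public static final String DefaultUserName = "testuser";
    }
}

It can then be used in assertions like the following test: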
@Test
public void aTest() {
    // Invoke API and store response
    // …
    assertEquals("Response code", Constants.Http.InternalServerError, response.code);
}
⚠ Most HTTP libraries already include an enumeration with HTTP status codes.
Use it whenever possible, so you do not need to define them as constants.
✓ Externalize data into configuration files. Read them once at framework initialization and
store data in immutable structures. Make public getter methods for useful values with
clear and concise names.
Choose the right library for the job
In most projects, API testing relies a lot on HTTP as the data transport mechanism. In order to
work with it, you will need an HTTP client. There are lots of HTTP clients out there... but some
are better than others. Look for:
Ē Easy interception/monitoring/logging
Ē Speed
ϫ Do not use low level classes (such as URL or URLConnection) directly in your
framework. The level of expertise required to use them correctly is high: you would
have to deal with buffers, bytes, strings, parsing, (un)marshalling, sockets, etc.
ϫ Avoid directly using Apache’s HttpClient too, for reasons similar to the ones described in
previous item.
My recommendation would be to use one of the following libraries:
● Retrofit 2
<dependency>
<groupId>com.squareup.retrofit2</groupId>
<artifactId>retrofit</artifactId>
<version>2.0.2</version>
</dependency>
<dependency>
<groupId>com.squareup.retrofit2</groupId>
<artifactId>converter-jackson</artifactId>
<version>2.1.0</version>
</dependency>
● RestEasy 3
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-client</artifactId>
<version>3.0.17.Final</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jaxrs</artifactId>
<version>3.0.17.Final</version>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jackson-provider</artifactId>
<version>3.0.17.Final</version>
</dependency>
Why? Well, both libraries allow you to define the API you are trying to test as Java interfaces,
which is clean, simple and urges you to define entities/POJOs to store and send the data,
enforcing a minimum number of DSL models, which is good. It also allows you to keep working
in an OOP fashion.
The way you work with them is quite similar.
ϫ Do not define all APIs in a single interface! Each endpoint could have dozens of
parameters and options, so your interface would easily become unreadable due to the
amount of code it would contain. It is much better to create one interface file per API
(ex: Group, Users and User) under an api package that groups them.
✓ Define as many interfaces as APIs you have, and declare all related endpoints inside
them.
Then, create a dynamic instance of that interface (automatically embedding an HTTP client in it)
that can be used to perform the request as defined by the contract interface:
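A hedged sketch with Retrofit 2 (interface, entity and configuration names are illustrative; JacksonConverterFactory comes from the converter-jackson dependency declared above):

import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.jackson.JacksonConverterFactory;
import retrofit2.http.Body;
import retrofit2.http.GET;
import retrofit2.http.POST;
import retrofit2.http.Path;

public interface UsersApi {

    @GET("users/{id}")
    Call<User> getUser(@Path("id") long id);

    @POST("users")
    Call<User> createUser(@Body User user);
}

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl(baseUrl) // read from externalized configuration; must end with "/"
        .addConverterFactory(JacksonConverterFactory.create())
        .build();

UsersApi usersApi = retrofit.create(UsersApi.class);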
ϫ Do not hardcode the base URL in your tests nor framework. You will likely want to run
the same tests against multiple environments.
In Retrofit’s favor:
✓ It is cleaner, uses fewer lines of code and allows describing API endpoints almost
completely via annotation decorated interfaces. Please note that there are several
annotations (like @Headers and @Body) that are not present in RestEasy.
✓ OkHttp (its internal HTTP client engine) is lightweight and one of the fastest clients
currently available.
✓ It is thread safe.
⚠ Warning
RestEasy proxy client uses underneath the Apache Commons Http Client, which by
default relies on SimpleHttpConnectionManager which is not thread safe.
On the other hand, RestEasy does not force you to return a Call<> wrapper object like Retrofit
does. This makes calling the API and handling its return type a lot cleaner.
You can mask (as in, hide away) Retrofit’s cumbersome try/catch block for .execute() calls
inside a protected method in your test base classes.
The reason behind Retrofit’s decision for this “inconvenience” is that, by always wrapping the
result of the API method call, it allows both synchronous and asynchronous handling of the Call
instance while still using an interface proxy to model the API. RestEasy cannot handle
asynchronous invocations when using the interface declaration together with its Proxy
framework.
You can still perform async requests using RestEasy, but you lose the nice API modeling via
interfaces:
ClientBuilder.newClient().target("http://www.anUrl.com/api/user/1")
.request()
.async()
.get(new InvocationCallback<User>() {
@Override
public void completed(User user) {
// Async request completed
}
@Override
public void failed(Throwable throwable) {
// Async request failed
}
});
Use POJOs to communicate with APIs
✓ Use DSL POJOs, set their values, and let Retrofit (or whatever serious library you want
to use) marshall them back and forth between POJO ↔ JSON/XML/YAML automatically
while sending/receiving HTTP transfers.
✓ Take advantage of free online tools to generate your initial entities/POJOs from JSON or
XML quickly. Always treat auto-generated code with a grain of salt. Not only humans
screw things up.
˿ http://www.jsonschema2pojo.org/
˿ http://pojo.sodhanalibrary.com/
Entities (de)serialization
✓ Whenever possible, try to work with Jackson or Gson libraries to (de)serialize your data.
✓ Consider having a generic abstract “base entity” class for all your entities. You can put
there common code for all entities: common fields (id?), override methods (toString?),
etc
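A minimal sketch of such a base entity (the fields and overrides are illustrative):

public abstract class BaseEntity {

    protected Long id;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    @Override
    public String toString() {
        // Handy default for logging any entity
        return getClass().getSimpleName() + "(id=" + id + ")";
    }
}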
Introduce DSL object factories
✓ Once you have your POJOs, it is useful to also create a simple static factory method in
them, to be able to create generic (random?) instances and not pollute tests with
long object instantiation lines (see the sketch after this list).
✓ Factory methods are static methods that return an instance of the native class:
➢ Do not need to create a new object upon each invocation - objects can be
cached and reused, if necessary.
➢ Can return a subtype of their return type - in particular, can return an object
whose implementation class is unknown to the caller. This is a very valuable and
widely used feature in many frameworks which use interfaces as the return type
of static factory methods. Well-known JDK examples:
○ LogManager.getLogManager
○ Pattern.compile
○ Collections.unmodifiableCollection
○ Calendar.getInstance, and so on
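In a DSL POJO this can look as simple as the following (the randomization scheme is illustrative):

import java.util.UUID;

public class User {

    private String name;
    private String email;

    // Static factory: returns a valid, randomized instance for generic test data
    public static User random() {
        User user = new User();
        String suffix = UUID.randomUUID().toString().substring(0, 8);
        user.name = "user-" + suffix;
        user.email = user.name + "@example.com";
        return user;
    }
}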
Test Scripts
ϫ Do not perform many validations per test. Generally, it is preferable to have 100 tests
with 1 validation each than 1 test with 100 validations in it. Keep it simple. If there is an
error during the first validation then you won’t know what happened with the other 99
remaining validations. API tests are generally fast to run. If you still need to run your
tests with lots of assertions, remember to use either ErrorCollector (JUnit) or
SoftAssert (TestNG) to collect all triggered assertions and fail the test when it is fully
done running, showing all errors altogether (see the sketch after this list).
ϫ Avoid repeating expensive operations in tests if there is no real need to do that. For
example, querying databases on every assertion. Query it once, assert on the result.
ϫ Do not perform assertions in methods that perform the API call or in the framework.
When dealing with APIs you want to know if the response code is the adequate one for
different queries. Not everything will return a 200/201 code. Leave validations and
assertions to tests.
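For the many-assertions case mentioned above, a TestNG SoftAssert sketch (the response object is hypothetical):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

@Test
public void responseFieldsTest() {
    // Invoke API and store response
    // ...
    SoftAssert softly = new SoftAssert();
    softly.assertEquals(response.code, 200, "Status code");
    softly.assertNotNull(response.body, "Body");
    softly.assertAll(); // fails here, reporting every collected error at once
}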
Test Reports
✓ API tests should generate a report with information on which requests passed/failed.
✓ In case of failure, the raw request and response should be logged for further
investigation.
Authorization & Authentication
ϫ Do not pass around tokens and/or hashes to your API definition (as in, the interface that
models your API endpoints) via parameters. If you do that, you’ll end up polluting the
readability of the API interface and the tests.
✓ Consider having different API instances for different credentials/privileges levels that the
API may support:
✓ Define and use interceptors for the HTTP client that you chose. Embed the
corresponding Authorization header for every request of a particular API instance.
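With OkHttp (Retrofit’s engine) such an interceptor takes a few lines (how the token is obtained is up to you; here it is assumed to exist per API instance):

import okhttp3.OkHttpClient;
import okhttp3.Request;

OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(chain -> {
            Request authorized = chain.request().newBuilder()
                    .header("Authorization", "Bearer " + token) // token for this API instance
                    .build();
            return chain.proceed(authorized);
        })
        .build();

// Hand this client to Retrofit via Retrofit.Builder#client(client)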
If the API you are testing only accepts a token/hash as a parameter in the URL or some other
ugly way that forces you to break this rule:
2. Talk with the dev team about the security issues that may arise from sending
credentials and/or sensitive information in the URL (not encrypted). Do not let
them convince you easily.
3. As a last resort, create and expose to tests an adapter for the API instances that
does not take the token/hash as an argument but provides it to the actual API
call.
Database testing - Beyond the API endpoint
So, you just sent a POST request to your shiny API but, since you’ve been reading this
document and you have some pride, you know that you can’t just perform a couple of assertions
on the response and assume everything went fine behind the API endpoint... right? RIGHT?
Now you want to make sure that the newly created resource was actually persisted into the
database (yes...you want to).
There is an underlying testing principle here that I did not articulate yet, so here it is: if your
system under test communicates with the outside world (e.g. a database or a network
connection), and you only test the system using its own operations (API), all you have verified
is that the system is consistent with itself.
Your application uses an API to interact with the database. It is possible to write that API in such
a way that it presents correct results to the application and yet still uses the database in the
wrong way.
For example, imagine a database with an EMPLOYEE table and a MANAGER table. The tables
are alike (ex: each contains a first name, last name, company email address, and salary) but the
EMPLOYEE table is only for non-managers and the MANAGER table is only for managers.
Now imagine a database API with two classes, one per table. The programmer writes the
employee code first. The manager code is almost the same, so the programmer makes a copy
of the employee code and then edits it. He changes the name of the class, and he changes the
method names. But he's distracted and forgets to change the name of the table from
EMPLOYEE to MANAGER. Code compiles, unit tests pass and the programmer declares
victory.
Of course, there's a bug in the code: manager records are written to the EMPLOYEE table.
Also, you need to consider that not all fields in the database are exposed to the front end/API.
Calculated fields, timestamps and the like all need to be checked.
Another example is that you need to check for “logical” deletes. Usually, data is not actually
deleted from the DB, but marked as deleted. GUIs/APIs should no longer retrieve/show this
data, but the data should still be there in the DB.
Now that you know all this stuff, you may say “Understood, let's test those database entries out”
so, in your tests, you start typing String query = "SELECT * FROM …"; *sighs*
ϫ Do not perform SQL queries manually.
ʒ TAEs (and sometimes devs too) usually are not good at writing SQL statements.
ʒ If you write good queries, they will most likely be useful only for a particular test,
because they will have just the required fields and the exact JOINs between tables that
you actually need to work with. So you can’t reuse them in other tests, leading to
changes in multiple places if queries need to be updated.
ʒ If you write poor queries, then performance and test run times explode
automatically (SELECT * FROM .. UNION/JOIN <6 other tables>).
ʒ If you are writing all your SQL statements, then I say there is a good chance that
you are also manually opening/closing connections, iterating over ResultSets and
mapping rows and columns to objects by hand.
Despite Hibernate being a mammoth framework - of which we mortals will likely never use more
than 0.1% - I suggest using it. The following walkthrough shows how little is needed to get going.
Simple Hibernate stand alone usage
1. Add the Hibernate core dependency to your pom.xml:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.1.3.Final</version>
</dependency>
2. You are going to need a driver for the DB you are going to connect to. For simplicity’s
sake, I’ve used H2, an in-memory database, but you can choose whatever you want.
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<version>1.4.193</version>
</dependency>
H2 Maven dependency
3. Create a hibernate.properties file in your resource folder that will hold the connection
information for Hibernate:
hibernate.connection.driver_class=org.h2.Driver
hibernate.connection.url=jdbc:h2:file:./tmp_db_test
hibernate.connection.username=sa
hibernate.connection.password=
hibernate.dialect=org.hibernate.dialect.H2Dialect
hibernate.hbm2ddl.auto=create-drop
4. Decorate a little bit any existing POJO that you may have with some Hibernate
annotations:
@Entity
@Table(name = "POJOS")
public class MyPojo {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    @Column(name = "ID")
    private Long id;

    @Column(name = "NAME")
    private String name;

    @Temporal(TemporalType.TIMESTAMP)
    @Column(name = "DATE")
    private Date date;
}
5. Open a session, persist an instance and commit (the SessionFactory is assumed to be
built from the hibernate.properties file above):
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
session.save(new MyPojo()); // hypothetical entity instance
session.flush();
tx.commit();
session.close();
⚠ Warning
Hibernate’s Session is not thread safe.
If you are going to use it in multithreaded environments a possible
solution would be to contain it in a ThreadLocal<T>.
Service Testing with Spring
Externalize configuration values and constants
ϫ Do not have a public class with public static fields manually mapped to properties. This
implies that the more properties you have, the more this class needs to be manually
changed.
✓ Take advantage of the Spring Profiles feature, which allows you to define beans and
configurations per profile (environment). This means that Spring will automatically filter
out object instances and configurations that have nothing to do with the current profile
and leave them out. So you can have everything you will ever need defined in its
context, but just use whatever you need at a time.
In practical terms, this means that you can make Spring load the properties file of your
choice with a @Configuration class such as the following one:
@Configuration
public class Properties {
@Profile({"dev", "default"})
@Configuration
@PropertySource("classpath:dev.properties")
public static class ConfigDev {
}
@Profile("qa")
@Configuration
@PropertySource("classpath:qa.properties")
public static class ConfigQA {
}
@Profile("staging")
@Configuration
@PropertySource("classpath:staging.properties")
public static class ConfigStaging {
}
}
Now, if you run your tests passing a JVM parameter such as
-Dspring.profiles.active=qa, your tests will take the configuration properties from a
qa.properties file, for example.
Access databases via Spring JPA
✓ Use Spring (Core + JPA modules) in your project in order to transparently create, read,
update and delete (CRUD) entities from DBs without any pain.
1. Define an automagic Spring JPA repository interface for dealing with User entities:
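A minimal sketch (assumes a User entity with a Long id; the finder is a derived query on a hypothetical email field):

import org.springframework.data.repository.CrudRepository;

public interface UsersRepository extends CrudRepository<User, Long> {

    // Spring generates both the implementation and the query from the method name
    User findByEmail(String email);
}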
2. In your data source related @Configuration class, make sure to define also
@EntityScan(basePackages = <package_entities>) and
@EnableJpaRepositories(basePackages = <package_repositories>) annotations.
You can also use the class based (type safe) version via their
basePackageClasses attribute.
@EntityScan(basePackageClasses = BaseEntity.class)
@Configuration
public class DataSources {
@EnableJpaRepositories(basePackageClasses = UsersRepository.class)
@Configuration
public static class UsersDataSource {
@Autowired
Environment env;
@Bean
public DataSource getDataSource() {
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl(env.getRequiredProperty("db.users.url"));
dataSource.setUsername(env.getRequiredProperty("db.users.user"));
dataSource.setPassword(env.getRequiredProperty("db.users.password"));
return dataSource;
}
}
}
Performance Testing
ϫ Do not declare the GET parameters directly in the Path field; this hinders the readability
of the URL we are trying to test.
ϫ Do not use record and playback through JMeter’s proxy recorder. It is a great feature
and it allows you to devise how the browser and site interact with each other. But once
you get the idea of how it works, it is better to recreate the steps in the controller
manually. The bright side is that you get a deeper understanding of how everything
works. Using JMeter’s proxy recorder (while also having a proxy defined to get access
to the internet) from a browser causes extra traffic to be recorded by JMeter. This traffic
is related to caches, TTLs and proxies, and is not meaningful for the API testing, since
the one replying to these requests is the web server and not the system backend/API.
✓ Data (ex: from a CSV file) should only be loaded from disk once, at the top of the test
plan, outside any Thread Group.
✓ Usually, you will only need one HTTP Header Manager for all your transactions in a test.
Keep it outside the Thread Group.
✓ When dealing with new APIs, Cookies are hardly used. Remove the HTTP Cookie
Manager component if you don’t need it.
✓ Remember to put assertions after each request to be able to throw an exception (error)
to let JMeter know that a particular step/transaction failed, but do not use too many
either since it will result in higher CPU usage (per thread).
✓ Set the Thread Group’s action after a sampler error to “Start Next Thread Loop” to avoid
performing the next steps of transactions that already failed. Continuing would only add
noise to your statistics given that, most of the time, one step depends on a value
received from a previous step, and it does not reflect real user activity.
ϫ Do not place proxy settings as part of your JMX test plan. When run outside the
corporate network (as in, the client’s network) it causes an UnknownHostException for
every request that every thread tries to perform, leading to JMeter crashes by
OutOfMemoryError:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to java_pid8576.hprof ...
Heap dump file created [620949462 bytes in 1.966 secs]
✓ Use one of the following solutions for specifying a proxy configuration and document its
availability and usage:
Æ System wide proxy settings that may affect JMeter’s HTTP connections.
Æ Externalize proxy settings to a data file.
Æ Provide JVM system variables from runner to cause HTTP connections to be
proxied.
WARN - jmeter.reporters.ResultCollector: Error creating directories for
C:\Users\XXXXXX\Documents\QC_Performance\Handerr\Results
2016/05/23 16:39:56 ERROR - kg.apc.jmeter.reporters.LoadosophiaConsolidator: Error setting up
saving java.io.IOException: The system cannot find the path specified
✓ Always store/save results relative to JMX test plan folder by using the ~/ prefix. As a side
note, failing to use this prefix will result in files being saved into JMeter’s bin folder.
ϫ Do not assume existence of data stored in target’s systems. Tests should be self
sufficient. At the beginning of each test (or, at test plan level) perform API calls (usually
POST) to ensure data we’ll be dealing with is going to exist. Remove created data when
done.
Advanced Performance Testing with JMeter
Designing and Running JMeter tests from Java
Did you know that:
● You can take advantage of JMeter’s potential without actually using JMeter directly (via
GUI or non-GUI)?
GUI or non-GUI)?
● You can actually run performance tests via JUnit, TestNG or any other test runner of
your choice?
● You can, therefore, launch your performance tests from your CI server using its internal
Java plugins/runners, without having to resort to clumsy batch/shell scripts, paths or
external processes? They will even appear in standard xUnit reports! (Not the graphics,
don’t be greedy! You do that separately :)
● JMeter is open source so you can do whatever the heck you want? :P
I came up with this approach while having to test a personal project where I was unable to run
any commands other than Java tools (Maven). So this is more an academic exercise than an
actual professional practice but, just in case someone else is willing to try it, here you have it.
Maven Dependencies
As I said, I’m using Maven as my build system so the first thing we will need is to define some
dependencies in our pom.xml file:
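Presumably something along these lines (the version matches the 3.1 release discussed below):

<dependency>
  <groupId>org.apache.jmeter</groupId>
  <artifactId>ApacheJMeter_core</artifactId>
  <version>3.1</version>
</dependency>
<dependency>
  <groupId>org.apache.jmeter</groupId>
  <artifactId>ApacheJMeter_http</artifactId>
  <version>3.1</version>
</dependency>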
The core package will retrieve JMeter 3.1 core/engine features (fully functional but without
the samplers). If you are going to create your own samplers then that is all you need but, since
in this case I’m demoing this using an HTTP Sampler, I added its dependency too.
Representation of a simple JMX file in Java
Now, let’s move to the fun part. I’ll ask you to take a look at the JUnit sample test located here.
Notice the use of JUnit’s @Test annotation and import. The basic idea is to recreate a “JMeter
project tree” by composing objects instead of using JMeter’s GUI. Get used to the names of the
Java classes that make up ThreadGroups, Controllers and Samplers. You can consult JMeter’s
documentation for a full list.
For brevity’s sake I’ve imported statically all the enums and static methods that the code uses.
As always, double check the imports section to make sure that you are using the correct
imports.
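A condensed sketch of the idea (class names come from JMeter 3.1 core; the target host and thread counts are illustrative):

import org.apache.jmeter.control.LoopController;
import org.apache.jmeter.engine.StandardJMeterEngine;
import org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy;
import org.apache.jmeter.testelement.TestPlan;
import org.apache.jmeter.threads.ThreadGroup;
import org.apache.jmeter.util.JMeterUtils;
import org.apache.jorphan.collections.HashTree;
import org.junit.Test;

public class PerformanceTest {

    @Test
    public void tenUsersHitHomePage() {
        // Point JMeter at the two property files described in the Test structure section
        JMeterUtils.setJMeterHome(".");
        JMeterUtils.loadJMeterProperties("jmeter.properties");
        JMeterUtils.initLocale();

        HTTPSamplerProxy sampler = new HTTPSamplerProxy();
        sampler.setDomain("www.example.com");
        sampler.setPort(80);
        sampler.setPath("/");
        sampler.setMethod("GET");

        LoopController loop = new LoopController();
        loop.setLoops(1);
        loop.initialize();

        ThreadGroup threadGroup = new ThreadGroup();
        threadGroup.setNumThreads(10);
        threadGroup.setRampUp(1);
        threadGroup.setSamplerController(loop);

        TestPlan plan = new TestPlan("Programmatic test plan");

        // Compose the "JMeter project tree" that the GUI would normally build
        HashTree tree = new HashTree();
        HashTree planTree = tree.add(plan);
        planTree.add(threadGroup).add(sampler);

        StandardJMeterEngine engine = new StandardJMeterEngine();
        engine.configure(tree);
        engine.run();
    }
}

Running something along these lines produces console output like the following: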
INFO 2016-06-14 12:15:37.549 [jmeter.s] (): Testplan (JMX) version: 2.2. Testlog (JTL) version: 2.2
INFO 2016-06-14 12:15:37.568 [jmeter.s] (): Using SaveService properties file encoding UTF-8
INFO 2016-06-14 12:15:37.595 [jmeter.s] (): Using SaveService properties version 2.9
INFO 2016-06-14 12:15:37.595 [jmeter.s] (): All converter versions present and correct
Writing log file to: C:\Proyectos\Propios\JMeterTests\jmeter.log
summary = 10 in 00:00:01 = 10,1/s Avg: 41 Min: 34 Max: 48 Err: 0 (0,00%)
Ea! Ea! Our test works! Of course, for simplicity’s sake I’ve hardcoded all values (which is bad)
but you should get the overall idea. Use your test runner’s parameters/data providers to get
those values from whatever source you may want! Also, separate the initialization stuff into a
different file; I don’t like seeing unrelated things along with tests.
Test structure
○ jmeter.properties
○ bin/saveservice.properties
⚠ Warning
JMeter engine does not work without those 2 files defined.
This is kind of a hack to get it working without referencing the whole JMeter
installation dir, I don’t want that.
The whole point of doing this is not to have to worry about having JMeter
installed in a machine.
To solve this, I’ve copied and pasted those files directly from JMeter’s bin folder
and included them in my project.
You don’t need to actually change anything in them if you don’t need to.
Building your own sampler
“But JMeter does not include a sampler for my protocol/technology!”
-- Everybody
I hear ya… I’ve been there... but! Since we are that smart, we can teach JMeter how to deal with
any kind of protocol, technology, message, format and/or shaolin technique that we may need!
First, you must know that JMeter expects, at least, 3 files from a Sampler:
● [SamplerName].java - the sampler implementation itself
● [SamplerName]BeanInfo.java - the metadata describing the sampler’s properties
● [SamplerName]Resources.properties - the texts to show in the GUI
Sampler structure
Maven Dependencies
<groupId>com.jkrzemien.automation</groupId>
<artifactId>performance</artifactId>
<version>1.0</version>
<dependencies>
<dependency>
<groupId>org.apache.jmeter</groupId>
<artifactId>ApacheJMeter_core</artifactId>
<version>3.1</version>
</dependency>
</dependencies>
<build>
<!-- Important for JMeter Samplers creation -->
<resources>
<resource>
<filtering>false</filtering>
<directory>src/main/java</directory>
<includes>
<include>**/*.properties</include>
</includes>
</resource>
</resources>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.6.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Please note the extra <resources> section in this pom.xml file. This is important because the
[SamplerName]Resources.properties file(s) that I talked about in the previous section must be
stored along with the other two .class Java compiled files inside the resulting JAR file.
By default, Maven expects resources to be present in src/main/resources folder, and copies
them to the root folder of the target directory (or root folder of package folders in a JAR file).
This is not the desired behaviour here. We are storing the [SamplerName]Resources.properties
file along with its other 2 Java files in a package inside src/main/java folder and instructing
Maven to take whatever properties file it finds and copy it maintaining the folder structure.
Custom sampler implementation
For the main sampler file, just extend the AbstractSampler base class and implement the
TestBean interface.
JMeter creates an instance of a sampler class for every occurrence of the element in every
thread, although some additional copies may be created before the test run starts.
Thus each sampler is guaranteed to be called by a single thread - there is no need to
synchronize access to instance variables. However, access to class fields must be
synchronized.
Pretty straightforward, I just added some instance members (filename and variableName)
to show the capabilities of the next two files we need to create. It’d be boring otherwise…
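A hedged sketch of what such a sampler can look like (the work done inside sample() is left as a comment; the two constants are the ones referenced by the BeanInfo class below):

package com.jkrzemien.performance.jmeter.advanced.sampler;

import org.apache.jmeter.samplers.AbstractSampler;
import org.apache.jmeter.samplers.Entry;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.testbeans.TestBean;

public class CustomSampler extends AbstractSampler implements TestBean {

    public static final String FILENAME = "filename";
    public static final String VARIABLE_NAME = "variableName";

    // TestBean properties: JMeter injects the GUI values through these setters
    private String filename;
    private String variableName;

    public String getFilename() { return filename; }

    public void setFilename(String filename) { this.filename = filename; }

    public String getVariableName() { return variableName; }

    public void setVariableName(String variableName) { this.variableName = variableName; }

    @Override
    public SampleResult sample(Entry entry) {
        SampleResult result = new SampleResult();
        result.setSampleLabel(getName());
        result.sampleStart();
        // ... perform the protocol/technology specific work here ...
        result.sampleEnd();
        result.setSuccessful(true);
        return result;
    }
}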
Custom sampler BeanInfo description
The BeanInfo file for our sampler defines metadata. This means it defines information about
what things can be expected about our sampler to the outside world (in this case, JMeter).
For example, our simple sampler allows us to provide two variables from the outside world:
filename and variableName. In order for JMeter to know how to present the variable fields in
the GUI, it needs to know a little bit about them, for example:
package com.jkrzemien.performance.jmeter.advanced.sampler;
import org.apache.jmeter.testbeans.BeanInfoSupport;
import java.beans.PropertyDescriptor;
// Property name constants defined in the sampler class (see the sketch above)
import static com.jkrzemien.performance.jmeter.advanced.sampler.CustomSampler.FILENAME;
import static com.jkrzemien.performance.jmeter.advanced.sampler.CustomSampler.VARIABLE_NAME;
/**
* Created by jkrzemien on 13/06/2016.
*/
public class CustomSamplerBeanInfo extends BeanInfoSupport {
public CustomSamplerBeanInfo() {
super(CustomSampler.class);
PropertyDescriptor p = property(FILENAME);
p.setValue(NOT_UNDEFINED, Boolean.TRUE);
p.setValue(DEFAULT, "");
p.setValue(NOT_EXPRESSION, Boolean.TRUE);
p = property(VARIABLE_NAME);
p.setValue(NOT_UNDEFINED, Boolean.TRUE);
p.setValue(DEFAULT, "");
p.setValue(NOT_EXPRESSION, Boolean.TRUE);
}
}
Custom Sampler Resources
This is the easy part, this file (or files, if you are creating a sampler with multi-locale support)
defines the different texts to show in the GUI:
● displayName: Your sampler display name
● variableDefined1.displayName: The label for your variable 1
● variableDefined1.shortDescription: The tooltip (on hover) for your variable 1
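For our sampler, a CustomSamplerResources.properties file can look like this (the texts are illustrative):

displayName=Custom Sampler
filename.displayName=File name
filename.shortDescription=Path of the file this sampler will read
variableName.displayName=Variable name
variableName.shortDescription=Name of the JMeter variable that will hold the result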
Once we have our 3 files declared and doing what we need them to do, we can proceed to
compile and package our sampler:
> mvn clean package
If everything has gone well, in the target folder we will find a nice little JAR file with our compiled
code. Double check that it contains the files as expected:
Now all you need to do is to copy your proudly made JAR file into your JMeter’s lib/ext folder
and start JMeter. You should see your sampler among the other ones automagically.
Congratulations!
Results
JavaEE Integration Testing - Arquillian
Testing Java EE enterprise applications has been a major pain point. Testing business
components, in particular, can be very challenging. Often, a vanilla unit test isn't sufficient for
validating such a component's behavior. Why is that? The reason is that components in an
enterprise application rarely perform operations which are strictly self-contained. Instead, they
interact with or provide services for the greater system. They also have declarative functionality
which gets applied at runtime. You could say "no business component is an island."
The way the component interacts with the system is just as important as the work it performs.
Unit tests and mock testing can only take you so far. Business logic aside, how do you test your
component's "enterprise" semantics?
Especially with business components, you eventually have to ensure that the declarative
services, such as dependency injection and transaction control, actually get applied and work as
expected. It means interacting with databases or remote systems and ensuring that the
component plays well with its collaborators. What happens when your Message Driven Bean
can't parse the XML message? Will the right component be injected? You may just need to write
a test to explore how the declarative services behave, or that your application is configured
correctly to use them. The style of testing needed here is referred to as integration testing,
and it's an essential part of the enterprise development process.
This chapter will focus on automated integration testing using the Arquillian framework in Java.
Arquillian Framework
Arquillian strives to make integration testing no more complicated than basic unit testing and to
become a comprehensive solution for testing Java EE applications, namely because it leverages
the container rather than a contrived runtime environment.
It provides a test harness that can be used to produce a broad range of integration tests for
Java applications (most likely enterprise applications). A test case may be executed within the
container, deployed alongside the code under test, or by coordinating with the container, acting
as a client to the deployed code.
● A remote container resides in a separate JVM from the test runner. Its lifecycle may be
managed by Arquillian, or Arquillian may bind to a container that is already started.
● A managed container is similar to a remote container, except that its lifecycle (startup and
shutdown) is always managed by Arquillian.
● An embedded container resides in the same JVM and is most likely managed by Arquillian.
Containers can be further classified by their capabilities:
○ Fully compliant Java EE application server (e.g., GlassFish, JBoss AS,
Embedded GlassFish)
○ Servlet container (e.g., Tomcat, Jetty)
○ Bean container (e.g., Weld SE)
It also integrates with familiar testing frameworks like JUnit and TestNG, allowing tests to be
launched using existing IDE, Ant and Maven test plugins without any add-ons.
Terminology
It is important to define some terminology that will help you better understand the explanations
of how Arquillian works:
Shrinkwrap Project
Java archives (JARs, WARs, EARs) bundle compiled classes and resources into a single unit.
Often these are intended to be run directly or deployed into a web or application server.
Archives carry with them implicit metadata regarding their structure, from which concerns like
ClassLoading scope and manifest parameters may be inferred. However useful, archives
typically require the addition of a build step within the development lifecycle, either via a script or
extra tool.
The ShrinkWrap project provides a simple API to programmatically assemble archives in code,
optionally allowing for export into ZIP or Exploded File formats. This makes it very fast to
prototype "virtual" archives from resources scattered about the classpath, the file system or
remote URLs.
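As a quick taste of the API, the following sketch assembles an archive in memory and exports it
to disk (the archive name and path are illustrative):
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.exporter.ZipExporter;
import org.jboss.shrinkwrap.api.spec.JavaArchive;

import java.io.File;

public class ShrinkWrapDemo {
    public static void main(String[] args) {
        // Assemble a "virtual" JAR in memory
        JavaArchive archive = ShrinkWrap.create(JavaArchive.class, "sample.jar")
                .addClass(ShrinkWrapDemo.class);
        // Print the archive's contents (true = verbose, multi-line output)
        System.out.println(archive.toString(true));
        // Optionally, export the in-memory archive to a real ZIP file on disk
        archive.as(ZipExporter.class).exportTo(new File("/tmp/sample.jar"), true);
    }
}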
Architecture overview
Arquillian combines a unit testing framework (JUnit or TestNG), ShrinkWrap, and one or more
supported target containers (Java EE container, servlet container, Java SE CDI environment,
etc) to provide a simple, flexible and pluggable integration testing environment.
Arquillian provides a custom test runner for JUnit and TestNG that delegates to service
providers to set up the environment to execute the tests inside or against the container. This
means that an Arquillian test case looks just like a regular JUnit or TestNG test case with two
declarative enhancements (more about this later).
For demo purposes, I’m going to be using a super simple application, consisting of just one
class: a temperature converter from Celsius to Fahrenheit and vice versa.
package com.jkrzemien.automation.arquillian;

public class TemperatureConverter {

    public double convertToCelsius(double f) {
        return ((f - 32) * 5 / 9);
    }

    public double convertToFarenheit(double c) {
        return ((c * 9 / 5) + 32);
    }
}
The idea here is to test this class using the Arquillian framework while running the application
under the same container conditions it would face in a production environment.
In this example I’ll be using the GlassFish 3.1 embedded container, but you can pick any one
from this list. The container you pick should match the container the real application under test
runs on.
Setting up Arquillian in a Maven project
You'll need to decide whether you are going to write tests in JUnit 4.x or TestNG 6.x. Add the
one you prefer to your test build path, along with the corresponding Arquillian library.
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.junit</groupId>
    <artifactId>arquillian-junit-container</artifactId>
    <version>1.1.11.Final</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>6.9.10</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.jboss.arquillian.testng</groupId>
    <artifactId>arquillian-testng-container</artifactId>
    <version>1.1.11.Final</version>
    <scope>test</scope>
</dependency>
You will also need to include in your pom.xml file a dependency for the embedded, managed or
remote container to run your tests against.
<dependency>
    <groupId>org.jboss.arquillian.container</groupId>
    <artifactId>arquillian-glassfish-embedded-3.1</artifactId>
    <version>1.0.0.Final</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.glassfish.main.extras</groupId>
    <artifactId>glassfish-embedded-all</artifactId>
    <version>3.1.2</version>
    <scope>provided</scope>
</dependency>
Earlier I said that an Arquillian test case looks just like a regular JUnit or TestNG test case with
two declarative enhancements. Those two differences are:
package com.jkrzemien.automation.arquillian;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import javax.inject.Inject;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class) ❶
public class TemperatureConverterTest {

    @Inject ❷
    private TemperatureConverter converter;

    @Deployment
    public static JavaArchive createTestArchive() { ❸
        JavaArchive jar = ShrinkWrap.create(JavaArchive.class) ❹
                .addClasses(TemperatureConverter.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
        System.out.println(jar.toString(true));
        return jar;
    }

    @Test
    public void testConvertToCelsius() {
        assertEquals(0d, converter.convertToCelsius(32d), 0.001d);
        assertEquals(100d, converter.convertToCelsius(212d), 0.001d);
    }

    @Test
    public void testConvertToFarenheit() {
        assertEquals(32d, converter.convertToFarenheit(0d), 0.001d);
        assertEquals(212d, converter.convertToFarenheit(100d), 0.001d);
    }
}
package com.jkrzemien.automation.arquillian;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.testng.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.testng.annotations.Test;

import javax.inject.Inject;

import static org.testng.Assert.assertEquals;

public class TemperatureConverterTestNG extends Arquillian { ❶

    @Inject ❷
    private TemperatureConverter converter;

    @Deployment
    public static JavaArchive createTestArchive() { ❸
        JavaArchive jar = ShrinkWrap.create(JavaArchive.class) ❹
                .addClasses(TemperatureConverter.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
        System.out.println(jar.toString(true));
        return jar;
    }

    @Test
    public void testConvertToCelsius() {
        assertEquals(converter.convertToCelsius(32d), 0d, 0.001d);
        assertEquals(converter.convertToCelsius(212d), 100d, 0.001d);
    }

    @Test
    public void testConvertToFarenheit() {
        assertEquals(converter.convertToFarenheit(0d), 32d, 0.001d);
        assertEquals(converter.convertToFarenheit(100d), 212d, 0.001d);
    }
}
⚠ Warning
@Inject annotations exist in many Java frameworks. Make sure you are importing the Java EE
one, from the javax.inject package. Otherwise, you will end up with NullPointerExceptions
every time you access the injected instances.
So, as promised, here is a further explanation on the highlighted points:
❶ For JUnit, we define the test runner to use. For TestNG, we declare our class as a
subclass of Arquillian, which provides hook points for the test lifecycle.
❷ Internal class fields marked with the @Inject annotation instruct the test framework to
create and inject an instance of the field’s type automatically. This uses Java EE’s CDI.
❸ The static method annotated with @Deployment assembles the archive that Arquillian
will deploy to the container. Notice that, despite being empty, some containers require a
context settings file (beans.xml) to exist in order to start up correctly.
❹ ShrinkWrap creates the archive, in this case in JAR file format via usage of the
JavaArchive class (a WAR format would use WebArchive, and so on). Finally, the call to
jar.toString(true) prints out the contents of the generated JAR file (for debugging
purposes only).
For completeness’ sake, here are the contents of the temporary directory generated by
Arquillian, as well as the test run output:
/tmp/gfembed8122746368800560171tmp/
├── applications
│ ├── __internal
│ │ └── test
│ │ └── test.war
│ └── test
│ └── WEB-INF
│ ├── beans.xml
│ └── lib
│ ├── 398ac1cf-4c79-413c-85c2-408e8e35a189.jar
│ ├── arquillian-core.jar
│ ├── arquillian-junit.jar
│ ├── arquillian-protocol.jar
│ ├── arquillian-testenricher-cdi.jar
│ ├── arquillian-testenricher-ejb.jar
│ ├── arquillian-testenricher-initialcontext.jar
│ └── arquillian-testenricher-resource.jar
├── autodeploy
├── config
│ ├── admin-keyfile
│ ├── cacerts.jks
│ ├── domain.xml
│ ├── keyfile
│ ├── keystore.jks
│ ├── local-password
│ ├── lockfile
│ ├── login.conf
│ ├── pid
│ ├── pid.prev
│ └── server.policy
├── docroot
├── generated
│ ├── ejb
│ │ └── test
│ ├── jsp
│ │ ├── __default-web-module-server
│ │ └── test
│ │ └── loader_565517913
│ │ └── META-INF
│ │ ├── beans.xml
│ │ ├── services
│ │ │ ├── javax.enterprise.inject.spi.Extension
│ │ │ ├── org.jboss.arquillian.container.test.spi.RemoteLoadableExtension
│ │ │ ├── org.jboss.arquillian.container.test.spi.TestRunner
│ │ │ └── org.jboss.arquillian.core.spi.ExtensionLoader
│ │ └── web-fragment.xml
│ └── xml
│ └── test
├── lib
│ └── install
│ └── applications
│ ├── __cp_jdbc_ra
│ │ ├── __cp_jdbc_ra.jar
│ │ └── META-INF
│ │ └── MANIFEST.MF
│ ├── __dm_jdbc_ra
│ │ ├── __dm_jdbc_ra.jar
│ │ └── META-INF
│ │ └── MANIFEST.MF
│ ├── __ds_jdbc_ra
│ │ ├── __ds_jdbc_ra.jar
│ │ └── META-INF
│ │ └── MANIFEST.MF
│ ├── jaxr-ra
│ │ ├── jaxr-ra.jar
│ │ └── META-INF
│ │ ├── MANIFEST.MF
│ │ └── ra.xml
│ ├── jmsra
│ │ ├── fscontext.jar
│ │ ├── imqbroker.jar
│ │ ├── imqjmsbridge.jar
│ │ ├── imqjmsra.jar
│ │ ├── imqjmx.jar
│ │ ├── imqstomp.jar
│ │ ├── META-INF
│ │ │ ├── MANIFEST.MF
│ │ │ └── ra.xml
│ │ └── props
│ │ └── broker
│ │ ├── default.properties
│ │ └── install.properties
│ └── __xa_jdbc_ra
│ ├── META-INF
│ │ └── MANIFEST.MF
│ └── __xa_jdbc_ra.jar
└── META-INF
└── MANIFEST.MF
GlassFish container + JAR with SUT + our test code + Arquillian test enhancing libs
Once the tests are done running, the generated files get deleted. Notice that, for this particular
test run, our JAR file was named 398ac1cf-4c79-413c-85c2-408e8e35a189.jar.
Sep 27, 2016 12:15:08 PM com.sun.enterprise.v3.server.CommonClassLoaderServiceImpl
findDerbyClient
INFO: Cannot find javadb client jar file, derby jdbc driver will not be available by default.
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
Sep 27, 2016 12:15:08 PM org.hibernate.validator.util.Version <clinit>
INFO: Hibernate Validator 4.2.0.Final
Sep 27, 2016 12:15:09 PM com.sun.enterprise.v3.services.impl.GrizzlyProxy$2$1 onReady
INFO: Grizzly Framework 1.9.46 started in: 28ms - bound to [0.0.0.0:8181]
Sep 27, 2016 12:15:09 PM com.sun.enterprise.v3.services.impl.GrizzlyProxy$2$1 onReady
INFO: Grizzly Framework 1.9.46 started in: 21ms - bound to [0.0.0.0:8182]
Sep 27, 2016 12:15:09 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: GlassFish Server Open Source Edition 3.1.2 (java_re-private) startup time : Embedded
(557ms), startup services(354ms), total(911ms)
Sep 27, 2016 12:15:09 PM
org.glassfish.admin.mbeanserver.JMXStartupService$JMXConnectorsStarterThread run
INFO: JMX006: JMXStartupService had disabled JMXConnector system
398ac1cf-4c79-413c-85c2-408e8e35a189.jar:
/com/
/com/jkrzemien/
/com/jkrzemien/automation/
/com/jkrzemien/automation/arquillian/
/com/jkrzemien/automation/arquillian/TemperatureConverter.class
/META-INF/
/META-INF/beans.xml
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.SecurityLifecycle <init>
INFO: SEC1002: Security Manager is OFF.
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.SecurityLifecycle onInitialization
INFO: SEC1010: Entering Security Startup Service
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.PolicyLoader loadPolicy
INFO: SEC1143: Loading policy provider
com.sun.enterprise.security.jacc.provider.SimplePolicyProvider.
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.auth.realm.Realm doInstantiate
INFO: SEC1115: Realm [admin-realm] of classtype
[com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.auth.realm.Realm doInstantiate
INFO: SEC1115: Realm [file] of classtype
[com.sun.enterprise.security.auth.realm.file.FileRealm] successfully created.
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.auth.realm.Realm doInstantiate
INFO: SEC1115: Realm [certificate] of classtype
[com.sun.enterprise.security.auth.realm.certificate.CertificateRealm] successfully created.
Sep 27, 2016 12:15:10 PM com.sun.enterprise.security.SecurityLifecycle onInitialization
INFO: SEC1011: Security Service(s) Started Successfully
Sep 27, 2016 12:15:10 PM com.sun.enterprise.web.WebContainer createHttpListener
INFO: WEB0169: Created HTTP listener [http-listener] on host/port [0.0.0.0:8181]
Sep 27, 2016 12:15:10 PM com.sun.enterprise.web.WebContainer createHttpListener
INFO: WEB0169: Created HTTP listener [https-listener] on host/port [0.0.0.0:8182]
Sep 27, 2016 12:15:10 PM com.sun.enterprise.web.WebContainer createHosts
INFO: WEB0171: Created virtual server [server]
Sep 27, 2016 12:15:10 PM com.sun.enterprise.web.WebContainer loadSystemDefaultWebModules
INFO: WEB0172: Virtual server [server] loaded default web module []
Sep 27, 2016 12:15:10 PM org.jboss.weld.bootstrap.WeldBootstrap <clinit>
INFO: WELD-000900 SNAPSHOT
Sep 27, 2016 12:15:11 PM com.sun.enterprise.web.WebApplication start
INFO: WEB0671: Loading application [test] at [/test]
Sep 27, 2016 12:15:11 PM org.glassfish.deployment.admin.DeployCommand execute
INFO: test was successfully deployed in 1,763 milliseconds.
PlainTextActionReporterSUCCESSNo monitoring data to report.
Sep 27, 2016 12:20:00 PM org.glassfish.admin.mbeanserver.JMXStartupService shutdown
INFO: JMX001: JMXStartupService and JMXConnectors have been shut down.
Sep 27, 2016 12:20:00 PM com.sun.enterprise.v3.server.AppServerStartup stop
INFO: Shutdown procedure finished
Sep 27, 2016 12:20:00 PM AppServerStartup run
INFO: [Thread[GlassFish Kernel Main Thread,5,main]] exiting
Container startup - Package generation and deployment - Test run within container - Container shutdown
Troubleshooting
SEVERE: SEC5054: Certificate has expired: [
[
Version: V3
Subject: CN=GTE CyberTrust Root 5, OU="GTE CyberTrust Solutions, Inc.", O=GTE
Corporation, C=US
Signature Algorithm: SHA1withRSA, OID = 1.2.840.113549.1.1.5
Key: Sun RSA public key, 2048 bits
modulus:
2374188982934726166081243736638775438544343197386111486549041415388405033174581196852311684
7625570146592736935209718565296053386842135985534863157983128812774162998053673746470782252
4076734022381468699944387295512467683687823183938783744210339075971622187580245817351396820
8712698280951147905910061702789288022758785587747943288560440440243566280239048409906587143
0585284534529627347717530352189612077130606642676951640071336717026459037542552927905851171
4605893615703921997487534148556756656350033357699159081872243472328073360224565373289620950
05323382940080676931822787496212635993279098588863972868266229522169377
public exponent: 65537
Validity: [From: Fri Aug 14 11:50:00 ART 1998,
To: Wed Aug 14 20:59:00 ART 2013]
Issuer: CN=GTE CyberTrust Root 5, OU="GTE CyberTrust Solutions, Inc.", O=GTE Corporation,
C=US
SerialNumber: [ 01b6]
Certificate Extensions: 4
[1]: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
CA:true
PathLen:5
]
[2]: ObjectId: 2.5.29.32 Criticality=false
CertificatePolicies [
[CertificatePolicyId: [1.2.840.113763.1.2.1.3]
[] ]
]
[3]: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
Key_CertSign
Crl_Sign
]
[4]: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 76 0A 49 21 38 4C 9F DE F8 C4 49 C7 71 71 91 9D v.I!8L....I.qq..
]
]
]
Algorithm: [SHA1withRSA]
Signature:
0000: 41 3A D4 18 5B DA B8 DE 21 1C E1 8E 09 E5 F1 68 A:..[...!......h
0010: 34 FF DE 96 F4 07 F5 A7 3C F3 AC 4A B1 9B FA 92 4.......<..J....
0020: FA 9B ED E6 32 21 AA 4A 76 C5 DC 4F 38 E5 DF D5 ....2!.Jv..O8...
0030: 86 E4 D5 C8 76 7D 98 D7 B1 CD 8F 4D B5 91 23 6C ....v......M..#l
0040: 8B 8A EB EA 7C EF 14 94 C4 C6 F0 1F 4A 2D 32 71 ............J-2q
0050: 63 2B 63 91 26 02 09 B6 80 1D ED E2 CC B8 7F DB c+c.&...........
0060: 87 63 C8 E1 D0 6C 26 B1 35 1D 40 66 10 1B CD 95 .c...l&.5.@f....
0070: 54 18 33 61 EC 13 4F DA 13 F7 99 AF 3E D0 CF 8E T.3a..O.....>...
0080: A6 72 A2 B3 C3 05 9A C9 27 7D 92 CC 7E 52 8D B3 .r......'....R..
0090: AB 70 6D 9E 89 9F 4D EB 1A 75 C2 98 AA D5 02 16 .pm...M..u......
00A0: D7 0C 8A BF 25 E4 EB 2D BC 98 E9 58 38 19 7C B9 ....%..-...X8...
00B0: 37 FE DB E2 99 08 73 06 C7 97 83 6A 7D 10 01 2F 7.....s....j.../
00C0: 32 B9 17 05 4A 65 E6 2F CE BE 5E 53 A6 82 E9 9A 2...Je./..^S....
00D0: 53 0A 84 74 2D 83 CA C8 94 16 76 5F 94 61 28 F0 S..t-.....v_.a(.
00E0: 85 A7 39 BB D7 8B D9 A8 B2 13 1D 54 09 34 24 7D ..9........T.4$.
00F0: 20 81 7D 66 7E A2 90 74 5C 10 C6 BD EC AB 1B C2 ..f...t\.......
]
This error is caused by an expired certificate in the keystore file cacerts.jks. You can see that
file in the directory structure above, under the config folder.
Copy the keystore file cacerts.jks into src/test/resources/config and mvn clean install your
project.
In case the keystore file also contains the expired certificate, simply delete the affected
certificate:
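A sketch using the JDK’s keytool (the alias below is illustrative; list the entries first to find the
expired one; changeit is the default store password):
> keytool -list -keystore cacerts.jks -storepass changeit
> keytool -delete -alias expiredcertalias -keystore cacerts.jks -storepass changeit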
For people familiar with Java EE CDI, or any other Dependency Injection (DI) framework like
Spring, Guice or Dagger, there is nothing extraordinary mentioned in this document, aside from
the “self deploying” feature that Arquillian provides. The main purpose of Arquillian is to create
integration tests, including in the Archive the different classes (potentially all of them) that we
want to test, along with their required configuration files in order to deploy it onto a real
container.
Can you do the same without using Arquillian? Yes, you can. Spring is an example of that. You
can load whole contexts (created by devs for production usage) from your tests and run the
tests against them easily. You can even override (as in, mock out) whatever piece you may
want in the process.
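A minimal sketch of that approach (AppConfig here is a hypothetical @Configuration class
belonging to the application under test):
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import static org.junit.Assert.assertEquals;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = AppConfig.class) // AppConfig: hypothetical production context
public class TemperatureConverterSpringTest {

    @Autowired
    private TemperatureConverter converter; // injected straight from the production context

    @Test
    public void convertsFreezingPoint() {
        assertEquals(0d, converter.convertToCelsius(32d), 0.001d);
    }
}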
In my opinion, Arquillian is mostly used by enterprises dealing with pure Java EE standards,
where including external frameworks (like Spring) would “feel wrong/not look right”. Especially if
it is done only for testing purposes.
Automated Tests in Continuous Integration Servers
Continuous Integration servers are a specific kind of software that orchestrates the different
steps (check out, compile, test, deploy, monitor, etc) during the lifecycle of software
applications. It ensures that the steps required for software delivery are being executed in the
correct order, metrics are gathered, quality checks are performed and reports on tests ran are
archived and shown.
There are several CI servers in use today; most automation projects I’ve dealt with used tools
like Jenkins, TeamCity and/or TFS.
Development teams are usually comfortable with fast builds that only run unit tests. But this
scenario changes radically as soon as their management/boss decides that integration/E2E
tests have to be written and included as part of the build pipeline. As you may already know,
these tests are way slower than unit tests. A build that used to take a minute to complete can
now potentially take hours, depending on the amount of tests and the conditions under which
those tests are run. In agile environments, every piece of code a developer commits (at least,
to certain branches) triggers a build cycle.
Personally, I can clearly see why this is bothersome for the development team, since fast
feedback is a main pillar of agile methodologies. On the other hand, I think it is pointless,
depressing and demoralizing when the work done by the QA part of the team is disregarded and not
included as part of the “official” build pipeline.
This means that there needs to be a line drawn somewhere.
There are a couple of “rules” that apply when dealing with automated tests (other than
unit tests) on a CI server:
✓ Run all of them, exclusively, in a job that triggers when the master/trunk branch is pushed.
The idea here is to run all automated tests. I know that it may take a long time. That is
why the word "exclusively" refers to having an extra job/build, separated from the main
one (which is limited to running unit and component tests), that just runs the long life
cycle tests.
The fact that it gets triggered when the master/trunk branch is pushed implies that not
just anyone in the team should be able to push to it and, therefore, the tests are triggered
only after a merge that gets the code "ready for deployment".
✓ Daily builds at fixed times (let’s say, for example, at 22:00 every day; see the Jenkins
example right after this list).
ʊ Email notifications are sent to every team member upon failures. Optionally,
message notifications can also be sent to the team’s chat system (e.g. Slack)
ʊ Reports that are easy to read/follow must be presented. CI servers tend to have
several plugins to cover whatever file format your test runner of choice generates
ʊ Archives of logs and screenshot files must be kept for at least a week
ʊ Ideally, run the integration UI tests in parallel and against a Selenium Grid
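In Jenkins, for instance, that daily schedule maps to a cron-style “Build periodically” trigger
(Jenkins also accepts H in place of the fixed minute, to spread load across jobs):
0 22 * * *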
⚠ Obviously, if your team does not use the master branch as the “ready for deployment”
branch, run your tests upon commits to whichever branch they do use.
ϫ Do not let the team get used to CI server failures due to integration test runs. Set a high
priority on fixing failing tests instead of adding new ones.
ϫ Do not commit tests that you know are flaky or unfinished into the build pipeline. This
will only increase the team’s distrust in the automation testing strategy.
If your CI server of choice does not work as a service by default, you can still convert it into a
service manually.
You can change/install/remove Windows services safely using the Non-Sucking Service
Manager (NSSM).
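For example, to register Jenkins as a service with NSSM (the service name and paths below
are illustrative):
> nssm install Jenkins "C:\Program Files\Java\jre8\bin\java.exe" -jar C:\Jenkins\jenkins.war
NSSM will then take care of starting the process at boot and restarting it if it dies.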
Non-console based Windows applications (such as internet browsers) require access to the GUI
to be able to perform their UI painting, open/close/handle windows, etc.
By default, Windows does not allow a service to interact with the UI. I’ve seen people
normally solve this issue by running Jenkins from the command line under a specific user account
on the CI machine, or by attempting to install a VNC server. There should be no need for all that:
Microsoft provides a way for services to interact with a virtual GUI dedicated just to services.
1. Go to Start Menu → Run (or hit Win key + R) and type: services.msc <Enter>
2. Locate the CI server service and double click on it
3. On Log On tab, check the “Allow service to interact with desktop” box and click on Apply
4. On General tab, restart the service
Every time there is “activity” in this virtual screen (as in, a service is rendering something), you
should see a dialog informing that a program running behind the scenes is rendering into the
virtual screen. This is normal and expected.
Services interacting with the virtual desktop will raise a dialog like this one
Appendix
Preconditions
The following conditions must be fulfilled before running the test case:
● A known existing Category Group should be available or must be created as part of this test
fixture
● At least one Category should exist or must be created as part of this test fixture
Step 1: Populate a CategoryGroup object with:
● Category Type
● Category Group
● At least one category Id defined in the Categories list
Example payload:
{
    "CategoryType": {
        "Name": "Super"
    },
    "Categories": [
        {
            "Id": "FBMN"
        },
        ...
    ],
    "Name": "Oral Care"
}
Step 3: Verify the Location response header:
● Should exist
● Should not be empty/null
● Should match the format
https://environment/WAPI/v1/categoryGroups/Super/Oral%20Care
Step 4: Verify Errors in the response body:
● There should not be any error present in the response
Step 5: GET using the content of the Location header (/categoryGroups/{type}/{group})
received in step 3:
● Response should be HTTP 200 (OK)
Postconditions
● Remove any entity created during test fixture from environment’s DB, if any.
● Remove created CategoryGroup from environment’s DB.
Glossary
● It is always useful to read the official Selenium HQ wiki pages. The real ones.