AA Coding Standards and BP
Professional Services
Document Information
Version: 1.1
Document Scope
This document is intended to serve as a development guideline and standard for creating automations using Automation Anywhere. It is written for a developer who:
- is aware that a task (or any code set) is generally read 10 times more than it is changed
- is aware of some of the pitfalls of certain constructions in automation development
- is aware of the impact of using (or neglecting to use) particular solutions on aspects like
maintainability and performance
- knows that not every person developing an automation is capable of understanding an
automation as well as the original developer
Following these standards creates automations that are clean, easier to read, easier to maintain and more stable.
For example, suppose an automation needs to print a Notepad document as a PDF file. The task might look like the following:
FIGURE 1
In this particular automation, there is a need to print a file as a PDF document three times. The temptation would be to simply copy and paste these nine lines into the three places where they are needed. It’s fast and simple, and it resolves the immediate goal – getting the automation finished quickly. However, copying and pasting these lines three times is short-sighted.
Suppose, for example, that on the development machine the PDF print driver is called “Pdf995”. When the automation is moved to a production machine, it is discovered that the PDF print driver on that machine is not called “Pdf995”, but rather “CutePDF”. Now the entire automation task must be opened and edited. Worse, there are three places where this code must be found, and because it was copied and pasted the automation now carries 27 lines of duplicated code. Long tasks are more difficult to edit and they take longer to edit. Not only that, a task that was tested and verified to be bug-free and production ready has now been edited. Since it has been changed, it is no longer production ready and must be re-tested.
Of course, in this example, using variables could alleviate some of these issues. But then yet another variable must be added and maintained, making the automation more cluttered. And there is always the possibility that the issue cannot be resolved simply by using a variable.
So what’s the solution? A sub-task that is called by the task needing this service performed. By placing these commands into a task that is called by a parent task, we realize a number of benefits.
A sub-task used this way can be referred to as a “helper task” or “utility task”, since its only purpose is to help the calling task. Figure 2 is an example of what the helper task for the above example might look like:
FIGURE 2
If any changes are required to this specific set of commands, only this helper task will need to be edited,
and only this helper task will need to be retested.
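To make this concrete, here is a minimal pseudo-task sketch of the parent task reusing one helper in all three places (the task name PrintToPdf.atmx and the variable vFileToPrint are illustrative, and how variable values are shared with a sub-task depends on your Automation Anywhere version and setup):
Example:
// Reuse one helper instead of pasting the nine print lines three times
Variable Operation: $vFileToPrint$ = C:\Work\Invoice.txt
Run Task "PrintToPdf.atmx"
Variable Operation: $vFileToPrint$ = C:\Work\Report.txt
Run Task "PrintToPdf.atmx"
Variable Operation: $vFileToPrint$ = C:\Work\Summary.txt
Run Task "PrintToPdf.atmx"
If the PDF print driver name ever changes, only PrintToPdf.atmx is edited and re-tested.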
Tip: Remember that Excel sessions, CSV/text file sessions and browser sessions (web recorder) cannot be shared across tasks. Sub-tasks must therefore be structured in such a way that they do not break these sessions.
This rule – extracting repeated commands into a sub-task – typically applies to only a small number of commands, or even single commands. For example, if a task calls a sub-task and then calls it again when the first call fails, it is acceptable to have two call sites; however, if the sub-task is to be called five times before giving up, all of the calls should be replaced with a loop containing a single call, as sketched below.
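A minimal pseudo-task sketch of that loop (the sub-task name DownloadFile.atmx and the flag vSucceeded are illustrative):
Example:
Variable Operation: $vSucceeded$ = false
Loop 5 Times
    Run Task "DownloadFile.atmx"
    If $vSucceeded$ = "true" Then
        Exit Loop
    End If
End Loop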
Tip: Sub-tasks should be small and focused (have only a single responsibility or only a few
responsibilities).
Now imagine a single automation task that is 2000 lines long. This task is far too long and should be split into several sub-tasks. The automation developer decides to put several of the repeated sections into another sub-task. Suppose he or she picks three repeated sections and puts them all in one sub-task. This new helper task now handles printing a PDF, but it also handles saving a file to a specific folder, and in addition it handles moving a file from one folder to another. The developer controls which part is executed by passing an “action” variable.
The above example breaks the Single Responsibility Principle. The developer has reduced the number of lines in the master task, which is good, but now has a sub-task that is far too big. Additionally, if any one of those responsibilities (printing a PDF, moving a file or saving a file) needs to be modified, the entire helper task must be modified. This creates the possibility of introducing a bug into a part of the task that would not otherwise have been affected.
The proper approach would be to create three sub-tasks, each having its own responsibility – one for printing to PDF, another for moving files, and another for saving files.
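In pseudo-task form (the task and variable names are illustrative), the single action-driven helper is replaced by three focused helpers:
Example:
// Instead of one FileHelper.atmx steered by an $vAction$ variable:
Run Task "PrintToPdf.atmx"
Run Task "MoveFile.atmx"
Run Task "SaveFile.atmx"
Each helper can now be changed and re-tested in isolation, without touching the other two.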
Take a login sub-task as an example: if the login task can only be called by one single master task, then it is tightly coupled to that master task. If the login sub-task is designed in such a way that the URL of the page it uses has to be set by the calling task, then it cannot run by itself. It cannot be unit-tested alone, and other tasks cannot call it without knowing the URL of the login page before calling it.
However, if the login sub-task contains all of the information it needs to log in to the web application, including the URL, then it is a truly stand-alone sub-task. It can be unit-tested, and it can be called by any other task without needing to be provided the URL. It is then “decoupled” from other tasks and is much more maintainable.
For example, take an automation that always starts with a login to a web-based application. The login step should be in a separate sub-task. Using this approach, if anything changes in the login UI, only the login task has to be modified. If several other tasks use the login step, none of those tasks will need to be modified or re-tested when the login UI changes; only the single login task will need to be modified, and only that task will need to be re-tested. This is far more efficient than modifying a single large task, or a set of tasks, whenever something changes in one small unit.
This holds even when the login step is performed only once, at the beginning of the automation, where it appears to make no sense to split it out into a separate task. In reality, it does make sense to split the login step into its own task. The reason is test-driven design and greater maintainability.
With the login step in its own sub-task, if there is a change to the login screen, only that task needs to be modified. If the task is created using the Single Responsibility Principle, it performs only one function – logging in to the application. Additionally, if the task is designed to be loosely coupled, it does not need to know anything about the master task calling it. The URL, user name and password can be put into the automation for testing purposes, and the task can then be tested independently.
This approach may not be possible in all cases, but when it is possible it can make an automation much easier to maintain and deploy.
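A hedged sketch of such a self-contained login helper (the file and variable names are illustrative, and in a real deployment credentials should come from a secure store rather than a plain file):
Example:
// Login.atmx – knows everything it needs to log in
Read the URL, user name and password from a configuration file into $vLoginUrl$, $vUserName$, $vPassword$
Launch Website "$vLoginUrl$"
Enter $vUserName$ and $vPassword$ and submit the login form
Log To File: "INFO  Login completed for $vUserName$"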
Testing
After creating an automation task and beginning preparation for deployment, either to a production environment or for delivery to a customer, the automation should be fully tested.
Error Handling
Automating applications, especially browser-based applications, can be a moving target at times. If a web page changes, for example, it will often break the automation. Or, most commonly, if a page does not appear when the automation expects it, it can cause an error. The key to a successful automation is predicting and handling expected events (a Save As dialog not appearing within a specific time frame, for example) and handling unexpected events (a “file not found” message, for example).
Never assume conditions will always be as you expect them to be. If your automation works with a web browser, do not assume that the site will always be up, or that the internet will always be available. It is therefore critical to make use of the Error Handling command in Automation Anywhere. Use this command not only to log errors, but also to provide a means to recover from an error.
For example, consider an automation that downloads a file from a web site. After clicking the download link, the automation waits 15 seconds for the download prompt to appear at the bottom of IE. The automation then uses a “wait for window to exist” command to determine when the Save As dialog appears.
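A minimal pseudo-task sketch of this pattern (the window title, wait time and recovery actions are illustrative; the Error Handling, Wait for Window and Log To File commands exist in Automation Anywhere, though their options vary by version):
Example:
Error Handling Begin (on error: Log data into file, Continue)
    Click the download link
    Wait for Window "Save As" to exist, time out after 15 seconds
    If Window "Save As" exists Then
        Save the file and continue
    Else
        Log To File: "ERROR  Save As dialog did not appear within 15 seconds"
    End If
Error Handling End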
An automation that has 30 helper tasks, or that would run to thousands and thousands of lines without the use of helper tasks, probably indicates a business process that is too large for one automation. Such a process should be broken down into pieces, and each of those separate pieces encapsulated in its own automation.
If a business process is so large that it requires more than 8 or 10 sub-tasks, or any one task contains
thousands of lines, then the automation approach to the process should be reconsidered. Some things to
take into account:
1) What parts of the business process can be split into their own separate automations?
2) Can the business process itself be reduced in size? Are there any redundant or unnecessary
steps?
Comments
All comments should be written in the same language, be grammatically correct, and contain appropriate punctuation.
General rules:
- Box important sections with repeating slashes, asterisks or equal signs.
- Only use comments on bad task lines to say “fix this” – otherwise remove or rewrite that part of the task!
- Include comments using Task-List keyword flags to allow comment-filtering. Example:
  // TODO: Place database command here
  // UNDONE: Removed keystroke command due to errors here
- Never leave disabled task lines in the final production version. Always delete disabled task lines.
- Try to focus comments on the why and what of a command block and not the how. Try to help the reader understand why you chose a certain solution or approach and what you are trying to achieve. If applicable, also mention that you chose an alternative solution because you ran into a problem with the obvious solution (an example follows this list).
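For example (the comments are illustrative):
Example:
// How (adds nothing): Loop 3 times over the download
// Why (helps the reader): Retry the download up to 3 times because the vendor site intermittently rejects the first attempt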
Naming Conventions
Use bumpyCasing for variables and CamelCasing for task names:
- CamelCase is the practice of writing compound words or phrases such that each word or abbreviation begins with a capital letter (e.g. PrintUtility).
- bumpyCase is the same, but always begins with a lowercase letter (e.g. backgroundColor).
Do not use underscores. Underscores waste space and do not provide any value; readability can be achieved by using bumpyCasing and CamelCasing.
Always use lower case Boolean values “true” and “false”. Never deviate; stick to this method of defining a Boolean state. The same also applies to flags: always use “true” or “false” for Boolean variables, never 0 or 1 or anything else (Figure 3).
Don’t prefix fields. For example, don’t use g_ or s_ or just _. It is okay to use the letter v as a prefix in order to make finding the variable simpler.
Definitely avoid single character variable names. Never use “i” or “x” for example. A person should always
be able to look at a variable name and gain some clue about what it is for.
Name flags with Is, Has, Can, Allows or Supports – for example isAvailable, isNotAvailable or hasBeenUpdated. Name scripts with a noun, noun phrase or adjective such as Utility or Helper (for example FileSaveHelper.atmx), or with verb-object pairs such as GetMostRecentVersion. Name variables with a descriptive name like employeeFirstName or socialSecurityNumber.
Logging
One phrase that will be repeated throughout this section is this: logs should be easy to read and easy to parse. There are two receivers for log files: humans and machines. When humans are the receiver, they may be a developer looking for information in order to debug, analyse performance, or look for errors. They may also be an analyst looking for audit information or performance information. In either case, logs should be easy to look at and understand, and they should be easy to parse with a tool or import into Excel. What follows is a set of standards to ensure logging is properly executed.
Types of Logs
Process/Informational log – The process log is meant to be an informational log. It can be used for monitoring normal operation of a task but, more importantly, it can be used for auditing. Using the process log as an audit trail can be an excellent method for determining whether a business process was completed properly – for example, whether an order was placed or a ticket was completed without error.
Error log – The error log is for detailed error messages. When an error occurs in a task, notification that
an error occurred should go into the process log. Detailed information about the error should go into the
error log.
Debug log - Debugging information should go into its own log file, and should be turned off when in
production mode. An isProductionMode variable should be used to turn these statements off when the
automation is moved to production.
Performance log - Performance logging can either go into the process/informational log or it can go into
the performance log. In some cases it may be desirable to have it in its own log file.
WARN – The task might continue, but take extra caution. For example: “Task is running in development mode”. The task can continue to operate, but the message should always be justified and examined.
INFO – An important business process has finished. In an ideal world, an administrator or user should be able to read INFO messages and quickly find out what the application is doing. For example, if an application is all about booking airplane tickets, there should be only one INFO statement per ticket, saying “[Who] booked ticket from [Where] to [Where]”. Another definition of an INFO message: each action that changes the state of the application significantly (a database update, an external system request).
DEBUG – Developer information. Any information that is helpful in debugging an automation and should not go into the process log. An isProductionMode variable should be used to turn these statements off when the automation is moved to production (see the sketch after the PERFORMANCE entry below).
PERFORMANCE – Performance logging can either go into the process/informational log or it can go into
the performance log, if a separate performance log has been created. Performance should track how long
it takes to perform specific steps, but too much granularity should be avoided. In most cases,
performance logging should be limited to an overall business process. For example, how long it took to
complete an order, or how long it took to process an invoice.
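A minimal pseudo-task sketch of both ideas – gating DEBUG messages with isProductionMode and timing a business step with the Stopwatch command (the variable names, file names and messages are illustrative):
Example:
If $isProductionMode$ = "false" Then
    Log To File: "DEBUG  Row $vRowNumber$: raw cell value = $vCellValue$" into debug.log
End If
Stopwatch Start
Process the invoice
Stopwatch Stop, assign elapsed time to $vElapsedSeconds$
Log To File: "PERFORMANCE  Invoice $vInvoiceNumber$ processed in $vElapsedSeconds$ s" into performance.log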
FIGURE 4
In Figure 4, two lines are used to communicate one set of information. All of the information should be on one line, and in one pass to the log command. Was it so hard to include the actual message type, message ID, etc. in the warning string? I know something went wrong, but what? What was the context? Be detailed; don’t make the reader wonder what is going on.
If this approach is used, you will end up with a process log file that looks like a random sequence of characters. Instead, a log file should be readable, clean and descriptive. Don’t use magic numbers: log values, numbers and IDs together with their context. Show the data being processed and show its meaning. Show what the automation is actually doing. Good logs can serve as great documentation of the automation itself.
Formatting
Use the Log To File feature built into Automation Anywhere, and use the built-in time stamp in the Log To File command; don’t create your own method and format for time stamping, even for Excel. A custom format is not the industry standard, and it solves a problem that does not exist. It is up to the receiver of the log to manage the time stamps. If the customer or receiver needs a different timestamp, then it makes sense to modify it, but not before.
A good format to use is timestamp, logging level, machine name, name of the task, and the message. All
of these values should be tab delimited, for easy importing or parsing.
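A sample line in this format might look like the following (the values are illustrative and the fields are separated by tab characters):
Example:
2017-03-14 09:21:05	INFO	VDI-BOT-07	ProcessInvoices	Invoice 10043 submitted without error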
On the other hand, if your application produces 500 MB of logs each hour, no human and no graphical text editor will ever manage to read them entirely. In this case, other tools will be used. If it is possible, try to write logging messages in such a way that they can be understood both by humans and computers, e.g. avoid fancy formatting of numbers and use patterns that can be easily recognized by regular expressions.
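For instance, each line of the tab-delimited format above can be picked apart with a single regular expression (a sketch; adjust the timestamp pattern to the one your logs actually produce):
Example:
^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\t(\w+)\t([^\t]+)\t([^\t]+)\t(.*)$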
VB Script
Automation Anywhere has the ability to call VB script. It is recommended to limit the use of VB script to situations where there is simply no other choice.
Another thing to remember: never use AA to write VB script or to create a VB script file. Doing so is extremely difficult to maintain and is an anti-pattern at best. At worst, it demonstrates the ability to embed and deliver a malicious payload in an automation.
Configuration Files
Always separate your initial variable values from your task; that is, do not hard-code environment-specific variable values in the task. You will need to change these values when you run the task in different environments such as UAT or PROD. Use a configuration file and read those variables into the task at start time. Make use of system path variables to load the configuration file so that it can be located no matter where AA is installed on the system.
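A minimal sketch of such a configuration file (the file name, keys and values are illustrative):
Example:
# config.txt – one key=value pair per line, one file per environment
environment=UAT
loginUrl=https://uat.example.com/login
outputFolder=C:\AutomationOutput
At the start of the task, the file can be located via a system variable such as $AAApplicationPath$ and read with the Read From CSV/Text command, assigning each value to its variable with Variable Operation.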