Groovy Lang Specification
1.1.1. Comments
Single-line comment
Single-line comments start with // and can be found at any position in the line. The
characters following // , until the end of the line, are considered part of the comment.
Multiline comment
A multiline comment starts with /* and can be found at any position in the line. The
characters following /* will be considered part of the comment, including new line
characters, up to the first */ closing the comment. Multiline comments can thus be put
at the end of a statement, or even inside a statement.
Groovydoc comment
Similarly to multiline comments, Groovydoc comments are multiline, but start with /** and end with */ . Lines following the first Groovydoc comment line can optionally start with a star * . Those comments are associated with:
• type definitions (classes, interfaces, enums, annotations)
• field and property definitions
• method definitions
/**
* A Class description
*/
class Person {
/** the name of the person */
String name
/**
* Creates a greeting method for a certain person.
*
* @param otherPerson the person to greet
* @return a greeting message
*/
String greet(String otherPerson) {
"Hello ${otherPerson}"
}
}
Groovydoc follows the same conventions as Java’s own Javadoc. So you’ll be able to use
the same tags as with Javadoc.
Shebang line
Beside the single-line comment, there is a special line comment, often called
the shebang line understood by UNIX systems which allows scripts to be run directly
from the command-line, provided you have installed the Groovy distribution and
the groovy command is available on the PATH .
#!/usr/bin/env groovy
println "Hello from the shebang line"
The # character must be the first character of the file. Any indentation would yield a compilation error.
1.1.2. Keywords
The following list represents all the keywords of the Groovy language:
Table 1. Keywords
as, assert, break, case, catch, class, const, continue, def, default, do, else, enum, extends, false, finally, for, goto, if, implements, import, in, instanceof, interface, new, null, package, return, super, switch, this, throw, throws, trait, true, try, while
1.1.3. Identifiers
Normal identifiers
Identifiers start with a letter, a dollar or an underscore. They cannot start with a
number.
def name
def item3
def with_underscore
def $dollarStart
But the following ones are invalid identifiers:
def 3tier
def a+b
def a#b
All keywords are also valid identifiers when following a dot:
foo.as
foo.assert
foo.break
foo.case
foo.catch
Quoted identifiers
Quoted identifiers appear after the dot of a dotted expression. For instance,
the name part of the person.name expression can be quoted
with person."name" or person.'name' . This is particularly interesting when certain
identifiers contain illegal characters that are forbidden by the Java Language
Specification, but which are allowed by Groovy when quoted. For example, characters
like a dash, a space, an exclamation mark, etc.
map.'single quote'
map."double quote"
map.'''triple single quote'''
map."""triple double quote"""
map./slashy string/
map.$/dollar slashy string/$
There’s a difference between plain character strings and Groovy’s GStrings (interpolated strings): in the latter case, the interpolated values are inserted into the final string when evaluating the whole identifier:
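For illustration, a minimal sketch (the map and values are assumptions):
    def map = [:]
    def firstname = "Homer"
    map."Simpson-${firstname}" = "Homer Simpson"
    assert map.'Simpson-Homer' == "Homer Simpson"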
1.1.4. Strings
Text literals are represented in the form of chains of characters called strings. Groovy lets you instantiate java.lang.String objects, as well as GStrings ( groovy.lang.GString ), which are also called interpolated strings in other programming languages.
Single-quoted string
Single-quoted strings are a series of characters surrounded by single quotes:
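For example (a minimal sketch):
    'a single-quoted string'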
String concatenation
All the Groovy strings can be concatenated with the + operator:
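For instance:
    assert 'ab' == 'a' + 'b'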
Triple-single-quoted string
Triple-single-quoted strings are a series of characters surrounded by triplets of single
quotes:
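For example:
    '''a triple-single-quoted string'''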
Triple-single-quoted strings may span multiple lines. The content of the string can cross
line boundaries without the need to split the string in several pieces and without
concatenation or newline escape characters:
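A sketch matching the assert below; the backslash after the opening delimiter escapes the first newline:
def strippedFirstNewline = '''\
line one
line two
line three
'''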
assert !strippedFirstNewline.startsWith('\n')
Escape sequence   Character
\t                tabulation
\b                backspace
\n                newline
\r                carriage return
\f                formfeed
\\                backslash
\'                single quote within a single-quoted string (and optional for triple-single-quoted and double-quoted strings)
\"                double quote within a double-quoted string (and optional for triple-double-quoted and single-quoted strings)
We’ll see some more escaping details when it comes to other types of strings discussed
later.
Double-quoted string
Double-quoted strings are a series of characters surrounded by double quotes:
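For example:
    "a double-quoted string"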
String interpolation
Any Groovy expression can be interpolated in all string literals, apart from single and
triple-single-quoted strings. Interpolation is the act of replacing a placeholder in the
string with its value upon evaluation of the string. The placeholder expressions are
surrounded by ${} . The curly braces may be omitted for unambiguous dotted
expressions, i.e. we can use just a $ prefix in those cases. If the GString is ever passed to a
method taking a String, the expression value inside the placeholder is evaluated to its
string representation (by calling toString() on that expression) and the resulting
String is passed to the method.
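A minimal sketch of interpolation:
    def name = 'Guillaume'
    def greeting = "Hello ${name}"
    assert greeting.toString() == 'Hello Guillaume'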
Not only are expressions allowed in between the ${} placeholder, but so are statements. However, a statement’s value is just null. So if several statements are inserted in that placeholder, the last one should somehow return a meaningful value to be inserted. For instance, "The sum of 1 and 2 is equal to ${def a = 1; def b = 2; a + b}" is supported and works as expected, but a good practice is usually to stick to simple expressions inside GString placeholders.
In addition to ${} placeholders, we can also use a lone $ sign prefixing a dotted
expression:
shouldFail(MissingPropertyException) {
println "$number.toString()"
}
Similarly, if the expression is ambiguous, you need to keep the curly braces:
Here, the closure takes a single java.io.StringWriter argument, to which you can
append content with the << leftShift operator. In either case, both placeholders are embedded
closures.
def number = 1
def eagerGString = "value == ${number}"
def lazyGString = "value == ${ -> number }"
assert eagerGString == "value == 1"
assert lazyGString == "value == 1"
number = 2
assert eagerGString == "value == 1"
assert lazyGString == "value == 2"
We define a number variable containing 1 that we then interpolate within two
GStrings, as an expression in eagerGString and as a closure in lazyGString .
With a plain interpolated expression, the value was actually bound at the time of
creation of the GString.
But with a closure expression, the closure is called upon each coercion of the GString
into String, resulting in an updated string containing the new number value.
An embedded closure expression taking more than one parameter will generate
an exception at runtime. Only closures with zero or one parameters are allowed.
The signature of the takeString() method explicitly says its sole parameter is a String
We also verify that the parameter is indeed a String and not a GString.
When we try to fetch the value with a String key, we will not find it, as Strings and GString have
different hashCode values
Triple-double-quoted string
Triple-double-quoted strings behave like double-quoted strings, with the addition that
they are multiline, like the triple-single-quoted strings.
def name = 'Groovy'
def template = """
    Dear Mr ${name},

    You're the winner of the lottery!

    Yours sincerely,

    Dave
"""
assert template.toString().contains('Groovy')
Neither double quotes nor single quotes need be escaped in triple-double-quoted strings.
Slashy string
Beyond the usual quoted strings, Groovy offers slashy strings, which use / as the
opening and closing delimiter. Slashy strings are particularly useful for defining regular
expressions and patterns, as there is no need to escape backslashes.
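For example, a minimal sketch (the pattern and the multiline string are illustrative; the latter matches the assert that follows):
def fooPattern = /.*foo.*/
assert fooPattern == '.*foo.*'
def multilineSlashy = /one
    two
    three/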
assert multilineSlashy.contains('\n')
Slashy strings can be thought of as just another way to define a GString but with
different escaping rules. They hence support interpolation:
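For instance:
    def color = 'blue'
    def interpolatedSlashy = /a ${color} car/
    assert interpolatedSlashy == 'a blue car'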
Special cases
An empty slashy string cannot be represented with a double forward slash, as it’s
understood by the Groovy parser as a line comment. That’s why the following assert
would actually not compile as it would look like a non-terminated statement:
assert '' == //
Since slashy strings were mostly designed to make regular expressions easier, a few things that are errors in GStrings, like $() or $5 , will work with slashy strings.
Dollar slashy string
Dollar slashy strings are multiline GStrings delimited by an opening $/ and a closing /$ , whose escaping character is the dollar sign. Here’s an example:
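The example below assumes the interpolated variables are defined beforehand, for instance:
    def name = "Guillaume"
    def date = "April, 1st"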
def dollarSlashy = $/
Hello $name,
today we're ${date}.
$ dollar sign
$$ escaped dollar sign
\ backslash
/ forward slash
$/ escaped forward slash
$$$/ escaped opening dollar slashy
$/$$ escaped closing dollar slashy
/$
assert [
'Guillaume',
'April, 1st',
'$ dollar sign',
'$ escaped dollar sign',
'\\ backslash',
'/ forward slash',
'/ escaped forward slash',
'$/ escaped opening dollar slashy',
'/$ escaped closing dollar slashy'
].every { dollarSlashy.contains(it) }
It was created to overcome some of the limitations of the slashy string escaping rules.
Use it when its escaping rules suit your string contents (typically if it has some slashes
you don’t want to escape).
String summary table
String name            Syntax      Escape character
Single-quoted          '…'         \
Triple-single-quoted   '''…'''     \
Double-quoted          "…"         \
Triple-double-quoted   """…"""     \
Slashy                 /…/         \
Characters
Unlike Java, Groovy doesn’t have an explicit character literal. However, you can be
explicit about making a Groovy string an actual character, by three different means:
char c1 = 'A'
assert c1 instanceof Character
def c2 = 'B' as char
assert c2 instanceof Character
def c3 = (char)'C'
assert c3 instanceof Character
by being explicit when declaring a variable holding the character by specifying the char type
by using type coercion with the as operator
by using a cast to char operation
The first option (1) is interesting when the character is held in a variable, while the other two (2 and 3) are more interesting when a char value must be passed as argument of a method call.
1.1.5. Numbers
Groovy supports different kinds of integral literals and decimal literals, backed by the
usual Number types of Java.
Integral literals
The integral literal types are the same as in Java:
• byte
• char
• short
• int
• long
• java.lang.BigInteger
You can create integral numbers of those types with the following declarations:
// primitive types
byte b = 1
char c = 2
short s = 3
int i = 4
long l = 5
// infinite precision
BigInteger bi = 6
If you use optional typing by using the def keyword, the type of the integral number
will vary: it’ll adapt to the capacity of the type that can hold that number.
def a = 1
assert a instanceof Integer
// Integer.MAX_VALUE
def b = 2147483647
assert b instanceof Integer
// Integer.MAX_VALUE + 1
def c = 2147483648
assert c instanceof Long
// Long.MAX_VALUE
def d = 9223372036854775807
assert d instanceof Long
// Long.MAX_VALUE + 1
def e = 9223372036854775808
assert e instanceof BigInteger
As well as for negative numbers:
def na = -1
assert na instanceof Integer
// Integer.MIN_VALUE
def nb = -2147483648
assert nb instanceof Integer
// Integer.MIN_VALUE - 1
def nc = -2147483649
assert nc instanceof Long
// Long.MIN_VALUE
def nd = -9223372036854775808
assert nd instanceof Long
// Long.MIN_VALUE - 1
def ne = -9223372036854775809
assert ne instanceof BigInteger
Binary literal
Binary numbers start with a 0b prefix.
Octal literal
Octal numbers are specified in the typical format of 0 followed by octal digits.
Hexadecimal literal
Hexadecimal numbers are specified in the typical format of 0x followed by hex digits.
Decimal literals
The decimal literal types are the same as in Java:
• float
• double
• java.lang.BigDecimal
You can create decimal numbers of those types with the following declarations:
// primitive types
float f = 1.234
double d = 2.345
// infinite precision
BigDecimal bd = 3.456
Decimals can use exponents, with the e or E exponent letter, followed by an optional sign, and an integral number representing the exponent:
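A few illustrative checks:
    assert 1e3 == 1_000.0
    assert 2E4 == 20_000.0
    assert 2.5e-1 == 0.25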
Underscore in literals
When writing long literal numbers, it’s harder on the eye to figure out how some numbers are grouped together, for example with groups of thousands, of words, etc. By allowing you to place underscores in number literals, it’s easier to spot those groups:
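For instance (values are illustrative):
    long creditCardNumber = 1234_5678_9012_3456L
    long socialSecurityNumber = 999_99_9999L
    long hexBytes = 0xFF_EC_DE_5E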
Type Suffix
BigInteger G or g
Long L or l
Integer I or i
BigDecimal G or g
Double D or d
Float F or f
Examples:
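A minimal sketch using the suffixes above:
    assert 42I instanceof Integer
    assert 42L instanceof Long
    assert 42G instanceof BigInteger
    assert 42.0G instanceof BigDecimal
    assert 42.0D instanceof Double
    assert 42.0F instanceof Float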
Math operations
Although operators are covered later on, it’s important to discuss the behavior of math
operations and what their resulting types are.
• binary operations between byte , char , short and int result in int
• binary operations involving long with byte , char , short and int result in long
• binary operations involving BigInteger and any other integral type result
in BigInteger
• binary operations
involving BigDecimal with byte , char , short , int and BigInteger result
in BigDecimal
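These promotion rules can be observed directly (a minimal sketch):
    byte b1 = 1
    byte b2 = 2
    assert (b1 + b2) instanceof Integer
    assert (1L + 1) instanceof Long
    assert (1G + 1) instanceof BigInteger
    assert (1.0G + 1) instanceof BigDecimal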
Thanks to Groovy’s operator overloading, the usual arithmetic operators work as well with BigInteger and BigDecimal , unlike in Java where you have to use explicit methods for operating on those numbers.
BigDecimal division is performed with the divide() method if the division is exact
(i.e. yielding a result that can be represented within the bounds of the same precision
and scale), or using a MathContext with a precision of the maximum of the two
operands' precision plus an extra precision of 10, and a scale of the maximum of 10 and
the maximum of the operands' scale.
For integer division like in Java, you should use the intdiv() method, as Groovy doesn’t provide a dedicated integer division operator symbol.
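For example:
    assert 3 / 2 == 1.5        // division of two ints yields a decimal result
    assert 3.intdiv(2) == 1    // integer division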
The case of the power operator
The power operation is represented by the ** operator, with two parameters: the base
and the exponent. The result of the power operation depends on its operands, and the
result of the operation (in particular if the result can be represented as an integral
value).
The following rules are used by Groovy’s power operation to determine the resulting
type:
▪ if the base is an Integer , then return an Integer if the result value fits in it,
otherwise a BigInteger
▪ if the base is a Long , then return a Long if the result value fits in it, otherwise
a BigInteger
We can illustrate those rules with a few examples:
// base and exponent are ints and the result can be represented by an Integer
assert 2 ** 3 instanceof Integer // 8
assert 10 ** 9 instanceof Integer // 1_000_000_000
1.1.6. Booleans
Boolean is a special data type that is used to represent truth values: true and false .
Use this data type for simple flags that track true/false conditions.
Boolean values can be stored in variables, assigned into fields, just like any other data
type:
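For example:
    def myBooleanVariable = true
    boolean untypedBooleanVar = false
    assert myBooleanVariable && !untypedBooleanVar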
In addition, Groovy has special rules (often referred to as Groovy Truth) for coercing
non-boolean objects to a boolean value.
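A minimal sketch of Groovy Truth in action:
    assert 'a non-empty string'   // non-empty strings are truthy
    assert !''                    // the empty string is falsy
    assert [1, 2, 3]              // non-empty collections are truthy
    assert ![]                    // empty collections are falsy
    assert 42 && !0               // non-zero numbers are truthy, zero is falsy
    assert !null                  // null is falsy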
1.1.7. Lists
Groovy uses a comma-separated list of values, surrounded by square brackets, to denote
lists. Groovy lists are plain JDK java.util.List , as Groovy doesn’t define its own
collection classes. The concrete list implementation used when defining list literals
are java.util.ArrayList by default, unless you decide to specify otherwise, as we
shall see later on.
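For example:
    def numbers = [1, 2, 3]
    assert numbers instanceof List
    assert numbers.size() == 3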
The size of the list can be queried with the size() method, and shows our list contains 3
elements
In the above example, we used a homogeneous list, but you can also create lists
containing values of heterogeneous types:
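For instance (a sketch; the LinkedList variants show how to choose another backing implementation):
    def heterogeneous = [1, "a", true]
    def linked = [2, 3, 4] as LinkedList
    assert linked instanceof java.util.LinkedList
    LinkedList otherLinked = [3, 4, 5]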
We can say that the variable holding the list literal is of type java.util.LinkedList
You can access elements of the list with the [] subscript operator (both for reading and
setting values) with positive indices or negative indices to access elements from the end
of the list, as well as with ranges, and use the << leftShift operator to append elements
to a list:
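A minimal sketch of these operations (element values are illustrative):
    def letters = ['a', 'b', 'c', 'd']
    assert letters[0] == 'a'
    assert letters[-1] == 'd'                  // negative index counts from the end
    letters[2] = 'C'                           // set a new value for the third element
    assert letters[2] == 'C'
    letters << 'e'                             // append an element with leftShift
    assert letters[4] == 'e'
    assert letters[1, 3] == ['b', 'd']         // access two elements at once
    assert letters[2..4] == ['C', 'd', 'e']    // use a range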
Access the last element of the list with a negative index: -1 is the first element from the end of
the list
Use an assignment to set a new value for the third element of the list
Use the << leftShift operator to append an element at the end of the list
Access two elements at once, returning a new list containing those two elements
Use a range to access a range of values from the list, from a start to an end element position
As lists can be heterogeneous in nature, lists can also contain other lists to create multi-
dimensional lists:
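For example:
    def multi = [[0, 1], [2, 3]]
    assert multi[1][0] == 2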
Access the second element of the top-most list, and the first element of the inner list
1.1.8. Arrays
Groovy reuses the list notation for arrays, but to make such literals arrays, you need to explicitly define the type of the array through coercion or type declaration.
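For example:
    String[] arrStr = ['Ananas', 'Banana', 'Kiwi']
    assert arrStr instanceof String[]
    def numArr = [1, 2, 3] as int[]
    assert numArr instanceof int[]
    assert numArr.size() == 3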
Integer[][] matrix2
matrix2 = [[1, 2], [3, 4]]
assert matrix2 instanceof Integer[][]
You can define the bounds of a new array
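A minimal illustration, assuming an Integer element type:
    def matrix3 = new Integer[3][3]
    assert matrix3.size() == 3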
String[] names = ['Cédric', 'Guillaume', 'Jochen', 'Paul']
assert names[0] == 'Cédric'
names[2] = 'Blackdrag'
assert names[2] == 'Blackdrag'
Retrieve the first element of the array
Set the value of the third element of the array to a new value
Java’s array initializer notation is not supported by Groovy, as the curly braces
can be misinterpreted with the notation of Groovy closures.
1.1.9. Maps
Sometimes called dictionaries or associative arrays in other languages, Groovy features
maps. Maps associate keys with values, separating keys and values with colons, separating each key/value pair with commas, and surrounding the whole with square brackets:
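For example (a sketch whose keys and values match the statements and notes that follow):
    def colors = [red: '#FF0000', green: '#00FF00', blue: '#0000FF']
    assert colors['red'] == '#FF0000'
    assert colors.green == '#00FF00'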
colors['pink'] = '#FF00FF'
colors.yellow = '#FFFF00'
We use the subscript notation to check the content associated with the red key
We can also use the property notation to assert the color green’s hexadecimal
representation
Similarly, we can use the subscript notation to add a new key/value pair
When using names for the keys, we actually define string keys in the map.
In the examples above, we used string keys, but you can also use values of other types as
keys:
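For instance, a sketch matching the asserts below (the key variable is deliberately not evaluated here):
    def numbers = [1: 'one', 2: 'two']
    assert numbers[1] == 'one'
    def key = 'name'
    def person = [key: 'Guillaume']   // the identifier key becomes the literal string "key"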
assert !person.containsKey('name')
assert person.containsKey('key')
The key associated with the 'Guillaume' name will actually be the "key" string,
not the value associated with the key variable
When you need to pass variable values as keys in your map definitions, you must
surround the variable or expression with parentheses:
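For example:
    person = [(key): 'Guillaume']     // parentheses force evaluation of the key variable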
assert person.containsKey('name')
assert !person.containsKey('key')
This time, we surround the key variable with parentheses, to instruct the parser we are passing
a variable rather than defining a string key
1.2. Operators
This chapter covers the operators of the Groovy programming language.
Arithmetic operators
Operator  Purpose          Remarks
+         addition
-         subtraction
*         multiplication
/         division         Use intdiv() for integer division, and see the section about integer division for more information on the return type of the division.
%         remainder
**        power            See the section about the power operation for more information on the return type of the operation.
Here are a few examples of usage of those operators:
assert 1 + 2 == 3
assert 4 - 3 == 1
assert 3 * 5 == 15
assert 3 / 2 == 1.5
assert 10 % 3 == 1
assert 2 ** 3 == 8
Unary operators
The + and - operators are also available as unary operators:
assert +3 == 3
assert -4 == 0 - 4
assert -(-1) == 1
Note the usage of parentheses to surround an expression to apply the unary minus to that
surrounded expression.
The ++ (increment) and -- (decrement) operators are also available, in both prefix and postfix form:
def a = 2
def b = a++ * 3
assert a == 3 && b == 6
def c = 3
def d = c-- * 2
assert c == 2 && d == 6
def e = 1
def f = ++e + 3
assert e == 2 && f == 5
def g = 4
def h = --g + 1
assert g == 3 && h == 4
The postfix increment will increment a after the expression has been evaluated and assigned
into b
The postfix decrement will decrement c after the expression has been evaluated and assigned
into d
The prefix increment will increment e before the expression is evaluated and assigned into f
The prefix decrement will decrement g before the expression is evaluated and assigned into h
The binary arithmetic operators seen above are also available in an assignment form:
• +=
• -=
• *=
• /=
• %=
• **=
def a = 4
a += 3
assert a == 7
def b = 5
b -= 3
assert b == 2
def c = 5
c *= 3
assert c == 15
def d = 10
d /= 2
assert d == 5
def e = 10
e %= 3
assert e == 1
def f = 3
f **= 2
assert f == 9
Relational operators
Operator  Purpose
==        equal
!=        different
<         less than
<=        less than or equal
>         greater than
>=        greater than or equal
Here are some examples of simple number comparisons using these operators:
assert 1 + 2 == 3
assert 3 != 4
assert -2 < 3
assert 2 <= 2
assert 3 <= 4
assert 5 > 1
assert 5 >= -2
Logical operators
Groovy offers three logical operators for boolean expressions:
• && : logical "and"
• || : logical "or"
• ! : logical "not"
As illustrated here:
assert !false
assert true && true
assert true || false
"not" false is true
Precedence
The logical "not" has a higher priority than the logical "and".
The logical "and" has a higher priority than the logical "or".
Short-circuiting
The logical || operator supports short-circuiting: if the left operand is true, it knows
that the result will be true in any case, so it won’t evaluate the right operand. The right
operand will be evaluated only if the left operand is false.
Likewise for the logical && operator: if the left operand is false, it knows that the result
will be false in any case, so it won’t evaluate the right operand. The right operand will be
evaluated only if the left operand is true.
boolean checkIfCalled() {
called = true
}
called = false
true || checkIfCalled()
assert !called
called = false
false || checkIfCalled()
assert called
called = false
false && checkIfCalled()
assert !called
called = false
true && checkIfCalled()
assert called
We create a function that sets the called flag to true whenever it’s called
In the first case, after resetting the called flag, we confirm that if the left operand to || is true,
the function is not called, as || short-circuits the evaluation of the right operand
In the second case, the left operand is false and so the function is called, as indicated by the fact
our flag is now true
Likewise for && , we confirm that the function is not called with a false left operand
Bitwise operators
Groovy offers four bitwise operators:
• & : bitwise "and"
• | : bitwise "or"
• ^ : bitwise "xor" (exclusive or)
• ~ : bitwise negation
int a = 0b00101010
assert a == 42
int b = 0b00001000
assert b == 8
assert (a & a) == a
assert (a & b) == b
assert (a | a) == a
assert (a | b) == a
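The xor and negation cases referred to in the notes below might look like this (a sketch reusing a and b from above; the mask keeps only the low 8 bits):
    int mask = 0b11111111
    assert ((a ^ a) & mask) == 0b00000000
    assert ((a ^ b) & mask) == 0b00100010
    assert ((~a) & mask) == 0b11010101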
bitwise or
bitwise exclusive or
bitwise negation
It’s worth noting that the internal representation of primitive types follows the Java Language Specification. In particular, primitive types are signed, meaning that for a bitwise negation, it is always good to use a mask to retrieve only the necessary bits.
In Groovy, bitwise operators have the particularity of being overloadable, meaning that
you can define the behavior of those operators for any kind of object.
Ternary operator
The ternary operator is a shortcut expression that is equivalent to an if/else branch
assigning some value to a variable.
Instead of:
if (string!=null && string.length()>0) {
result = 'Found'
} else {
result = 'Not found'
}
You can write:
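A minimal sketch (the string value is illustrative):
    def string = 'some text'
    def result = (string != null && string.length() > 0) ? 'Found' : 'Not found'
    assert result == 'Found'
    // relying on Groovy Truth, the condition shortens to:
    result = string ? 'Found' : 'Not found'
    assert result == 'Found'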
Elvis operator
The "Elvis operator" is a shortening of the ternary operator. One instance of where this
is handy is for returning a 'sensible default' value if an expression resolves to false -ish
(as in Groovy truth). A simple example might look like this:
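A sketch, assuming a user object represented here by a map with an empty name:
    def user = [name: '']
    def displayName = user.name ? user.name : 'Anonymous'    // ternary form
    assert displayName == 'Anonymous'
    displayName = user.name ?: 'Anonymous'                   // Elvis form
    assert displayName == 'Anonymous'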
with the Elvis operator, the value, which is tested, is used if it is not false -ish
Usage of the Elvis operator reduces the verbosity of your code and reduces the risks of
errors in case of refactorings, by removing the need to duplicate the expression which is
tested in both the condition and the positive return value.
result is null
Direct field access operator
Normally in Groovy, when you write code like this:
class User {
public final String name
User(String name) { this.name = name}
String getName() { "Name: $name" }
}
def user = new User('Bob')
assert user.name == 'Name: Bob'
public field name
The user.name call triggers a call to the property of the same name, that is to say, here,
to the getter for name . If you want to retrieve the field instead of calling the getter, you
can use the direct field access operator:
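With the User class above, the direct field access operator might be used like this:
    assert user.@name == 'Bob'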
Method pointer operator
The method pointer operator ( .& ) can be used to store a reference to a method in a variable, in order to call it later:
we store a reference to the toUpperCase method on the str instance inside a variable named fun
we can check that the result is the same as if we had called it directly on str
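A minimal sketch of the snippet those notes describe:
    def str = 'example of method reference'
    def fun = str.&toUpperCase          // store a method pointer in a variable
    def upper = fun()                   // call it like a closure
    assert upper == str.toUpperCase()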
There are multiple advantages in using method pointers. First of all, the type of such a
method pointer is a groovy.lang.Closure , so it can be used in any place a closure
would be used. In particular, it is suitable to convert an existing method for the needs of
the strategy pattern:
def transform(List elements, Closure action) {
def result = []
elements.each {
result << action(it)
}
result
}
String describe(Person p) {
"$p.name is $p.age"
}
def action = this.&describe
def list = [
new Person(name: 'Bob', age: 42),
new Person(name: 'Julia', age: 35)]
assert transform(list, action) == ['Bob is 42', 'Julia is 35']
the transform method takes each element of the list and calls the action closure on them,
returning a new list
Method pointers are bound by the receiver and a method name. Arguments are resolved
at runtime, meaning that if you have multiple methods with the same name, the syntax is
not different, only resolution of the appropriate method to be called will be done at
runtime:
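For illustration (method names and bodies are assumptions):
    def doSomething(String str) { str.toUpperCase() }
    def doSomething(Integer x) { 2 * x }
    def reference = this.&doSomething
    assert reference('foo') == 'FOO'
    assert reference(123) == 246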
using the method pointer with a String calls the String version of doSomething
using the method pointer with an Integer calls the Integer version of doSomething
1.2.7. Regular expression operators
Pattern operator
The pattern operator ( ~ ) provides a simple way to create
a java.util.regex.Pattern instance:
def p = ~/foo/
assert p instanceof Pattern
While in general you find the pattern operator with an expression in a slashy string, it can be used with any kind of String in Groovy:
p = ~'foo'
p = ~"foo"
p = ~$/dollar/slashy $ string/$
p = ~"${pattern}"
using single quote strings
the dollar-slashy string lets you use slashes and the dollar sign without having to escape them
Find operator
Alternatively to building a pattern, you can directly use the find operator =~ to build
a java.util.regex.Matcher instance:
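For example:
    def text = "some text to match"
    def m = text =~ /match/
    assert m instanceof java.util.regex.Matcher
    if (!m) {
        throw new RuntimeException("Oops, text not found!")
    }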
Since a Matcher coerces to a boolean by calling its find method, the =~ operator is
consistent with the simple use of Perl’s =~ operator, when it appears as a predicate
(in if , while , etc.).
Match operator
The match operator ( ==~ ) is a slight variation of the find operator, that does not return
a Matcher but a boolean and requires a strict match of the input string:
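For instance:
    assert !('some text to match' ==~ /match/)     // the whole input must match
    assert 'some text to match' ==~ /some.*match/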
Spread operator
The spread-dot operator ( *. ), often abbreviated to just spread operator, is used to invoke an action on all items of an aggregate object. Consider the following example:
class Car {
String make
String model
}
def cars = [
new Car(make: 'Peugeot', model: '508'),
new Car(make: 'Renault', model: 'Clio')]
def makes = cars*.make
assert makes == ['Peugeot', 'Renault']
build a list of Car items. The list is an aggregate of objects.
call the spread operator on the list, accessing the make property of each item
The spread operator is null-safe, meaning that if an element of the collection is null, it
will return null instead of throwing a NullPointerException :
cars = [
new Car(make: 'Peugeot', model: '508'),
null,
new Car(make: 'Renault', model: 'Clio')]
assert cars*.make == ['Peugeot', null, 'Renault']
assert null*.make == null
build a list for which one of the elements is null
the receiver might also be null, in which case the return value is null
The spread operator can be used on any class which implements the Iterable interface:
class Component {
Long id
String name
}
class CompositeObject implements Iterable<Component> {
def components = [
new Component(id: 1, name: 'Foo'),
new Component(id: 2, name: 'Bar')]
@Override
Iterator<Component> iterator() {
components.iterator()
}
}
def composite = new CompositeObject()
assert composite*.id == [1,2]
assert composite*.name == ['Foo','Bar']
Use multiple invocations of the spread-dot operator (here cars*.models*.name ) when
working with aggregates of data structures which themselves contain aggregates:
class Make {
String name
List<Model> models
}
@Canonical
class Model {
String name
}
def cars = [
new Make(name: 'Peugeot',
models: [new Model('408'), new Model('508')]),
new Make(name: 'Renault',
models: [new Model('Clio'), new Model('Captur')])
]
def makes = cars*.name
assert makes == ['Peugeot', 'Renault']
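Continuing the example above, the doubled spread-dot invocation then reaches the model names:
    def models = cars*.models*.name
    assert models == [['408', '508'], ['Clio', 'Captur']]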
Consider using the collectNested DGM method instead of the spread-dot operator for collections of collections:
class Car {
String make
String model
}
def cars = [
[
new Car(make: 'Peugeot', model: '408'),
new Car(make: 'Peugeot', model: '508')
], [
new Car(make: 'Renault', model: 'Clio'),
new Car(make: 'Renault', model: 'Captur')
]
]
def models = cars.collectNested{ it.model }
assert models == [['408', '508'], ['Clio', 'Captur']]
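Spread method arguments
The spread operator can also expand a list into the individual arguments of a method call. A minimal sketch (the method and argument values below are assumptions chosen so that the asserts that follow hold):
    int function(int x, int y, int z) {
        x*y+z
    }
    def args = [4, 5, 6]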
assert function(*args) == 26
It is even possible to mix normal arguments with spread ones:
args = [4]
assert function(*args,5,6) == 26
Spread list elements
When used inside a list literal, the spread operator acts as if the spread element contents
were inlined into the list:
we want to insert the contents of the items list directly into list without having to
call addAll
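For example:
    def items = [4, 5]
    def list = [1, 2, 3, *items, 6]
    assert list == [1, 2, 3, 4, 5, 6]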
Spread map elements
The spread map operator ( *: ) works in a similar manner for maps: it inlines the contents of a map into another map literal. The position of the spread map operator is relevant, as illustrated in the following example:
we use the *:m1 notation to spread the contents of m1 into map , but redefine the
key d after spreading
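For instance:
    def m1 = [c: 3, d: 4]
    def map = [a: 1, b: 2, *:m1, d: 8]
    assert map == [a: 1, b: 2, c: 3, d: 8]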
Range operator
Groovy supports the concept of ranges and provides a notation ( .. ) to create ranges of
objects:
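For example:
    def range = 0..5
    assert range instanceof List
    assert range.size() == 6
    assert (0..5).collect() == [0, 1, 2, 3, 4, 5]
    assert (0..<5).collect() == [0, 1, 2, 3, 4]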
The range implementation is lightweight, meaning that only the lower and upper bounds are stored. You can create a range from any Comparable object that has next() and previous() methods to determine the next / previous item in the range. For example, you can create a range of characters this way:
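A minimal sketch:
    assert ('a'..'d').collect() == ['a', 'b', 'c', 'd']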
Spaceship operator
The spaceship operator ( <=> ) delegates to the compareTo method:
assert (1 <=> 1) == 0
assert (1 <=> 2) == -1
assert (2 <=> 1) == 1
assert ('a' <=> 'z') == -1
Subscript operator
The subscript operator is a short hand notation for getAt or putAt , depending on
whether you find it on the left hand side or the right hand side of an assignment:
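For example:
    def list = [0, 1, 2, 3, 4]
    assert list[2] == 2          // the read delegates to getAt
    list[2] = 4                  // the write delegates to putAt
    assert list[0..2] == [0, 1, 4]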
so does putAt
class User {
Long id
String name
def getAt(int i) {
switch (i) {
case 0: return id
case 1: return name
}
throw new IllegalArgumentException("No such element $i")
}
void putAt(int i, def value) {
switch (i) {
case 0: id = value; return
case 1: name = value; return
}
throw new IllegalArgumentException("No such element $i")
}
}
def user = new User(id: 1, name: 'Alex')
assert user[0] == 1
assert user[1] == 'Alex'
user[1] = 'Bob'
assert user.name == 'Bob'
the User class defines a custom getAt implementation
using the subscript operator with index 0 allows retrieving the user id
using the subscript operator with index 1 allows retrieving the user name
we can use the subscript operator to write to a property thanks to the delegation to putAt
and check that it’s really the property name which was changed
Membership operator
The membership operator ( in ) is equivalent to calling the isCase method. In the
context of a List , it is equivalent to calling contains , like in the following example:
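For instance:
    def list = ['Grace', 'Rob', 'Emmy']
    assert ('Emmy' in list)
    assert list.contains('Emmy')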
Identity operator
In Groovy, using == to test equality is different from using the same operator in Java. In
Groovy, it is calling equals . If you want to compare reference equality, you should
use is like in the following example:
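For example:
    def list1 = ['Groovy 1.8', 'Groovy 2.0', 'Groovy 2.3']
    def list2 = ['Groovy 1.8', 'Groovy 2.0', 'Groovy 2.3']
    assert list1 == list2        // == calls equals
    assert !list1.is(list2)      // is compares object references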
Coercion operator
The coercion operator ( as ) is a variant of casting. Coercion converts objects from one type to another without them being compatible for assignment. Let’s take an example:
Integer x = 123
String s = (String) x
Integer is not assignable to a String , so it will produce a ClassCastException at
runtime
Integer x = 123
String s = x as String
Integer is not assignable to a String , but use of as will coerce it to a String
When an object is coerced into another, unless the target type is the same as the source
type, coercion will return a new object. The rules of coercion differ depending on the
source and target types, and coercion may fail if no conversion rules are found. Custom
conversion rules may be implemented thanks to the asType method:
class Identifiable {
String name
}
class User {
Long id
String name
def asType(Class target) {
if (target == Identifiable) {
return new Identifiable(name: name)
}
throw new ClassCastException("User cannot be coerced into $target")
}
}
def u = new User(name: 'Xavier')
def p = u as Identifiable
assert p instanceof Identifiable
assert !(p instanceof User)
the User class defines a custom conversion rule from User to Identifiable
Diamond operator
The diamond operator ( <> ) is a syntactic sugar only operator added to support
compatibility with the operator of the same name in Java 7. It is used to indicate that
generic types should be inferred from the declaration:
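For example:
    List<String> strings = new LinkedList<>()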
Call operator
The call operator () is used to call a method named call implicitly. For any object
which defines a call method, you can omit the .call part and use the call operator
instead:
class MyCallable {
int call(int x) {
2*x
}
}
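A brief usage sketch:
    def mc = new MyCallable()
    assert mc.call(2) == 4   // classic method call syntax
    assert mc(2) == 4        // call operator shortcut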
we can call the method using the classic method call syntax
Operator precedence
The table below lists Groovy operators in order of precedence:
Level  Operator  Name
2      **        power
5      + -       addition, subtraction
10     ^         binary/bitwise xor
11     |         binary/bitwise or
13     ||        logical or
14     ? :       ternary conditional
       ?:        elvis operator
Operator overloading
Groovy allows you to overload the various operators so that they can be used with your own classes. Consider this simple class:
class Bucket {
    int size
    Bucket(int size) { this.size = size }
    Bucket plus(Bucket other) { new Bucket(this.size + other.size) }
}
Just by implementing the plus() method, the Bucket class can now be used with
the + operator like so:
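A sketch consistent with the Bucket class above (the sizes are illustrative):
    def b1 = new Bucket(4)
    def b2 = new Bucket(11)
    assert (b1 + b2).size == 15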
All (non-comparator) Groovy operators have a corresponding method that you can
implement in your own classes. The only requirements are that your method is public,
has the correct name, and has the correct number of arguments. The argument types
depend on what types you want to support on the right hand side of the operator. For example, you could support adding an int to a Bucket by declaring an additional plus(int) method. The table below maps operators to their corresponding methods:
Operator  Method          Operator  Method
*         a.multiply(b)   a in b    b.isCase(a)
|         a.or(b)         ++        a.next()
^         a.xor(b)        +a        a.positive()
as        a.asType(b)     -a        a.negative()
1.3.2. Imports
In order to refer to any class you need a qualified reference to its package. Groovy follows Java’s notion of allowing import statements to resolve class references.
Default imports
Default imports are the imports that Groovy language provides by default. For example
look at the following code:
new Date()
The same code in Java needs an import statement for the Date class, like this: import java.util.Date. Groovy imports the following classes and packages for you by default:
import java.lang.*
import java.util.*
import java.io.*
import java.net.*
import groovy.lang.*
import groovy.util.*
import java.math.BigInteger
import java.math.BigDecimal
This is done because the classes from these packages are the most commonly used; importing them by default reduces boilerplate code.
Simple import
A simple import is an import statement where you fully define the class name along with
the package. For example the import statement import groovy.xml.MarkupBuilder in the
code below is a simple import which directly refers to a class inside a package.
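For instance:
    import groovy.xml.MarkupBuilder
    def xml = new MarkupBuilder()
    assert xml != null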
Star import
Groovy, like Java, provides a special way to import all classes from a package using * ,
the so called star import. MarkupBuilder is a class which is in package groovy.xml ,
alongside another class called StreamingMarkupBuilder . In case you need to use both
classes, you can do:
import groovy.xml.MarkupBuilder
import groovy.xml.StreamingMarkupBuilder
Alternatively, a single star import covers both classes:
import groovy.xml.*
def markupBuilder = new MarkupBuilder()
Static import
Groovy’s static import capability allows you to reference imported classes as if they
were static methods in your own class:
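For example (a minimal sketch):
    import static java.lang.Boolean.FALSE
    assert !FALSE   // used directly, without the Boolean prefix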
import static java.lang.String.format
class SomeClass {
    String format(Integer i) {
        i.toString()
    }
    static void main(String[] args) {
        assert format('String') == 'String'
        assert new SomeClass().format(Integer.valueOf(1)) == '1'
    }
}
declaration of method with same name as method statically imported above, but with a
different parameter type
If you have the same types, the imported class takes precedence.
For example, let’s say we need to calculate sines and cosines for our application. The
class java.lang.Math has static methods named sin and cos which fit our need. With
the help of a static star import, we can do:
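A minimal sketch:
    import static java.lang.Math.*
    assert sin(0) == 0.0
    assert cos(0) == 1.0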
Import aliasing
With type aliasing, we can refer to a fully qualified class name using a name of our
choice. This can be done with the as keyword, as before.
For example we can import java.sql.Date as SQLDate and use it in the same file
as java.util.Date without having to use the fully qualified name of either class:
import java.util.Date
import java.sql.Date as SQLDate
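Both classes can then be used side by side, for example:
    Date utilDate = new Date(1000L)
    SQLDate sqlDate = new SQLDate(1000L)
    assert utilDate instanceof java.util.Date
    assert sqlDate instanceof java.sql.Date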
Main.groovy
class Main {
static void main(String... args) {
println 'Groovy world!'
}
}
define a Main class, the name is arbitrary
the public static void main(String[]) method is usable as the main method of the
class
This is typical code that you would find coming from Java, where code has to be
embedded into a class to be executable. Groovy makes it easier, the following code is
equivalent:
Main.groovy
println 'Groovy world!'
Script class
A script is always compiled into a class. The Groovy compiler will compile the class for
you, with the body of the script copied into a run method. The previous example is
therefore compiled as if it was the following:
Main.groovy
import org.codehaus.groovy.runtime.InvokerHelper
class Main extends Script {
def run() {
println 'Groovy world!'
}
static void main(String[] args) {
InvokerHelper.runScript(Main, args)
}
}
The Main class extends the groovy.lang.Script class
If the script is in a file, then the base name of the file is used to determine the name of
the generated script class. In this example, if the name of the file is Main.groovy , then
the script class is going to be Main .
Methods
It is possible to define methods into a script, as illustrated here:
int fib(int n) {
n < 2 ? 1 : fib(n-1) + fib(n-2)
}
assert fib(10)==89
You can also mix methods and code. The generated script class will carry all methods
into the script class, and assemble all script bodies into the run method:
println 'Hello'
int power(int n) { 2**n }
println "2^6==${power(6)}"
script begins
import org.codehaus.groovy.runtime.InvokerHelper
class Main extends Script {
int power(int n) { 2** n}
def run() {
println 'Hello'
println "2^6==${power(6)}"
}
static void main(String[] args) {
InvokerHelper.runScript(Main, args)
}
}
the power method is copied as is into the generated script class
Even if Groovy creates a class from your script, it is totally transparent for the
user. In particular, scripts are compiled to bytecode, and line numbers are
preserved. This implies that if an exception is thrown in a script, the stack trace
will show line numbers corresponding to the original script, not the generated
code that we have shown.
Variables
Variables in a script do not require a type definition. This means that this script:
int x = 1
int y = 2
assert x+y == 3
will behave the same as:
x = 1
y = 2
assert x+y == 3
However there is a semantic difference between the two: if the variable is declared as in the first example, it is a local variable of the generated run method and will not be visible outside of the script main body; in particular, it will not be visible from other methods of the script. If the variable is undeclared, as in the second example, it goes into the script binding, which is visible from the methods and is used to share data between the script and its caller.
If you want a variable to become a field of the class without going into the Binding , you can use the @Field annotation.
1.4.1. Types
Primitive types
Groovy supports the same primitive types as those defined by the Java Language
Specification:
• integral types: byte (8 bit), short (16 bit), int (32 bit) and long (64 bit)
• floating-point types: float (32 bit) and double (64 bit)
• the boolean type (exactly true or false)
• the char type (16 bit, usable as a numeric type, representing a UTF-16 code)
While Groovy declares and stores primitive fields and variables as primitives, because it
uses Objects for everything, it autowraps references to primitives. Just like Java, the
wrappers it uses are
Table 2. primitive wrappers
Primitive type  Wrapper class
boolean         Boolean
char            Character
byte            Byte
short           Short
int             Integer
long            Long
float           Float
double          Double
class Foo {
static int i
}
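Autowrapping can be observed with a sketch based on the Foo class above:
    assert Foo.class.getDeclaredField('i').type == int.class
    assert Foo.i.class != int.class && Foo.i.class == Integer.class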
Class
Groovy classes are very similar to Java classes, and are compatible with Java ones at JVM
level. They may have methods, fields and properties (think JavaBean properties but with
less boilerplate). Classes and class members can have the same modifiers (public,
protected, private, static, etc) as in Java with some minor differences at the source level
which are explained shortly.
The key differences between Groovy classes and their Java counterparts are:
class Person {
    String name
    Integer age
    def increaseAge(Integer years) {
        this.age += years
    }
}
method definition
Normal class
Normal classes refer to classes which are top level and concrete. This means they can be
instantiated without restrictions from any other classes or scripts. This way, they can
only be public (even though the public keyword may be suppressed). Classes are
instantiated by calling their constructors, using the new keyword, as in the following
snippet.
Inner class
Inner classes are defined within another class. The enclosing class can use the inner class as usual. In turn, an inner class can access members of its enclosing class, even if they are private. Classes other than the enclosing class are not allowed to access inner classes. Here is an example:
class Outer {
private String privateStr
def callInnerMethod() {
new Inner().methodA()
}
class Inner {
def methodA() {
println "${privateStr}."
}
}
}
the inner class is instantiated and its method gets called
even being private, a field of the enclosing class is accessed by the inner class
• They increase encapsulation by hiding the inner class from other classes, which do
not need to know about it. This also leads to cleaner packages and workspaces.
• They provide a good organization, by grouping classes that are used by only one
class.
• They lead to more maintainable code, since inner classes are near the classes that use them.
In several cases, inner classes are implementations of interfaces whose methods are needed by the outer class. The code below illustrates this with the usage of threads, which are very common.
class Outer2 {
    private String privateStr = 'some string'
    def startThread() {
        new Thread(new Inner2()).start()
    }
    class Inner2 implements Runnable {
        void run() { println "${privateStr}." }
    }
}
The last example of inner class can be simplified with an anonymous inner class. The
same functionality can be achieved with the following code.
class Outer3 {
private String privateStr = 'some string'
def startThread() {
new Thread(new Runnable() {
void run() {
println "${privateStr}."
}
}).start()
}
}
comparing with the last example of previous section, the new Inner2() was replaced
by new Runnable() along with all its implementation
Thus, there was no need to define a new class to be used just once.
Abstract class
Abstract classes represent generic concepts; they cannot be instantiated and are created to be subclassed. Their members include fields/properties and abstract or concrete methods. Abstract methods do not have an implementation, and must be implemented by concrete subclasses.
abstract class Abstract {
    String name
    abstract def abstractMethod()
    def concreteMethod() {
        println 'concrete'
    }
}
abstract classes must be declared with abstract keyword
Abstract classes are commonly compared to interfaces. But there are at least two
important differences of choosing one or another. First, while abstract classes may
contain fields/properties and concrete methods, interfaces may contain only abstract
methods (method signatures). Moreover, one class can implement several interfaces,
whereas it can extend just one class, abstract or not.
Interface
An interface defines a contract that a class needs to conform to. An interface only defines
a list of methods that need to be implemented, but does not define the methods
implementation.
interface Greeter {
void greet(String name)
}
an interface needs to be declared using the interface keyword
interface Greeter {
protected void greet(String name)
}
Using protected is a compile-time error
A class implements an interface if it defines the interface in its implements list or if any of its superclasses does. A class that merely defines a method with the same signature, without declaring the interface, does not implement it:
class DefaultGreeter {
void greet(String name) { println "Hello" }
}
greeter = new DefaultGreeter()
assert !(greeter instanceof Greeter)
In other words, Groovy does not define structural typing. It is however possible to make
an instance of an object implement an interface at runtime, using the as coercion
operator:
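A minimal sketch of such a coercion:
    def aGreeter = new DefaultGreeter()
    def coerced = aGreeter as Greeter
    assert coerced instanceof Greeter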
You can see that there are two distinct objects: one is the source object,
a DefaultGreeter instance, which does not implement the interface. The other is an
instance of Greeter that delegates to the coerced object.
Groovy interfaces do not support default implementation like Java 8 interfaces. If you are looking for something similar (but not equal), traits are close to interfaces, but allow default implementation as well as other important features described in this manual.
Constructors
Constructors are special methods used to initialize an object with a specific state. As
with normal methods, it is possible for a class to declare more than one constructor, so
long as each constructor has a unique type signature. If an object doesn’t require any
parameters during construction, it may use a no-arg constructor. If no constructors are
supplied, an empty no-arg constructor will be provided by the Groovy compiler.
Groovy supports two invocation styles:
• positional parameters are used in a similar way to how you would use Java constructors
• named parameters allow you to specify parameter names when invoking the
constructor.
Positional parameters
To create an object by using positional parameters, the respective class needs to declare
one or more constructors. In the case of multiple constructors, each must have a unique type signature. The constructors can also be added to the class using the groovy.transform.TupleConstructor annotation.
Typically, once at least one constructor is declared, the class can only be instantiated by
having one of its constructors called. It is worth noting that, in this case, you can’t
normally create the class with named parameters. Groovy does support named
parameters so long as the class contains a no-arg constructor or provides a constructor
which takes a Map argument as the first (and potentially only) argument - see the next
section for details.
There are three forms of using a declared constructor. The first one is the normal Java
way, with the new keyword. The others rely on coercion of lists into the desired types. In
this case, it is possible to coerce with the as keyword and by statically typing the
variable.
class PersonConstructor {
String name
Integer age
PersonConstructor(name, age) {
this.name = name
this.age = age
}
}
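The three forms might look like this (names and ages are illustrative):
    def person1 = new PersonConstructor('Marie', 1)
    def person2 = ['Marie', 2] as PersonConstructor
    PersonConstructor person3 = ['Marie', 3]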
Named parameters
If no (or a no-arg) constructor is declared, it is possible to create objects by passing parameters in the form of a map (property/value pairs). This can come in handy in cases
where one wants to allow several combinations of parameters. Otherwise, by using
traditional positional parameters it would be necessary to declare all possible
constructors. Having a constructor where the first (and perhaps only) argument is
a Map argument is also supported - such a constructor may also be added using
the groovy.transform.MapConstructor annotation.
class PersonWOConstructor {
String name
Integer age
}
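For example (property combinations are illustrative):
    def person4 = new PersonWOConstructor()
    def person5 = new PersonWOConstructor(name: 'Marie')
    def person6 = new PersonWOConstructor(age: 1)
    def person7 = new PersonWOConstructor(name: 'Marie', age: 2)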
It is important to highlight, however, that this approach gives more power to the
constructor caller, while imposing an increased responsibility on the caller to get the
names and value types correct. Thus, if greater control is desired, declaring constructors
using positional parameters might be preferred.
Notes:
• While the example above supplied no constructor, you can also supply a no-arg
constructor or a constructor where the first argument is a Map , most typically it’s the
only argument.
• When no (or a no-arg) constructor is declared, Groovy replaces the named
constructor call by a call to the no-arg constructor followed by calls to the setter for
each supplied named property.
• When the first argument is a Map, Groovy combines all named parameters into a
Map (regardless of ordering) and supplies the map as the first parameter. This can
be a good approach if your properties are declared as final (since they will be set
in the constructor rather than after the fact with setters).
• You can support both named and positional construction by supplying both positional constructors as well as a no-arg or Map constructor.
• You can support hybrid construction by having a constructor where the first
argument is a Map but there are also additional positional parameters. Use this style
with caution.
Methods
Groovy methods are quite similar to other languages. Some peculiarities will be shown
in the next subsections.
Method definition
A method is defined with a return type or with the def keyword, to make the return
type untyped. A method can also receive any number of arguments, which may not have
their types explicitly declared. Java modifiers can be used normally, and if no visibility
modifier is provided, the method is public.
Methods in Groovy always return some value. If no return statement is provided, the
value evaluated in the last line executed will be returned. For instance, note that none of
the following methods uses the return keyword.
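A minimal sketch (method names and return values are illustrative):
    def someMethod() { 'method called' }
    String anotherMethod() { 'another method called' }
    def thirdMethod(param1) { "$param1 passed" }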
Named parameters
Like constructors, normal methods can also be called with named parameters. To
support this notation, a convention is used where the first argument to the method is
a Map . In the method body, the parameter values can be accessed as in normal maps
( map.key ). If the method has just a single Map argument, all supplied parameters must
be named.
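A sketch of the convention (argument names are illustrative):
    def foo(Map args) { "${args.name}: ${args.age}" }
    assert foo(name: 'Marie', age: 1) == 'Marie: 1'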
Named parameters can be mixed with positional parameters. The same convention
applies, in this case, in addition to the Map argument as the first argument, the method
in question will have additional positional arguments as needed. Supplied positional
parameters when calling the method must be in order. The named parameters can be in
any position. They are grouped into the map and supplied as the first parameter
automatically.
If we don’t have the Map as the first argument, then a Map must be supplied for that
argument instead of named parameters. Failure to do so will lead
to groovy.lang.MissingMethodException :
def foo(Integer number, Map args) { "${args.name}: ${args.age}, and the number is ${number}" }
foo(name: 'Marie', age: 1, 23)
Method call throws groovy.lang.MissingMethodException: No signature of
method: foo() is applicable for argument types: (LinkedHashMap,
Integer) values: [[name:Marie, age:1], 23] , because the named
argument Map parameter is not defined as the first argument
Although Groovy allows you to mix named and positional parameters, it can lead
to unnecessary confusion. Mix named and positional arguments with caution.
Default arguments
Default arguments make parameters optional. If the argument is not supplied, the
method assumes a default value.
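For example:
    def foo(String par1, Integer par2 = 1) { [name: par1, age: par2] }
    assert foo('Marie').age == 1
    assert foo('Marie', 2).age == 2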
Varargs
Groovy supports methods with a variable number of arguments. They are defined like
this: def foo(p1, …, pn, T… args) . Here foo supports n arguments by default, but
also an unspecified number of further arguments exceeding n .
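For instance:
    def foo(Object... args) { args.length }
    assert foo() == 0
    assert foo(1) == 1
    assert foo(1, 2) == 2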
Exception declaration
Groovy automatically allows you to treat checked exceptions like unchecked exceptions.
This means that you don’t need to declare any checked exceptions that a method may
throw as shown in the following example which can throw
a FileNotFoundException if the file isn’t found:
def badRead() {
new File('doesNotExist.txt').text
}
shouldFail(FileNotFoundException) {
badRead()
}
Nor will you be required to surround the call to the badRead method in the previous
example within a try/catch block - though you are free to do so if you wish.
If you wish to declare any exceptions that your code might throw (checked or
otherwise) you are free to do so. Adding exceptions won’t change how the code is used
from any other Groovy code but can be seen as documentation for the human reader of
your code. The exceptions will become part of the method declaration in the bytecode,
so if your code might be called from Java, it might be useful to include them. Using an
explicit checked exception declaration is illustrated in the following example:
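The redeclared method might look like this, mirroring badRead above:
    def badRead() throws FileNotFoundException {
        new File('doesNotExist.txt').text
    }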
shouldFail(FileNotFoundException) {
badRead()
}
Fields
A field is a member of a class, interface or trait which stores data. A field can be initialized directly when it is declared:
class Data {
private String id = IDGenerator.next()
// ...
}
the private field id is initialized with IDGenerator.next()
It is possible to omit the type declaration of a field. This is however considered a bad
practice and in general it is a good idea to use strong typing for fields:
class BadPractice {
private mapping
}
class GoodPractice {
private Map<String,String> mapping
}
the field mapping doesn’t declare a type
The difference between the two is important if you want to use optional type checking
later. It is also important for documentation. However in some cases like scripting or if
you want to rely on duck typing it may be interesting to omit the type.
Properties
A property is an externally visible feature of a class. Rather than just using a public field
to represent such features (which provides a more limited abstraction and would
restrict refactoring possibilities), the typical convention in Java is to follow JavaBean
conventions, i.e. represent the property using a combination of a private backing field
and getters/setters. Groovy follows these same conventions but provides a simpler
approach to defining the property. You can define a property with:
• an absent access modifier (no public , protected or private )
• one or more optional modifiers ( static , final , synchronized )
• an optional type
• a mandatory name
For example:
class Person {
String name
int age
}
creates a backing private String name field, a getName and a setName method
creates a backing private int age field, a getAge and a setAge method
If a property is declared final , no setter is generated and the property can only be set at construction time:
class Person {
final String name
final int age
Person(String name, int age) {
this.name = name
this.age = age
}
}
defines a read-only property of type String
defines a read-only property of type int
Properties are accessed by name and will call the getter or setter transparently, unless
the code is in the class which defines the property:
class Person {
String name
void name(String name) {
this.name = "Wonder$name"
}
String wonder() {
this.name
}
}
def p = new Person()
p.name = 'Marge'
assert p.name == 'Marge'
p.name('Marge')
assert p.wonder() == 'WonderMarge'
this.name will directly access the field because the property is accessed from within the class
that defines it
write access to the property is done outside of the Person class so it will implicitly
call setName
read access to the property is done outside of the Person class so it will implicitly
call getName
this will call the name method on Person which performs a direct access to the field
this will call the wonder method on Person which performs a direct read access to the field
It is worth noting that this behavior of accessing the backing field directly is done in
order to prevent a stack overflow when using the property access syntax within a class
that defines the property.
It is possible to list the properties of a class thanks to the meta properties field of an
instance:
class Person {
String name
int age
}
def p = new Person()
assert p.properties.keySet().containsAll(['name','age'])
By convention, Groovy will recognize properties even if there is no backing field
provided there are getters or setters that follow the Java Beans specification. For
example:
class PseudoProperties {
    // a pseudo property "name"
    void setName(String name) {}
    String getName() {}
}
Annotation
Annotation definition
An annotation is a kind of special interface dedicated to annotating elements of the code. An annotation is a type whose superinterface is the Annotation interface. Annotations
are declared in a very similar way to interfaces, using the @interface keyword:
@interface SomeAnnotation {}
An annotation may define members in the form of methods without bodies and an
optional default value. The possible member types are limited to:
• primitive types
• Strings
• Classes
• an enumeration
• another annotation type
• or any array of the above
For example:
@interface SomeAnnotation {
String value()
}
@interface SomeAnnotation {
String value() default 'something'
}
@interface SomeAnnotation {
int step()
}
@interface SomeAnnotation {
Class appliesTo()
}
@interface SomeAnnotation {}
@interface SomeAnnotations {
SomeAnnotation[] value()
}
enum DayOfWeek { mon, tue, wed, thu, fri, sat, sun }
@interface Scheduled {
DayOfWeek dayOfWeek()
}
an annotation defining a value member of type String
an annotation defining a value member of type String with a default value of something
an annotation defining a value member whose type is an array of another annotation type
Unlike in the Java language, in Groovy, an annotation can be used to alter the semantics
of the language. It is especially true of AST transformations which will generate code
based on annotations.
Annotation placement
An annotation can be applied on various elements of the code:
@SomeAnnotation
void someMethod() {
// ...
}
@SomeAnnotation
class SomeClass {}
In order to limit the scope where an annotation can be applied, it is necessary to declare
it on the annotation definition, using the Target annotation. For example, here is how
you would declare that an annotation can be applied to a class or a method:
import java.lang.annotation.ElementType
import java.lang.annotation.Target
@Target([ElementType.METHOD, ElementType.TYPE])
@interface SomeAnnotation {}
the @Target annotation is meant to annotate an annotation with a scope.
Groovy does not support the TYPE_PARAMETER and TYPE_USE element types which were introduced in Java 8.
Annotation member values
When an annotation is used, it is required to set at least all members that do not have a default value. For example:
@interface Page {
int statusCode()
}
@Page(statusCode=404)
void notFound() {
// ...
}
However it is possible to omit value= in the declaration of the value of an annotation if
the member value is the only one being set:
@interface Page {
String value()
int statusCode() default 200
}
@Page(value='/home')
void home() {
// ...
}
@Page('/users')
void userList() {
// ...
}
@Page(value='error',statusCode=404)
void notFound() {
// ...
}
we can omit the statusCode because it has a default value, but value needs to be set
since value is the only mandatory member without a default, we can omit value=
if both value and statusCode need to be set, it is required to use value= for the
default value member
Retention policy
The visibility of an annotation depends on its retention policy. The retention policy of an
annotation is set using the Retention annotation:
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
@Retention(RetentionPolicy.SOURCE)
@interface SomeAnnotation {}
the @Retention annotation annotates the @SomeAnnotation annotation
Closure annotation parameters
An interesting feature of annotations in Groovy is that you can use a closure as an annotation value. Imagine a framework where a method is executed only if some condition holds, the condition being expressed as a closure annotation parameter:
@Retention(RetentionPolicy.RUNTIME)
@interface OnlyIf {
Class value()
}
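For illustration, a hypothetical task class whose methods are guarded by @OnlyIf closures (class name, conditions and results are assumptions consistent with the runner below):
    class Tasks {
        Set result = []
        void alwaysExecuted() { result << 1 }
        @OnlyIf({ jdk >= 6 })
        void supportedOnlyInJDK6() { result << 'JDK 6' }
        @OnlyIf({ jdk >= 7 && windows })
        void requiresJDK7AndWindows() { result << 'JDK 7 Windows' }
    }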
To complete the example, let’s write a sample runner that would use that information:
import java.lang.reflect.Modifier
class Runner {
static <T> T run(Class<T> taskClass) {
def tasks = taskClass.newInstance()
def params = [jdk:6, windows: false]
tasks.class.declaredMethods.each { m ->
if (Modifier.isPublic(m.modifiers) && m.parameterTypes.length == 0) {
def onlyIf = m.getAnnotation(OnlyIf)
if (onlyIf) {
Closure cl = onlyIf.value().newInstance(tasks,tasks)
cl.delegate = params
if (cl()) {
m.invoke(tasks)
}
} else {
m.invoke(tasks)
}
}
}
tasks
}
}
create a new instance of the class passed as an argument (the task class)
call the closure, which is the annotation closure. It will return a boolean
if the method is not annotated with @OnlyIf , execute the method anyway
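A task class consumed by this runner might look like the following sketch (the class and method names are illustrative; the @OnlyIf closures resolve jdk and windows against the params map used as delegate):
class Tasks {
    Set result = []
    void alwaysExecuted() { result << 1 }
    @OnlyIf({ jdk >= 6 })
    void supportedOnlyInJDK6() { result << 'JDK 6' }
    @OnlyIf({ jdk >= 6 && windows })
    void requiresJDK6AndWindows() { result << 'JDK 6 Windows' }
}

def tasks = Runner.run(Tasks)
assert tasks.result == [1, 'JDK 6'] as Set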
Meta-annotations
Declaring meta-annotations
Meta-annotations, also known as annotation aliases, are annotations that are replaced at
compile time by other annotations (one meta-annotation is an alias for one or more
annotations). Meta-annotations can be used to reduce the size of code involving multiple
annotations.
@Service
@Transactional
class MyTransactionalService {}
Given the multiplication of annotations that you could add to the same class, a meta-
annotation can help by replacing the two annotations with a single one having the very
same semantics. For example, we might want to write this instead:
@TransactionalService
class MyTransactionalService {}
@TransactionalService is a meta-annotation
A meta-annotation is declared as a regular annotation but annotated
with @AnnotationCollector and the list of annotations it is collecting. In our case,
the @TransactionalService annotation can be written:
import groovy.transform.AnnotationCollector
@Service
@Transactional
@AnnotationCollector
@interface TransactionalService {
}
annotate the meta-annotation with @Service
Behavior of meta-annotations
Groovy supports both precompiled and source form meta-annotations. This means that
your meta-annotation may be precompiled, or you can have it in the same source tree as
the one you are currently compiling.
Meta-annotation parameters
Meta-annotations can collect annotations which have parameters. To illustrate this, imagine two annotations, each of them accepting one argument:
@Timeout(after=3600)
@Dangerous(type='explosive')
And suppose that you want to create a meta-annotation named @Explosive :
@Timeout(after=3600)
@Dangerous(type='explosive')
@AnnotationCollector
public @interface Explosive {}
By default, when the annotations are replaced, they will get the annotation parameter
values as they were defined in the alias. More interestingly, the meta-annotation
supports overriding specific values:
@Explosive(after=0)
class Bomb {}
the after value provided as a parameter to @Explosive overrides the one defined in
the @Timeout annotation
If two annotations define the same parameter name, the default processor will copy the
annotation value to all annotations that accept this parameter:
@Retention(RetentionPolicy.RUNTIME)
public @interface Foo {
String value()
}
@Retention(RetentionPolicy.RUNTIME)
public @interface Bar {
String value()
}
@Foo
@Bar
@AnnotationCollector
public @interface FooBar {}
@Foo('a')
@Bar('b')
class Bob {}
@FooBar('a')
class Joe {}
assert Joe.getAnnotation(Foo).value() == 'a'
println Joe.getAnnotation(Bar).value() == 'a'
the @Foo annotation defines the value member of type String
the @Bar annotation also defines the value member of type String
It is a compile time error if the collected annotations define the same members with incompatible types. For example, if on the previous example @Foo defined a value of type String but @Bar defined a value of type int, it would be an error.
INFO: Custom processors (discussed next) may or may not support this parameter.
A custom annotation processor will let you choose how to expand a meta-annotation
into collected annotations. The behaviour of the meta-annotation is, in this case, totally
up to you. To do this, you must:
• create an annotation processor extending org.codehaus.groovy.transform.AnnotationCollectorTransform
• declare the processor to use in the processor parameter of @AnnotationCollector
For example, the @CompileDynamic meta-annotation could naively be defined like this:
@CompileStatic(TypeCheckingMode.SKIP)
@AnnotationCollector
public @interface CompileDynamic {}
Instead, we will define it like this:
@AnnotationCollector(processor =
"org.codehaus.groovy.transform.CompileDynamicProcessor")
public @interface CompileDynamic {
}
The first thing you may notice is that our interface is no longer annotated
with @CompileStatic . The reason for this is that we rely on the processor parameter
instead, that references a class which will generate the annotation.
CompileDynamicProcessor.groovy
@CompileStatic
class CompileDynamicProcessor extends AnnotationCollectorTransform {
    private static final ClassNode CS_NODE = ClassHelper.make(CompileStatic)
    private static final ClassNode TC_NODE = ClassHelper.make(TypeCheckingMode)
    // ... (the visit method is discussed below)
}
In the example, the visit method is the only method which has to be overridden. It is
meant to return a list of annotation nodes that will be added to the node annotated with
the meta-annotation. In this example, we return a single one corresponding
to @CompileStatic(TypeCheckingMode.SKIP) .
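As a rough sketch, such an override could look like the following; the exact signature and the AST helper classes (AnnotationNode, ClassExpression, PropertyExpression) are assumptions based on the standard Groovy AST API, since the full listing is not shown here:
    List<AnnotationNode> visit(AnnotationNode collector,
                               AnnotationNode aliasAnnotationUsage,
                               AnnotatedNode aliasAnnotated,
                               SourceUnit source) {
        // build a @CompileStatic(TypeCheckingMode.SKIP) annotation node
        def node = new AnnotationNode(CS_NODE)
        node.addMember("value", new PropertyExpression(new ClassExpression(TC_NODE), "SKIP"))
        // return the single annotation that replaces the meta-annotation
        Collections.singletonList(node)
    }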
Inheritance
(TBD)
Generics
(TBD)
1.4.2. Traits
Traits are a structural construct of the language which allows:
• composition of behaviors
• runtime implementation of interfaces
• behavior overriding
• compatibility with static type checking/compilation
They can be seen as interfaces carrying both default implementations and state. A
trait is defined using the trait keyword:
trait FlyingAbility {
String fly() { "I'm flying!" }
}
declaration of a trait
Then it can be used like a normal interface using the implements keyword:
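For instance, a Bird class (the name is illustrative) could implement the trait and immediately use its method:
class Bird implements FlyingAbility {}
def b = new Bird()
assert b.fly() == "I'm flying!"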
the Bird class automatically gets the behavior of the FlyingAbility trait
Traits allow a wide range of capabilities, from simple composition to testing, which are
described thoroughly in this section.
Methods
Public methods
Declaring a method in a trait can be done like any regular method in a class:
trait FlyingAbility {
String fly() { "I'm flying!" }
}
declaration of a trait
Abstract methods
Traits may also declare abstract methods, which then need to be implemented in the
class implementing the trait:
trait Greetable {
abstract String name()
String greeting() { "Hello, ${name()}!" }
}
implementing class will have to declare the name method
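A minimal implementing class could look like this (Person is an illustrative name):
class Person implements Greetable {
    String name() { 'Bob' }
}
def p = new Person()
assert p.greeting() == 'Hello, Bob!'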
Private methods
Traits may also define private methods. Those methods will not appear in the trait
contract interface:
trait Greeter {
private String greetingMessage() {
'Hello from a private method!'
}
String greet() {
def m = greetingMessage()
println m
m
}
}
class GreetingMachine implements Greeter {}
def g = new GreetingMachine()
assert g.greet() == "Hello from a private method!"
try {
assert g.greetingMessage()
} catch (MissingMethodException e) {
println "greetingMessage is private in trait"
}
define a private method greetingMessage in the trait
Final methods
If we have a class implementing a trait, conceptually implementations from the trait
methods are "inherited" into the class. But, in reality, there is no base class containing
such implementations. Rather, they are woven directly into the class. A final modifier on
a method just indicates what the modifier will be for the woven method. While it would
likely be considered bad style to inherit and override or multiply inherit methods with
the same signature but a mix of final and non-final variants, Groovy doesn’t prohibit this
scenario. Normal method selection applies and the modifier used will be determined
from the resulting method. You might consider creating a base class which implements
the desired trait(s) if you want trait implementation methods that can’t be overridden.
The meaning of this
this represents the implementing instance. Think of a trait as a superclass. This means that when you write:
trait Introspector {
def whoAmI() { this }
}
class Foo implements Introspector {}
def foo = new Foo()
then calling:
foo.whoAmI()
will return the same instance:
assert foo.whoAmI().is(foo)
Interfaces
Traits may implement interfaces, in which case the interfaces are declared using
the implements keyword:
interface Named {
String name()
}
trait Greetable implements Named {
String greeting() { "Hello, ${name()}!" }
}
class Person implements Greetable {
String name() { 'Bob' }
}
Properties
A trait may define properties, like in the following example:
trait Named {
String name
}
class Person implements Named {}
def p = new Person(name: 'Bob')
assert p.name == 'Bob'
assert p.getName() == 'Bob'
declare a property name inside a trait
Fields
Private fields
Since traits allow the use of private methods, it can also be interesting to use private
fields to store state. Traits will let you do that:
trait Counter {
private int count = 0
int count() { count += 1; count }
}
class Foo implements Counter {}
def f = new Foo()
assert f.count() == 1
assert f.count() == 2
declare a private field count inside a trait
declare a public method count that increments the counter and returns it
This is a major difference with Java 8 virtual extension methods. While virtual
extension methods do not carry state, traits can. Moreover, traits in Groovy are
supported starting with Java 6, because their implementation does not rely on
virtual extension methods. This means that even if a trait can be seen from a Java
class as a regular interface, that interface will not have default methods, only
abstract ones.
Public fields
Public fields work the same way as private fields, but in order to avoid the diamond
problem, field names are remapped in the implementing class:
trait Named {
public String name
}
class Person implements Named {}
def p = new Person()
p.Named__name = 'Bob'
declare a public field inside the trait
The name of the field depends on the fully qualified name of the trait. All dots ( . ) in
package are replaced with an underscore ( _ ), and the final name includes a double
underscore. So if the type of the field is String , the name of the package
is my.package , the name of the trait is Foo and the name of the field is bar , in the
implementing class, the public field will appear as:
String my_package_Foo__bar
While traits support public fields, it is not recommended to use them and it is considered bad practice.
Composition of behaviors
Traits can be used to implement multiple inheritance in a controlled way. For example,
we can have the following traits:
trait FlyingAbility {
String fly() { "I'm flying!" }
}
trait SpeakingAbility {
String speak() { "I'm speaking!" }
}
And a class implementing both traits:
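A sketch of such a class (Duck is an illustrative name):
class Duck implements FlyingAbility, SpeakingAbility {}
def d = new Duck()
assert d.fly() == "I'm flying!"
assert d.speak() == "I'm speaking!"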
Traits encourage the reuse of capabilities among objects, and the creation of new classes
by the composition of existing behavior.
Extending traits
Simple inheritance
Traits may extend another trait, in which case you must use the extends keyword:
trait Named {
String name
}
trait Polite extends Named {
String introduce() { "Hello, I am $name" }
}
class Person implements Polite {}
def p = new Person(name: 'Alice')
assert p.introduce() == 'Hello, I am Alice'
the Named trait defines a single name property
Polite adds a new method which has access to the name property of the super-trait
the name property is visible from the Person class implementing Polite
Multiple inheritance
Alternatively, a trait may extend multiple traits. In that case, all super traits must be
declared in the implements clause:
trait WithId {
Long id
}
trait WithName {
String name
}
trait Identified implements WithId, WithName {}
WithId trait defines the id property
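A class implementing Identified then gets both properties (User is an illustrative name):
class User implements Identified {}
def u = new User(id: 1L, name: 'Alice')
assert u.id == 1L
assert u.name == 'Alice'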
Duck typing and traits
Traits can call any dynamic code, like a normal Groovy class. This means that a trait method can call a method which is expected to exist in an implementing class without declaring it in an interface. Traits are thus fully compatible with duck typing:
trait SpeakingDuck {
String speak() { quack() }
}
class Duck implements SpeakingDuck {
String methodMissing(String name, args) {
"${name.capitalize()}!"
}
}
def d = new Duck()
assert d.speak() == 'Quack!'
the SpeakingDuck expects the quack method to be defined
calling the speak method triggers a call to quack which is handled by methodMissing
trait DynamicObject {
private Map props = [:]
def methodMissing(String name, args) {
name.toUpperCase()
}
def propertyMissing(String prop) {
props[prop]
}
void setProperty(String prop, Object value) {
props[prop] = value
}
}
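A class implementing the trait then resolves missing methods and properties through it (Dynamic is an illustrative name):
class Dynamic implements DynamicObject {
    String existingProperty = 'ok'
    String existingMethod() { 'ok' }
}
def d = new Dynamic()
assert d.existingProperty == 'ok'
assert d.foo == null
assert d.existingMethod() == 'ok'
assert d.someMethod() == 'SOMEMETHOD'
d.foo = 'bar'
assert d.foo == 'bar'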
calling a non-existing property will call the method from the trait
Multiple inheritance conflicts
It is possible for a class to implement multiple traits. If a trait defines a method with the same signature as a method in another trait, we have a conflict:
trait A {
String exec() { 'A' }
}
trait B {
String exec() { 'B' }
}
class C implements A,B {}
trait A defines a method named exec returning a String
In this case, the default behavior is that the method from the last declared trait in
the implements clause wins. Here, B is declared after A so the method from B will be
picked up:
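def c = new C()
assert c.exec() == 'B'
If this default resolution is not what you want, you can explicitly choose which method to call using the Trait.super.foo syntax, as in this sketch where C selects the version from A:
class C implements A,B {
    String exec() { A.super.exec() }
}
def c = new C()
assert c.exec() == 'A'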
calls the version from A instead of using the default resolution, which would be the one from B
trait Extra {
String extra() { "I'm an extra method" }
}
class Something {
String doSomething() { 'Something' }
}
the Extra trait defines an extra method
Then if we do:
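def s = new Something()
s.extra()
the call to extra fails because Something does not implement Extra. It is however possible to add the trait at runtime, using the as coercion operator (a minimal sketch):
def s = new Something() as Extra
s.extra()          // the trait method is now available
s.doSomething()    // the methods from Something can still be called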
When coercing an object to a trait, the result of the operation is not the same
instance. It is guaranteed that the coerced object will implement both the
trait and the interfaces that the original object implements, but the result
will not be an instance of the original class.
class C {}
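Coercing an object to several traits at once can be done with the withTraits method (a sketch reusing the A and B traits defined earlier):
def c = new C()
def d = c.withTraits A, B
assert d instanceof A
assert d instanceof B
assert !(d instanceof C)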
When coercing an object to multiple traits, the result of the operation is not the
same instance. It is guaranteed that the coerced object will implement both the
traits and the interfaces that the original object implements, but the result
will not be an instance of the original class.
Chaining behavior
Groovy supports the concept of stackable traits. The idea is to delegate from one trait to
the other if the current trait is not capable of handling a message. To illustrate this, let’s
imagine a message handler interface like this:
interface MessageHandler {
void on(String message, Map payload)
}
Then you can compose a message handler by applying small behaviors. For example,
let’s define a default handler in the form of a trait:
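A minimal default handler might look like this:
trait DefaultHandler implements MessageHandler {
    void on(String message, Map payload) {
        println "Received $message with payload $payload"
    }
}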
Suppose now that you want to log all messages. You can write a LoggingHandler trait whose on method performs the logging and then calls super, which delegates the call to the next trait in the chain. Our class can then be rewritten to implement both handlers, as in the sketch below:
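trait LoggingHandler implements MessageHandler {
    void on(String message, Map payload) {
        println "Seeing $message with payload $payload"   // perform logging
        super.on(message, payload)                         // delegate to the next trait in the chain
    }
}
class HandlerWithLogger implements DefaultHandler, LoggingHandler {}
def h = new HandlerWithLogger()
h.on('test logging', [:])
(LoggingHandler and HandlerWithLogger are illustrative names.)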
The interest of this approach becomes more evident if we add a third handler, which is
responsible for handling messages that start with say :
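A sketch of such a handler:
trait SayHandler implements MessageHandler {
    void on(String message, Map payload) {
        if (message.startsWith('say')) {
            println "I say ${message - 'say'}!"
        } else {
            super.on(message, payload)
        }
    }
}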
if the precondition is not met, pass the message to the next handler in the chain
• if the message starts with say , then the handler consumes the message
• if not, the say handler delegates to the next handler in the chain
This approach is very powerful because it allows you to write handlers that do not know
each other and yet lets you combine them in the order you want. If we execute the code
with a class combining the three handlers, each one gets a chance to process the message
in declaration order. The important point is the usage of super, which has the following semantics:
1. if the class implements another trait, the call delegates to the next trait in the chain
2. if there isn’t any trait left in the chain, super refers to the super class of the
implementing class (this)
For example, it is possible to decorate final classes thanks to this behavior:
trait Filtering {
StringBuilder append(String str) {
def subst = str.replace('o','')
super.append(subst)
}
String toString() { super.toString() }
}
def sb = new StringBuilder().withTraits Filtering
sb.append('Groovy')
assert sb.toString() == 'Grvy'
define a trait named Filtering , supposed to be applied on a StringBuilder at runtime
redefine the append method
the string which has been appended no longer contains the letter o
Advanced features
SAM type coercion
If a trait defines a single abstract method, it is candidate for SAM (Single Abstract
Method) type coercion. For example, imagine the following trait:
trait Greeter {
String greet() { "Hello $name" }
abstract String getName()
}
the greet method is not abstract and calls the abstract method getName
Since getName is the single abstract method in the Greeter trait, you can write:
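Greeter greeter = { 'Alice' }   // the closure becomes the getName implementation
assert greeter.greet() == 'Hello Alice'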
or even:
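void greet(Greeter g) { println g.greet() }
greet { 'Alice' }   // the closure is coerced to the Greeter SAM type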
This feature can be used to compose behaviors in a very precise way, in case you want
to override the behavior of an already implemented method. To illustrate this, let’s start with this example:
import groovy.transform.CompileStatic
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer
import org.codehaus.groovy.control.customizers.ImportCustomizer
class SomeTest extends GroovyTestCase {
    def config
    def shell

    void setup() {
        config = new CompilerConfiguration()
        shell = new GroovyShell(config)
    }
    void testSomething() {
        assert shell.evaluate('1+1') == 2
    }
    void otherTest() { /* ... */ }
}
In this example, we create a simple test case which uses two properties (config and shell)
and uses those in multiple test methods. Now imagine that you want to test the same,
but with another, distinct compiler configuration. One option is to create a subclass
of SomeTest that overrides setup , but if several test classes need the same change, a
better option is to extract the new setup into a trait:
trait MyTestSupport {
    void setup() {
        config = new CompilerConfiguration()
        config.addCompilationCustomizers(new ASTTransformationCustomizer(CompileStatic))
        shell = new GroovyShell(config)
    }
}
Then use it in the subclasses:
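class AnotherTest extends SomeTest implements MyTestSupport {}
(AnotherTest is an illustrative name; the setup method from the trait takes precedence over the one inherited from SomeTest.)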
This feature is in particular useful when you don’t have access to the super class source
code. It can be used to mock methods or force a particular implementation of a method
in a subclass. It lets you refactor your code to keep the overridden logic in a single trait
and inherit a new behavior just by implementing it. The alternative, of course, is to
override the method in every place you would have used the new code.
It’s worth noting that if you use runtime traits, the methods from the trait are always preferred to those of the proxied
object:
class Person {
String name
}
trait Bob {
String getName() { 'Bob' }
}
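For example (the name Alice is illustrative):
def p = new Person(name: 'Alice')
assert p.name == 'Alice'
def p2 = p as Bob
assert p2.name == 'Bob'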
Again, don’t forget that dynamic trait coercion returns a distinct object which
only implements the original interfaces, as well as the traits.
Differences with mixins
With runtime mixins, a class B is mixed into a class A using A.metaClass.mixin B. This illustrates a place where mixins have an
advantage over traits: the instances are not modified, so if you mixin some class into
another, there isn’t a third class generated, and methods which respond to A will
continue responding to A even if mixed in.
Static methods, properties and fields
It is possible to define static methods in a trait, but it comes with numerous limitations:
• Traits with static methods cannot be compiled statically or type checked. All static
methods, properties and fields are accessed dynamically (it’s a limitation from the
JVM).
• Static methods do not appear within the generated interfaces for each trait.
• The trait is interpreted as a template for the implementing class, which means that
each implementing class will get its own static methods, properties and fields. So a
static member declared on a trait doesn’t belong to the Trait , but to its
implementing class.
• You should typically not mix static and instance methods of the same signature. The
normal rules for applying traits apply (including multiple inheritance conflict
resolution). If the method chosen is static but some implemented trait has an
instance variant, a compilation error will occur. If the method chosen is the instance
variant, the static variant will be ignored (the behavior is similar to static methods in
Java interfaces for this case).
Let’s start with a simple example:
trait TestHelper {
public static boolean CALLED = false
static void init() {
CALLED = true
}
}
class Foo implements TestHelper {}
Foo.init()
assert Foo.TestHelper__CALLED
the static field is declared in the trait
As usual, it is not recommended to use public fields. Anyway, should you want this, you
must understand that the following code would fail:
Foo.CALLED = true
because there is no static field CALLED defined on Foo itself (the trait field is remapped
to TestHelper__CALLED ). Likewise, if you have two distinct implementing classes, each one gets a distinct static field:
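class Bar implements TestHelper {}   // Bar and Baz are illustrative names
class Baz implements TestHelper {}
Bar.init()
assert Bar.TestHelper__CALLED
assert !Baz.TestHelper__CALLED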
Inheritance of state gotchas
Since traits can carry state, care must also be taken when the implementing class declares fields or properties with the same names as the trait. Consider the following trait:
trait IntCouple {
int x = 1
int y = 2
int sum() { x+y }
}
The trait defines two properties, x and y , as well as a sum method. Now let’s create a
class which implements the trait:
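class Elem implements IntCouple {   // a sketch of such a class
    int x = 3
    int y = 4
    int f() { sum() }
}
def elem = new Elem()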
override property x with 3 and property y with 4 in the implementing class
You might expect elem.f() to return 7, but it actually returns 3:
assert elem.f() == 3
The reason is that the sum method accesses the fields of the trait. So it is using
the x and y values defined in the trait. If you want to use the values from the
implementing class, then you need to dereference fields by using getters and setters, like
in this last example:
trait IntCouple {
int x = 1
int y = 2
int sum() { getX()+getY() }
}
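With this version, the same implementing class as before gives the expected result, because the getters generated in the implementing class are used:
class Elem implements IntCouple {
    int x = 3
    int y = 4
    int f() { sum() }
}
def elem = new Elem()
assert elem.f() == 7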
Self types
Type constraints on traits
Sometimes you will want to write a trait that can only be applied to some type. For
example, you may want to apply a trait on a class that extends another class which is
beyond your control, and still be able to call those methods. To illustrate this, let’s start
with this example:
class CommunicationService {
static void sendMessage(String from, String to, String message) {
println "$from sent [$message] to $to"
}
}
trait Communicating {
void sendMessage(Device to, String message) {
CommunicationService.sendMessage(id, to.id, message)
}
}
Defines a communicating trait for devices that can call the service
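For reference, the surrounding classes implied by this example might look like this sketch:
class Device { String id }
class MyDevice extends Device implements Communicating {}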
It is clear, here, that the Communicating trait can only apply to Device . However,
there’s no explicit contract to indicate that, because traits cannot extend classes.
However, the code compiles and runs perfectly fine, because id in the trait method will
be resolved dynamically. The problem is that there is nothing that prevents the trait
from being applied to any class which is not a Device . Any class which has an id would
work, while any class that does not have an id property would cause a runtime error.
The problem is even more complex if you want to enable type checking or
apply @CompileStatic on the trait: because the trait knows nothing about itself being
a Device , the type checker will complain saying that it does not find the id property.
One possibility is to explicitly add a getId method in the trait, but it would not solve all
issues. What if a method requires this as a parameter, and actually requires it to be
a Device ?
class SecurityService {
    static void check(Device d) { if (d.id == null) throw new SecurityException() }
}
If you want to be able to call this in the trait, then you will explicitly need to
cast this into a Device . This can quickly become unreadable with explicit casts
to this everywhere.
The @SelfType annotation
In order to make this contract explicit, and to make the type checker aware of the type of
itself, Groovy provides a @SelfType annotation that will:
• let you declare the types that a class that implements this trait must inherit or
implement
• throw a compile time error if those type constraints are not satisfied
So in our previous example, we can fix the trait using
the @groovy.transform.SelfType annotation:
@SelfType(Device)
@CompileStatic
trait Communicating {
void sendMessage(Device to, String message) {
SecurityService.check(this)
CommunicationService.sendMessage(id, to.id, message)
}
}
Now if you try to implement this trait on a class that is not a device, a compile-time error
will occur:
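class MyDevice implements Communicating {}   // does not extend Device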
class 'MyDevice' implements trait 'Communicating' but does not extend self
type class 'Device'
In conclusion, self types are a powerful way of declaring constraints on traits without
having to declare the contract directly in the trait or having to use casts everywhere,
maintaining separation of concerns as tight as it should be.
Limitations
Compatibility with AST transformations
Traits are not officially compatible with AST transformations. Some of them, like @CompileStatic , will be applied on
the trait itself (not on implementing classes), while others will apply on both the implementing class and the trait. There
is absolutely no guarantee that an AST transformation will run on a trait as it does on a regular class, so use it at your
own risk!
Prefix and postfix operations
Within traits, prefix and postfix operations are not allowed if they update a field of the trait:
trait Counting {
int x
void inc() {
x++
}
void dec() {
--x
}
}
class Counter implements Counting {}
def c = new Counter()
c.inc()
x is defined within the trait, postfix increment is not allowed
1.5. Closures
This chapter covers Groovy Closures. A closure in Groovy is an open, anonymous, block
of code that can take arguments, return a value and be assigned to a variable. A closure
may reference variables declared in its surrounding scope. In opposition to the formal
definition of a closure, Closure in the Groovy language can also contain free variables
which are defined outside of its surrounding scope. While breaking the formal concept
of a closure, it offers a variety of advantages which are described in this chapter.
1.5.1. Syntax
Defining a closure
A closure definition follows this syntax:
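{ [closureParameters -> ] statements }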
When a parameter list is specified, the -> character is required and serves to separate
the arguments from the closure body. The statements portion consists of 0, 1, or many
Groovy statements.
{ item++ }
{ -> item++ }
{ println it }
{ it -> println it }
{ reader ->
def line = reader.readLine()
line.trim()
}
A closure referencing a variable named item
It is possible to explicitly separate closure parameters from code by adding an arrow ( -> )
In that case it is often better to use an explicit name for the parameter
Closures as an object
A closure is an instance of the groovy.lang.Closure class, making it assignable to a
variable or a field as any other variable, despite being a block of code:
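// examples of closures assigned to variables (the names are illustrative)
def listener = { e -> println "Clicked on $e.source" }
assert listener instanceof Closure
Closure callback = { println 'Done!' }
Closure<Boolean> isTextFile = {
    File it -> it.name.endsWith('.txt')
}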
If not using def , you can assign a closure to a variable of type groovy.lang.Closure
Optionally, you can specify the return type of the closure by using the generic type
of groovy.lang.Closure
Calling a closure
A closure, as an anonymous block of code, can be called like any other method. If you
define a closure which takes no argument like this:
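def code = { 123 }
then the code inside the closure is only executed when you call the closure, which can be done by using it as if it was a method:
assert code() == 123
assert code.call() == 123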
or using call
Unlike a method, a closure always returns a value when called. The next section
discusses how to declare closure arguments, when to use them and what is the implicit
"it" parameter.
1.5.2. Parameters
Normal parameters
Parameters of closures follow the same principle as parameters of regular methods:
• an optional type
• a name
• an optional default value
Parameters are separated with commas:
def closureWithOneArg = { str -> str.toUpperCase() }
assert closureWithOneArg('groovy') == 'GROOVY'
Implicit parameter
When a closure does not explicitly define a parameter list (using -> ), a
closure always defines an implicit parameter, named it . This means that this code:
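def greeting = { "Hello, $it!" }
assert greeting('Patrick') == 'Hello, Patrick!'
is strictly equivalent to this one:
def greeting = { it -> "Hello, $it!" }
assert greeting('Patrick') == 'Hello, Patrick!'
If you want a closure that accepts no argument and must be restricted to calls without arguments, then you must declare it with an explicit empty argument list:
def magicNumber = { -> 42 }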
// this call will fail because the closure doesn't accept any argument
magicNumber(11)
Varargs
It is possible for a closure to declare variable arguments like any other
method. Vargs methods are methods that can accept a variable number of arguments if
the last parameter is of variable length (or an array) like in the next examples:
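def concat1 = { String... args -> args.join('') }
assert concat1('abc', 'def') == 'abcdef'
def concat2 = { String[] args -> args.join('') }
assert concat2('abc', 'def') == 'abcdef'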
It may be called using any number of arguments without having to explicitly wrap them into an
array
The same behavior is directly available if the args parameter is declared as an array
1.5.3. Delegation strategy
Owner, delegate and this
To understand the concept of delegate, we must first explain the meaning of this inside a closure. A closure actually defines three distinct things:
• this corresponds to the enclosing class where the closure is defined
• owner corresponds to the enclosing object where the closure is defined, which may
be either a class or a closure
• delegate corresponds to a third party object where methods calls or properties are
resolved whenever the receiver of the message is not defined
The meaning of this
In a closure, calling getThisObject will return the enclosing class where the closure is defined. It is equivalent to using an explicit this:
class Enclosing {
void run() {
def whatIsThisObject = { getThisObject() }
assert whatIsThisObject() == this
def whatIsThis = { this }
assert whatIsThis() == this
}
}
class EnclosedInInnerClass {
class Inner {
Closure cl = { this }
}
void run() {
def inner = new Inner()
assert inner.cl() == inner
}
}
class NestedClosures {
void run() {
def nestedClosures = {
def cl = { this }
cl()
}
assert nestedClosures() == this
}
}
a closure is defined inside the Enclosing class, and returns getThisObject
calling the closure will return the instance of Enclosing where the closure is defined
in general, you will just want to use the shortcut this notation
this in the closure will return the inner class, not the top-level one
in case of nested closures, like here cl being defined inside the scope of nestedClosures
then this corresponds to the closest outer class, not the enclosing closure!
It is of course possible to call methods from the enclosing class this way:
class Person {
String name
int age
String toString() { "$name is $age years old" }
String dump() {
def cl = {
String msg = this.toString()
println msg
msg
}
cl()
}
}
def p = new Person(name:'Janice', age:74)
assert p.dump() == 'Janice is 74 years old'
the closure calls toString on this , which will actually call the toString method on the
enclosing object, that is to say the Person instance
Owner of a closure
The owner of a closure is very similar to the definition of this in a closure with a subtle
difference: it will return the direct enclosing object, be it a closure or a class:
class Enclosing {
void run() {
def whatIsOwnerMethod = { getOwner() }
assert whatIsOwnerMethod() == this
def whatIsOwner = { owner }
assert whatIsOwner() == this
}
}
class EnclosedInInnerClass {
class Inner {
Closure cl = { owner }
}
void run() {
def inner = new Inner()
assert inner.cl() == inner
}
}
class NestedClosures {
void run() {
def nestedClosures = {
def cl = { owner }
cl()
}
assert nestedClosures() == nestedClosures
}
}
a closure is defined inside the Enclosing class, and returns getOwner
calling the closure will return the instance of Enclosing where the closure is defined
in general, you will just want to use the shortcut owner notation
owner in the closure will return the inner class, not the top-level one
but in case of nested closures, like here cl being defined inside the scope
of nestedClosures
then owner corresponds to the enclosing closure, hence a different object from this !
Delegate of a closure
The delegate of a closure can be accessed by using the delegate property or calling
the getDelegate method. It is a powerful concept for building domain specific
languages in Groovy. While closure-this and closure-owner refer to the lexical scope of a
closure, the delegate is a user defined object that a closure will use. By default, the
delegate is set to owner :
class Enclosing {
void run() {
def cl = { getDelegate() }
def cl2 = { delegate }
assert cl() == cl2()
assert cl() == this
def enclosed = {
{ -> delegate }.call()
}
assert enclosed() == enclosed
}
}
you can get the delegate of a closure calling the getDelegate method
The delegate of a closure can be changed to any object. Let’s illustrate this by creating
two classes which are not subclasses of each other but both define a property
called name :
class Person {
String name
}
class Thing {
String name
}
def p = new Person(name: 'Norman')
def t = new Thing(name: 'Teapot')
Then let’s define a closure which fetches the name property on the delegate:
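def upperCasedName = { delegate.name.toUpperCase() }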
upperCasedName.delegate = p
assert upperCasedName() == 'NORMAN'
upperCasedName.delegate = t
assert upperCasedName() == 'TEAPOT'
At this point, the behavior is not different from having a target variable defined in the
lexical scope of the closure:
def target = p
def upperCasedNameUsingVar = { target.name.toUpperCase() }
assert upperCasedNameUsingVar() == 'NORMAN'
However, there are major differences:
• in the last example, target is a local variable referenced from within the closure
• the delegate can be used transparently, that is to say without prefixing method calls
with delegate. as explained in the next paragraph.
Delegation strategy
Whenever, in a closure, a property is accessed without explicitly setting a receiver
object, then a delegation strategy is involved:
class Person {
String name
}
def p = new Person(name:'Igor')
def cl = { name.toUpperCase() }
cl.delegate = p
assert cl() == 'IGOR'
name is not referencing a variable in the lexical scope of the closure
The reason this code works is that the name property will be resolved transparently on
the delegate object! This is a very powerful way to resolve properties or method calls
inside closures. There’s no need to set an explicit delegate. receiver: the call will be
made because the default delegation strategy of the closure makes it so. A closure
actually defines multiple resolution strategies that you can choose:
• Closure.OWNER_FIRST is the default strategy. If a property/method exists on
the owner, then it will be called on the owner. If not, then the delegate is used.
• Closure.DELEGATE_FIRST reverses the logic: the delegate is used first, then
the owner
• Closure.OWNER_ONLY will only resolve the property/method lookup on the owner:
the delegate will be ignored.
• Closure.DELEGATE_ONLY will only resolve the property/method lookup on the
delegate: the owner will be ignored.
• Closure.TO_SELF can be used by developers who need advanced meta-
programming techniques and wish to implement a custom resolution strategy: the
resolution will not be made on the owner or the delegate but only on the closure
class itself. It makes only sense to use this if you implement your own subclass
of Closure .
Let’s illustrate the default "owner first" strategy with this code:
class Person {
String name
def pretty = { "My name is $name" }
String toString() {
pretty()
}
}
class Thing {
String name
}
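The following usage is a sketch consistent with the callouts and with the 'My name is Teapot' result shown below (the person name Sarah is an assumption):
def p = new Person(name: 'Sarah')
def t = new Thing(name: 'Teapot')

assert p.toString() == 'My name is Sarah'
p.pretty.delegate = t
assert p.toString() == 'My name is Sarah'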
both the Person and the Thing class define a name property
Using the default strategy, the name property is resolved on the owner first
there is no change in the result: name is first resolved on the owner of the closure
p.pretty.resolveStrategy = Closure.DELEGATE_FIRST
assert p.toString() == 'My name is Teapot'
By changing the resolveStrategy , we are modifying the way Groovy will resolve the
"implicit this" references: in this case, name will first be looked in the delegate, then if
not found, on the owner. Since name is defined in the delegate, an instance of Thing ,
then this value is used.
The difference between "delegate first" and "delegate only" or "owner first" and "owner
only" can be illustrated if one of the delegate (resp. owner) does not have such a method
or property:
class Person {
String name
int age
def fetchAge = { age }
}
class Thing {
String name
}
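A sketch illustrating the difference (the names Jessica and Printer are assumptions):
def p = new Person(name: 'Jessica', age: 42)
def t = new Thing(name: 'Printer')
def cl = p.fetchAge
cl.delegate = p
assert cl() == 42
cl.delegate = t
assert cl() == 42                       // owner first: age is found on the owner
cl.resolveStrategy = Closure.DELEGATE_ONLY
cl.delegate = p
assert cl() == 42
cl.delegate = t
try {
    cl()
    assert false
} catch (MissingPropertyException ex) {
    // age is not defined on the delegate, and the owner is ignored
}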
A comprehensive explanation about how to use this feature to develop DSLs can be found in a dedicated section of the
manual.
1.5.4. Closures in GStrings
Take the following code:
def x = 1
def gs = "x = ${x}"
assert gs == 'x = 1'
The code behaves as you would expect, but what happens if you add:
x = 2
assert gs == 'x = 2'
You will see that the assert fails! There are two reasons for this:
• a GString only evaluates lazily the toString representation of values
• the syntax ${x} in a GString does not represent a closure but an expression evaluating to $x ,
captured when the GString is created.
In our example, the GString is created with an expression referencing x . When
the GString is created, the value of x is 1, so the GString is created with a value of 1.
When the assert is triggered, the GString is evaluated and 1 is converted to
a String using toString . When we change x to 2, we did change the value of x , but it
is a different object, and the GString still references the old one.
A GString will only change its toString representation if the values it references are mutating. If the references
change, nothing will happen.
If you need a real closure in a GString and for example enforce lazy evaluation of
variables, you need to use the alternate syntax ${-> x} like in the fixed example:
def x = 1
def gs = "x = ${-> x}"
assert gs == 'x = 1'
x = 2
assert gs == 'x = 2'
And let’s illustrate how it differs from mutation with this code:
class Person {
String name
String toString() { name }
}
def sam = new Person(name:'Sam')
def lucy = new Person(name:'Lucy')
def p = sam
def gs = "Name: ${p}"
assert gs == 'Name: Sam'
p = lucy
assert gs == 'Name: Sam'
sam.name = 'Lucy'
assert gs == 'Name: Lucy'
the Person class has a toString method returning the name property
if we change p to Lucy
the string still evaluates to Sam because it was the value of p when the GString was created
So if you don’t want to rely on mutating objects or wrapping objects, you must use
closures in GString by explicitly declaring an empty argument list:
class Person {
String name
String toString() { name }
}
def sam = new Person(name:'Sam')
def lucy = new Person(name:'Lucy')
def p = sam
// Create a GString with lazy evaluation of "p"
def gs = "Name: ${-> p}"
assert gs == 'Name: Sam'
p = lucy
assert gs == 'Name: Lucy'
Left currying
Left currying is the fact of setting the left-most parameter of a closure, like in this
example:
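def nCopies = { int n, String str -> str*n }   // an illustrative closure
def twice = nCopies.curry(2)
assert twice('bla') == 'blabla'
assert twice('bla') == nCopies(2, 'bla')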
curry will set the first parameter to 2 , creating a new closure (function) which accepts a
single String
Right currying
Similarly to left currying, it is possible to set the right-most parameter of a closure:
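def nCopies = { int n, String str -> str*n }
def blah = nCopies.rcurry('bla')
assert blah(2) == 'blabla'
assert blah(2) == nCopies(2, 'bla')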
rcurry will set the last parameter to bla , creating a new closure (function) which accepts a
single int
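Index-based currying with ncurry sets a parameter at an arbitrary position, as in this sketch matching the callouts below:
def volume = { double l, double w, double h -> l*w*h }
def fixedWidthVolume = volume.ncurry(1, 2d)
assert volume(3d, 2d, 4d) == fixedWidthVolume(3d, 4d)

def fixedWidthAndHeight = volume.ncurry(1, 2d, 4d)
assert volume(3d, 2d, 4d) == fixedWidthAndHeight(3d)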
ncurry will set the second parameter (index = 1) to 2d , creating a new volume function which
accepts length and height
it is also possible to set multiple parameters, starting from the specified index
the resulting function accepts as many parameters as the initial one minus the number of
parameters set by ncurry
Memoization
Memoization allows the result of the call of a closure to be cached. It is interesting if the
computation done by a function (closure) is slow, but you know that this function is
going to be called often with the same arguments. A typical example is the Fibonacci
suite. A naive implementation may look like this:
def fib
fib = { long n -> n<2?n:fib(n-1)+fib(n-2) }
assert fib(15) == 610 // slow!
It is a naive implementation because 'fib' is often called recursively with the same
arguments, leading to an exponential algorithm:
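Caching the results with memoize fixes this (the same recursive definition is reused):
fib = { long n -> n<2?n:fib(n-1)+fib(n-2) }.memoize()
assert fib(25) == 75025 // fast!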
• memoizeBetween will generate a new closure which caches at least n values and at
most m values
The cache used in all memoize variants is a LRU cache.
Composition
Closure composition corresponds to the concept of function composition, that is to say
creating a new function by composing two or more functions (chaining calls), as
illustrated in this example:
def plus2  = { it + 2 }
def times3 = { it * 3 }

def times3plus2 = plus2 << times3
assert times3plus2(3) == 11
assert times3plus2(4) == plus2(times3(4))

// reverse composition
assert times3plus2(3) == (times3 >> plus2)(3)
Trampoline
Recursive algorithms are often restricted by a physical limit: the maximum stack height.
For example, if you call a method that recursively calls itself too deep, you will
eventually receive a StackOverflowError .
An approach that helps in those situations is to use Closure and its trampoline
capability.
def factorial
factorial = { int n, def accu = 1G ->
if (n < 2) return accu
factorial.trampoline(n - 1, n * accu)
}
factorial = factorial.trampoline()
assert factorial(1) == 1
assert factorial(3) == 1 * 2 * 3
assert factorial(1000) // == 402387260.. plus another 2560 digits
Method pointers
It is often practical to be able to use a regular method as a closure. For example, you
might want to use the currying abilities of a closure, but those are not available to
normal methods. In Groovy, you can obtain a closure from any method with the method
pointer operator.
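For example (a minimal sketch using the .& method pointer operator):
def str = 'example of method reference'
def fun = str.&toUpperCase
def upper = fun()
assert upper == str.toUpperCase()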
1.6. Semantics
This chapter covers the semantics of the Groovy programming language.
1.6.1. Statements
Variable definition
Variables can be defined using either their type (like String ) or by using the
keyword def :
String x
def o
def is a replacement for a type name. In variable definitions it is used to indicate that
you don’t care about the type. In variable definitions it is mandatory to either provide a
type name explicitly or to use "def" in replacement. This is needed to make variable
definitions detectable for the Groovy parser.
You can think of def as an alias of Object and you will understand it in an instant.
Variable definition types can be refined by using generics, like in List<String> names . To learn m
generics support, please read the generics section.
Variable assignment
You can assign values to variables for later use. Try the following:
x = 1
println x
x = new java.util.Date()
println x
x = -3.1499392
println x
x = false
println x
x = "Hi"
println x
Multiple assignment
Groovy supports multiple assignment, i.e. where multiple variables can be assigned at
once, e.g.:
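def (a, b, c) = [10, 20, 'foo']
assert a == 10 && b == 20 && c == 'foo'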
With this technique, we can combine multiple assignments and the subscript operator
methods to implement object destructuring.
Consider the following immutable Coordinates class, containing a pair of longitude
and latitude doubles, and notice our implementation of the getAt() method:
@Immutable
class Coordinates {
    double latitude
    double longitude

    double getAt(int idx) {
        if (idx == 0) latitude
        else if (idx == 1) longitude
        else throw new Exception("Wrong coordinate index, use 0 or 1")
    }
}

def coordinates = new Coordinates(latitude: 43.23, longitude: 3.67)
def (la, lo) = coordinates

assert la == 43.23
assert lo == 3.67
we create an instance of the Coordinates class
then, we use a multiple assignment to get the individual longitude and latitude values
Control structures
Conditional structures
if / else
def x = false
def y = false
if ( !x ) {
x = true
}
assert x == true
if ( x ) {
x = false
} else {
y = true
}
assert x == y
Groovy also supports the normal Java "nested" if then else if syntax:
if ( ... ) {
...
} else if (...) {
...
} else {
...
}
switch / case
The switch statement in Groovy is backwards compatible with Java code; so you can fall
through cases sharing the same code for multiple matches.
One difference though is that the Groovy switch statement can handle any kind of switch
value and different kinds of matching can be performed.
def x = 1.23
def result = ""
switch ( x ) {
case "foo":
result = "found foo"
// lets fall through
case "bar":
result += "bar"
case 12..30:
result = "range"
break
case Integer:
result = "integer"
break
case Number:
result = "number"
break
default:
result = "default"
}
assert result == "number"
Switch supports the following kinds of comparisons:
• Class case values match if the switch value is an instance of the class
• Regular expression case values match if the toString() representation of the
switch value matches the regex
• Collection case values match if the switch value is contained in the collection. This
also includes ranges (since they are Lists)
• Closure case values match if the calling the closure returns a result which is true
according to the Groovy truth
• If none of the above are used then the case value matches if the case value equals the
switch value
When using a closure case value, the default it parameter is actually the switch value (in our example, the variable x ).
Looping structures
Classic for loop
The for loop in Groovy is much simpler and works with any kind of array, collection,
Map, etc.
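For example (a classic for loop):
String message = ''
for (int i = 0; i < 5; i++) {
    message += 'Hi '
}
assert message == 'Hi Hi Hi Hi Hi '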
Groovy also supports the Java colon variation with colons: for (char c : text) {} , where the
variable is mandatory.
while loop
def x = 0
def y = 5

while ( y-- > 0 ) {
    x++
}

assert x == 5
Exception handling
Exception handling is the same as Java.
try / catch / finally
You can specify a complete try-catch-finally , a try-catch , or a try-finally set
of blocks.
try {
'moo'.toLong() // this will generate an exception
assert false // asserting that this point should never be reached
} catch ( e ) {
assert e in NumberFormatException
}
We can put code within a 'finally' clause following a matching 'try' clause, so that
regardless of whether the code in the 'try' clause throws an exception, the code in the
finally clause will always execute:
def z
try {
def i = 7, j = 0
try {
def k = i / j
assert false //never reached due to Exception in previous
line
} finally {
z = 'reached here' //always executed even if Exception thrown
}
} catch ( e ) {
assert e in ArithmeticException
assert z == 'reached here'
}
Multi-catch
With the multi catch block (since Groovy 2.0), we’re able to define several exceptions to
be catch and treated by the same catch block:
try {
/* ... */
} catch ( IOException | NullPointerException e ) {
/* one block to handle 2 exceptions */
}
Power assertion
Unlike Java with which Groovy shares the assert keyword, the latter in Groovy
behaves very differently. First of all, an assertion in Groovy is always executed,
independently of the -ea flag of the JVM. It makes this a first class choice for unit tests.
The notion of "power asserts" is directly related to how the Groovy assert behaves.
assert 1+1 == 3
Will yield:
assert 1+1 == 3
| |
2 false
Power asserts become very interesting when the expressions are more complex, like in
the next example:
def x = 2
def y = 7
def z = 5
def calc = { a,b -> a*b+1 }
assert calc(x,y) == [x,z].sum()
Which will print the value for each sub-expression:
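The output resembles the following (for this run, calc(2,7) is 15 and [2,5].sum() is 7; exact column alignment may differ):
assert calc(x,y) == [x,z].sum()
       |    | |  |   | |  |
       15   2 7  |   2 5  7
                 false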
def x = 2
def y = 7
def z = 5
def calc = { a,b -> a*b+1 }
assert calc(x,y) == z*z : 'Incorrect computation result'
Which will print the following error message:
Labeled statements
Any statement can be associated with a label. Labels do not impact the semantics of the
code and can be used to make the code easier to read like in the following example:
given:
def x = 1
def y = 2
when:
def z = x+y
then:
assert z == 3
Despite not changing the semantics of the labelled statement, it is possible to use labels
in the break instruction as a target for jump, as in the next example. However, even if
this is allowed, this coding style is in general considered a bad practice:
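exit:
for (int i = 0; i < 10; i++) {
    for (int j = 0; j < i; j++) {
        println "j=$j"
        if (j == 5) {
            break exit     // jumps out of both loops
        }
    }
}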
1.6.2. Expressions
(TBD)
GPath expressions
GPath is a path expression language integrated into Groovy which allows parts of
nested structured data to be identified. In this sense, it has similar aims and scope as
XPath does for XML. GPath is often used in the context of processing XML, but it really
applies to any object graph. Where XPath uses a filesystem-like path notation, a tree
hierarchy with parts separated by a slash / , GPath use a dot-object notation to
perform object navigation.
• a.b.c → for POJOs, yields the c properties for all the b properties of a (sort of
like a.getB().getC() in JavaBeans)
In both cases, the GPath expression can be viewed as a query on an object graph. For
POJOs, the object graph is most often built by the program being written through object
instantiation and composition; for XML processing, the object graph is the result
of parsing the XML text, most often with classes like XmlParser or XmlSlurper.
See Processing XML for more in-depth details on consuming XML in Groovy.
When querying the object graph generated from XmlParser or XmlSlurper, a GPath expression can refer
to attributes defined on elements with the @ notation:
Object navigation
Let’s see an example of a GPath expression on a simple object graph, the one obtained
using java reflection. Suppose you are in a non-static method of a class having another
method named aMethodFoo ; then the following GPath expression will get the name of that method:
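void aMethodFoo() { println "This is aMethodFoo." }

assert ['aMethodFoo'] == this.class.methods.name.grep(~/.*Foo/)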
Expression Deconstruction
We can decompose the expression this.class.methods.name.grep(~/.*Bar/) to
get an idea of how a GPath is evaluated:
this.class
property accessor, equivalent to this.getClass() in Java, yields
a Class object.
this.class.methods
property accessor, equivalent to this.getClass().getMethods() , yields an
array of Method objects.
this.class.methods.name
apply a property accessor on each element of an array and produce a list of the
results.
this.class.methods.name.grep(…)
call method grep on each element of the list yielded
by this.class.methods.name and produce a list of the results.
A sub-expression like this.class.methods yields an array because this is what
calling this.getClass().getMethods() in Java would produce. GPath expressions do not have a convention
where an s suffix means a list or anything like that.
assert 'aSecondMethodBar' ==
this.class.methods.name.grep(~/.*Bar/).sort()[1]
array access are zero-based in GPath expressions
Text value of key element of first keyVal element of second sublevel element
under root/level is 'anotherKey'
1.6.3. Promotion and coercion
Closure to type coercion
A SAM type is a type which defines a single abstract method. This includes:
Functional interfaces
interface Predicate<T> {
boolean accept(T obj)
}
Abstract classes with a single abstract method also qualify. In contrast, an interface like the following one is not a SAM type, since it declares two abstract methods:
interface FooBar {
int foo()
void bar()
}
You can nevertheless coerce a closure into such an interface, and the same mechanism also works with a class, using the as keyword:
class FooBar {
int foo() { 1 }
void bar() { println 'bar' }
}
def map
map = [
i: 10,
hasNext: { map.i > 0 },
next: { map.i-- },
]
def iter = map as Iterator
Of course this is a rather contrived example, but illustrates the concept. You only need to
implement those methods that are actually called, but if a method is called that doesn’t
exist in the map a MissingMethodException or
an UnsupportedOperationException is thrown, depending on the arguments passed
to the call, as in the following example:
interface X {
void f()
void g(int n)
void h(String s, int n)
}
• MissingMethodException if the arguments of the call do not match those from the
interface/class
• UnsupportedOperationException if the arguments of the call match one of the
overloaded methods of the interface/class
enum State {
up,
down
}
then you can assign a string to the enum without having to use an explicit as coercion:
State st = 'up'
assert st == State.up
It is also possible to use a GString as the value:
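def val = 'up'
State dir = "${val}"
assert dir == State.up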
class Polar {
double r
double phi
}
class Cartesian {
double x
double y
}
And that you want to convert from polar coordinates to cartesian coordinates. One way
of doing this is to define the asType method in the Polar class:
import static java.lang.Math.*

class Polar {
double r
double phi
def asType(Class target) {
if (Cartesian==target) {
return new Cartesian(x: r*cos(phi), y: r*sin(phi))
}
}
}
but it is also possible to define asType outside of the Polar class, which can be
practical if you want to define custom coercion strategies for "closed" classes or classes
for which you don’t own the source code, for example using a metaclass:
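A sketch of such a metaclass-based coercion (it assumes the same static import of java.lang.Math as above):
Polar.metaClass.asType = { Class target ->
    if (Cartesian == target) {
        return new Cartesian(x: delegate.r * cos(delegate.phi),
                             y: delegate.r * sin(delegate.phi))
    }
}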
interface Greeter {
void greet()
}
def greeter = { println 'Hello, Groovy!' } as Greeter // Greeter is known
statically
greeter.greet()
But what if you get the class by reflection, for example by calling Class.forName ?
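In that case the as keyword cannot be used, since the target type is only available as a Class instance; you can fall back on the asType method instead (a sketch):
Class clazz = Class.forName('Greeter')
def greeter2 = { println 'Hello, Groovy!' }.asType(clazz)
greeter2.greet()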
1.6.4. Optionality
Optional parentheses
Method calls can omit the parentheses if there is at least one parameter and there is no
ambiguity:
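println 'Hello World'
def maximum = Math.max 5, 10
Parentheses are required for method calls without parameters or ambiguous method calls: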
println()
println(Math.max(5, 10))
Optional semicolons
In Groovy semicolons at the end of the line can be omitted, if the line contains only a
single statement.
assert true;
can be more idiomatically written as:
assert true
Multiple statements in a line require semicolons to separate them:
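boolean a = true; boolean b = false; assert a != b   // an illustrative example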
class Server {
String toString() { "a server" }
}
1.6.5. The Groovy Truth
Groovy decides whether an expression is true or false by applying the rules given below.
Boolean expressions
True if the corresponding Boolean value is true .
assert true
assert !false
Collections and Arrays
Non-empty Collections and arrays are true.
assert [1, 2, 3]
assert ![]
Matchers
True if the Matcher has at least one match.
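assert ('a' =~ /a/)
assert !('a' =~ /b/)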
Iterators and Enumerations
Iterators and Enumerations with further elements are coerced to true.
assert [0].iterator()
assert ![].iterator()
Vector v = [0] as Vector
Enumeration enumeration = v.elements()
assert enumeration
enumeration.nextElement()
assert !enumeration
Maps
Non-empty Maps are evaluated to true.
assert ['one' : 1]
assert ![:]
Strings
Non-empty Strings, GStrings and CharSequences are coerced to true.
assert 'a'
assert !''
def nonEmpty = 'a'
assert "$nonEmpty"
def empty = ''
assert !"$empty"
Numbers
Non-zero numbers are true.
assert 1
assert 3.5
assert !0
Object References
Non-null object references are coerced to true.
Customizing the truth with asBoolean() methods
If you want to customize whether Groovy evaluates your object to true or false , implement the asBoolean() method:
class Color {
String name
boolean asBoolean(){
name == 'green' ? true : false
}
}
Groovy will call this method to coerce your object to a boolean value, e.g.:
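def green = new Color(name: 'green')
def red = new Color(name: 'red')
assert green
assert !red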
1.6.6. Typing
Optional typing
Optional typing is the idea that a program can work even if you don’t put an explicit type
on a variable. Being a dynamic language, Groovy naturally implements that feature, for
example when you declare a variable:
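def aString = 'foo'
assert aString.toUpperCase()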
we can still call the toUpperCase method, because the type of aString is resolved at
runtime
So it doesn’t matter that you use an explicit type here. It is in particular interesting when
you combine this feature with static type checking, because the type checker performs
type inference.
Eventually, the type can be removed altogether from both the return type and the
descriptor. But if you want to remove it from the return type, you then need to add an
explicit modifier for the method, so that the compiler can make a difference between a
method declaration and a method call, like illustrated in this example:
private concat(a,b) {
a+b
}
assert concat('foo','bar') == 'foobar'
assert concat(1,2) == 3
if we want to omit the return type, an explicit modifier has to be set.
Static type checking
By default, Groovy performs minimal type checking at compile time. Since it is primarily a dynamic language, most checks that a static compiler would normally do aren’t possible at compile time. Let’s illustrate why with the following example:
class Person {
String firstName
String lastName
}
def p = new Person(firstName: 'Raymond', lastName: 'Devos')
assert p.formattedName == 'Raymond Devos'
the Person class only defines two properties, firstName and lastName
It is quite common in dynamic languages for code such as the above example not to
throw any error. How can this be? In Java, this would typically fail at compile time.
However, in Groovy, it will not fail at compile time, and if coded correctly, will also not
fail at runtime. In fact, to make this work at runtime, one possibility is to rely on runtime
metaprogramming. So just adding this line after the declaration of the Person class is
enough:
Person.metaClass.getFormattedName = { "$delegate.firstName
$delegate.lastName" }
This means that in general, in Groovy, you can’t make any assumption about the type of
an object beyond its declaration type, and even if you know it, you can’t determine at
compile time what method will be called, or which property will be retrieved. It has a lot
of interest, going from writing DSLs to testing, which is discussed in other sections of
this manual.
However, if your program doesn’t rely on dynamic features and you come from the
static world (in particular, from a Java mindset), not catching such "errors" at compile
time can be surprising. As we have seen in the previous example, the compiler cannot be
sure this is an error. To make it aware that it is, you have to explicitly instruct the
compiler that you are switching to a type checked mode. This can be done by annotating
a class or a method with @groovy.transform.TypeChecked .
When type checking is activated, the compiler performs much more work:
• type inference is activated, meaning that even if you use def on a local variable for
example, the type checker will be able to infer the type of the variable from the
assignments
• method calls are resolved at compile time, meaning that if a method is not declared
on a class, the compiler will throw an error
• in general, all the compile time errors that you are used to finding in a static language
will appear: method not found, property not found, incompatible types for method
calls, number precision errors, …
In this section, we will describe the behavior of the type checker in various situations
and explain the limits of using @TypeChecked on your code. The type checker is activated by placing the @groovy.transform.TypeChecked annotation on a class:
@groovy.transform.TypeChecked
class Calculator {
int sum(int x, int y) { x+y }
}
Or on a method:
class Calculator {
@groovy.transform.TypeChecked
int sum(int x, int y) { x+y }
}
In the first case, all methods, properties, fields, inner classes, … of the annotated class
will be type checked, whereas in the second case, only the method and potential closures
or anonymous inner classes that it contains will be type checked.
Skipping sections
The scope of type checking can be restricted. For example, if a class is type checked, you
can instruct the type checker to skip a method by annotating it
with @TypeChecked(TypeCheckingMode.SKIP) :
import groovy.transform.TypeChecked
import groovy.transform.TypeCheckingMode
@TypeChecked
class GreetingService {
String greeting() {
doGreet()
}
@TypeChecked(TypeCheckingMode.SKIP)
private String doGreet() {
def b = new SentenceBuilder()
b.Hello.my.name.is.John
b
}
}
def s = new GreetingService()
assert s.greeting() == 'Hello my name is John'
the GreetingService class is marked as type checked
Type checking assignments
An object o of type A can be assigned to a variable of type T if and only if:
• T equals A
(A table of examples for each combination of T and A, including cases such as Long l4 = (byte) 4 , appeared here in the original layout.)
• the assignment is a variable declaration and A is a list literal and T has a constructor
whose parameters match the types of the elements in the list literal
• the assignment is a variable declaration and A is a map literal and T has a no-arg
constructor and a property for each of the map keys
For example, instead of writing:
@groovy.transform.TupleConstructor
class Person {
String firstName
String lastName
}
Person classic = new Person('Ada','Lovelace')
You can use a "list constructor":
@groovy.transform.TupleConstructor
class Person {
String firstName
String lastName
}
Person map = [firstName:'Ada', lastName:'Lovelace', age: 24]
The type checker will throw an error No such property: age for class: Person at
compile time
Method resolution
In type checked mode, methods are resolved at compile time. Resolution works by name
and arguments. The return type is irrelevant to method selection. Types of arguments
are matched against the types of the parameters following those rules:
An argument o of type A can be used for a parameter of type T if and only if:
• T equals A
int sum(int x, int y) {
    x+y
}
assert sum(3,4) == 7
• or T is a String and A is a GString
String format(String str) {
    "Result: $str"
}
assert format("${3+4}") == "Result: 7"
• or o is null and T is not a primitive type
String format(int value) {
    "Result: $value"
}
assert format(7) == "Result: 7"
format(null) // fails
• or T is an array and A is an array and the component type of A is assignable to the
component type of T
String format(String[] values) {
    "Result: ${values.join(' ')}"
}
assert format(['a','b'] as String[]) == "Result: a b"
format([1,2] as int[]) // fails
• or T is a superclass of A
String format(AbstractList list) {
    list.join(',')
}
format(new ArrayList()) // passes

String format(LinkedList list) {
    list.join(',')
}
format(new ArrayList()) // fails
• or T is an interface implemented by A
String format(List list) {
    list.join(',')
}
format(new ArrayList()) // passes

String format(RandomAccess list) {
    'foo'
}
format(new LinkedList()) // fails
• or T or A are a primitive type and their boxed types are assignable
int sum(int x, Integer y) {
    x+y
}
assert sum(3, new Integer(4)) == 7
assert sum(new Integer(3), 4) == 7
assert sum(new Integer(3), new Integer(4)) == 7
• or T extends groovy.lang.Closure and A is a SAM-type (single abstract method
type)
interface SAMType {
    int doSomething()
}
int twice(SAMType sam) { 2*sam.doSomething() }
assert twice { 123 } == 246

abstract class AbstractSAM {
    int calc() { 2* value() }
    abstract int value()
}
int eightTimes(AbstractSAM sam) { 4*sam.calc() }
assert eightTimes { 123 } == 984
• or T and A derive from java.lang.Number and conform to the same rules
as assignment of numbers
If a method with the appropriate name and arguments is not found at compile time, an
error is thrown. The difference with "normal" Groovy is illustrated in the following
example:
class MyService {
void doSomething() {
printLine 'Do something'
}
}
printLine is an error, but since we’re in a dynamic mode, the error is not caught at compile
time
The example above shows a class that Groovy will be able to compile. However, if you
try to create an instance of MyService and call the doSomething method, then it will
fail at runtime, because printLine doesn’t exist. Of course, we already showed how
Groovy could make this a perfectly valid call, for example by
catching MethodMissingException or implementing a custom meta-class, but if you
know you’re not in such a case, @TypeChecked comes handy:
@groovy.transform.TypeChecked
class MyService {
void doSomething() {
printLine 'Do something'
}
}
printLine is this time a compile-time error
Just adding @TypeChecked will trigger compile time method resolution. The type
checker will try to find a method printLine accepting a String on
the MyService class, but cannot find one. It will fail compilation with the following
message:
It is important to understand the logic behind the type checker: it is a compile-time check, so by definition, the
type checker is not aware of any kind of runtime metaprogramming that you do. This means that code which is
perfectly valid without @TypeChecked will not compile anymore if you activate type checking. This is in
particular true if you think of duck typing:
class Duck {
void quack() {
println 'Quack!'
}
}
class QuackingBird {
void quack() {
println 'Quack!'
}
}
@groovy.transform.TypeChecked
void accept(quacker) {
quacker.quack()
}
accept(new Duck())
we define a Duck class which defines a quack method
we define another QuackingBird class which also defines a quack method
quacker is loosely typed, so since the method is @TypeChecked , we will obtain a compile-
time error
Type inference
Principles
When code is annotated with @TypeChecked , the compiler performs type inference. It
doesn’t simply rely on static types, but also uses various techniques to infer the types of
variables, return types, literals, … so that the code remains as clean as possible even if
you activate the type checker.
It is worth noting that although the compiler performs type inference on local variables,
it does not perform any kind of type inference on fields, always falling back to
the declared type of a field. To illustrate this, let’s take a look at this example:
class SomeClass {
def someUntypedField
String someTypedField
void someMethod() {
someUntypedField = '123'
someUntypedField = someUntypedField.toUpperCase() // compile-time error
}
void someSafeMethod() {
someTypedField = '123'
someTypedField = someTypedField.toUpperCase()
}
void someMethodUsingLocalVariable() {
def localVariable = '123'
someUntypedField = localVariable.toUpperCase()
}
}
someUntypedField uses def as a declaration type
yet calling toUpperCase fails at compile time because the field is not typed properly
Why such a difference? The reason is thread safety. At compile time, we can’t
make any guarantee about the type of a field. Any thread can access any field at any time
and between the moment a field is assigned a variable of some type in a method and the
time it is used the line after, another thread may have changed the contents of the field.
This is not the case for local variables: we know if they "escape" or not, so we can make
sure that the type of a variable is constant (or not) over time. Note that even if a field is
final, the JVM makes no guarantee about it, so the type checker doesn’t behave
differently if a field is final or not.
This is one of the reasons why we recommend to use typed fields. While using def for local variables is quite fine
thanks to type inference, this is not the case for fields, which also belong to the public API of a class, hence the
type is important.
Groovy provides a syntax for various type literals. There are three native collection
literals in Groovy:
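A minimal sketch of the three literal forms and the kind of types the checker infers for them (variable names are illustrative):

@groovy.transform.TypeChecked
void nativeCollectionLiterals() {
    def list = ['a', 'b', 'c']   // a list literal, inferred as List<String>
    def map = [age: 42]          // a map literal, inferred as LinkedHashMap<String, Integer>
    def range = 1..10            // a range literal, inferred as IntRange
}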
As you can see, with the noticeable exception of the IntRange , the inferred type makes
use of generics types to describe the contents of a collection. In case the collection
contains elements of different types, the type checker still performs type inference of the
components, but uses the notion of least upper bound.
• if A or B is a primitive type and A isn't equal to B , the least upper bound
of A and B is the least upper bound of their wrapper types
• if A and B have only one interface in common and their common superclass
is Object , then the LUB of both is the common interface
The least upper bound represents the minimal type to which both A and B can be
assigned. So for example, if A and B are both String , then the LUB (least upper bound)
of both is also String .
class Top {}
class Bottom1 extends Top {}
class Bottom2 extends Top {}
the LUB of ArrayList and LinkedList is their common super type, AbstractList
the LUB of ArrayList and List is their only common interface, List
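A small sketch of how the least upper bound shows up in inferred collection types, assuming the Top, Bottom1 and Bottom2 classes above:

@groovy.transform.TypeChecked
void lubInference() {
    def list = [new Bottom1(), new Bottom2()]  // component type is the LUB of Bottom1 and Bottom2, i.e. Top
    Top first = list[0]                        // allowed, because every element is at least a Top
}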
In those examples, the LUB is always representable as a normal, JVM supported, type.
But Groovy internally represents the LUB as a type which can be more complex, and that
you wouldn’t be able to use to define a variable for example. To illustrate this, let’s
continue with this example:
interface Foo {}
class Top {}
class Bottom extends Top implements Serializable, Foo {}
class SerializableFooImpl implements Serializable, Foo {}
What is the least upper bound of Bottom and SerializableFooImpl ? They don’t have
a common super class (apart from Object ), but they do share 2 interfaces
( Serializable and Foo ), so their least upper bound is a type which represents the
union of two interfaces ( Serializable and Foo ). This type cannot be defined in the
source code, yet Groovy knows about it.
In the context of collection type inference (and generic type inference in general), this
becomes handy, because the type of the components is inferred as the least upper
bound. We can illustrate why this is important in the following example:
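A sketch of the kind of code being described, assuming two interfaces Greeter and Salute and two classes A and B implementing both (names match the callouts below):

interface Greeter { void greet() }
interface Salute { void salute() }

class A implements Greeter, Salute {
    void greet() { println 'Hello from A' }
    void salute() { println 'Bye from A' }
}
class B implements Greeter, Salute {
    void greet() { println 'Hello from B' }
    void salute() { println 'Bye from B' }
}

@groovy.transform.TypeChecked
void greetAndSalute() {
    def list = [new A(), new B()]  // the inferred component type is the LUB of A and B
    list.each {
        it.greet()                 // allowed: both Greeter and Salute are part of the inferred type
        it.salute()
    }
}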
class A implements both Greeter and Salute but there’s no explicit interface extending
both
same for B
instanceof inference
In normal, non type checked, Groovy, you can write things like:
class Greeter {
String greeting() { 'Hello' }
}
void doSomething(def o) {
if (o instanceof Greeter) {
println o.greeting()
}
}
doSomething(new Greeter())
guard the method call with an instanceof check
The method call works because of dynamic dispatch (the method is selected at runtime).
The equivalent code in Java would require casting o to a Greeter before calling
the greeting method, because methods are selected at compile time:
if (o instanceof Greeter) {
System.out.println(((Greeter)o).greeting());
}
However, in Groovy, even if you add @TypeChecked (and thus activate type checking)
on the doSomething method, the cast is not necessary. The compiler
embeds instanceof inference that makes the cast optional.
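A sketch of the type checked variant, reusing the Greeter class from the previous snippet:

@groovy.transform.TypeChecked
void doSomethingChecked(Object o) {
    if (o instanceof Greeter) {
        println o.greeting()  // no cast needed: o is inferred as Greeter inside the if block
    }
}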
Flow typing
Flow typing is an important concept of Groovy in type checked mode and an extension of
type inference. The idea is that the compiler is capable of inferring the type of variables
in the flow of the code, not just at initialization:
@groovy.transform.TypeChecked
void flowTyping() {
def o = 'foo'
o = o.toUpperCase()
o = 9d
o = Math.sqrt(o)
}
first, o is declared using def and assigned a String
calling Math.sqrt passes compilation because the compiler knows that at this point, o is
a double
So the type checker is aware of the fact that the concrete type of a variable is different
over time. In particular, if you replace the last assignment with:
o = 9d
o = o.toUpperCase()
The type checker will now fail at compile time, because it knows that o is
a double when toUpperCase is called, so it’s a type error.
It is important to understand that it is not the fact of declaring a variable with def that
triggers type inference. Flow typing works for any variable of any type. Declaring a
variable with an explicit type only constrains what you can assign to the variable:
@groovy.transform.TypeChecked
void flowTypingWithExplicitType() {
List list = ['a','b','c']
list = list*.toUpperCase()
list = 'foo'
}
list is declared as an unchecked List and assigned a list literal of `String`s
this line passes compilation because of flow typing: the type checker knows that list is at this
point a List<String>
but you can’t assign a String to a List so this is a type checking error
You can also note that even if the variable is declared without generics information, the
type checker knows what is the component type. Therefore, such code would fail
compilation:
@groovy.transform.TypeChecked
void flowTypingWithExplicitType() {
List list = ['a','b','c']
list.add(1)
}
list is inferred as List<String>
@groovy.transform.TypeChecked
void flowTypingWithExplicitType() {
List<? extends Serializable> list = []
list.addAll(['a','b','c'])
list.add(1)
}
list declared as List<? extends Serializable> and initialized with an empty list
elements added to the list conform to the declaration type of the list
Flow typing has been introduced to reduce the difference in semantics between classic
and static Groovy. In particular, consider the behavior of this code in Java:
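A minimal sketch of the kind of code being discussed (method and variable names are illustrative):

int compute(String s) { s.length() }
String compute(Object o) { 'Nope' }

Object o = 'string'
def result = compute(o)
// dynamic Groovy selects the overload from the runtime type of o (a String) and prints 6;
// Java selects it from the declared type (Object), so the equivalent Java code prints Nope
println result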
In Java, this code will output Nope , because method selection is done at compile time
and based on the declared types. So even if o is a String at runtime, it is still
the Object version which is called, because o has been declared as an Object . To be
short, in Java, declared types are most important, be it variable types, parameter types
or return types.
In type checked Groovy, we want to make sure the type checker selects the same
method at compile time, that the runtime would choose. It is not possible in general,
due to the semantics of the language, but we can make things better with flow typing.
With flow typing, o is inferred as a String when the compute method is called, so the
version which takes a String and returns an int is chosen. This means that we can
infer the return type of the method to be an int , and not a String . This is important
for subsequent calls and type safety.
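Under @TypeChecked, a sketch like the following compiles and infers an int return value, assuming the two compute overloads from the previous sketch:

@groovy.transform.TypeChecked
int computeChecked() {
    def o = 'string'         // o is inferred as String at this point
    def result = compute(o)  // the String overload is selected, so result is inferred as int
    result                   // returning it from an int method is therefore accepted
}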
So in type checked Groovy, flow typing is a very important concept, which also implies
that if @TypeChecked is applied, methods are selected based on the inferred types of the
arguments, not on the declared types. This doesn’t ensure 100% type safety, because the
type checker may select a wrong method, but it ensures the closest semantics to
dynamic Groovy.
A combination of flow typing and least upper bound inference is used to perform
advanced type inference and ensure type safety in multiple situations. In particular,
program control structures are likely to alter the inferred type of a variable:
class Top {
void methodFromTop() {}
}
class Bottom extends Top {
void methodFromBottom() {}
}
def o
if (someCondition) {
o = new Top()
} else {
o = new Bottom()
}
o.methodFromTop()
o.methodFromBottom() // compilation error
if someCondition is true, o is assigned a Top
When the type checker visits an if/else control structure, it checks all variables which
are assigned in if/else branches and computes the least upper bound of all
assignments. This type is the type of the inferred variable after the if/else block, so in
this example, o is assigned a Top in the if branch and a Bottom in the else branch.
The LUB of those is a Top , so after the conditional branches, the compiler infers o as
being a Top . Calling methodFromTop will therefore be allowed, but
not methodFromBottom .
The same reasoning exists with closures and in particular closure shared variables. A
closure shared variable is a variable which is defined outside of a closure, but used
inside a closure, as in this example:
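For instance, a minimal sketch (names are illustrative):

def message = 'Hello'  // defined outside the closure
def printer = {
    println message    // used inside the closure: message is a closure shared variable
}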
Groovy allows developers to use those variables without requiring them to be final. This
means that a closure shared variable can be reassigned inside a closure:
String result
doSomething { String it ->
result = "Result: $it"
}
result = result?.toUpperCase()
The problem is that a closure is an independent block of code that can be executed (or
not) at any time. In particular, doSomething may be asynchronous, for example. This
means that the body of a closure doesn’t belong to the main control flow. For that
reason, the type checker also computes, for each closure shared variable, the LUB of all
assignments of the variable, and will use that LUB as the inferred type outside of the
scope of the closure, like in this example:
class Top {
void methodFromTop() {}
}
class Bottom extends Top {
void methodFromBottom() {}
}
def o = new Top()
Thread.start {
o = new Bottom()
}
o.methodFromTop()
o.methodFromBottom() // compilation error
a closure-shared variable is first assigned a Top
methodFromTop is allowed
The first thing that the type checker is capable of doing is inferring the return type of a
closure. This is simply illustrated in the following example:
@groovy.transform.TypeChecked
int testClosureReturnTypeInference(String arg) {
def cl = { "Arg: $arg" }
def val = cl()
val.length()
}
a closure is defined, and it returns a string (more precisely a GString )
the type checker inferred that the closure would return a string, so calling length() is allowed
As you can see, unlike a method which declares its return type explicitly, there’s no need
to declare the return type of a closure: its type is inferred from the body of the closure.
Closures vs methods
It’s worth noting that return type inference is only applicable to closures. While the type
checker could do the same on a method, it is in practice not desirable: in general,
methods can be overridden and it is not statically possible to make sure that the method
which is called is not an overridden version. So flow typing would actually think that a
method returns something, while in reality, it could return something else, like
illustrated in the following example:
@TypeChecked
class A {
def compute() { 'some string' }
def computeFully() {
compute().toUpperCase()
}
}
@TypeChecked
class B extends A {
def compute() { 123 }
}
class A defines a method compute which effectively returns a String
this will fail compilation because the return type of compute is def (aka Object )
As you can see, if the type checker relied on the inferred return type of a method,
with flow typing, the type checker could determine that it is ok to call toUpperCase . It is
in fact an error, because a subclass can override compute and return a different object.
Here, B#compute returns an int , so someone calling computeFully on an instance
of B would see a runtime error. The compiler prevents this from happening by using the
declared return type of methods instead of the inferred return type.
For consistency, this behavior is the same for every method, even if they are static or
final.
Parameter type inference
In addition to the return type, it is possible for a closure to infer its parameter types
from the context. There are two ways for the compiler to infer the parameter types:
class Person {
String name
int age
}
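// a plausible signature for the inviteIf method used below (assumed; the callout that
// follows states it accepts a Person and a Closure)
void inviteIf(Person p, Closure<Boolean> predicate) {
    if (predicate.call(p)) {
        // send invite
        // ...
    }
}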
@groovy.transform.TypeChecked
void failCompilation() {
Person p = new Person(name: 'Gerard', age: 55)
inviteIf(p) {
it.age >= 18 // No such property: age
}
}
the inviteIf method accepts a Person and a Closure
In this example, the closure body contains it.age . With dynamic, not type checked
code, this would work, because the type of it would be a Person at runtime.
Unfortunately, at compile-time, there’s no way to know what is the type of it , just by
reading the signature of inviteIf .
By explicitly declaring the type of the it variable, you can workaround the problem and
make this code statically checked.
For an API or framework designer, there are two ways to make this more elegant for
users, so that they don’t have to declare an explicit type for the closure parameters. The
first one, and easiest, is to replace the closure with a SAM type:
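A sketch of this first approach, assuming a Predicate SAM interface with an apply method (matching the callout below) and a corresponding inviteIf signature:

interface Predicate<On> {
    boolean apply(On e)
}

void inviteIf(Person p, Predicate<Person> predicate) {
    if (predicate.apply(p)) {
        // send invite
        // ...
    }
}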
inviteIf(p) {
    it.age >= 18
}
declare a SAM interface with an apply method
The original issue that needs to be solved when it comes to closure parameter type
inference, that is to say, statically determining the types of the arguments of a
closure without having to have them explicitly declared, is that the Groovy type system
inherits the Java type system, which is insufficient to describe the types of the
arguments.
import groovy.transform.stc.ClosureParams
import groovy.transform.stc.FirstParam
void inviteIf(Person p, @ClosureParams(FirstParam) Closure<Boolean>
predicate) {
if (predicate.call(p)) {
// send invite
// ...
}
}
inviteIf(p) {
it.age >= 18
}
the closure parameter is annotated with @ClosureParams
A second optional argument is named options. Its semantics depend on the type
hint class. Groovy comes with various bundled type hints, illustrated in the table below:
Table 4. Predefined type hints

withHash('foo',
         (int)System.currentTimeMillis()) {
    int mod = it%2
}

ThirdParam
import groovy.transform.stc.ThirdParam

String format(String prefix, String postfix, String o,
              @ClosureParams(ThirdParam) Closure c) {
    "$prefix${c(o)}$postfix"
}
assert format('foo', 'bar', 'baz') {
    it.toUpperCase()
} == 'fooBAZbar'

SimpleType
import groovy.transform.stc.SimpleType

public void doSomething(@ClosureParams(value=SimpleType,
        options=['java.lang.String','int']) Closure c) {
    c('foo',3)
}
doSomething { str, len ->
    assert str.length() == len
}
This type hint supports a single signature and each of the parameters is specified as a
value of the options array using a fully-qualified type name or a primitive type.
MapEntryOrKeyValue
A dedicated type hint for closures that either work on a Map.Entry single parameter, or
two parameters corresponding to the key and the value.
import groovy.transform.stc.MapEntryOrKeyValue

public <K,V> void doSomething(Map<K,V> map,
        @ClosureParams(MapEntryOrKeyValue) Closure c) {
    // ...
}
doSomething([a: 'A']) { k,v ->
    assert k.toUpperCase() == v.toUpperCase()
}
doSomething([abc: 3]) { e ->
    assert e.key.length() == e.value
}
This type hint requires that the first argument is a Map type, and infers the closure
parameter types from the map actual key/value types.
FromAbstractTypeMethods
import groovy.transform.stc.FromAbstractTypeMethods
FromString
A polymorphic closure, accepting either a String or a String, Integer:
import groovy.transform.stc.FromString

void doSomething(@ClosureParams(value=FromString,
        options=["String","String,Integer"]) Closure cl) {
    // ...
}
doSomething { s -> s.toUpperCase() }
doSomething { s,i -> s.toUpperCase()*i }
A polymorphic closure, accepting either a T or a pair T,T:
import groovy.transform.stc.FromString

public <T> void doSomething(T e,
        @ClosureParams(value=FromString, options=["T","T,T"]) Closure cl) {
    // ...
}
doSomething('foo') { s -> s.toUpperCase() }
doSomething('foo') { s1,s2 -> assert s1.toUpperCase() == s2.toUpperCase() }
Even though you use FirstParam , SecondParam or ThirdParam as a type hint, it doesn't strictly mean that the
argument which will be passed to the closure will be the first (resp. second, third) argument of the method call. It only
means that the type of the parameter of the closure will be the same as the type of the first (resp. second, third)
argument of the method call.
@DelegatesTo
The @DelegatesTo annotation is used by the type checker to infer the type of the
delegate. It allows the API designer to instruct the compiler what is the type of the
delegate and the delegation strategy. The @DelegatesTo annotation is discussed in
a specific section.
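As a quick illustration, a minimal sketch of what an API designer can do with it (the EmailSpec class and the email method are illustrative, not part of the specification):

class EmailSpec {
    void from(String from) { /* ... */ }
    void subject(String subject) { /* ... */ }
}

def email(@DelegatesTo(strategy = Closure.DELEGATE_ONLY, value = EmailSpec) Closure cl) {
    def spec = new EmailSpec()
    def code = cl.rehydrate(spec, this, this)
    code.resolveStrategy = Closure.DELEGATE_ONLY
    code()
}

// inside the closure, the type checker resolves from and subject against EmailSpec
email {
    from 'someone@example.com'
    subject 'Hello'
}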
Static compilation
Dynamic vs static
In the type checking section, we have seen that Groovy provides optional type checking
thanks to the @TypeChecked annotation. The type checker runs at compile time and
performs a static analysis of dynamic code. The program will behave exactly the same
whether type checking has been enabled or not. This means that
the @TypeChecked annotation is neutral with regards to the semantics of a program.
Even though it may be necessary to add type information in the sources so that the
program is considered type safe, in the end, the semantics of the program are the same.
While this may sound fine, there is actually one issue with this: type checking of dynamic
code, done at compile time, is by definition only correct if no runtime specific behavior
occurs. For example, the following program passes type checking:
class Computer {
int compute(String str) {
str.length()
}
String compute(int x) {
String.valueOf(x)
}
}
@groovy.transform.TypeChecked
void test() {
def computer = new Computer()
computer.with {
assert compute(compute('foobar')) =='6'
}
}
There are two compute methods. One accepts a String and returns an int , the other
accepts an int and returns a String . If you compile this, it is considered type safe: the
inner compute('foobar') call will return an int , and calling compute on
this int will in turn return a String .
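The failure being referred to can be reproduced with a couple of extra lines (a sketch, assuming the Computer class and the @TypeChecked test method above): runtime metaprogramming changes the behaviour after type checking has already accepted the code.

// monkey patch the compute(String) method after the code has been type checked
Computer.metaClass.compute = { String str -> new Date() }
test()  // fails at runtime: compute('foobar') no longer returns an int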
Let’s take the example which failed, but this time let’s replace
the @TypeChecked annotation with @CompileStatic :
class Computer {
int compute(String str) {
str.length()
}
String compute(int x) {
String.valueOf(x)
}
}
@groovy.transform.CompileStatic
void test() {
def computer = new Computer()
computer.with {
assert compute(compute('foobar')) =='6'
}
}
Computer.metaClass.compute = { String str -> new Date() }
test()
This is the only difference. If we execute this program, this time, there is no runtime
error. The test method became immune to monkey patching, because
the compute methods which are called in its body are linked at compile time, so even if
the metaclass of Computer changes, the program still behaves as expected by the type
checker.
Key benefits
There are several benefits of using @CompileStatic on your code:
• type safety
• immunity to monkey patching
• performance improvements
The performance improvements depend on the kind of program you are executing. If it
is I/O bound, the difference between statically compiled code and dynamic code is
barely noticeable. On highly CPU intensive code, since the bytecode which is generated is
very close, if not equal, to the one that Java would produce for an equivalent program,
the performance is greatly improved.
Using the invokedynamic version of Groovy, which is accessible to people using JDK 7 and above, the
dynamic code should be very close to the performance of statically compiled code. Sometimes, it can
There is only one way to determine which version you should choose: measuring. The reason is that d
program and the JVM that you use, the performance can be significantly different. In particular,
the invokedynamic version of Groovy is very sensitive to the JVM version in use.
Type checking extensions
Groovy is a platform of choice when it comes to implementing internal DSLs. The flexible
syntax, combined with runtime and compile-time metaprogramming capabilities make
Groovy an interesting choice because it allows the programmer to focus on the DSL
rather than on tooling or implementation. Since Groovy DSLs are Groovy code, it’s easy
to have IDE support without having to write a dedicated plugin for example.
In a lot of cases, DSL engines are written in Groovy (or Java) then user code is executed
as scripts, meaning that you have some kind of wrapper on top of user logic. The
wrapper may consist, for example, of a GroovyShell or GroovyScriptEngine that
performs some tasks transparently before running the script (adding imports, applying
AST transforms, extending a base script,…). Often, user written scripts come to
production without testing because the DSL logic comes to a point where any user may
write code using the DSL syntax. In the end, a user may just ignore that what he writes is
actually code. This adds some challenges for the DSL implementer, such as securing
execution of user code or, in this case, early reporting of errors.
For example, imagine a DSL which goal is to drive a rover on Mars remotely. Sending a
message to the rover takes around 15 minutes. If the rover executes the script and fails
with an error (say a typo), you have two problems:
• first, feedback comes only after 30 minutes (the time needed for the rover to get the
script and the time needed to receive the error)
• second, some portion of the script has been executed and you may have to change
the fixed script significantly (implying that you need to know the current state of the
rover…)
Type checking extensions is a mechanism that will allow the developer of a DSL engine
to make those scripts safer by applying the same kind of checks that static type checking
allows on regular groovy classes.
The principle, here, is to fail early, that is to say fail compilation of scripts as soon as
possible, and if possible provide feedback to the user (including nice error messages).
In short, the idea behind type checking extensions is to make the compiler aware of all
the runtime metaprogramming tricks that the DSL uses, so that scripts can benefit from the
same level of compile-time checks as a verbose statically compiled code would have. We
will see that you can go even further by performing checks that a normal type checker
wouldn’t do, delivering powerful compile-time checks for your users.
@TypeChecked(extensions='/path/to/myextension.groovy')
void foo() { ...}
In that case, the foo method would be type checked with the rules of the normal type
checker completed by those found in the myextension.groovy script. Note that while
internally the type checker supports multiple mechanisms to implement type checking
extensions (including plain old java code), the recommended way is to use those type
checking extension scripts.
Imagine that a user script contains the following statement:
robot.move 100
If you have a class defined as such:
class Robot {
Robot move(int qt) { this }
}
The script can be type checked before being executed using the following script:
so that scripts compiled using the shell are compiled with @TypeChecked without the user
having to add it explicitly
config.addCompilationCustomizers(
new ASTTransformationCustomizer(
TypeChecked,
extensions:['robotextension.groovy'])
)
Then add the following to your classpath:
robotextension.groovy
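A hedged sketch of what this extension script could contain, assuming the unresolvedVariable event and the Robot class above:

// tell the type checker that the unresolved 'robot' variable is of type Robot
unresolvedVariable { var ->
    if ('robot' == var.name) {
        storeType(var, classNodeFor(Robot))
        handled = true
    }
}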
The type checking API is a low level API, dealing with the Abstract Syntax Tree. You will
have to know your AST well to develop extensions, even if the DSL makes it much easier
than just dealing with AST code from plain Java or Groovy.
Events
The type checker sends the following events, to which an extension script can react:
Event setup
Called before anything else; can be used to perform setup of your extension. Arguments: none.
setup {
    // this is called before anything else
}

Event finish
Called after completion of all type checking; can be used to perform additional checks after
the type checker has finished its job.
finish {
    // this is called after completion
    // of all type checking
}

Event unresolvedAttribute
Called when the type checker cannot find an attribute on the receiver. Allows the developer
to handle "dynamic" properties, for example:
unresolvedAttribute { pexp ->
    if (getType(pexp.objectExpression)==classNodeFor(String)) {
        storeType(pexp,classNodeFor(int))
        handled = true
    }
}

Event beforeMethodCall
Called before the type checker starts type checking a method call.

Event afterMethodCall
Called once the type checker has finished type checking a method call.

Event onMethodSelection
Called by the type checker when it finds a method appropriate for a method call.

Event methodNotFound
Called when the type checker cannot find an appropriate method for a method call.

Event beforeVisitMethod
Called by the type checker before type checking a method body.

Event afterVisitMethod
Called by the type checker after type checking a method body.

Event beforeVisitClass
Called by the type checker before visiting a type checked class.

Event afterVisitClass
Called by the type checker after having finished the visit of a type checked class.

Event incompatibleAssignment
Called when the type checker thinks that an assignment is incorrect, meaning that the
right hand side of an assignment is incompatible with the left hand side.

Event ambiguousMethods
Called when the type checker cannot choose between several candidate methods.
Of course, an extension script may consist of several blocks, and you can have multiple
blocks responding to the same event. This makes the DSL look nicer and easier to write.
However, reacting to events is far from sufficient. If you know you can react to events,
you also need to deal with the errors, which implies several helper methods that will
make things easier.
Handling class nodes is something that needs particular attention when you work with a
type checking extension. Compilation works with an abstract syntax tree (AST) and the
tree may not be complete when you are type checking a class. This also means that when
you refer to types, you must not use class literals such as String or HashSet , but
class nodes representing those types. This requires a certain level of abstraction and
understanding how Groovy deals with class nodes. To make things easier, Groovy
supplies several helper methods to deal with class nodes. For example, if you want to say
"the type for String", you can write:
classNodeFor(String)
The second problem that you might encounter is referencing a type which is not yet
compiled. This may happen more often than you think. For example, when you compile a
set of files together. In that case, if you want to say "that variable is of type Foo"
but Foo is not yet compiled, you can still refer to the Foo class node
using lookupClassNodeFor :
Say that you know that variable foo is of type Foo and you want to tell the type checker
about it. Then you can use the storeType method, which takes two arguments: the first
one is the node for which you want to store the type and the second one is the type of
the node. If you look at the implementation of storeType , you would see that it
delegates to the type checker equivalent method, which itself does a lot of work to store
node metadata. You would also see that storing the type is not limited to variables: you
can set the type of any expression.
Likewise, getting the type of an AST node is just a matter of calling getType on that
node. This would in general be what you want, but there’s something that you must
understand:
• getType returns the inferred type of an expression. This means that it will not
return, for a variable declared of type Object the class node for Object , but the
inferred type of this variable at this point of the code (flow typing)
• if you want to access the origin type of a variable (or field/parameter), then you
must call the appropriate method on the AST node
Throwing an error
To throw a type checking error, you only have to call the addStaticTypeError method
which takes two arguments: a message (a String which will be displayed to the end user)
and the AST node responsible for the error.
It is often required to know the type of an AST node. For readability, the DSL provides a
special isXXXExpression method that will delegate to x instanceof XXXExpression .
For example, instead of writing node instanceof BinaryExpression , you can just write:
if (isBinaryExpression(node)) {
...
}
Virtual methods
When you perform type checking of dynamic code, you may often face the case when
you know that a method call is valid but there is no "real" method behind it. As an
example, take the Grails dynamic finders. You can have a method call consisting of a
method named findByName(…). As there’s no findByName method defined in the bean,
the type checker would complain. Yet, you would know that this method wouldn’t fail at
runtime, and you can even tell what is the return type of this method. For this case, the
DSL supports two special constructs that consist of phantom methods. This means that
you will return a method node that doesn’t really exist but is defined in the context of
type checking. Three methods exist: newMethod(String name, Class returnType) ,
newMethod(String name, ClassNode returnType) and
newMethod(String name, Callable<ClassNode> returnType) .
All three variants do the same: they create a new method node which name is the
supplied name and define the return type of this method. Moreover, the type checker
would add those methods in the generatedMethods list (see isGenerated below). The
reason why we only set a name and a return type is that it is only what you need in 90%
of the cases. For example, in the findByName example above, the only thing you need to
know is that findByName wouldn’t fail at runtime, and that it returns a domain class.
The Callable version of return type is interesting because it defers the computation of
the return type when the type checker actually needs it. This is interesting because in
some circumstances, you may not know the actual return type when the type checker
demands it, so you can use a closure that will be called each time getReturnType is
called by the type checker on this method node. If you combine this with deferred
checks, you can achieve pretty complex type checking including handling of forward
references.
newMethod(name) {
    // each time getReturnType on this method node will be called, this closure will be called!
    println 'Type checker called me!'
    lookupClassNodeFor(Foo) // return type
}
Should you need more than the name and return type, you can always create a
new MethodNode by yourself.
Scoping
Scoping is very important in DSL type checking and is one of the reasons why we
couldn’t use a pointcut based approach to DSL type checking. Basically, you must be able
to define very precisely when your extension applies and when it does not. Moreover,
you must be able to handle situations that a regular type checker would not be able to
handle, such as forward references:
point a(1,1)
line a,b // b is referenced afterwards!
point b(5,2)
Say for example that you want to handle a builder:
builder.foo {
bar
baz(bar)
}
Your extension, then, should only be active once you’ve entered the foo method, and
inactive outside of this scope. But you could have complex situations like multiple
builders in the same file or embedded builders (builders in builders). While you should
not try to fix all this from start (you must accept limitations to type checking), the type
checker does offer a nice mechanism to handle this: a scoping stack, using
the newScope and scopeExit methods.
Each new scope consists of:
• a parent scope
• a map of custom data
If you want to look at the implementation, it's simply
a LinkedHashMap (org.codehaus.groovy.transform.stc.GroovyTypeCheckingExtensionSupport.TypeCheckingScope),
but it's quite powerful. For example, you can use such a scope to store a list of closures to
be executed when you exit the scope. This is how you would handle forward references:
newScope {
secondPassChecks = []
}
At anytime in the DSL, you can access the current scope using getCurrentScope() or
more simply currentScope :
//...
currentScope.secondPassChecks << { println 'executed later' }
// ...
The general schema would then be:
• determine a pointcut where you push a new scope on stack and initialize custom
variables within this scope
• using the various events, you can use the information stored in your custom scope to
perform checks, defer checks,…
• determine a pointcut where you exit the scope, call scopeExit and eventually
perform additional checks
Other useful methods
Precompiled type checking extensions
Instead of shipping the extension as a source script, you can also precompile it. There are
two options:
• write the extension in Groovy, compile it, then use a reference to the extension class
instead of the source
• write the extension in Java, compile it, then use a reference to the extension class
Writing a type checking extension in Groovy is the easiest path. Basically, the idea is that
the type checking extension script becomes the body of the main method of a type
checking extension class, as illustrated here:
import org.codehaus.groovy.transform.stc.GroovyTypeCheckingExtensionSupport
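// a hedged sketch of such a precompiled extension: the body of the former extension script
// becomes the run() method of a class extending GroovyTypeCheckingExtensionSupport.TypeCheckingDSL
// (the class name matches the typing.PrecompiledExtension reference used below)
class PrecompiledExtension extends GroovyTypeCheckingExtensionSupport.TypeCheckingDSL {
    @Override
    Object run() {
        unresolvedVariable { var ->
            if ('robot' == var.name) {
                storeType(var, classNodeFor(Robot))
                handled = true
            }
        }
    }
}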
and you can use the very same events as an extension written in source form
config.addCompilationCustomizers(
new ASTTransformationCustomizer(
TypeChecked,
extensions:['typing.PrecompiledExtension'])
)
The difference is that instead of using a path in classpath, you just specify the fully
qualified class name of the precompiled extension.
In case you really want to write an extension in Java, then you will not benefit from the
type checking extension DSL. The extension above can be rewritten in Java this way:
import org.codehaus.groovy.ast.ClassHelper;
import org.codehaus.groovy.ast.expr.VariableExpression;
import org.codehaus.groovy.transform.stc.AbstractTypeCheckingExtension;
import org.codehaus.groovy.transform.stc.StaticTypeCheckingVisitor;

public class PrecompiledJavaExtension extends AbstractTypeCheckingExtension {

    public PrecompiledJavaExtension(final StaticTypeCheckingVisitor typeCheckingVisitor) {
        super(typeCheckingVisitor);
    }

    @Override
    public boolean handleUnresolvedVariableExpression(final VariableExpression vexp) {
        if ("robot".equals(vexp.getName())) {
            storeType(vexp, ClassHelper.make(Robot.class));
            setHandled(true);
            return true;
        }
        return false;
    }
}
extend the AbstractTypeCheckingExtension class
One possible solution for this particular example is to instruct the compiler to use mixed
mode compilation. The more advanced one is to use AST transformations during type
checking but it is far more complex.
Type checking extensions allow you to help the type checker where it fails, but they also
allow you to fail where it doesn't. In that context, it makes sense to support extensions
for @CompileStatic too. Imagine an extension that is capable of type checking SQL
queries. In that case, the extension would be valid in both dynamic and static context,
because without the extension, the code would still pass.
Mixed mode compilation
In the previous section, we highlighted the fact that you can activate type checking
extensions with @CompileStatic . In that context, the type checker would not complain
anymore about some unresolved variables or unknown method calls, but it still
wouldn't know how to compile them statically.
Mixed mode compilation offers a third way, which is to instruct the compiler that
whenever an unresolved variable or method call is found, then it should fall back to a
dynamic mode. This is possible thanks to type checking extensions and a
special makeDynamic call.
robot.move 100
And let’s try to activate our type checking extension using @CompileStatic instead
of @TypeChecked :
The script will run fine because the static compiler is told about the type of
the robot variable, so it is capable of making a direct call to move . But before that, how
did the compiler know how to get the robot variable? In fact by default, in a type
checking extension, setting handled=true on an unresolved variable will automatically
trigger a dynamic resolution, so in this case you don’t have anything special to make the
compiler use a mixed mode. However, let’s slightly update our example, starting from
the robot script:
move 100
Here you can notice that there is no reference to robot anymore. Our extension will not
help then because we will not be able to instruct the compiler that move is done on
a Robot instance. This example of code can be executed in a totally dynamic way thanks
to the help of a groovy.util.DelegatingScript:
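A sketch of such a setup (the script file name is illustrative):

import org.codehaus.groovy.control.CompilerConfiguration

def config = new CompilerConfiguration()
config.scriptBaseClass = 'groovy.util.DelegatingScript'  // compile the script as a DelegatingScript
def shell = new GroovyShell(config)
def script = shell.parse(new File('robotScript.groovy')) // parsing returns a DelegatingScript instance
script.setDelegate(new Robot())                          // use a Robot as the delegate of the script
script.run()                                             // move is executed on the delegate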
the script source needs to be parsed and will return an instance of DelegatingScript
we can then call setDelegate to use a Robot as the delegate of the script
then execute the script. move will be directly executed on the delegate
If we want this to pass with @CompileStatic , we have to use a type checking extension,
so let’s update our configuration:
config.addCompilationCustomizers(
new ASTTransformationCustomizer(
CompileStatic,
extensions:['robotextension2.groovy'])
)
apply @CompileStatic transparently
use an alternate type checking extension meant to recognize the call to move
In the previous section we have learnt how to deal with unrecognized method
calls, so we are able to write this extension:
robotextension2.groovy
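A hedged sketch of what this second extension could look like, using the methodNotFound event and newMethod:

methodNotFound { receiver, name, argList, argTypes, call ->
    if (isMethodCallExpression(call)
            && call.implicitThis   // an unqualified call such as "move 100"
            && 'move' == name) {
        handled = true
        newMethod('move', classNodeFor(Robot))  // declare a phantom move method returning a Robot
    }
}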
If you try to execute this code, then you could be surprised that it actually fails at
runtime:
java.lang.NoSuchMethodError: java.lang.Object.move()Ltyping/Robot;
The reason is very simple: while the type checking extension is sufficient
for @TypeChecked , which does not involve static compilation, it is not enough
for @CompileStatic which requires additional information. In this case, you told the
compiler that the method existed, but you didn’t explain to it what method it is in
reality, and what is the receiver of the message (the delegate).
Fixing this is very easy and just implies replacing the newMethod call with something
else:
robotextension3.groovy
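A hedged sketch of the updated extension, using makeDynamic instead of newMethod:

methodNotFound { receiver, name, argList, argTypes, call ->
    if (isMethodCallExpression(call)
            && call.implicitThis
            && 'move' == name) {
        // mark the call as dynamic and tell the compiler that the dynamic call returns a Robot
        makeDynamic(call, classNodeFor(Robot))
    }
}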
So when the compiler has to generate bytecode for the call to move , since it is now
marked as a dynamic call, it will fall back to the dynamic compiler and let it handle the
call. And since the extension tells us that the return type of the dynamic call is a Robot ,
subsequent calls will be done statically!
Some would wonder why the static compiler doesn’t do this by default without an
extension. It is a design decision:
• if the code is statically compiled, we normally want type safety and best performance
• so if unrecognized variables/method calls are made dynamic, you lose type safety,
but also all support for typos at compile time!
In short, if you want to have mixed mode compilation, it has to be explicit, through a
type checking extension, so that the compiler, and the designer of the DSL, are totally
aware of what they are doing.
makeDynamic can be applied to several kinds of AST nodes, including:
• a method call (as in the example above)
• a variable ( VariableExpression )
If you decide to go further and transform the AST directly from within a type checking
extension, keep the following in mind:
• First of all, you would explicitly break the contract of type checking, which is to
annotate, and only annotate the AST. Type checking should not modify the AST tree
because you wouldn't be able to guarantee anymore that code compiled with
the @TypeChecked annotation behaves the same as it does without the annotation.
• If your extension is meant to work with @CompileStatic, then you can modify the
AST because this is indeed what @CompileStatic will eventually do. Static
compilation doesn't guarantee the same semantics as dynamic Groovy so there is
effectively a difference between code compiled with @CompileStatic and code
compiled with @TypeChecked. It’s up to you to choose whatever strategy you want to
update the AST, but probably using an AST transformation that runs before type
checking is easier.
• if you cannot rely on a transformation that kicks in before the type checker, then you
must be very careful
The type checking phase is the last phase running in the compiler before bytecode generation. All other AST
transformations run before that and the compiler does a very good job at "fixing" incorrect AST generated before the
type checking phase. As soon as you perform a transformation during type checking, for example directly in a type
checking extension, then you have to do all this work of generating a 100% compiler compliant abstract syntax tree by
yourself, which can easily become complex. That's why we do not recommend to go that way if you are beginning with
type checking extensions and AST transformations.
Examples
Examples of real life type checking extensions are easy to find. You can download the
source code for Groovy and take a look at the TypeCheckingExtensionsTest class which
is linked to various extension scripts.
An example of a complex type checking extension can be found in the Markup Template
Engine source code: this template engine relies on a type checking extension and AST
transformations to transform templates into fully statically compiled code. Sources for
this can be found here.
2. Tools
2.1. Running Groovy from the commandline
2.1.1. groovy, the Groovy command
groovy invokes the Groovy command line processor. It allows you to run inline Groovy
expressions, and scripts, tests or application within groovy files. It plays a similar role
to java in the Java world but handles inline scripts and rather than invoking class files,
it is normally called with scripts and will automatically call the Groovy compiler as
needed.
The easiest way to run a Groovy script, test or application is to run the groovy
command at your shell prompt, passing it the name of the script to run.
2.2. Compiling Groovy
2.2.1. groovyc, the Groovy compiler
groovyc is the Groovy compiler command line tool: it compiles Groovy sources into
bytecode. For example:
groovyc MyClass.groovy
This will produce a MyClass.class file (as well as other .class files depending on the
contents of the source). groovyc supports a number of command line switches:
--temp            Temporary directory for the compiler
-Jproperty=value  Properties to be passed to javac if joint compilation is enabled
                  (example: groovyc -j -Jtarget=1.6 -Jsource=1.6 A.groovy B.java)
2.2.2. The groovyc Ant task
Description
Compiles Groovy source files and, if the joint compilation option is used, Java source files.
Required taskdef
Assuming all the groovy jars you need are in my.classpath (this will be groovy-
VERSION.jar , groovy-ant-VERSION.jar plus any modules and transitive
dependencies you might be using) you will need to declare this task at some point in
the build.xml prior to the groovyc task being invoked.
<taskdef name="groovyc"
classname="org.codehaus.groovy.ant.Groovyc"
classpathref="my.classpath"/>
<groovyc> Attributes
Joint Compilation
Joint compilation is enabled by using an embedded javac element, as shown in the
following example:
More details about joint compilation can be found in the joint compilation section.
2.2.3. Gant
Gant is a tool for scripting Ant tasks using Groovy instead of XML to specify the logic. As
such, it has exactly the same features as the Groovyc Ant task.
2.2.4. Gradle
Gradle is a build tool that allows you to leverage the flexibility of Ant, while keeping the
simplicity of convention over configuration that tools like Maven offer. Builds are
specified using a Groovy DSL, which offers great flexibility and succinctness.
2.2.5. Maven integration
A third approach is to use Maven's Ant plugin to compile a groovy project. Note that the
Ant plugin is bound to the compile and test-compile phases of the build in the example
below. It will be invoked during these phases and the contained tasks will be carried out
which runs the Groovy compiler over the source and test directories. The resulting Java
classes will coexist with and be treated like any standard Java classes compiled from
Java source and will appear no different to the JRE, or the JUnit runtime.
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.mycomp.MyGroovy</groupId>
<artifactId>MyGroovy</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>Maven Example building a Groovy project</name>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.codehaus.groovy</groupId>
<artifactId>groovy-all</artifactId>
<version>2.5.0</version>
<type>pom</type> <!-- required JUST since Groovy 2.5.0 -->
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<artifactId>maven-antrun-plugin</artifactId>
<executions>
<execution>
<id>compile</id>
<phase>compile</phase>
<configuration>
<tasks>
<mkdir dir="${basedir}/src/main/groovy"/>
<taskdef name="groovyc"
classname="org.codehaus.groovy.ant.Groovyc">
<classpath
refid="maven.compile.classpath"/>
</taskdef>
<mkdir
dir="${project.build.outputDirectory}"/>
<groovyc
destdir="${project.build.outputDirectory}"
srcdir="${basedir}/src/main/groovy/"
listfiles="true">
<classpath
refid="maven.compile.classpath"/>
</groovyc>
</tasks>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
<execution>
<id>test-compile</id>
<phase>test-compile</phase>
<configuration>
<tasks>
<mkdir dir="${basedir}/src/test/groovy"/>
<taskdef name="groovyc"
classname="org.codehaus.groovy.ant.Groovyc">
<classpath
refid="maven.test.classpath"/>
</taskdef>
<mkdir
dir="${project.build.testOutputDirectory}"/>
<groovyc
destdir="${project.build.testOutputDirectory}"
srcdir="${basedir}/src/test/groovy/"
listfiles="true">
<classpath
refid="maven.test.classpath"/>
</groovyc>
</tasks>
</configuration>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
This assumes you have a Maven project setup with groovy subfolders as peers to the
java src and test subfolders. You can use the java / jar archetype to set this up then
rename the java folders to groovy or keep the java folders and just create groovy peer
folders. There also exists a groovy plugin, which has not been tested or used in
production. After defining the build section as in the above example, you can invoke the
typical Maven build phases normally. For example, mvn test will execute the test
phase, compiling Groovy source and Groovy test source and finally executing the unit
tests. If you run mvn jar it will execute the jar phase bundling up all of your compiled
production classes into a jar after all of the unit tests pass. For more detail on Maven
build phases consult the Maven2 documentation.
Important:
You should be aware that GMaven is not supported anymore and can have difficulties
with joint compilation.GMavenPlus can be a good replacement, but if you are having
problems with joint compilation, you might consider the Groovy Eclipse maven plugin.
GMavenPlus
GMavenPlus is a rewrite of GMaven and is in active development. It supports most of the
features of GMaven (a couple notable exceptions being mojo Javadoc tags and support
for older Groovy versions). Its joint compilation uses stubs (which means it has the same
potential issues as GMaven and Gradle). The main advantages over its predecessor are
that it supports recent Groovy versions, InvokeDynamic, Groovy on Android, GroovyDoc,
and configuration scripts.
GMaven 2
Unlike the name might seem to suggest, GMaven 2 is not aimed at replacing GMaven. In
fact, it removes the non-scripting features of the GMaven plugin. It has not yet had any
release and appears to be inactive currently.
Joint compilation can be enabled using the -j flag with the command-line compiler, or
using a nested tag and all the attributes and further nested tags as required for the
Ant task.
It is important to know that if you don’t enable joint compilation and try to compile Java
source files with the Groovy compiler, the Java source files will be compiled as if they
were Groovy sources. In some situations, this might work since most of the Java syntax
is compatible with Groovy, but semantics would be different.
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:2.1.2'
classpath 'org.codehaus.groovy:groovy-android-gradle-plugin:1.0.0'
}
}
dependencies {
compile 'org.codehaus.groovy:groovy:2.4.7:grooid'
}
Note that if a Groovy jar does not provide a grooid classifier alternative, then it means
that the jar is directly compatible with Android. In that case, you can add the
dependency directly like this:
dependencies {
    compile 'org.codehaus.groovy:groovy:2.4.7:grooid'   // requires the grooid classifier
    compile ('org.codehaus.groovy:groovy-json:2.4.7') { // no grooid version available
        transitive = false                              // so do not depend on non-grooid version
    }
}
Note that the transitive=false parameter for groovy-json will let Gradle download
the JSON support jar without adding a dependency onto the normal jar of Groovy.
Please make sure to go to the plugin homepage in order to find the latest documentation
and version.
2.3. Groovysh, the Groovy shell
Features
• No need for go command to execute buffer.
• Rich cross-platform edit-line editing, history and completion thanks to JLine2.
• ANSI colors (prompt, exception traces, etc).
• Simple, yet robust, command system with online help, user alias support and more.
• User profile support
Evaluating Expressions
Simple Expressions
println "Hello"
Evaluation Result
When a complete expression is found, it is compiled and evaluated. The result of the
evaluation is stored into the _ variable.
Multi-line Expressions
Multi-line/complex expressions (like closure or class definitions) may be defined over
several lines. When the shell detects that it has a complete expression it will compile and
evaluate it.
Define a Class
class Foo {
def bar() {
println "baz"
}
}
Use the Class
foo = new Foo()
foo.bar()
Variables
Shell variables are all untyped (i.e. no def or other type information).
foo = "bar"
But, this will evaluate a local variable and will not be saved to the shell's environment:
def foo = "bar"
Functions
Functions can be defined in the shell, and will be saved for later use. For example, after
defining:
def hello(name) {
    println "Hello $name"
}
the function can be called as expected:
hello("Jason")
Internally the shell creates a closure to encapsulate the function and then binds the
closure to a variable. So variables and functions share the same namespace.
Commands
The shell has a number of different commands, which provide rich access to the shell’s
environment.
Commands all have a name and a shortcut (which is something like \h ). Commands may
also have some predefined system aliases. Users may also create their own aliases.
Recognized Commands
help
Display the list of commands (and aliases) or the help text for specific command.
groovy:000> :help
Available commands:
:help (:h ) Display this help message
? (:? ) Alias to: :help
:exit (:x ) Exit the shell
:quit (:q ) Alias to: :exit
import (:i ) Import a class into the namespace
:display (:d ) Display the current buffer
:clear (:c ) Clear the buffer and reset the prompt counter
:show (:S ) Show variables, classes or imports
:inspect (:n ) Inspect a variable or the last result with the GUI
object browser
:purge (:p ) Purge variables, classes, imports or preferences
:edit (:e ) Edit the current buffer
:load (:l ) Load a file or URL into the buffer
. (:. ) Alias to: :load
:save (:s ) Save the current buffer to a file
:record (:r ) Record the current session to a file
:history (:H ) Display, manage and recall edit-line history
:alias (:a ) Create an alias
:set (:= ) Set (or list) preferences
:grab (:g ) Add a dependency to the shell environment
:register (:rc) Register a new command with the shell
:doc (:D ) Open a browser window displaying the doc for the
argument
While in the interactive shell, you can ask for help for any command to get more details
about its syntax or function. Here is an example of what happens when you ask for help
for the help command:
exit
Exit the shell.
This is the only way to exit the shell. Well, you can still CTRL-C , but the shell will
complain about an abnormal shutdown of the JVM.
import
Add a custom import which will be included for all shell evaluations.
This command can be given at any time to add new imports.
grab
Grab a dependency (Maven, Ivy, etc.) from Internet sources or cache, and add it to the
Groovy Shell environment.
display
This only displays the buffer of an incomplete expression. Once the expression is
complete, the buffer is reset. The prompt will update to show the size of the current
buffer as well.
clear
Clears the current buffer, resetting the prompt counter to 000. Can be used to recover
from compilation errors.
show
show variables
show imports
show preferences
show all
inspect
Opens the GUI object browser to inspect a variable or the result of the last evaluation.
purge
purge variables
purge classes
purge imports
purge preferences
purge all
edit
Currently only works on UNIX systems which have the EDITOR environment variable
set, or have configured the editor preference.
load
save
record
record start
record stop
record status
history
history show
history recall
history flush
history clear
alias
Create an alias.
doc
Opens a browser window displaying the documentation for the provided class. For example:
doc java.util.List
Preferences
Some of aspects of groovysh behaviors can be customized by setting preferences.
Preferences are set using the set command or the := shortcut.
Recognized Preferences
interpreterMode
Allows the use of typed variables (i.e. def or other type information):
groovy:000> def x = 3
===> 3
groovy:000> x
===> 3
It’s especially useful for copy&pasting code from tutorials etc. into the running session.
verbosity
Set the shell's verbosity level. Expected to be one of:
• DEBUG
• VERBOSE
• INFO
• QUIET
Default is INFO .
If this preference is set to an invalid value, then the previous setting will be used, or if
there is none, then the preference is removed and the default is used.
colors
Default is true .
show-last-result
Default is true .
sanitize-stack-trace
Default is true .
editor
To use TextEdit, the default text editor on Mac OS X, configure:
set editor /Applications/TextEdit.app/Contents/MacOS/TextEdit
Setting a Preference
Listing Preferences
To list the current set preferences (and their values):
$HOME/.groovy/groovysh.rc
This script, if it exists, is loaded when the shell enters interactive mode.
State
$HOME/.groovy/groovysh.history
Custom commands
The register command allows you to register custom commands in the shell. For
example, writing the following will register the Stats command:
import org.codehaus.groovy.tools.shell.CommandSupport
import org.codehaus.groovy.tools.shell.Groovysh
class Stats extends CommandSupport {
    protected Stats(final Groovysh shell) {
        super(shell, 'stats', 'T')
    }

    public Object execute(List args) {
        // the command body: print some runtime statistics, matching the sample output below
        println "Free memory: ${Runtime.runtime.freeMemory()}"
    }
}
Then the command can be called using:
groovy:000> :stats
stats
Free memory: 139474880
groovy:000>
Note that the command class must be found on classpath: you cannot define a new
command from within the shell.
Troubleshooting
Please report any problems you run into. Please be sure to mark the JIRA issue with
the Groovysh component.
Platform Problems
Problems loading the JLine DLL
On Windows, JLine2 (which is used for the fancy shell input/history/completion fluff),
uses a tiny DLL file to trick the evil Windows faux-shell ( CMD.EXE or COMMAND.COM )
into providing Java with unbuffered input. In some rare cases, this might fail to load or
initialize.
One solution is to disable the frills and use the unsupported terminal instance. You can
do that on the command-line using the --terminal flag and set it to one of:
• none
• false
• off
• jline.UnsupportedTerminal
groovysh --terminal=none
Problems with Cygwin on Windows
Some people have issues when running groovysh with cygwin. If you have troubles, the
following may help:
stty -icanon min 1 -echo
groovysh --terminal=unix
stty icanon echo
2.4.2. Basics
2.4.3. Features
Command-line Options and Arguments
The Groovy Console supports several options to control classpath and other features.
./bin/groovyConsole --help
Usage: groovyConsole [options] [filename]
The Groovy Swing Console allows a user to enter and run Groovy scripts.
      --configscript=PARAM         A script for tweaking the compiler configuration options
  -cp, -classpath, --classpath     Specify where to find the class files - must be first argument
  -D, --define=<name=value>        Define a system property
  -h, --help                       Display this help message
  -i, --indy                       Enable InvokeDynamic (Indy) compilation for scripts
  -pa, --parameters                Generate metadata for reflection on method parameter names (jdk8+ only)
  -pr, --enable-preview            Enable preview Java features (JEP 12) (jdk12+ only)
  -V, --version                    Display the version
Running Scripts
There are several shortcuts that you can use to run scripts or code snippets:
• Ctrl+Enter and Ctrl+R are both shortcut keys for Run Script .
• If you highlight just part of the text in the input area, then Groovy runs just that text.
• The result of a script is the value of the last expression executed.
• You can turn the System.out capture on and off by selecting Capture
System.out from the Actions menu
Editing Files
You can open any text file, edit it, run it (as a Groovy Script) and then save it again when
you are finished.
• Select File > New File (shortcut key ctrl+Q ) to start again with a blank input
area
Interrupting a script
The Groovy console is a very handy tool to develop scripts. Often, you will find yourself
running a script multiple times until it works the way you want it to. However, what if
your code takes too long to finish or worse, creates an infinite loop? Interrupting script
execution can be achieved by clicking the interrupt button on the small dialog box
that pops up when a script is executing or through the interrupt icon in the tool bar.
However, this may not be sufficient to interrupt a script: clicking the button will
interrupt the execution thread, but if your code doesn’t handle the interrupt flag, the
script is likely to keep running without you being able to effectively stop it. To avoid
that, you have to make sure that the Script > Allow interruption menu item is
flagged. This will automatically apply an AST transformation to your script which will
take care of checking the interrupt flag ( @ThreadInterrupt ). This way, you guarantee
that the script can be interrupted even if you don’t explicitly handle interruption, at the
cost of extra execution time.
And more
• You can change the font size by selecting Smaller Font or Larger Font from
the Actions menu
import groovy.ui.Console;
...
Console console = new Console();
console.setVariable("var1", getValueOfVar1());
console.setVariable("var2", getValueOfVar2());
console.run();
...
Once the console is launched, you can use the variable values in Groovy code.
What you see here is the usual textual representation of a Map. But, what if we enabled
custom visualization of certain results? The Swing console allows you to do just that.
First of all, you have to ensure that the visualization option is ticked: View →
Visualize Script Results — for the record, all settings of the Groovy Console are
stored and remembered thanks to the Preference API. There are a few result
visualizations built-in: if the script returns a java.awt.Image , a javax.swing.Icon ,
or a java.awt.Component with no parent, the object is displayed instead of
its toString() representation. Otherwise, everything else is still just represented as
text. Now, create the following Groovy script
in ~/.groovy/OutputTransforms.groovy :
import javax.swing.*
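The script adds closures to the transforms list that the console consults for every result; a minimal sketch, assuming that injected transforms variable (the JLabel rendering is purely illustrative):
transforms << { result ->
    // return a Swing component to display the result, or null to fall back to plain text
    if (result instanceof Map) {
        return new JLabel("<html><b>A map with ${result.size()} entries</b></html>")
    }
    null
}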
-classpath, -cp --classpath Specify where to find the class files - must be
first argument
Required taskdef
Assuming all the groovy jars you need are in my.classpath (this will be groovy-
VERSION.jar , groovy-ant-VERSION.jar , groovy-groovydoc-VERSION.jar plus any
modules and transitive dependencies you might be using) you will need to declare this
task at some point in the build.xml prior to the groovydoc task being invoked.
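A sketch of that declaration, written here with Groovy's AntBuilder (an assumption; in build.xml it is the corresponding taskdef element with the same attributes, and my.classpath is the path id mentioned above):
def ant = new AntBuilder()
ant.taskdef(name: 'groovydoc',
            classname: 'org.codehaus.groovy.ant.Groovydoc',
            classpathref: 'my.classpath')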
<groovydoc> Attributes
Attribute Description Required
link(packages:"java.,org.xml.,javax.,org.xml.",href:"http://docs.oracle.com
/javase/8/docs/api/")
link(packages:"groovy.,org.codehaus.groovy.",
href:"http://docs.groovy-lang.org/latest/html/api/")
link(packages:"org.apache.tools.ant.",
href:"http://docs.groovy-lang.org/docs/ant/api/")
link(packages:"org.junit.,junit.framework.",
href:"http://junit.org/junit4/javadoc/latest/")
link(packages:"org.codehaus.gmaven.",
href:"http://groovy.github.io/gmaven/apidocs/")
}
Custom templates
The groovydoc Ant task supports custom templates, but it requires two steps:
package org.codehaus.groovy.tools.groovydoc;
import org.codehaus.groovy.ant.Groovydoc;
/**
* Overrides GroovyDoc's default class template - for testing purpose only.
*/
public class CustomGroovyDoc extends Groovydoc {
@Override
protected String[] getClassTemplates() {
        return new String[]{"org/codehaus/groovy/tools/groovydoc/testfiles/classDocName.html"};
}
}
You can override the following methods:
Editor       Syntax highlighting   Code completion   Refactoring
VSCode       Yes                   No                No
TextMate     Yes                   No                No
vim          Yes                   No                No
UltraEdit    Yes                   No                No
SlickEdit    Yes                   No                No
EditRocket   Yes                   No                No
3. User Guides
3.1. Getting started
3.1.1. Download
In this download area, you will be able to download the distribution (binary and source),
the Windows installer and the documentation for Groovy.
For a quick and effortless start on Mac OSX, Linux or Cygwin, you can use SDKMAN! (The
Software Development Kit Manager) to download and configure any Groovy version of
your choice. Basic instructions can be found below.
Stable
• Download zip: Binary Release | Source Release
• Download documentation: JavaDoc and zipped online documentation
• Combined binary / source / documentation bundle: Distribution bundle
You can learn more about this version in the release notes or in the changelog.
Snapshots
For those who want to test the very latest versions of Groovy and live on the bleeding
edge, you can use our snapshot builds. As soon as a build succeeds on our continuous
integration server a snapshot is deployed to Artifactory’s OSS snapshot repository.
Prerequisites
Groovy 2.5 requires Java 6+ with full support up to Java 8. There are currently some
known issues for some aspects when using Java 9 snapshots. The groovy-nio module
requires Java 7+. Using Groovy’s invokeDynamic features require Java 7+ but we
recommend Java 8.
The Groovy CI server is also useful to look at to confirm supported Java versions for
different Groovy releases. The test suite (getting close to 10000 tests) runs for the
currently supported streams of Groovy across all the main versions of Java each stream
supports.
Stable Release
Gradle Maven Explanation
are marked as
optional. You
may need to
include some of
the optional
dependencies
to use some
features of
Groovy, e.g.
AntBuilder,
GroovyMBeans,
etc.
To use the InvokeDynamic version of the jars just append ':indy' for Gradle or
<classifier>indy</classifier> for Maven.
$ source "$HOME/.sdkman/bin/sdkman-init.sh"
Then install the latest stable Groovy:
$ sdk install groovy
After installation is complete, test it with:
$ groovy -version
That’s all there is to it!
Homebrew
If you’re on MacOS and have Homebrew installed, you can run:
$ brew install groovy
Installation on Windows
If you’re on Windows, you can also use the NSIS Windows installer.
Other Distributions
You may download other distributions of Groovy from this site.
Source Code
If you prefer to live on the bleeding edge, you can also grab the source code from GitHub.
IDE plugin
If you are an IDE user, you can just grab the latest IDE plugin and follow the plugin
installation instructions.
• First, download a binary distribution of Groovy and unpack it into some folder on your
local file system.
• Set your GROOVY_HOME environment variable to the directory you unpacked the
distribution.
• Add GROOVY_HOME/bin to your PATH environment variable.
groovysh
Which should create an interactive groovy shell where you can type Groovy statements.
Or to run the Swing interactive console type:
groovyConsole
To run a specific Groovy script type:
groovy SomeScript
Here we list all the major differences between Java and Groovy.
3.2.1. Default imports
All these packages and classes are imported by default, i.e. you do not have to use an
explicit import statement to use them:
• java.io.*
• java.lang.*
• java.math.BigDecimal
• java.math.BigInteger
• java.net.*
• java.util.*
• groovy.lang.*
• groovy.util.*
3.2.2. Multi-methods
In Groovy, the methods which will be invoked are chosen at runtime. This is called
runtime dispatch or multi-methods. It means that the method will be chosen based on
the types of the arguments at runtime. In Java, this is the opposite: methods are chosen
at compile time, based on the declared types.
The following code, written as Java code, can be compiled in both Java and Groovy, but it
will behave differently:
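A sketch of the kind of overloaded methods behind the two assertions that follow (the concrete sample is an assumption):
int method(String arg) {
    return 1
}
int method(Object arg) {
    return 2
}
Object o = "Object"
int result = method(o)
Compiled and run as Java: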
assertEquals(2, result);
Whereas in Groovy:
assertEquals(1, result);
That is because Java will use the static information type, which is that o is declared as
an Object , whereas Groovy will choose at runtime, when the method is actually called.
Since it is called with a String , then the String version is called.
Another difference concerns array initializers: in Java you can write int[] array = { 1, 2, 3 },
but in Groovy curly braces are reserved for closures, so this syntax is not supported.
You actually have to use:
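A minimal example using Groovy's square-bracket literal, which is coerced to the array type:
int[] array = [1, 2, 3]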
class Person {
String name
}
Instead, it is used to create a property, that is to say a private field, an
associated getter and an associated setter.
class Person {
@PackageScope String name
}
} catch (IOException e) {
e.printStackTrace();
}
can be written like this:
new File('/path/to/file').eachLine('UTF-8') {
println it
}
or, if you want a version closer to Java:
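A sketch of that more Java-like form, using withReader so the reader is still closed automatically (file path assumed):
new File('/path/to/file').withReader('UTF-8') { reader ->
    reader.eachLine { line ->
        println line
    }
}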
class A {
static class B {}
}
new A.B()
The usage of static inner classes is the best supported one. If you absolutely need an
inner class, you should make it a static one.
public class Y {
public class X {}
public X foo() {
return new X();
}
public static X createX(Y y) {
return y.new X();
}
}
Groovy doesn’t support the y.new X() syntax. Instead, you have to write new X(y) ,
like in the code below:
public class Y {
public class X {}
public X foo() {
return new X()
}
public static X createX(Y y) {
return new X(y)
}
}
Caution though, Groovy supports calling methods with one parameter without giving an argument; the
parameter will then have the value null. Basically the same rules apply to calling a constructor. There is
a danger that you will write new X() instead of new X(this), for example. Since this might also be the
regular way, we have not yet found a good way to prevent this problem.
3.2.7. Lambdas
Java 8 supports lambdas and method references:
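In Groovy the same intent is usually expressed with closures, which also coerce to functional interfaces; a minimal sketch for comparison:
// Java 8:  Runnable run = () -> System.out.println("Run");
// Java 8:  list.forEach(System.out::println);
Runnable run = { println 'Run' }      // a closure coerces to the functional interface
run.run()
[1, 2, 3].forEach { println it }      // a closure in place of a method reference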
3.2.8. GStrings
As double-quoted string literals are interpreted as GString values, Groovy may fail
with compile error or produce subtly different code if a class with String literal
containing a dollar character is compiled with Groovy and Java compiler.
While typically, Groovy will auto-cast between GString and String if an API declares
the type of a parameter, beware of Java APIs that accept an Object parameter and then
check the actual type.
assert 'c'.getClass()==String
assert "c".getClass()==String
assert "c${1}".getClass() in GString
Groovy will automatically cast a single-character String to char only when assigning
to a variable of type char . When calling methods with arguments of type char we need
to either cast explicitly or make sure the value has been cast in advance.
char a='a'
assert Character.digit(a, 16)==10 : 'But Groovy does boxing'
assert Character.digit((char) 'a', 16)==10
try {
assert Character.digit('a', 16)==10
assert false: 'Need explicit cast'
} catch(MissingMethodException e) {
}
Groovy supports two styles of casting, and in the case of casting to char there are subtle
differences when casting a multi-char string. The Groovy-style cast is more lenient and
will take the first character, while the C-style cast will fail with an exception.
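A sketch illustrating the difference (GroovyCastException is the runtime exception type thrown by the failing C-style cast):
import org.codehaus.groovy.runtime.typehandling.GroovyCastException

assert ('c' as char) == (char) 'c'          // single-char strings: both styles agree
assert ('cx' as char) == ('c' as char)      // Groovy-style cast keeps the first character
try {
    (char) 'cx'                             // C-style cast of a multi-char string fails
    assert false
} catch (GroovyCastException expected) {
}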
int i
m(i)
void m(long l) {
println "in m(long)"
}
void m(Integer i) {
println "in m(Integer)"
}
m(long) is the method that Java would call, since widening has precedence over unboxing.
m(Integer) is the method Groovy actually calls, since all primitive references use their wrapper class.
3.2.11. Behaviour of ==
In Java == means equality of primitive types or identity for objects. In
Groovy == translates to a.compareTo(b)==0 , if they are Comparable ,
and a.equals(b) otherwise. To check for identity, there is is . E.g. a.is(b) .
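For example:
assert 'abc' == 'abc'        // equality (equals/compareTo), not reference identity
def a = [1, 2, 3]
def b = [1, 2, 3]
assert a == b                // same content
assert !a.is(b)              // identity is checked with is()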
3.2.12. Conversions
Java does automatic widening and narrowing conversions.
Converts to    boolean  byte  short  char  int  long  float  double
Converts from
boolean           -       N     N      N     N    N     N      N
byte              N       -     Y      C     Y    Y     Y      Y
short             N       C     -      C     Y    Y     Y      Y
char              N       C     C      -     Y    Y     Y      Y
int               N       C     C      C     -    Y     T      Y
long              N       C     C      C     C    -     T      T
float             N       C     C      C     C    C     -      Y
double            N       C     C      C     C    C     C      -
'Y' indicates a conversion Java can make, 'C' indicates a conversion Java can make when
there is an explicit cast, 'T' indicates a conversion Java can make but data is truncated,
'N' indicates a conversion Java can't make.
Converts to (columns, in order): boolean, Boolean, byte, Byte, short, Short, char, Character,
int, Integer, long, Long, BigInteger, float, Float, double, Double, BigDecimal

Converts from
boolean      -  B  N  N  N  N  N  N  N  N  N  N  N  N  N  N  N  N
Boolean      B  -  N  N  N  N  N  N  N  N  N  N  N  N  N  N  N  N
byte         T  T  -  B  Y  Y  Y  D  Y  Y  Y  Y  Y  Y  Y  Y  Y  Y
Byte         T  T  B  -  Y  Y  Y  D  Y  Y  Y  Y  Y  Y  Y  Y  Y  Y
short        T  T  D  D  -  B  Y  D  Y  Y  Y  Y  Y  Y  Y  Y  Y  Y
Short        T  T  D  T  B  -  Y  D  Y  Y  Y  Y  Y  Y  Y  Y  Y  Y
char         T  T  Y  D  Y  D  -  D  Y  D  Y  D  D  Y  D  Y  D  D
Character    T  T  D  D  D  D  D  -  D  D  D  D  D  D  D  D  D  D
int          T  T  D  D  D  D  Y  D  -  B  Y  Y  Y  Y  Y  Y  Y  Y
Integer      T  T  D  D  D  D  Y  D  B  -  Y  Y  Y  Y  Y  Y  Y  Y
long         T  T  D  D  D  D  Y  D  D  D  -  B  Y  T  T  T  T  Y
Long         T  T  D  D  D  T  Y  D  D  T  B  -  Y  T  T  T  T  Y
BigInteger   T  T  D  D  D  D  D  D  D  D  D  D  -  D  D  D  D  T
float        T  T  D  D  D  D  T  D  D  D  D  D  D  -  B  Y  Y  Y
Float        T  T  D  T  D  T  T  D  D  T  D  T  D  B  -  Y  Y  Y
double       T  T  D  D  D  D  T  D  D  D  D  D  D  D  D  -  B  Y
Double       T  T  D  T  D  T  T  D  D  T  D  T  D  D  T  B  -  Y
BigDecimal   T  T  D  D  D  D  D  D  D  D  D  D  D  T  D  T  D  -

'Y' indicates a conversion Groovy can make, 'D' indicates a conversion Groovy can make
when compiled dynamically or explicitly cast, 'T' indicates a conversion Groovy can
make but data is truncated, 'B' indicates a boxing/unboxing operation, 'N' indicates a
conversion Groovy can't make.
The truncation uses Groovy Truth when converting to boolean / Boolean . Converting
from a number to a character casts the Number.intValue() to char. Groovy
constructs BigInteger and BigDecimal using Number.doubleValue() when
converting from a Float or Double , otherwise it constructs using toString() . Other
conversions have their behavior defined by java.lang.Number .
3.2.13. Extra keywords
There are a few more keywords in Groovy than in Java. Don’t use them for variable
names etc.
• as
• def
• in
• trait
Reading files
As a first example, let’s see how you would print all lines of a text file in Groovy:
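A minimal sketch, assuming a plain text file at the given path:
new File('/path/to/haiku.txt').eachLine { line ->
    println line
}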
For example in some cases you will prefer to use a Reader , but still benefit from the
automatic resource management from Groovy. In the next example, the reader will be
closed even if the exception occurs:
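A sketch of that pattern: withReader closes the reader whether or not the closure throws (path and the three-line limit are assumptions):
def count = 0
new File('/path/to/haiku.txt').withReader { reader ->
    while (reader.readLine() != null) {
        if (++count > 3) {
            throw new RuntimeException('Haiku should only have 3 verses')
        }
    }
}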
Writing files
Of course in some cases you won’t want to read but write a file. One of the options is to
use a Writer :
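For example, with withWriter the writer is flushed and closed for you (path and content assumed):
new File('/path/to/haiku.txt').withWriter('utf-8') { writer ->
    writer.writeLine 'Into the ancient pond'
    writer.writeLine 'A frog jumps'
    writer.writeLine 'Water sound!'
}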
file.bytes = [66,22,11]
Of course you can also directly deal with output streams. For example, here is how you
would create an output stream to write into a file:
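A minimal sketch using withOutputStream, which also takes care of flushing and closing:
new File('/path/to/data.bin').withOutputStream { stream ->
    stream.write([1, 2, 3] as byte[])
}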
executes the closure code on files in the directory matching the specified pattern
Often you will have to deal with a deeper hierarchy of files, in which case you can
use eachFileRecurse :
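For example, a sketch that prints the name of every regular file below a directory (directory path assumed):
import groovy.io.FileType

def dir = new File('/some/dir')
dir.eachFileRecurse(FileType.FILES) { file ->
    println file.name
}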
For more complex traversal techniques you can use the traverse method, which
requires you to set a special flag indicating what to do with the traversal:
}
if the current file is a directory and its name is bin , stop the traversal
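A sketch of such a traversal matching the callout above (directory path and the printing action are assumptions):
import groovy.io.FileVisitResult

def dir = new File('/some/dir')
dir.traverse { file ->
    if (file.directory && file.name == 'bin') {
        FileVisitResult.TERMINATE            // stop the traversal
    } else {
        println file.name
        FileVisitResult.CONTINUE
    }
}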
boolean b = true
String message = 'Hello from Groovy'
// Serialize data into a file
file.withDataOutputStream { out ->
out.writeBoolean(b)
out.writeUTF(message)
}
// ...
// Then read it back
file.withDataInputStream { input ->
assert input.readBoolean() == b
assert input.readUTF() == message
}
And similarly, if the data you want to serialize implements the Serializable interface,
you can proceed with an object output stream, as illustrated here:
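A sketch, assuming a Serializable class and a scratch file:
class Person implements Serializable {
    String name
    int age
}
def file = new File('/tmp/people.bin')
def p = new Person(name: 'Bob', age: 76)
file.withObjectOutputStream { out ->
    out.writeObject(p)
}
file.withObjectInputStream(getClass().classLoader) { input ->
    def p2 = input.readObject()
    assert p2.name == p.name
    assert p2.age == p.age
}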
Groovy provides a simple way to execute command line processes. Simply write the
command line as a string and call the execute() method. E.g., on a *nix machine (or a
windows machine with appropriate *nix commands installed), you can execute this:
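For example, on a *nix system:
def process = "ls -l".execute()             // returns a java.lang.Process
println "Found text ${process.text}"        // reads the whole standard output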
e.g. here is the same command as above but we will now process the resulting stream a
line at a time:
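A sketch of line-by-line processing of the process output:
def process = "ls -l".execute()
process.in.eachLine { line ->
    println line
}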
It is worth noting that in corresponds to an input stream to the standard output of the
command. out will refer to a stream where you can send data to the process (its
standard input).
Remember that many commands are shell built-ins and need special handling. So if you
want a listing of files in a directory on a Windows machine and write:
This is because dir is built-in to the Windows shell ( cmd.exe ) and can’t be run as a
simple executable. Instead, you will need to write:
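A sketch of both attempts:
// "dir".execute() fails, because dir is a cmd.exe built-in rather than an executable
def process = "cmd /c dir".execute()        // run the built-in through the shell itself
println process.text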
Because some native platforms only provide limited buffer size for standard input and
output streams, failure to promptly write the input stream or read the output stream of
the subprocess may cause the subprocess to block, and even deadlock
Because of this, Groovy provides some additional helper methods which make stream
handling for processes easier.
Here is how to gobble all of the output (including the error stream output) from your
process:
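A sketch using consumeProcessOutput, which gobbles both streams in background threads:
def out = new StringBuilder()
def err = new StringBuilder()
def proc = 'ls -l /does/not/exist'.execute()
proc.consumeProcessOutput(out, err)
proc.waitFor()
println "out> $out"
println "err> $err"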
Pipes in action
proc1 = 'ls'.execute()
proc2 = 'tr -d o'.execute()
proc3 = 'tr -d e'.execute()
proc4 = 'tr -d i'.execute()
proc1 | proc2 | proc3 | proc4
proc4.waitFor()
if (proc4.exitValue()) {
println proc4.err.text
} else {
println proc4.text
}
Consuming errors
Lists
List literals
You can create lists as follows. Notice that [] is the empty list expression.
def emptyList = []
assert emptyList.size() == 0
emptyList.add(5)
assert emptyList.size() == 1
Each list expression creates an implementation of java.util.List.
list[2] = 9
assert list == [5, 6, 9, 8,] // trailing comma OK
assert ['a', 1, 'a', 'a', 2.5, 2.5f, 2.5d, 'hello', 7g, null, 9 as byte]
//objects can be of different types; duplicates allowed
Iterating on a list
Iterating on elements of a list is usually done calling
the each and eachWithIndex methods, which execute code on each item of a list:
[1, 2, 3].each {
    println "Item: $it" // `it` is an implicit parameter corresponding to the current element
}
['a', 'b', 'c'].eachWithIndex { it, i -> // `it` is the current element, while `i` is the index
    println "$i: $it"
}
In addition to iterating, it is often useful to create a new list by transforming each of its
elements into something else. This operation, often called mapping, is done in Groovy
thanks to the collect method:
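For example:
assert [1, 2, 3].collect { it * 2 } == [2, 4, 6]
// shortcut syntax instead of collect
assert [1, 2, 3]*.multiply(2) == [2, 4, 6]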
Manipulating lists
Filtering and searching
The Groovy development kit contains a lot of methods on collections that enhance the
standard collections with pragmatic methods, some of which are illustrated here:
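A few representative examples:
assert [1, 2, 3].find { it > 1 } == 2              // first element matching the predicate
assert [1, 2, 3].findAll { it > 1 } == [2, 3]      // all elements matching the predicate
assert [1, 2, 3].every { it < 5 }                  // true if every element matches
assert [1, 2, 3].any { it > 2 }                    // true if at least one element matches
assert [1, 2, 3, 4].sum() == 10                    // sum of the elements
assert [1, 2, 3].join('-') == '1-2-3'              // build a String from the elements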
We can use [] to assign a new empty list and << to append items to it:
def list = []
assert list.empty
list << 5
assert list.size() == 1
def a = [1, 2, 3]
a += 4 // creates a new list and assigns it to `a`
a += [5, 6]
assert a == [1, 2, 3, 4, 5, 6]
list = [1, 2]
list.add(1, 3) // add 3 just before index 1
assert list == [1, 3, 2]
The Groovy development kit also contains methods allowing you to easily remove
elements from a list by value:
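For example, the minus operator removes values rather than indices:
def list = ['a', 'b', 'c', 'b', 'b']
assert list - 'b' == ['a', 'c']                    // every occurrence of the value is removed
assert list - ['b', 'c'] == ['a']                  // a collection of values can be removed too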
The Groovy development kit also includes methods making it easy to reason on sets:
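For example:
assert 'a' in ['a', 'b', 'c']                                     // membership test
assert [1, 2, 3].disjoint([4, 5, 6])                              // no elements in common
assert [1, 3, 4].intersect([4, 0, 1]) as Set == [1, 4] as Set     // common elements
assert ([1, 2, 2, 3] as Set) == ([3, 2, 1] as Set)                // a Set ignores duplicates and order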
Working with collections often implies sorting. Groovy offers a variety of options to sort
lists, from using closures to comparators, as in the following examples:
// a Comparator sorting by absolute value and a sample list (definitions assumed so the assertions below hold)
Comparator mc = { a, b -> a == b ? 0 : Math.abs(a) < Math.abs(b) ? -1 : 1 } as Comparator
def list3 = [6, -3, 9, 2, -7, 1, 5]
// JDK 8+ only
// list2.sort(mc)
// assert list2 == [-1, 2, 3, 4, 5, -6, 7, -9, 11, -13]
Collections.sort(list3)
assert list3 == [-7, -3, 1, 2, 5, 6, 9]
Collections.sort(list3, mc)
assert list3 == [1, 2, -3, 5, 6, -7, 9]
Duplicating elements
The Groovy development kit also takes advantage of operator overloading to provide
methods allowing duplication of elements of a list:
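For example, multiplying a list repeats its elements:
assert [1, 2, 3] * 2 == [1, 2, 3, 1, 2, 3]
assert [1, 2, 3].multiply(2) == [1, 2, 3, 1, 2, 3]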
// nCopies from the JDK has different semantics than multiply for lists
assert Collections.nCopies(2, [1, 2]) == [[1, 2], [1, 2]] //not [1,2,1,2]
Maps
Map literals
In Groovy, maps (also known as associative arrays) can be created using the map literal
syntax: [:] :
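A sketch of typical map literals and the two access styles (keys are treated as Strings by default):
def colors = [red: '#FF0000', green: '#00FF00', blue: '#0000FF']
assert colors['red'] == '#FF0000'      // subscript access
assert colors.green == '#00FF00'       // property-like access
def emptyMap = [:]
assert emptyMap.size() == 0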
def a = 'Bob'
def ages = [a: 43]
assert ages['Bob'] == null // `Bob` is not found
assert ages['a'] == 43 // because `a` is a literal!
def map = [
simple : 123,
complex: [a: 1, b: 2]
]
def map2 = map.clone()
assert map2.get('simple') == map.get('simple')
assert map2.get('complex') == map.get('complex')
map2.get('complex').put('c', 3)
assert map.get('complex').get('c') == 3
The resulting map is a shallow copy of the original one, as illustrated in the previous
example.
Iterating on maps
As usual in the Groovy development kit, idiomatic iteration on maps makes use of
the each and eachWithIndex methods. It’s worth noting that maps created using the
map literal notation are ordered, that is to say that if you iterate on map entries, it is
guaranteed that the entries will be returned in the same order they were added in the
map.
def map = [
Bob : 42,
Alice: 54,
Max : 33
]
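Iterating the map defined above:
map.each { entry ->
    println "Name: $entry.key Age: $entry.value"
}
map.eachWithIndex { entry, i ->
    println "$i - Name: $entry.key Age: $entry.value"
}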
Manipulating maps
Adding or removing elements
Adding an element to a map can be done either using the put method, the subscript
operator or using putAll :
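For example:
def defaults = [1: 'a', 2: 'b', 3: 'c', 4: 'd']
def overrides = [2: 'z', 5: 'x', 13: 'x']
def result = new LinkedHashMap(defaults)
result.put(15, 't')            // put method
result[17] = 'u'               // subscript operator
result.putAll(overrides)       // copy every entry of another map
assert result == [1: 'a', 2: 'z', 3: 'c', 4: 'd', 5: 'x', 13: 'x', 15: 't', 17: 'u']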
It is also worth noting that you should never use a GString as the key of a map,
because the hash code of a GString is not the same as the hash code of an
equivalent String :
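For example, a lookup with the equivalent String fails:
def key = 'some key'
def map = [:]
def gstringKey = "${key.toUpperCase()}"
map.put(gstringKey, 'value')
assert map.get('SOME KEY') == null     // not found: the GString and the String hash differently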
The Groovy development kit contains filtering, searching and collecting methods similar
to those found for lists:
def people = [
1: [name:'Bob', age: 32, gender: 'M'],
2: [name:'Johnny', age: 36, gender: 'M'],
3: [name:'Claire', age: 21, gender: 'F'],
4: [name:'Amy', age: 54, gender:'F']
]
def bob = people.find { it.value.name == 'Bob' }           // find returns the first matching entry
def females = people.findAll { it.value.gender == 'F' }
// both return entries, but you can use collect to retrieve the ages for example
def ageOfBob = bob.value.age
def agesOfFemales = females.collect {
it.value.age
}
assert ageOfBob == 32
assert agesOfFemales == [21,54]
// but you could also use a key/value pair as the parameters of the closures
def agesOfMales = people.findAll { id, person ->
person.gender == 'M'
}.collect { id, person ->
person.age
}
assert agesOfMales == [32, 36]
assert [
[name: 'Clark', city: 'London'], [name: 'Sharma', city: 'London'],
[name: 'Maradona', city: 'LA'], [name: 'Zhang', city: 'HK'],
[name: 'Ali', city: 'HK'], [name: 'Liu', city: 'HK'],
].groupBy { it.city } == [
London: [[name: 'Clark', city: 'London'],
[name: 'Sharma', city: 'London']],
LA : [[name: 'Maradona', city: 'LA']],
HK : [[name: 'Zhang', city: 'HK'],
[name: 'Ali', city: 'HK'],
[name: 'Liu', city: 'HK']],
]
Ranges
Ranges allow you to create a list of sequential values. These can be used
as a List since Range extends java.util.List.
Ranges defined with the .. notation are inclusive (that is the list contains the from and
to value).
Ranges defined with the ..< notation are half-open, they include the first value but not
the last value.
// an inclusive range
def range = 5..8
assert range.size() == 4
assert range.get(2) == 7
assert range[2] == 7
assert range instanceof java.util.List
assert range.contains(5)
assert range.contains(8)
Ranges can be used for any Java object which implements java.lang.Comparable for
comparison and also have methods next() and previous() to return the next /
previous item in the range. For example, you can create a range of String elements:
// an inclusive range
def range = 'a'..'d'
assert range.size() == 4
assert range.get(2) == 'c'
assert range[2] == 'c'
assert range instanceof java.util.List
assert range.contains('a')
assert range.contains('d')
assert !range.contains('e')
You can iterate on a range using a classic for loop:
for (i in 1..10) {
println "Hello ${i}"
}
but alternatively you can achieve the same effect in a more Groovy idiomatic style, by
iterating a range with each method:
(1..10).each { i ->
println "Hello ${i}"
}
Ranges can be also used in the switch statement:
switch (years) {
case 1..10: interestRate = 0.076; break;
case 11..25: interestRate = 0.052; break;
default: interestRate = 0.037;
}
def listOfMaps = [['a': 11, 'b': 12], ['a': 21, 'b': 22]]
assert listOfMaps.a == [11, 21] //GPath notation
assert listOfMaps*.a == [11, 21] //spread dot notation
listOfMaps = [['a': 11, 'b': 12], ['a': 21, 'b': 22], null]
assert listOfMaps*.a == [11, 21, null] // caters for null values
assert listOfMaps*.a == listOfMaps.collect { it?.a } //equivalent notation
// But this will only collect non-null values
assert listOfMaps.a == [11,21]
Spread operator
The spread operator can be used to "inline" a collection into another. It is syntactic sugar
which often avoids calls to putAll and facilitates the realization of one-liners:
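For example:
assert [1, 2, *[3, 4], 5] == [1, 2, 3, 4, 5]               // spread a list into a list literal
def m1 = [c: 3, d: 4]
assert [a: 1, b: 2, *: m1] == [a: 1, b: 2, c: 3, d: 4]     // spread a map into a map literal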
f = { m, i, j, k -> [m, i, j, k] }
//using spread map notation with mixed unnamed and named arguments
assert f('e': 100, *[4, 5], *: ['a': 10, 'b': 20, 'c': 30], 6) ==
[["e": 100, "b": 20, "c": 30, "a": 10], 4, 5, 6]
class Person {
String name
int age
}
def persons = [new Person(name:'Hugo', age:17), new
Person(name:'Sandra',age:19)]
assert [17, 19] == persons*.age
def text = 'nice cheese gromit!'
def x = text[2]     // indexing a String returns a String (source string assumed)
assert x == 'c'
assert x.class == String
list = 100..200
sub = list[1, 3, 20..25, 33]
assert sub == [101, 103, 120, 121, 122, 123, 124, 125, 133]
The subscript operator can be used to update an existing collection (for collection type
which are not immutable):
list = ['a','x','x','d']
list[1..2] = ['b','c']
assert list == ['a','b','c','d']
It is worth noting that negative indices are allowed, to extract more easily from the end
of a collection:
In particular, we invite you to read the Groovy development kit API docs and
specifically:
You can access the properties of a Date or Calendar using the normal array index
notation with the constant field numbers from the Calendar class as shown in the
following example:
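A sketch, assuming the groovy-dateutil extensions that ship with the distribution:
import static java.util.Calendar.*

def cal = new GregorianCalendar(2000, JANUARY, 1)
assert cal[YEAR] == 2000
assert cal[MONTH] == JANUARY
assert cal[DAY_OF_WEEK] == SATURDAY    // 1 January 2000 was a Saturday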
Groovy supports arithmetic on and iteration between Date and Calendar instances as
shown in the following example:
def today = new Date()
def prev = today - 1, next = today + 1   // yesterday and tomorrow (setup assumed)
int count = 0
prev.upto(next) { count++ }              // iterates one day at a time, both ends inclusive
assert count == 3
You can parse strings into dates and output dates into formatted strings:
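For example:
def date = Date.parse('yyyy-MM-dd', '2000-03-11')      // String to Date
assert date.format('dd/MM/yyyy') == '11/03/2000'       // Date to formatted String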
For parsing, Groovy adds a static parse method to many of the JSR 310 types. The
method takes two arguments: the value to be formatted and the pattern to use. The
pattern is defined by the java.time.format.DateTimeFormatter API. As an example:
def date = LocalDate.parse('Jun 3, 04', 'MMM d, yy')
assert date == LocalDate.of(2004, Month.JUNE, 3)
Manipulating date/time
Addition and subtraction
Temporal types have plus and minus methods for adding or subtracting a
provided java.time.temporal.TemporalAmount argument. Because Groovy maps
the + and - operators to single-argument methods of these names, a more natural
expression syntax can be used to add and subtract.
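For example:
import java.time.LocalDate
import java.time.Period

def aprilFools = LocalDate.of(2018, 4, 1)
assert aprilFools + Period.ofDays(1) == LocalDate.of(2018, 4, 2)      // plus()
assert aprilFools - Period.ofMonths(1) == LocalDate.of(2018, 3, 1)    // minus()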
Negation
The Duration and Period types represent a negative or positive length of time. These
can be negated with the unary - operator.
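For example:
import java.time.Duration
import java.time.Period

assert (-Duration.ofSeconds(30)).seconds == -30     // unary minus negates the Duration
assert (-Period.ofDays(2)).days == -2               // and the Period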
The unit of iteration for upto , downto , and ranges is the same as the unit for addition
and subtraction: LocalDate iterates by one day at a time, YearMonth iterates by one
month, Year by one year, and everything else by one second. Both methods also
support an optional TemporalUnit argument to change the unit of iteration.
Consider the following example, where March 1st, 2018 is iterated up to March 2nd,
2018 using an iteration unit of months.
def start = LocalDate.of(2018, Month.MARCH, 1)
def end = LocalDate.of(2018, Month.MARCH, 2)
int iterationCount = 0
start.upto(end, ChronoUnit.MONTHS) { next ->
    println next
    ++iterationCount
}
assert iterationCount == 1
Since the start date is inclusive, the closure is called with a next date value of March
1st. The upto method then increments the date by one month, yielding the date, April
1st. Because this date is after the specified end date of March 2nd, the iteration stops
immediately, having only called the closure once. This behavior is the same for
the downto method except that the iteration will stop as soon as the value
of next becomes earlier than the targeted end date.
In short, when iterating with the upto or downto methods with a custom unit of
iteration, the current value of iteration will never exceed the end value.
Most JSR types have been fitted with toDate() and toCalendar() methods for
converting to relatively equivalent java.util.Date and java.util.Calendar values.
Both ZoneId and ZoneOffset have been given a toTimeZone() method for converting
to java.util.TimeZone .
// LocalDate to java.util.Date
def valentines = LocalDate.of(2018, Month.FEBRUARY, 14)
assert valentines.toDate().format('MMMM dd, yyyy') == 'February 14, 2018'
// LocalTime to java.util.Date
def noon = LocalTime.of(12, 0, 0)
assert noon.toDate().format('HH:mm:ss') == '12:00:00'
// ZoneId to java.util.TimeZone
def newYork = ZoneId.of('America/New_York')
assert newYork.toTimeZone() == TimeZone.getTimeZone('America/New_York')
// ZonedDateTime to java.util.Calendar
def valAtNoonInNY = ZonedDateTime.of(valentines, noon, newYork)
assert valAtNoonInNY.toCalendar().getTimeZone().toZoneId() == newYork
Note that when converting to a legacy type:
In the case of a dot being part of a configuration variable name, it can be escaped by
using single or double quotes:
def config = new ConfigSlurper().parse('''
    app."person.age" = 42
''')
assert config.app."person.age" == 42
In addition, ConfigSlurper comes with support for environments .
The environments method can be used to hand over a Closure instance that itself may
consist of a several sections. Let’s say we wanted to create a particular configuration
value for the development environment. When creating the ConfigSlurper instance
we can use the ConfigSlurper(String) constructor to specify the target environment.
def config = new ConfigSlurper('development').parse('''
environments {
    development {
        app.port = 8080
    }
    test {
        app.port = 8082
    }
    production {
        app.port = 80
    }
}
''')
assert config.app.port == 8080
The ConfigSlurper environments aren’t restricted to any particular environment names. It solely depends on
the ConfigSlurper client code which values are supported and interpreted accordingly.
The environments method is built-in but the registerConditionalBlock method
can be used to register other method names in addition to the environments name.
def slurper = new ConfigSlurper()
slurper.registerConditionalBlock('myProject', 'developers')
def config = slurper.parse('''
myProject {
    developers {
        sendMail = false
    }
}
''')
assert !config.sendMail
Once the new block is registered ConfigSlurper can parse it.
For Java integration purposes the toProperties method can be used to convert
the ConfigObject to a java.util.Properties object that might be stored to
a *.properties text file. Be aware though that the configuration values are converted
to String instances as they are added to the newly created Properties instance.
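A minimal sketch:
def config = new ConfigSlurper().parse('''
    app.name = "test"
    app.port = 8080
''')
def props = config.toProperties()
assert props['app.name'] == 'test'
assert props['app.port'] == '8080'     // values become Strings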
Expando
The Expando class can be used to create a dynamically expandable object. Despite its
name it does not use the ExpandoMetaClass underneath. Each Expando object
represents a standalone, dynamically-crafted instance that can be extended with
properties (or methods) at runtime.
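For example:
def expando = new Expando()
expando.name = 'John'
assert expando.name == 'John'                       // property added at runtime
expando.say = { String s -> "John says: $s" }
assert expando.say('Hi') == 'John says: Hi'         // a Closure property is callable like a method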
Depending on the type of change that has happened, observable collections might fire
more specialized PropertyChangeEvent types. For example, adding an element to an
observable list fires an ObservableList.ElementAddedEvent event.
def event
def listener = {
    if (it instanceof ObservableList.ElementEvent) {
        event = it
    }
} as PropertyChangeListener

def observable = new ObservableList([1, 2, 3])
observable.addPropertyChangeListener(listener)

observable.add 42
Be aware that adding an element in fact causes two events to be triggered. The
first is of type ObservableList.ElementAddedEvent , the second is a
plain PropertyChangeEvent that informs listeners about the change of
property size .
def event
def listener = {
    if (it instanceof ObservableList.ElementEvent) {
        event = it
    }
} as PropertyChangeListener

def observable = new ObservableList([1, 2, 3])
observable.addPropertyChangeListener(listener)

observable.clear()
ObservableMap and ObservableSet come with the same concepts as we have seen
for ObservableList in this section.
3.4. Metaprogramming
The Groovy language supports two flavors of metaprogramming: runtime and compile-
time. The first allows altering the class model and the behavior of a program at runtime
while the second only occurs at compile-time. Both have pros and cons that we will
detail in this section.
3.4.1. Runtime metaprogramming
With runtime metaprogramming we can postpone to runtime the decision to intercept,
inject and even synthesize methods of classes and interfaces. For a deep understanding
of Groovy’s metaobject protocol (MOP) we need to understand Groovy objects and
Groovy’s method handling. In Groovy we work with three kinds of objects: POJO, POGO
and Groovy Interceptors. Groovy allows metaprogramming for all types of objects but in
a different manner.
• POJO - A regular Java object whose class can be written in Java or any other language
for the JVM.
• POGO - A Groovy object whose class is written in Groovy. It
extends java.lang.Object and implements the groovy.lang.GroovyObject interface
by default.
• Groovy Interceptor - A Groovy object that implements
the groovy.lang.GroovyInterceptable interface and has method-interception
capability which is discussed in the GroovyInterceptable section.
For every method call Groovy checks whether the object is a POJO or a POGO. For POJOs,
Groovy fetches its MetaClass from the groovy.lang.MetaClassRegistry and delegates
method invocation to it. For POGOs, Groovy takes more steps, as illustrated in the
following figure:
GroovyObject interface
groovy.lang.GroovyObject is the main interface in Groovy as the Object class is in
Java. GroovyObject has a default implementation in
the groovy.lang.GroovyObjectSupport class and it is responsible to transfer invocation to
the groovy.lang.MetaClass object. The GroovyObject source looks like this:
package groovy.lang;
MetaClass getMetaClass();
The invokeMethod() method is also invoked when the method called is not present on a Groovy object. Here is a
simple example using an overridden invokeMethod() method:
class SomeGroovyClass {
    def invokeMethod(String name, Object args) {
        return "called invokeMethod $name $args"
    }
    def test() {
        return 'method exists'
    }
}
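With that class in place, an existing method is dispatched normally while a missing one is routed to invokeMethod():
def obj = new SomeGroovyClass()
assert obj.test() == 'method exists'
assert obj.someMethod() == 'called invokeMethod someMethod []'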
get/setProperty
Every read access to a property can be intercepted by overriding
the getProperty() method of the current object. Here is a simple example:
class SomeGroovyClass {
    def getField1() {
        return 'getHa'
    }
    def getProperty(String name) {
        name != 'field3' ? metaClass.getProperty(this, name) : 'field3'
    }
}
You can intercept write access to properties by overriding the setProperty() method:
class POGO {
    String property
    void setProperty(String name, Object value) { this.@"$name" = 'overridden' }
}
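Every write then goes through setProperty():
def pogo = new POGO()
pogo.property = 'a'
assert pogo.property == 'overridden'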
get/setMetaClass
You can access an object’s metaClass or set your own MetaClass implementation for
changing the default interception mechanism. For example, you can write your own
implementation of the MetaClass interface and assign it to objects in order to change
the interception mechanism:
// getMetaclass
someObject.metaClass
// setMetaClass
someObject.metaClass = new OwnMetaClassImplementation()
You can find an additional example in the GroovyInterceptable topic.
get/setAttribute
This functionality is related to the MetaClass implementation. In the default
implementation you can access fields without invoking their getters and setters. The
examples below demonstrates this approach:
class SomeGroovyClass {
    def field1 = 'ha'
    def getField1() {
        return 'getHa'
    }
}
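Property access goes through the getter, while attribute access reads the field directly:
def obj = new SomeGroovyClass()
assert obj.field1 == 'getHa'
assert obj.metaClass.getAttribute(obj, 'field1') == 'ha'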
methodMissing
Groovy supports the concept of methodMissing . This method differs
from invokeMethod in that it is only invoked in the case of a failed method dispatch
when no method can be found for the given name and/or the given arguments:
class Foo {
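    // a sketch of the typical hook: it is called only when normal method dispatch fails
    def methodMissing(String name, def args) {
        return "this is me"
    }
}

assert new Foo().someUnknownMethod(42) == 'this is me'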
For example, consider dynamic finders in GORM. These are implemented in terms
of methodMissing . The code resembles something like this:
class GORM {
propertyMissing
Groovy supports the concept of propertyMissing for intercepting otherwise failing
property resolution attempts. In the case of a getter method, propertyMissing takes a
single String argument containing the property name:
class Foo {
def propertyMissing(String name) { name }
}
For setter methods a second propertyMissing definition can be added that takes an
additional value argument:
class Foo {
def storage = [:]
def propertyMissing(String name, value) { storage[name] = value }
def propertyMissing(String name) { storage[name] }
}
static methodMissing
Static variant of methodMissing method can be added via the ExpandoMetaClass or can
be implemented at the class level with $static_methodMissing method.
class Foo {
static def $static_methodMissing(String name, Object args) {
return "Missing static method name is $name"
}
}
static propertyMissing
Static variant of propertyMissing method can be added via the ExpandoMetaClass or
can be implemented at the class level with $static_propertyMissing method.
class Foo {
static def $static_propertyMissing(String name) {
return "Missing static property name is $name"
}
}
GroovyInterceptable
The groovy.lang.GroovyInterceptable interface is a marker interface that
extends GroovyObject and is used to notify the Groovy runtime that all methods
should be intercepted through the method dispatcher mechanism of the Groovy
runtime.
The interface itself (in package groovy.lang) declares no methods of its own. The Interception class below
implements it, so every method call is routed through invokeMethod():
class Interception implements GroovyInterceptable {
    def definedMethod() { }
    def invokeMethod(String name, Object args) { 'invokedMethod' }
}
void testCheckInterception() {
def interception = new Interception()
assert interception.definedMethod() == 'invokedMethod'
assert interception.someMethod() == 'invokedMethod'
}
}
We cannot use default Groovy methods like println because these methods are injected into all Groovy objects,
so they will be intercepted too.
void testPOJOMetaClassInterception() {
    String invoking = 'ha'
    invoking.metaClass.invokeMethod = { String name, Object args ->
        'invoked'
    }

    assert invoking.length() == 'invoked'
    assert invoking.someMethod() == 'invoked'
}
void testPOGOMetaClassInterception() {
    Entity entity = new Entity('Hello')
    entity.metaClass.invokeMethod = { String name, Object args ->
        'invoked'
    }

    assert entity.someMethod() == 'invoked'
    assert entity.anotherMethod() == 'invoked'
}
Categories
There are situations where it is useful if a class not under control had additional
methods. In order to enable this capability, Groovy implements a feature borrowed from
Objective-C, called Categories.
Categories are implemented with so-called category classes. A category class is special in
that it needs to meet certain pre-defined rules for defining extension methods.
There are a few categories that are included in the system for adding functionality to
classes that make them more usable within the Groovy environment:
• groovy.time.TimeCategory
• groovy.servlet.ServletCategory
• groovy.xml.dom.DOMCategory
Category classes aren’t enabled by default. To use the methods defined in a category
class it is necessary to apply the scoped use method that is provided by the GDK and
available from inside every Groovy object instance:
use(TimeCategory) {
    println 1.minute.from.now
    println 10.hours.ago
    def someDate = new Date()
    println someDate - 3.months
}
The use method takes the category class as its first parameter and a closure code block
as second parameter. Inside the Closure access to the category methods is available. As
can be seen in the example above even JDK classes
like java.lang.Integer or java.util.Date can be enriched with user-defined
methods.
A category need not be directly exposed to the user code; the following will also do:
class JPACategory{
// Let's enhance JPA EntityManager without getting into the JSR committee
static void persistAll(EntityManager em , Object[] entities) { //add an
interface to save all
entities?.each { em.persist(it) }
}
}
def transactionContext = {
EntityManager em, Closure c ->
def tx = em.transaction
try {
tx.begin()
use(JPACategory) {
c()
}
tx.commit()
} catch (e) {
tx.rollback()
} finally {
//cleanup your resource here
}
}
// user code, they always forget to close resource in exception, some even
forget to commit, let's not rely on them.
EntityManager em; //probably injected
transactionContext (em) {
em.persistAll(obj1, obj2, obj3)
// let's do some logics here to make the example sensible
em.persistAll(obj2, obj4, obj6)
}
When we have a look at the groovy.time.TimeCategory class we see that the
extension methods are all declared as static methods. In fact, this is one of the
requirements that must be met by category classes for its methods to be successfully
added to a class inside the use code block:
cal.setTime(date);
cal.add(Calendar.YEAR, -duration.getYears());
cal.add(Calendar.MONTH, -duration.getMonths());
cal.add(Calendar.DAY_OF_YEAR, -duration.getDays());
cal.add(Calendar.HOUR_OF_DAY, -duration.getHours());
cal.add(Calendar.MINUTE, -duration.getMinutes());
cal.add(Calendar.SECOND, -duration.getSeconds());
cal.add(Calendar.MILLISECOND, -duration.getMillis());
return cal.getTime();
}
// ...
Another requirement is the first argument of the static method must define the type the
method is attached to once being activated. The other arguments are the normal
arguments the method will take as parameters.
Because of the parameter and static method convention, category method definitions
may be a bit less intuitive than normal method definitions. As an alternative Groovy
comes with a @Category annotation that transforms annotated classes into category
classes at compile-time.
class Distance {
def number
String toString() { "${number}m" }
}
@Category(Number)
class NumberCategory {
Distance getMeters() {
new Distance(number: this)
}
}
use (NumberCategory) {
assert 42.meters.toString() == '42m'
}
Applying the @Category annotation has the advantage of being able to use instance
methods without the target type as a first parameter. The target type class is given as an
argument to the annotation instead.
Metaclasses
As explained earlier, Metaclasses play a central role in method resolution. For every
method invocation from groovy code, Groovy will find the MetaClass for the given
object and delegate the method resolution to the metaclass
via MetaClass#invokeMethod which should not be confused
with GroovyObject#invokeMethod which happens to be a method that the metaclass
may eventually call.
class Foo {}
Custom metaclasses
You can change the metaclass of any object or class and replace with a custom
implementation of the MetaClass interface. Usually you will want to subclass one of the
existing
metaclasses MetaClassImpl , DelegatingMetaClass , ExpandoMetaClass , ProxyMet
aClass , etc. otherwise you will need to implement the complete method lookup logic.
Before using a new metaclass instance you should
call groovy.lang.MetaClass#initialize(), otherwise the metaclass may or may not behave
as expected.
Delegating metaclass
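A sketch of such a metaclass; the class name and the lower-casing/upper-casing behaviour are chosen to match the assertion below:
class Foo {
    def bar() { 'bar' }
}

class MyFooMetaClass extends DelegatingMetaClass {
    MyFooMetaClass(MetaClass delegate) { super(delegate) }

    Object invokeMethod(Object object, String methodName, Object[] args) {
        def result = super.invokeMethod(object, methodName.toLowerCase(), args)
        result instanceof String ? result.toUpperCase() : result
    }
}

def mc = new MyFooMetaClass(Foo.metaClass)
mc.initialize()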
Foo.metaClass = mc
def f = new Foo()
assert f.BAR() == "BAR" // the new metaclass routes .BAR() to .bar() and
uppercases the result
Magic package
It is possible to change the metaclass at startup time by giving the metaclass a specially
crafted (magic) class name and package name. In order to change the metaclass
for java.lang.Integer it’s enough to put a
class groovy.runtime.metaclass.java.lang.IntegerMetaClass in the classpath.
This is useful, for example, when working with frameworks if you want to make metaclass
changes before your code is executed by the framework. The general form of the magic
package is groovy.runtime.metaclass.[package].[class]MetaClass . In the
example below the [package] is java.lang and the [class] is Integer :
// file: IntegerMetaClass.groovy
package groovy.runtime.metaclass.java.lang;
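// a sketch of the metaclass itself: it answers isBiggerThanXX() calls and delegates everything else
class IntegerMetaClass extends DelegatingMetaClass {
    IntegerMetaClass(MetaClass delegate) { super(delegate) }
    IntegerMetaClass(Class theClass) { super(theClass) }

    Object invokeMethod(Object object, String name, Object[] args) {
        if (name =~ /isBiggerThan/) {
            def other = name.split('isBiggerThan')[1].toInteger()
            return object > other
        }
        return super.invokeMethod(object, name, args)
    }
}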
// File testInteger.groovy
def i = 10
assert i.isBiggerThan5()
assert !i.isBiggerThan15()
println i.isBiggerThan5()
By running that file with groovy -cp .
testInteger.groovy the IntegerMetaClass will be in the classpath and therefore it
will become the metaclass for java.lang.Integer intercepting the method calls
to isBiggerThan*() methods.
The following sections go into detail on how ExpandoMetaClass can be used in various
scenarios.
Methods
Note that the left shift operator is used to append a new method. If a public method with the same n
types is declared by the class or interface, including those inherited from superclasses and superinter
those added to the metaClass at runtime, an exception will be thrown. If you want to replace a me
the class or interface you can use the = operator.
class Book {
String title
}
Firstly, it has support for declaring a mutable property by simply assigning a value to a
property of metaClass :
class Book {
String title
}
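For example, assigning a value creates the property, and it stays mutable afterwards:
Book.metaClass.author = 'Stephen King'
def b = new Book(title: 'The Stand')
assert b.author == 'Stephen King'
b.author = 'Someone Else'
assert b.author == 'Someone Else'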
class Book {
String title
}
Book.metaClass.getAuthor << {-> "Stephen King" }
class Book {
String title
}
Constructors
class Book {
String title
}
Book.metaClass.constructor << { String title -> new Book(title:title) }
Static Methods
Static methods can be added using the same technique as instance methods with the
addition of the static qualifier before the method name.
class Book {
String title
}
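For example:
Book.metaClass.static.create << { String title -> new Book(title: title) }
def b = Book.create('The Stand')
assert b.title == 'The Stand'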
class Person {
String name
}
class MortgageLender {
def borrowMoney() {
"buy house"
}
}
def lender = new MortgageLender()
Person.metaClass.buyHouse = lender.&borrowMoney
Since Groovy allows you to use Strings as property names this in turns allows you to
dynamically create method and property names at runtime. To create a method with a
dynamic name simply use the language feature of reference property names as strings.
class Person {
String name = "Fred"
}
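// a sketch (names assumed): register a method whose name is built from a String at runtime
def methodName = 'changeNameTo' + 'Bob'
Person.metaClass."$methodName" = { -> delegate.name = 'Bob' }
def p = new Person()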
p.changeNameToBob()
One application of dynamic method names can be found in the Grails web application
framework. The concept of "dynamic codecs" is implemented by using dynamic method
names.
HTMLCodec Class
class HTMLCodec {
static encode = { theTarget ->
HtmlUtils.htmlEscape(theTarget.toString())
}
At runtime it is often useful to know what other methods or properties exist at the time
the method is executed. ExpandoMetaClass provides the following methods as of this
writing:
• getMetaMethod
• hasMetaMethod
• getMetaProperty
• hasMetaProperty
Why can’t you just use reflection? Well because Groovy is different, it has the methods
that are "real" methods and methods that are available only at runtime. These are
sometimes (but not always) represented as MetaMethods. The MetaMethods tell you
what methods are available at runtime, thus your code can adapt.
GroovyObject Methods
class Stuff {
def invokeMe() { "foo" }
}
class Person {
String name = "Fred"
}
class Stuff {
static invokeMe() { "foo" }
}
Extending Interfaces
ExpandoMetaClass.enableGlobally()
List.metaClass.sizeDoubled = {-> delegate.size() * 2 }
def list = []
list << 1
list << 2
assert 4 == list.sizeDoubled()
Extension modules
Extending existing classes
An extension module allows you to add new methods to existing classes, including
classes which are precompiled, like classes from the JDK. Those new methods, unlike
those defined through a metaclass or using a category, are available globally. For
example, when you write:
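// getText(String charset) is not defined by java.io.File; it is contributed by the
// ResourceGroovyMethods extension class referenced below (file path assumed)
def contents = new File('/path/to/file.txt').getText('utf-8')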
ResourceGroovyMethods.java
Instance methods
To add an instance method to an existing class, you need to create an extension class.
For example, let’s say you want to add a maxRetries method on Integer which
accepts a closure and executes it at most n times until no exception is thrown. To do that,
you only need to write the following:
MaxRetriesExtension.groovy
class MaxRetriesExtension {
static void maxRetries(Integer self, Closure code) {
assert self >= 0
int retries = self
Throwable e = null
while (retries > 0) {
try {
code.call()
break
} catch (Throwable err) {
e = err
retries--
}
}
if (retries == 0 && e) {
throw e
}
}
}
The extension class
First argument of the static method corresponds to the receiver of the message, that is to say
the extended instance
Then, after having declared your extension class, you can call it this way:
int i=0
5.maxRetries {
i++
}
assert i == 1
i=0
try {
5.maxRetries {
i++
throw new RuntimeException("oops")
}
} catch (RuntimeException e) {
assert i == 5
}
Static methods
It is also possible to add static methods to a class. In that case, the static method needs to
be defined in its own file. Static and instance extension methods cannot be present in
the same class.
StaticStringExtension.groovy
class StaticStringExtension {
static String greeting(String self) {
'Hello, world!'
}
}
The static extension class
First argument of the static method corresponds to the class being extended and is unused
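Once the extension is registered through a module descriptor (see below), the method is called on the class itself:
assert String.greeting() == 'Hello, world!'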
Module descriptor
For Groovy to be able to load your extension methods, you must declare your extension
helper classes. You must create a file
named org.codehaus.groovy.runtime.ExtensionModule into the META-
INF/groovy directory:
org.codehaus.groovy.runtime.ExtensionModule
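The descriptor is a small properties-style file with four entries; the class names below follow the examples above and their location is an assumption:
moduleName=my-extension-module
moduleVersion=1.0
extensionClasses=MaxRetriesExtension
staticExtensionClasses=StaticStringExtension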
In this section, we will start with explaining the various compile-time transformations
that are bundled with the Groovy distribution. In a subsequent section, we will describe
how you can implement your own AST transformations and what are the disadvantages
of this technique.
Available AST transformations
Groovy comes with various AST transformations covering different needs: reducing
boilerplate (code generation), implementing design patterns (delegation, …), logging,
declarative concurrency, cloning, safer scripting, tweaking the compilation,
implementing Swing patterns, testing and eventually managing dependencies. If none of
those AST transformations cover your needs, you can still implement your own, as shown
in section Developing your own AST transformations.
• global AST transformations are applied transparently, globally, as soon as they are
found on compile classpath
• local AST transformations are applied by annotating the source code with markers.
Unlike global AST transformations, local AST transformations may support
parameters.
Groovy doesn’t ship with any global AST transformation, but you can find a list of local
AST transformations available for you to use in your code here:
@groovy.transform.ToString
import groovy.transform.ToString
@ToString
class Person {
String firstName
String lastName
}
With this definition, then the following assertion passes, meaning that
a toString method taking the field values from the class and printing them out has
been generated:
def p = new
Person(firstName: 'Jack',
lastName: 'Nicholson')
assert p.toString() ==
'Person(Nicholson)'
generated }
toString.
def p = new
Person(firstName: 'Jack',
lastName: 'Nicholson')
assert p.toString() ==
'Person(firstName:Jack,
lastName:Nicholson)'
def p = new
Person(firstName: 'Jack',
lastName: 'Nicholson')
p.test()
assert p.toString() ==
'Person(Jack, Nicholson,
42)'
assert bono.toString() ==
'BandMember(bandName:U2,
name:Bono)'
Attribute Default Description Example
value
assert bono.toString() ==
'BandMember(bandName:U2,
name:Bono)'
def p = new
Person(firstName: 'Jack')
assert p.toString() ==
'Person(Jack)'
def p = new
Person(firstName: 'Jack')
assert p.toString() ==
'acme.Person(firstName:Jack
, lastName:Nicholson)'
@groovy.transform.EqualsAndHashCode
import groovy.transform.EqualsAndHashCode
@EqualsAndHashCode
class Person {
String firstName
String lastName
}
def p1 = new Person(firstName: 'Jack', lastName: 'Nicholson')
def p2 = new Person(firstName: 'Jack', lastName: 'Nicholson')
assert p1 == p2
assert p1.hashCode() == p2.hashCode()
There are several options available to tweak the behavior of @EqualsAndHashCode :
assert p1==p2
assert p1.hashCode() ==
p2.hashCode()
assert p1==p2
assert p1.hashCode() ==
p2.hashCode()
@EqualsAndHashCode(cache=true)
@Immutable
class Person {
SlowHashCode slowHashCode = new
SlowHashCode()
}
def start =
System.currentTimeMillis()
p.hashCode()
assert System.currentTimeMillis() -
start < 100
@EqualsAndHashCode(callSuper=true)
class Person extends Living {
String firstName
String lastName
}
Attribute Default Description Example
value
assert p1!=p2
assert p1.hashCode() !=
p2.hashCode()
assert p1 == p2
assert p1 != p3
assert p1 == p2
assert p1.hashCode() ==
p2.hashCode()
assert p1 != p3
assert p1.hashCode() !=
p3.hashCode()
assert p1 != p2
assert p1.hashCode() !=
p2.hashCode()
@groovy.transform.TupleConstructor
Implementation Details
Normally you don’t need to understand the implementation details of the generated
constructor(s); you just use them in the normal way. However, if you want to add
multiple constructors, understand Java integration options or meet requirements of
some dependency injection frameworks, then some details are useful.
As previously mentioned, the generated constructor has default values applied. In later
compilation phases, the Groovy compiler’s standard default value processing behavior is
then applied. The end result is that multiple constructors are placed within the bytecode
of your class. This provides a well understood semantics and is also useful for Java
integration purposes. As an example, the following code will generate 3 constructors:
import groovy.transform.TupleConstructor
@TupleConstructor
class Person {
String firstName
String lastName
}
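The three generated constructors in action:
def p1 = new Person()
def p2 = new Person('Jack')
def p3 = new Person('Jack', 'Nicholson')
assert p3.firstName == 'Jack'
assert p3.lastName == 'Nicholson'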
The other constructors are generated by taking the properties in the order they are
defined. Groovy will generate as many constructors as there are properties (or fields,
depending on the options).
Setting the defaults attribute (see the available configuration options table) to false ,
disables the normal default values behavior which means:
Immutability support
assert e.message.contains
('Could not find matching
constructor')
}
try {
def p2 = new Person('Jack',
'Nicholson')
} catch(e) {
// will fail because
properties are not included
}
assert p1.firstName ==
p2.firstName
assert p1.lastName == p2.lastName
assert p1.toString() == 'Jack
Nicholson: Actor'
assert p1.toString() ==
p2.toString()
assert p1.firstName ==
p2.firstName
assert p1.lastName == p2.lastName
assert p1.toString() == 'Jack
Nicholson: null'
assert p2.toString() == 'Jack
Nicholson: Actor'
generation
@TupleConstructor(includeSuperFields=true)
class Person extends Base {
String firstName
String lastName
public String toString() {
"$firstName $lastName:
${occupation()}"
}
}
assert p1.firstName ==
p2.firstName
assert p1.lastName == p2.lastName
assert p1.toString() == 'Jack
Nicholson: Actor'
assert p2.toString() ==
p1.toString()
assert p1.firstName ==
p2.firstName
assert p1.lastName == p2.lastName
assert p1.toString() == 'Jack
Nicholson: null'
assert p2.toString() == 'Jack
Nicholson: actor'
instead call setters if they exist. It's usually deemed bad style from within a constructor to call setters
that can be overridden. It's your responsibility to avoid such bad style.
Setting the defaults annotation attribute to false and the force annotation
attribute to true allows multiple tuple constructors to be created by using different
customization options for the different cases (provided each case has a different type
signature) as shown in the following example:
class Named {
String name
}
import groovy.transform.*
@ToString
@MapConstructor
class Person {
String firstName
String lastName
}
import groovy.transform.Canonical
@Canonical
class Person {
String firstName
String lastName
}
def p1 = new Person(firstName: 'Jack', lastName: 'Nicholson')
assert p1.toString() == 'Person(Jack, Nicholson)' // Effect of @ToString
import groovy.transform.Canonical
@Canonical(excludes=['lastName'])
class Person {
String firstName
String lastName
}
def p1 = new Person(firstName: 'Jack', lastName: 'Nicholson')
assert p1.toString() == 'Person(Jack)' // Effect of
@ToString(excludes=['lastName'])
@groovy.transform.InheritConstructors
import groovy.transform.InheritConstructors
@InheritConstructors
class CustomException extends Exception {}
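For example, the String constructor is inherited from Exception:
def e = new CustomException('A custom message')
assert e.message == 'A custom message'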
// Java 7 only
// new CustomException("A custom message", new RuntimeException(), false,
true)
The @InheritConstructor AST transformation supports the following configuration
options:
Attribute Default value Description Example
@groovy.lang.Category
class TripleCategory {
public static Integer triple(Integer self) {
3*self
}
}
use (TripleCategory) {
assert 9 == 3.triple()
}
The @Category transformation lets you write the same using an instance-style class,
rather than a static class style. This removes the need for having the first argument of
each method being the receiver. The category can be written like this:
@Category(Integer)
class TripleCategory {
public Integer triple() { 3*this }
}
use (TripleCategory) {
assert 9 == 3.triple()
}
Note that the mixed in class can be referenced using this instead. It’s also worth noting
that using instance fields in a category class is inherently unsafe: categories are not
stateful (like traits).
@groovy.transform.IndexedProperty
class SomeBean {
@IndexedProperty String[] someArray = new String[2]
@IndexedProperty List someList = []
}
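The generated indexed getter and setter in action:
def bean = new SomeBean()
bean.setSomeArray(0, 'value')
assert bean.getSomeArray(0) == 'value'
assert bean.someArray[0] == 'value'    // the plain property is still available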
The @Lazy AST transformation implements lazy initialization of fields. For example, the
following code:
class SomeBean {
@Lazy LinkedList myField
}
will produce the following code:
List $myField
List getMyField() {
if ($myField!=null) { return $myField }
else {
$myField = new LinkedList()
return $myField
}
}
The default value which is used to initialize the field is the default constructor of the
declaration type. It is possible to define a default value by using a closure on the right
hand side of the property assignment, as in the following example:
class SomeBean {
@Lazy LinkedList myField = { ['a','b','c']}()
}
In that case, the generated code looks like the following:
List $myField
List getMyField() {
if ($myField!=null) { return $myField }
else {
$myField = { ['a','b','c']}()
return $myField
}
}
If the field is declared volatile then initialization will be synchronized using the double-
checked locking pattern.
Using the soft=true parameter, the helper field will use a SoftReference instead,
providing a simple way to implement caching. In that case, if the garbage collector
decides to collect the reference, initialization will occur the next time the field is
accessed.
@groovy.lang.Newify
The @Newify transformation brings alternative syntaxes for constructing objects, either using the
Python style (no new keyword):
@Newify([Tree,Leaf])
class TreeBuilder {
Tree tree = Tree(Leaf('A'),Leaf('B'),Tree(Leaf('C')))
}
• or using the Ruby style:
@Newify([Tree,Leaf])
class TreeBuilder {
Tree tree =
Tree.new(Leaf.new('A'),Leaf.new('B'),Tree.new(Leaf.new('C')))
}
The Ruby version can be disabled by setting the auto flag to false .
@groovy.transform.Sortable
import groovy.transform.Sortable
def people = [
new Person(first: 'Johnny', last: 'Depp', born: 1963),
new Person(first: 'Keira', last: 'Knightley', born: 1985),
new Person(first: 'Geoffrey', last: 'Rush', born: 1951),
new Person(first: 'Orlando', last: 'Bloom', born: 1977)
]
def people = [
new Person(first: 'Ben', last: 'Affleck', born: 1972),
new Person(first: 'Ben', last: 'Stiller', born: 1965)
]
def finalists = [
new Player('Serena'),
new Player('Venus'),
new Player('CoCo'),
new Player('Mirjana')
]
Attribute | Default value | Description | Example
assert finalists.sort()*.name == ['CoCo', 'Venus', 'Serena', 'Mirjana']
def finalists = [
    new Player('USA', 'Serena'),
    new Player('USA', 'Venus'),
    new Player('USA', 'CoCo'),
    new Player('Croatian', 'Mirjana')
]
assert finalists.sort()*.name == ['Mirjana', 'CoCo', 'Serena', 'Venus']
def people = [
    new Citizen('Bob', 'Italy'),
    new Citizen('Cathy', 'Hungary'),
    new Citizen('Cathy', 'Egypt'),
    new Citizen('Bob', 'Germany'),
    new Citizen('Alan', 'France')
]
Attribute | Default value | Description | Example
assert people.sort()*.name == ['Alan', 'Bob', 'Bob', 'Cathy', 'Cathy']
assert people.sort()*.country == ['France', 'Germany', 'Italy', 'Egypt', 'Hungary']
@groovy.transform.builder.Builder
The @Builder AST transformation is used to help write classes that can be created
using fluent api calls. The transform supports multiple building strategies to cover a
range of cases and there are a number of configuration options to customize the building
process. If you’re an AST hacker, you can also define your own strategy class. The
following table lists the available strategies that are bundled with Groovy and the
configuration options each strategy supports.
import groovy.transform.builder.*
@Builder(builderStrategy=SimpleStrategy)
class Person {
String first
String last
Integer born
}
Then, just call the setters in a chained fashion as shown here:
def p1 = new Person().setFirst('Johnny').setLast('Depp').setBorn(1963)
assert "$p1.first $p1.last" == 'Johnny Depp'
For each property, a generated setter will be created which looks like this:
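A sketch of what such a generated setter looks like for the first property (the generated
method returns this so that calls can be chained; the exact generated form may differ slightly):
Person setFirst(String first) {
    this.first = first
    return this
}
The prefix attribute lets you change, or remove, the set prefix used for those generated methods: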
import groovy.transform.builder.*
@Builder(builderStrategy=SimpleStrategy, prefix="")
class Person {
String first
String last
Integer born
}
And calling the chained setters would look like this:
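With an empty prefix, the generated methods carry the bare property names; a minimal usage sketch:
def p = new Person().first('Johnny').last('Depp').born(1963)
assert "$p.first $p.last" == 'Johnny Depp'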
The annotation attribute useSetters can be used if you have a setter which you want
called as part of the construction process. See the JavaDoc for details.
The annotation
attributes builderClassName , buildMethodName , builderMethodName , forClass an
d includeSuperProperties are not supported for this strategy.
Groovy already has built-in building mechanisms, such as named-argument constructors and
the constructors provided by @Canonical . Don’t rush to use @Builder if those built-in
mechanisms meet your needs.
To use the ExternalStrategy , create and annotate a Groovy builder class using
the @Builder annotation, specify the class the builder is for using forClass and
indicate use of the ExternalStrategy . Suppose you have the following class you would
like a builder for:
class Person {
String first
String last
int born
}
you explicitly create and use your builder class as follows:
import groovy.transform.builder.*
@Builder(builderStrategy=ExternalStrategy, forClass=Person)
class PersonBuilder { }
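A minimal usage sketch, assuming the default ExternalStrategy naming (property-named
setters plus a build() method):
def p = new PersonBuilder().first('Johnny').last('Depp').born(1963).build()
assert "$p.first $p.last" == 'Johnny Depp'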
import groovy.transform.builder.*
@Builder(builderStrategy=ExternalStrategy,
forClass=javax.swing.DefaultButtonModel)
class ButtonModelBuilder {}
import groovy.transform.builder.*
import groovy.transform.Canonical
@Canonical
class Person {
String first
String last
int born
}
@Builder(builderStrategy=ExternalStrategy, forClass=Person,
includes=['first', 'last'], buildMethodName='create', prefix='with')
class PersonBuilder { }
DefaultStrategy
import groovy.transform.builder.Builder
@Builder
class Person {
String firstName
String lastName
int age
}
def person = Person.builder().firstName("Robert").lastName("Lewandowski").age(21).build()
assert person.firstName == "Robert"
assert person.lastName == "Lewandowski"
assert person.age == 21
If you want, you can customize various aspects of the building process using
the builderClassName , buildMethodName , builderMethodName , prefix , includes
and excludes annotation attributes, some of which are used in the example here:
import groovy.transform.builder.Builder
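// assumed class definition matching the calls below; the attribute values are inferred
// from the maker()/withXxx()/make() usage and may differ from the original example
@Builder(builderMethodName='maker', buildMethodName='make', prefix='with')
class Person {
    String firstName
    String lastName
    int age
}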
def p = Person.maker().withFirstName("Robert").withLastName("Lewandowski").make()
assert "$p.firstName $p.lastName" == "Robert Lewandowski"
This strategy also supports annotating static methods and constructors. In this case, the
static method or constructor parameters become the properties to use for building
purposes and in the case of static methods, the return type of the method becomes the
target class being built. If you have more than one @Builder annotation used within a
class (at either the class, method or constructor positions) then it is up to you to ensure
that the generated helper classes and factory methods have unique names (i.e. no more
than one can use the default name values). Here is an example highlighting method and
constructor usage (and also illustrating the renaming required for unique names).
import groovy.transform.builder.*
import groovy.transform.*
@ToString
@Builder
class Person {
String first, last
int born
Person(){}
@Builder(builderClassName='MovieBuilder',
builderMethodName='byRoleBuilder')
Person(String roleName) {
if (roleName == 'Jack Sparrow') {
this.first = 'Johnny'; this.last = 'Depp'; this.born = 1963
}
}
@Builder(builderClassName='NameBuilder', builderMethodName='nameBuilder',
prefix='having', buildMethodName='fullName')
static String join(String first, String last) {
first + ' ' + last
}
@Builder(builderClassName='SplitBuilder',
builderMethodName='splitBuilder')
static Person split(String name, int year) {
def parts = name.split(' ')
new Person(first: parts[0], last: parts[1], born: year)
}
}
assert Person.splitBuilder().name("Johnny Depp").year(1963).build().toString() == 'Person(Johnny, Depp, 1963)'
assert Person.byRoleBuilder().roleName("Jack Sparrow").build().toString() == 'Person(Johnny, Depp, 1963)'
assert Person.nameBuilder().havingFirst('Johnny').havingLast('Depp').fullName() == 'Johnny Depp'
assert Person.builder().first("Johnny").last('Depp').born(1963).build().toString() == 'Person(Johnny, Depp, 1963)'
The forClass annotation attribute is not supported for this strategy.
InitializerStrategy
import groovy.transform.builder.*
import groovy.transform.*
@ToString
@Builder(builderStrategy=InitializerStrategy)
class Person {
String firstName
String lastName
int age
}
Your class will be locked down to have a single public constructor taking a "fully set"
initializer. It will also have a factory method to create the initializer. These are used as
follows:
@CompileStatic
def firstLastAge() {
    assert new Person(Person.createInitializer().firstName("John").lastName("Smith").age(21)).toString() == 'Person(John, Smith, 21)'
}
firstLastAge()
Any attempt to use the initializer which doesn’t involve setting all the properties
(though order is not important) will result in a compilation error. If you don’t need this
level of strictness, you don’t need to use @CompileStatic .
@Builder(builderStrategy=InitializerStrategy)
@Immutable
@VisibilityOptions(PRIVATE)
class Person {
String first
String last
int born
}
@CompileStatic
def createFirstLastBorn() {
    def p = new Person(Person.createInitializer().first('Johnny').last('Depp').born(1963))
assert "$p.first $p.last $p.born" == 'Johnny Depp 1963'
}
createFirstLastBorn()
The annotation attribute useSetters can be used if you have a setter which you want
called as part of the construction process. See the JavaDoc for details.
This strategy also supports annotating static methods and constructors. In this case, the
static method or constructor parameters become the properties to use for building
purposes and in the case of static methods, the return type of the method becomes the
target class being built. If you have more than one @Builder annotation used within a
class (at either the class, method or constructor positions) then it is up to you to ensure
that the generated helper classes and factory methods have unique names (i.e. no more
than one can use the default name values). For an example of method and constructor
usage but using the DefaultStrategy strategy, consult that strategy’s documentation.
@groovy.transform.AutoImplement
The @AutoImplement AST transformation supplies dummy implementations for any abstract
methods inherited from superclasses or interfaces. The dummy implementation can be:
• essentially empty (exactly true for void methods; for methods with a return type, the
default value for that type is returned)
• a statement that throws a specified exception (with optional message)
• some user supplied code
The first example illustrates the default case. Our class is annotated
with @AutoImplement , has a superclass and a single interface as can be seen here:
import groovy.transform.AutoImplement
@AutoImplement
class MyNames extends AbstractList<String> implements Closeable { }
A void close() method from the Closeable interface is supplied and left empty.
Implementations are also supplied for the three abstract methods inherited from the
superclass: the get , addAll and size methods have return types
of String , boolean and int respectively, with default return values null , false and 0 .
The generated size method, for example, looks like this:
int size() {
    return 0
}
We can exercise the class and check one of those defaults with an assertion such as the following:
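assert new MyNames().size() == 0 // a minimal check; assumes the class above compiles as shown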
The second example illustrates the simplest exception case. Our class is annotated
with @AutoImplement , has a superclass and an annotation attribute indicates that
an IOException should be thrown if any of our "dummy" methods are called. Here is
the class definition:
@AutoImplement(exception=IOException)
class MyWriter extends Writer { }
We can use the class (and check the expected exception is thrown for one of the
methods) using the following code:
shouldFail(IOException) {
new MyWriter().flush()
}
It is also worthwhile examining the equivalent generated code, where three void
methods have been provided, all of which throw the supplied exception:
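An illustrative sketch of those generated overrides (the method signatures follow the
abstract methods of java.io.Writer ; the exact generated form may differ):
class MyWriter extends Writer {
    void close() {
        throw new IOException()
    }
    void flush() {
        throw new IOException()
    }
    void write(char[] cbuf, int off, int len) {
        throw new IOException()
    }
}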
The third example illustrates the exception case with a supplied message. Our class is
annotated with @AutoImplement , implements an interface, and has annotation
attributes to indicate that an UnsupportedOperationException with Not supported
by MyIterator as the message should be thrown for any supplied methods. Here is the
class definition:
@AutoImplement(exception=UnsupportedOperationException, message='Not supported by MyIterator')
class MyIterator implements Iterator<String> { }
We can use the class (and check the expected exception is thrown and has the correct
message for one of the methods) using the following code:
def ex = shouldFail(UnsupportedOperationException) {
new MyIterator().hasNext()
}
assert ex.message == 'Not supported by MyIterator'
It is also worthwhile examining the equivalent generated code, where the supplied
methods all throw the configured exception:
boolean hasNext() {
    throw new UnsupportedOperationException('Not supported by MyIterator')
}
String next() {
    throw new UnsupportedOperationException('Not supported by MyIterator')
}
}
The fourth example illustrates the case of user-supplied code. Our class is annotated
with @AutoImplement , implements an interface, has an explicitly
overridden hasNext method, and has an annotation attribute containing the code to
supply for any remaining unimplemented methods. Here is the class definition:
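A sketch of a class definition consistent with the usage below (the code attribute supplies
the body used for any otherwise unimplemented methods; the exact original example may differ):
import groovy.transform.AutoImplement

@AutoImplement(code = { throw new UnsupportedOperationException('Should never be called but was called on ' + new Date()) })
class EmptyIterator implements Iterator<String> {
    boolean hasNext() { false }
}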
def ex = shouldFail(UnsupportedOperationException) {
new EmptyIterator().next()
}
assert ex.message.startsWith('Should never be called but was called on ')
It is also worthwhile examining the equivalent generated code where the next method
has been supplied:
boolean hasNext() {
    false
}
String next() {
    throw new UnsupportedOperationException('Should never be called but was called on ' + new Date())
}
@groovy.transform.BaseScript
@BaseScript is used within scripts to indicate that the script should extend from a
custom script base class rather than groovy.lang.Script . See the documentation
for domain specific languages for further details.
@groovy.lang.Delegate
class Event {
@Delegate Date when
String title
}
The when property is annotated with @Delegate , meaning that the Event class will
delegate calls to Date methods to the when property. In this case, the generated code
looks like this:
class Event {
Date when
String title
boolean before(Date other) {
when.before(other)
}
// ...
}
Then you can call the before method, for example, directly on the Event class:
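A minimal usage sketch (the property values are illustrative):
def ev = new Event(title: 'Groovy conference', when: new Date() - 1)
assert ev.before(new Date())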
class Test {
private int robinCount = 0
private List<List> items = [[0], [1], [2]]
@Delegate
List getRoundRobinList() {
items[robinCount++ % items.size()]
}
@Delegate(includeTypes=AppendStringSelector)
StringBuilder sb2 = new StringBuilder()
@groovy.transform.Immutable
import groovy.transform.Immutable
@Immutable
class Point {
int x
int y
}
One of the requirements for an immutable class is that there be no way to modify any
state information within the class. One way to achieve this is to use immutable
classes for each property, or alternatively to perform special coding, such as defensive
copy-in and defensive copy-out, for any mutable properties within the constructors and
property getters.
Between @ImmutableBase , @MapConstructor and @TupleConstructor , properties are
either identified as immutable, or the special coding for numerous known cases is
handled automatically. Various mechanisms are provided for you to extend the set of
property types which are allowed. See @ImmutableOptions and @KnownImmutable for
details.
The results of applying @Immutable to a class are pretty similar to those of applying
the @Canonical meta-annotation but the generated class will have extra logic to handle
immutability. You will observe this by, for instance, trying to modify a property which
will result in a ReadOnlyPropertyException being thrown since the backing field for
the property will have been automatically made final.
@groovy.transform.ImmutableBase
Immutable classes generated with @ImmutableBase are automatically made final. Also,
the type of each property is checked and various checks are made on the class, for
example, public instance fields currently aren’t allowed. It also generates
a copyWith method if desired.
@groovy.transform.PropertyOptions
@groovy.transform.VisibilityOptions
This annotation allows you to specify a custom visibility for a construct generated by
another transformation. It is ignored by the main Groovy compiler but is referenced by
other transformations like @TupleConstructor , @MapConstructor ,
and @NamedVariant .
@groovy.transform.ImmutableOptions
@Immutable(knownImmutableClasses=[Point])
class Triangle {
    Point a, b, c
}
@Immutable(knownImmutables=['a', 'b', 'c'])
class Triangle {
    Point a, b, c
}
If you deem a type as immutable and it isn’t one of the ones automatically handled, then
it is up to you to correctly code that class to ensure immutability.
@groovy.transform.KnownImmutable
The @KnownImmutable annotation isn’t actually one that triggers any AST
transformations. It is simply a marker annotation. You can annotate your classes with
the annotation (including Java classes) and they will be recognized as acceptable types
for members within an immutable class. This saves you having to explicitly use
the knownImmutables or knownImmutableClasses annotation attributes
from @ImmutableOptions .
@groovy.transform.Memoized
The @Memoized AST transformation simplifies caching the result of a method call.
Consider a method that performs a long computation and returns a time-based result;
without caching, two calls with the same argument return different values:
def x = longComputation(1)
def y = longComputation(1)
assert x!=y
Adding @Memoized changes the semantics of the method by adding caching, based on
the parameters:
@Memoized
long longComputation(int seed) {
// slow computation
Thread.sleep(100*seed)
System.nanoTime()
}
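With the annotation in place, two calls with the same argument now return the same cached value:
def x = longComputation(1)
def y = longComputation(1)
assert x == y // the second call returns the cached result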
@groovy.transform.TailRecursive
The @TailRecursive annotation can be used to automatically transform a recursive
call at the end of a method into an equivalent iterative version of the same code. This
avoids stack overflow due to too many recursive calls. Below is an example of use when
calculating factorial:
import groovy.transform.CompileStatic
import groovy.transform.TailRecursive
@CompileStatic
class Factorial {
@TailRecursive
static BigInteger factorial( BigInteger i, BigInteger product = 1) {
if( i == 1) {
return product
}
return factorial(i-1, product*i)
}
}
assert Factorial.factorial(1) == 1
assert Factorial.factorial(3) == 6
assert Factorial.factorial(5) == 120
assert Factorial.factorial(50000).toString().size() == 213237 // Big number
and no Stack Overflow
Currently, the annotation will only work for self-recursive method calls, i.e. a single
recursive call to the exact same method again. Consider using Closures
and trampoline() if you have a scenario involving simple mutual recursion. Also note
that only non-void methods are currently handled (void calls will result in a compilation
error).
Currently, some forms of method overloading can trick the compiler, so some non-tail recursive calls
may incorrectly be treated as tail recursive.
@groovy.lang.Singleton
The @Singleton annotation can be used to implement the singleton design pattern on a
class. The singleton instance is defined eagerly by default, using class initialization, or
lazily, in which case the field is initialized using double checked locking.
@Singleton
class GreetingService {
String greeting(String name) { "Hello, $name!" }
}
assert GreetingService.instance.greeting('Bob') == 'Hello, Bob!'
By default, the singleton is created eagerly when the class is initialized and available
through the instance property. It is possible to change the name of the singleton using
the property parameter:
@Singleton(property='theOne')
class GreetingService {
String greeting(String name) { "Hello, $name!" }
}
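The singleton is then reachable through the renamed property; a minimal usage sketch:
assert GreetingService.theOne.greeting('Bob') == 'Hello, Bob!'
The next example combines lazy initialization ( lazy=true ) with strict=false , which
allows a custom constructor to be defined: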
class Collaborator {
public static boolean init = false
}
@Singleton(lazy=true,strict=false)
class GreetingService {
static void init() {}
GreetingService() {
Collaborator.init = true
}
String greeting(String name) { "Hello, $name!" }
}
GreetingService.init() // make sure class is initialized
assert Collaborator.init == false
GreetingService.instance
assert Collaborator.init == true
assert GreetingService.instance.greeting('Bob') == 'Hello, Bob!'
In this example, we also set the strict parameter to false, which allows us to define
our own constructor.
@groovy.lang.Mixin
Logging improvements
Groovy provides AST transformations that help integrate with the most widely used
logging frameworks. Note that annotating a class with one of those annotations doesn’t
remove the need to have the corresponding logging framework on the classpath.
Each of those annotations accepts two optional attributes:
• value (defaults to log ) is the name of the generated logger field
• category (defaults to the class name) is the name of the logger category
@groovy.util.logging.Log
The first logging AST transformation available is the @Log annotation which relies on
the JDK logging framework. Writing:
@groovy.util.logging.Log
class Greeter {
void greet() {
log.info 'Called greeter'
println 'Hello, world!'
}
}
is equivalent to writing:
import java.util.logging.Level
import java.util.logging.Logger
class Greeter {
private static final Logger log = Logger.getLogger(Greeter.name)
void greet() {
if (log.isLoggable(Level.INFO)) {
log.info 'Called greeter'
}
println 'Hello, world!'
}
}
@groovy.util.logging.Commons
@groovy.util.logging.Commons
class Greeter {
void greet() {
log.debug 'Called greeter'
println 'Hello, world!'
}
}
is equivalent to writing:
import org.apache.commons.logging.LogFactory
import org.apache.commons.logging.Log
class Greeter {
private static final Log log = LogFactory.getLog(Greeter)
void greet() {
if (log.isDebugEnabled()) {
log.debug 'Called greeter'
}
println 'Hello, world!'
}
}
@groovy.util.logging.Log4j
Groovy supports the Apache Log4j 1.x framework thanks to the @Log4j annotation.
Writing:
@groovy.util.logging.Log4j
class Greeter {
void greet() {
log.debug 'Called greeter'
println 'Hello, world!'
}
}
is equivalent to writing:
import org.apache.log4j.Logger
class Greeter {
private static final Logger log = Logger.getLogger(Greeter)
void greet() {
if (log.isDebugEnabled()) {
log.debug 'Called greeter'
}
println 'Hello, world!'
}
}
@groovy.util.logging.Log4j2
Groovy supports the Apache Log4j 2.x framework thanks to the @Log4j2 annotation.
Writing:
@groovy.util.logging.Log4j2
class Greeter {
void greet() {
log.debug 'Called greeter'
println 'Hello, world!'
}
}
is equivalent to writing:
import org.apache.logging.log4j.LogManager
import org.apache.logging.log4j.Logger
class Greeter {
private static final Logger log = LogManager.getLogger(Greeter)
void greet() {
if (log.isDebugEnabled()) {
log.debug 'Called greeter'
}
println 'Hello, world!'
}
}
@groovy.util.logging.Slf4j
Groovy supports the Simple Logging Facade for Java (SLF4J) framework thanks to
the @Slf4j annotation. Writing:
@groovy.util.logging.Slf4j
class Greeter {
void greet() {
log.debug 'Called greeter'
println 'Hello, world!'
}
}
is equivalent to writing:
import org.slf4j.LoggerFactory
import org.slf4j.Logger
class Greeter {
private static final Logger log = LoggerFactory.getLogger(Greeter)
void greet() {
if (log.isDebugEnabled()) {
log.debug 'Called greeter'
}
println 'Hello, world!'
}
}
Declarative concurrency
The Groovy language provides a set of annotations aimed at simplifying common
concurrency patterns in a declarative approach.
@groovy.transform.Synchronized
import groovy.transform.Synchronized
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
class Counter {
int cpt
@Synchronized
int incrementAndGet() {
cpt++
}
int get() {
cpt
}
}
Writing this is equivalent to creating a lock object and wrapping the whole method into
a synchronized block:
class Counter {
int cpt
private final Object $lock = new Object()
int incrementAndGet() {
synchronized($lock) {
cpt++
}
}
int get() {
cpt
}
}
By default, @Synchronized creates a field named $lock (or $LOCK for a static method)
but you can make it use any field you want by specifying the value attribute, like in the
following example:
import groovy.transform.Synchronized
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
class Counter {
int cpt
private final Object myLock = new Object()
@Synchronized('myLock')
int incrementAndGet() {
cpt++
}
int get() {
cpt
}
}
@groovy.transform.WithReadLock and @groovy.transform.WithWriteLock
import groovy.transform.WithReadLock
import groovy.transform.WithWriteLock
class Counters {
public final Map<String,Integer> map = [:].withDefault { 0 }
@WithReadLock
int get(String id) {
map.get(id)
}
@WithWriteLock
void add(String id, int num) {
Thread.sleep(200) // emulate long computation
map.put(id, map.get(id)+num)
}
}
You can also specify the name of an existing ReentrantReadWriteLock field to use instead of the generated one:
import groovy.transform.WithReadLock
import groovy.transform.WithWriteLock
import java.util.concurrent.locks.ReentrantReadWriteLock
class Counters {
public final Map<String,Integer> map = [:].withDefault { 0 }
private final ReentrantReadWriteLock customLock = new
ReentrantReadWriteLock()
@WithReadLock('customLock')
int get(String id) {
map.get(id)
}
@WithWriteLock('customLock')
void add(String id, int num) {
Thread.sleep(200) // emulate long computation
map.put(id, map.get(id)+num)
}
}
For further details, see the GroovyDoc of those annotations.
@groovy.transform.AutoClone
import groovy.transform.AutoClone
@AutoClone
class Book {
String isbn
String title
List<String> authors
Date publicationDate
}
is equivalent to a class that implements Cloneable and provides a clone() method which
also clones the Cloneable properties ( authors and publicationDate in this case).
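A sketch of the kind of code generated for the default CLONE style (the exact generated
code may differ in detail):
class Book implements Cloneable {
    String isbn
    String title
    List<String> authors
    Date publicationDate

    Object clone() throws CloneNotSupportedException {
        def result = super.clone()
        // properties that are themselves Cloneable are cloned too
        result.authors = authors instanceof Cloneable ? authors.clone() : authors
        result.publicationDate = publicationDate?.clone()
        result
    }
}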
@AutoClone(style=AutoCloneStyle.SIMPLE, includeFields=true)
class Book {
    String isbn
    String title
    List authors
    protected Date publicationDate
}
@groovy.transform.AutoExternalize
import groovy.transform.AutoExternalize
@AutoExternalize
class Book {
String isbn
String title
float price
}
will be converted into a class implementing java.io.Externalizable with generated
writeExternal and readExternal methods.
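A sketch of the generated result (the exact generated code may differ slightly):
class Book implements Externalizable {
    String isbn
    String title
    float price

    void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(isbn)
        out.writeObject(title)
        out.writeFloat(price)
    }

    void readExternal(ObjectInput oin) {
        isbn = oin.readObject()
        title = oin.readObject()
        price = oin.readFloat()
    }
}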
The @AutoExternalize annotation supports two parameters which will let you slightly
customize its behavior:
@AutoExternalize(includeFields=true)
class Book {
    String isbn
    String title
    protected float price
}
Safer scripting
The Groovy language makes it easy to execute user scripts at runtime (for example
using groovy.lang.GroovyShell), but how do you make sure that a script won’t eat all CPU
(infinite loops) or that concurrent scripts won’t slowly consume all available threads of a
thread pool? Groovy provides several annotations which are aimed towards safer
scripting, generating code which will for example allow you to interrupt execution
automatically.
@groovy.transform.ThreadInterrupt
One complicated situation in the JVM world is when a thread can’t be stopped.
The Thread#stop method exists but is deprecated (and isn’t reliable), so your only
recourse is Thread#interrupt . Calling the latter will set the interrupt flag on
the thread, but it will not stop the execution of the thread. This is problematic because
it’s the responsibility of the code executing in the thread to check the interrupt flag and
properly exit. This makes sense when you, as a developer, know that the code you are
executing is meant to be run in an independent thread, but in general, you don’t know it.
It’s even worse with user scripts, whose authors might not even know what a thread is
(think of DSLs).
while (true) {
i++
}
This is an obvious infinite loop. If this code executes in its own thread, interrupting
wouldn’t help: if you join on the thread, then the calling code would be able to continue,
but the thread would still be alive, running in background without any ability for you to
stop it, slowly causing thread starvation.
One possibility to work around this is to set up your shell this way:
def t = Thread.start {
shell.evaluate(userCode)
}
t.join(1000) // give at most 1000ms for the script to complete
if (t.alive) {
t.interrupt()
}
The @ThreadInterrupt transformation automatically modifies user code like this:
while (true) {
if (Thread.currentThread().interrupted) {
throw new InterruptedException('The current thread has been
interrupted.')
}
i++
}
The check which is introduced inside the loop guarantees that if the interrupt flag is
set on the current thread, an exception will be thrown, interrupting the execution of the
thread.
@ThreadInterrupt supports multiple options that will let you further customize the
behavior of the transformation:
def t = Thread.start {
    shell.evaluate(userCode)
}
t.join(1000) // give at most 1s for the script to complete
assert binding.i > 0
if (t.alive) {
    t.interrupt()
}
Thread.sleep(500)
assert binding.i == -1
@groovy.transform.TimedInterrupt
The @TimedInterrupt AST transformation tries to solve a slightly different problem
from @groovy.transform.ThreadInterrupt : instead of checking the interrupt flag
of the thread, it will automatically throw an exception if the thread has been running for
too long.
This annotation does not spawn a monitoring thread. Instead, it works in a similar manner as @ThreadInterrupt ,
placing checks at appropriate places in the code. This means that if you have a thread blocked by I/O,
it will not be interrupted.
def fib(int n) { n<2?n:fib(n-1)+fib(n-2) }
result = fib(600)
The implementation of the famous Fibonacci number computation here is far from
optimized. If it is called with a high n value, it can take minutes to answer.
With @TimedInterrupt , you can choose how long a script is allowed to run. The
following setup code will allow the user script to run for 1 second at max:
@TimedInterrupt(value=1, unit=TimeUnit.SECONDS)
class MyClass {
def fib(int n) {
n<2?n:fib(n-1)+fib(n-2)
}
}
@TimedInterrupt supports multiple options that will let you further customize the
behavior of the transformation:
class Slow {
    def fib(n) {
        n<2?n:fib(n-1)+fib(n-2)
    }
}
def result
def t = Thread.start {
    result = new Slow().fib(500)
}
t.join(5000)
assert result == null
assert !t.alive

def t = Thread.start {
    try {
        result = new Slow().fib(50)
    } catch (TooLongException e) {
        result = -1
    }
}
t.join(5000)
assert result == -1
@groovy.transform.ConditionalInterrupt
The last annotation for safer scripting is the base annotation when you want to interrupt
a script using a custom strategy. In particular, this is the annotation of choice if you want
to use resource management (limit the number of calls to an API, …). In the following
example, user code is using an infinite loop, but @ConditionalInterrupt will allow us
to check a quota manager and automatically interrupt the script:
@ConditionalInterrupt({Quotas.disallow('user')})
class UserCode {
void doSomething() {
int i=0
while (true) {
println "Consuming resources ${++i}"
}
}
}
The quota checking is very basic here, but it can be any code:
class Quotas {
static def quotas = [:].withDefault { 10 }
static boolean disallow(String userName) {
println "Checking quota for $userName"
(quotas[userName]--)<0
}
}
We can make sure @ConditionalInterrupt works properly using this test code:
assert Quotas.quotas['user'] == 10
def t = Thread.start {
new UserCode().doSomething()
}
t.join(5000)
assert !t.alive
assert Quotas.quotas['user'] < 0
Of course, in practice, it is unlikely that @ConditionalInterrupt will be itself added by
hand on user code. It can be injected in a similar manner as the example shown in
the ThreadInterrupt section, using
the org.codehaus.groovy.control.customizers.ASTTransformationCustomizer :
assert Quotas.quotas['user'] == 10
def t = Thread.start {
shell.evaluate(userCode)
}
t.join(5000)
assert !t.alive
assert Quotas.quotas['user'] < 0
@ConditionalInterrupt supports multiple options that will let you further customize
the behavior of the transformation:
def t = Thread.start {
    try {
        shell.evaluate(userCode)
    } catch (QuotaExceededException e) {
        Quotas.quotas['user'] = 'Quota exceeded'
    }
}
t.join(5000)
assert !t.alive
assert Quotas.quotas['user'] == 'Quota exceeded'

void method2() {
    ... // no interrupt checks
}
Compiler directives
This category of AST transformations groups annotations which have a direct impact on
the semantics of the code, rather than focusing on code generation. In that regard,
they can be seen as compiler directives that change the behavior of a program either at
compile time or at runtime.
@groovy.transform.Field
The @Field annotation only makes sense in the context of a script and aims at solving a
common scoping error with scripts. For example, the following script fails at runtime:
def x
String line() {
"="*x
}
x=3
assert "===" == line()
x=5
assert "=====" == line()
The error that is thrown may be difficult to interpret:
groovy.lang.MissingPropertyException: No such property: x. The reason is that scripts
are compiled to classes and the script body is itself compiled as a single run() method.
Methods which are defined in the script are compiled as independent methods of that
class, while the untyped x becomes a local variable of run() , so line() cannot see it.
The code above is therefore roughly equivalent to this:
String line() {
    "="*x
}
Annotating the variable with @Field turns it into a field of the script class instead:
@Field def x
String line() {
"="*x
}
x=3
assert "===" == line()
x=5
assert "=====" == line()
The resulting, equivalent code now declares x as a field of the script class, making it visible from the line() method:
def x
String line() {
"="*x
}
By default, Groovy visibility rules imply that if you create a field without specifying a
modifier, then the field is interpreted as a property:
class Person {
String name // this is a property
}
Should you want to create a package private field instead of a property (private
field+getter/setter), then annotate your field with @PackageScope :
class Person {
@PackageScope String name // not a property anymore
}
The @PackageScope annotation can also be used for classes, methods and constructors.
In addition, by specifying a list of PackageScopeTarget values as the annotation
attribute at the class level, all members within that class that don’t have an explicit
modifier and match the provided PackageScopeTarget will remain package protected.
For example, to apply it to the fields of a class, use the following annotation:
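A sketch of what that could look like, using the FIELDS target (names follow the earlier Person example):
import groovy.transform.PackageScope
import static groovy.transform.PackageScopeTarget.FIELDS

@PackageScope(FIELDS)
class Person {
    String name // now a package-private field, not a property
}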
@groovy.transform.AutoFinal
The @AutoFinal annotation instructs the compiler to automatically insert the final
modifier in numerous places within the annotated node. If applied on a method (or
constructor), the parameters for that method (or constructor) will be marked as final. If
applied on a class definition, the same treatment will occur for all declared methods and
constructors within that class.
The following example illustrates applying the annotation at the class level:
import groovy.transform.AutoFinal
@AutoFinal
class Person {
    private String first, last
    // all constructor and method parameters declared in this class are implicitly final
}
The following example illustrates applying the annotation at the method level:
class Calc {
    @AutoFinal
    int add(int a, int b) { a + b } // a and b are implicitly final
}
@groovy.transform.AnnotationCollector
@groovy.transform.TypeChecked
@TypeChecked activates compile-time type checking on your Groovy code. See section
on type checking for details.
@groovy.transform.CompileStatic
@CompileStatic activates static compilation on your Groovy code. See section on type
checking for details.
@groovy.transform.CompileDynamic
@CompileDynamic disables static compilation on parts of your Groovy code. See section
on type checking for details.
@groovy.lang.DelegatesTo
@groovy.transform.SelfType
@SelfType is not an AST transformation but rather a marker interface used with traits.
See the traits documentation for further details.
Swing patterns
@groovy.beans.Bindable
import groovy.beans.Bindable
@Bindable
class Person {
String name
int age
}
This is equivalent to writing this:
import java.beans.PropertyChangeListener
import java.beans.PropertyChangeSupport
class Person {
final private PropertyChangeSupport this$propertyChangeSupport
String name
    int age
    // generated addPropertyChangeListener, removePropertyChangeListener,
    // getPropertyChangeListeners and firePropertyChange methods follow
}
@Bindable can also be applied to individual properties only:
import groovy.beans.Bindable
class Person {
String name
@Bindable int age
}
@groovy.beans.ListenerList
The @ListenerList AST transformation generates code for adding, removing and
getting the list of listeners to a class, just by annotating a collection property:
import java.awt.event.ActionListener
import groovy.beans.ListenerList
class Component {
@ListenerList
List<ActionListener> listeners;
}
The transform will generate the appropriate add/remove methods based on the generic
type of the list. In addition, it will also create fireXXX methods based on the public
methods declared on the class:
import java.awt.event.ActionEvent
import java.awt.event.ActionListener
import groovy.beans.ListenerList
class Component {
    @ListenerList
    private List<ActionListener> listeners
    // generated: addActionListener, removeActionListener, getActionListeners
    // and fireActionPerformed(ActionEvent) methods
}
@groovy.beans.Vetoable
import groovy.beans.Vetoable
import java.beans.PropertyVetoException
import java.beans.VetoableChangeListener
@Vetoable
class Person {
String name
int age
}
is equivalent to hand-writing the corresponding java.beans.VetoableChangeSupport
plumbing ( addVetoableChangeListener , removeVetoableChangeListener and
fireVetoableChange methods, plus setters that fire a vetoable change before assigning
the new value). Like @Bindable , the annotation can also be applied to individual
properties only:
import groovy.beans.Vetoable
class Person {
String name
@Vetoable int age
}
Test assistance
@groovy.transform.NotYetImplemented
import groovy.transform.NotYetImplemented
class Maths {
static int fib(int n) {
// todo: implement later
}
}
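@NotYetImplemented inverts the result of an annotated test method: if the test fails, it is
reported as passing; if it unexpectedly passes, it is reported as failing, reminding you to
remove the annotation. A minimal sketch, assuming a JUnit-style GroovyTestCase from the
groovy-test module (the class and method names are illustrative):
import groovy.transform.NotYetImplemented
import groovy.test.GroovyTestCase

class MathsTest extends GroovyTestCase {
    @NotYetImplemented
    void testFibonacci() {
        assert Maths.fib(8) == 21 // fails until fib is implemented, so the test is reported as passing
    }
}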
@groovy.transform.ASTTest
• phase: sets the phase at which @ASTTest will be triggered. The test code will
work on the AST as it stands at the end of this phase.
• value: the code which will be executed once the phase is reached, on the annotated
node
import groovy.transform.ASTTest
import org.codehaus.groovy.ast.ClassNode
import static org.codehaus.groovy.control.CompilePhase.*
@ASTTest(phase=CONVERSION, value={
assert node instanceof ClassNode
assert node.name == 'Person'
})
class Person {
}
we’re checking the state of the Abstract Syntax Tree after the CONVERSION phase
One interesting feature of @ASTTest is that if an assertion fails, then compilation will
fail. Now imagine that we want to check the behavior of an AST transformation at
compile time. We will take @PackageScope here, and we will want to verify that a
property annotated with @PackageScope becomes a package private field. For this, we
have to know at which phase the transform runs, which can be found
in org.codehaus.groovy.transform.PackageScopeASTTransformation : semantic analysis.
Then a test can be written like this:
import groovy.transform.ASTTest
import groovy.transform.PackageScope
@ASTTest(phase=SEMANTIC_ANALYSIS, value= {
def nameNode = node.properties.find { it.name == 'name' }
def ageNode = node.properties.find { it.name == 'age' }
assert nameNode
assert ageNode == null // shouldn't be a property anymore
def ageField = node.getDeclaredField 'age'
assert ageField.modifiers == 0
})
class Person {
String name
@PackageScope int age
}
The @ASTTest annotation can only be placed wherever the grammar allows it.
Sometimes, you would like to test the contents of an AST node which is not annotable. In
this case, @ASTTest provides a convenient lookup method which will search the AST
for nodes which are labelled with a special token:
Imagine, for example, that you want to test the declared type of a for loop variable. Then
you can do it like this:
import groovy.transform.ASTTest
import groovy.transform.PackageScope
import org.codehaus.groovy.ast.ClassHelper
import org.codehaus.groovy.ast.expr.DeclarationExpression
import org.codehaus.groovy.ast.stmt.ForStatement
class Something {
@ASTTest(phase=SEMANTIC_ANALYSIS, value= {
def forLoop = lookup('anchor')[0]
assert forLoop instanceof ForStatement
def decl = forLoop.collectionExpression.expressions[0]
assert decl instanceof DeclarationExpression
assert decl.variableExpression.name == 'i'
assert decl.variableExpression.originType == ClassHelper.int_TYPE
})
void someMethod() {
int x = 1;
int y = 10;
anchor: for (int i=0; i<x+y; i++) {
println "$i"
}
}
}
@ASTTest also exposes these variables inside the test closure: node (the AST node being
tested), compilationUnit (the current CompilationUnit ) and compilePhase (the current
compile phase).
The latter is interesting if you don’t specify the phase attribute. In that case, the closure
will be executed after each compile phase after (and including) SEMANTIC_ANALYSIS .
The context of the transformation is kept after each phase, giving you a chance to check
what changed between two phases.
As an example, here is how you could dump the list of AST transformations registered on
a class node:
import groovy.transform.ASTTest
import groovy.transform.CompileStatic
import groovy.transform.Immutable
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.control.CompilePhase
@ASTTest(value={
System.err.println "Compile phase: $compilePhase"
ClassNode cn = node
System.err.println "Global AST xforms:
${compilationUnit?.ASTTransformationsContext?.globalTransformNames}"
CompilePhase.values().each {
def transforms = cn.getTransforms(it)
if (transforms) {
System.err.println "Ast xforms for phase $it:"
transforms.each { map ->
System.err.println(map)
}
}
}
})
@CompileStatic
@Immutable
class Foo {
}
And here is how you can memorize variables for testing between two phases:
import groovy.transform.ASTTest
import groovy.transform.ToString
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.control.CompilePhase
@ASTTest(value={
if (compilePhase==CompilePhase.INSTRUCTION_SELECTION) {
println "toString() was added at phase: ${added}"
assert added == CompilePhase.CANONICALIZATION
} else {
if (node.getDeclaredMethods('toString') && added==null) {
added = compilePhase
}
}
})
@ToString
class Foo {
String name
}
if the current compile phase is instruction selection
otherwise, if toString exists and the context variable added is still null
then it means that this compile phase is the one where toString was added
Grape handling
@groovy.lang.Grab
@groovy.lang.GrabConfig
@groovy.lang.GrabExclude
@groovy.lang.GrabResolver
@groovy.lang.Grapes
• Global transformations are applied by the compiler to the code being compiled,
wherever the transformation applies. Compiled classes that implement global
transformations are in a JAR added to the classpath of the compiler and contain a
service locator file META-
INF/services/org.codehaus.groovy.transform.ASTTransformation with a
line with the name of the transformation class. The transformation class must have a
no-args constructor and implement
the org.codehaus.groovy.transform.ASTTransformation interface. It will be
run against every source in the compilation, so be sure to not create
transformations which scan all the AST in an expansive and time-consuming manner,
to keep the compiler fast.
• Local transformations are transformations applied locally by annotating code
elements you want to transform. For this, we reuse the annotation notation, and
those annotations should
implement org.codehaus.groovy.transform.ASTTransformation . The compiler
will discover them and apply the transformation on these code elements.
Global transformations may be applied in any phase, but local transformations may only
be applied in the semantic analysis phase or later. Briefly, the compiler phases are:
initialization, parsing, conversion, semantic analysis, canonicalization, instruction
selection, class generation, output and finalization.
Local transformations
Local AST transformations are relative to the context they are applied to. In most cases,
the context is defined by an annotation that will define the scope of the transform. For
example, annotating a field would mean that the transformation applies to the field,
while annotating the class would mean that the transformation applies to the whole class.
@WithLogging
def greet() {
println "Hello World"
}
greet()
Suppose we want each call to greet() to print a message when the method starts and
when it ends. A local AST transformation is an easy way to do this. It requires two
things: an annotation to mark the elements to transform, and an ASTTransformation
implementation that performs the transformation.
The local transformation annotation is the simple part. Here is the @WithLogging one:
import org.codehaus.groovy.transform.GroovyASTTransformationClass
import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target
@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.METHOD])
@GroovyASTTransformationClass(["gep.WithLoggingASTTransformation"])
public @interface WithLogging {
}
The annotation retention can be SOURCE because you won’t need the annotation past
that. The element type here is METHOD , because the @WithLogging annotation
applies to methods.
The ASTTransformation class is a little more complex. Here is the very simple, and
very naive, transformation to add a method start and stop message for @WithLogging :
@CompileStatic
@GroovyASTTransformation(phase=CompilePhase.SEMANTIC_ANALYSIS)
class WithLoggingASTTransformation implements ASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
MethodNode method = (MethodNode) nodes[1]
the nodes parameter is a 2 AST node array, for which the first one is the annotation node
( @WithLogging ) and the second one is the annotated node (the method node)
create a statement that will print a message when we enter the method
create a statement that will print a message when we exit the method
add the enter method message before the first statement of existing code
append the exit method message after the last statement of existing code
It is important to notice that for the brevity of this example, we didn’t make the
necessary checks, such as checking that the annotated node is really a MethodNode , or
that the method body is an instance of BlockStatement . This exercise is left to the
reader.
Note the creation of the new println statements in
the createPrintlnAst(String) method. Creating AST for code is not always simple.
In this case we need to construct a new method call, passing in the receiver/variable, the
name of the method, and an argument list. When creating AST, it might be helpful to
write the code you’re trying to create in a Groovy file and then inspect the AST of that
code in the debugger to learn what to create. Then write a function
like createPrintlnAst using what you learned through the debugger.
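Putting these pieces together, a sketch of the complete transformation might look as
follows; the createPrintlnAst helper shown here is one possible way to build the println
call and is not necessarily identical to the original example:
import org.codehaus.groovy.ast.ASTNode
import org.codehaus.groovy.ast.MethodNode
import org.codehaus.groovy.ast.expr.ArgumentListExpression
import org.codehaus.groovy.ast.expr.ConstantExpression
import org.codehaus.groovy.ast.expr.MethodCallExpression
import org.codehaus.groovy.ast.expr.VariableExpression
import org.codehaus.groovy.ast.stmt.BlockStatement
import org.codehaus.groovy.ast.stmt.ExpressionStatement
import org.codehaus.groovy.ast.stmt.Statement
import org.codehaus.groovy.control.CompilePhase
import org.codehaus.groovy.control.SourceUnit
import org.codehaus.groovy.transform.ASTTransformation
import org.codehaus.groovy.transform.GroovyASTTransformation
import groovy.transform.CompileStatic

@CompileStatic
@GroovyASTTransformation(phase = CompilePhase.SEMANTIC_ANALYSIS)
class WithLoggingASTTransformation implements ASTTransformation {
    @Override
    void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
        MethodNode method = (MethodNode) nodes[1]                    // the annotated method node
        def startMessage = createPrintlnAst("Starting $method.name") // printed when entering the method
        def endMessage = createPrintlnAst("Ending $method.name")     // printed when leaving the method
        def existingStatements = ((BlockStatement) method.code).statements
        existingStatements.add(0, startMessage)                      // prepend the "start" message
        existingStatements.add(endMessage)                           // append the "end" message
    }

    private static Statement createPrintlnAst(String message) {
        // builds the AST for: this.println(message)
        new ExpressionStatement(
            new MethodCallExpression(
                new VariableExpression('this'),
                'println',
                new ArgumentListExpression(new ConstantExpression(message))
            )
        )
    }
}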
In the end:
@WithLogging
def greet() {
println "Hello World"
}
greet()
Produces:
Starting greet
Hello World
Ending greet
It is important to note that an AST transformation participates directly in the compilation process. A common
error for beginners is to have the AST transformation code in the same source tree as a class that uses the
transformation. Being in the same source tree in general means that they are compiled at the same time. Since
the source tree is compiled in phases, and each compile phase processes all files of the same source unit before
going to the next one, there’s a direct consequence: the transformation will not be compiled before the class that
uses it. In conclusion, AST transformations need to be precompiled before you can use them. In general, it is as
simple as having them in a separate source tree.
Global transformations
Global AST transformations are similar to local ones with a major difference: they do not
need an annotation, meaning that they are applied globally, that is to say on each class
being compiled. It is therefore very important to limit their use to last resort, because it
can have a significant impact on the compiler performance.
Following the example of the local AST transformation, imagine that we would like to
trace all methods, and not only those which are annotated with @WithLogging .
Basically, we need this code to behave the same as the one annotated
with @WithLogging before:
def greet() {
println "Hello World"
}
greet()
To make this work, there are two steps:
1. create the org.codehaus.groovy.transform.ASTTransformation descriptor
inside the META-INF/services directory:
META-INF/services/org.codehaus.groovy.transform.ASTTransformation
gep.WithLoggingASTTransformation
2. write the AST transformation itself
The code for the transformation looks similar to the local case, but instead of using
the ASTNode[] parameter, we need to use the SourceUnit instead:
gep/WithLoggingASTTransformation.groovy
@CompileStatic
@GroovyASTTransformation(phase=CompilePhase.SEMANTIC_ANALYSIS)
class WithLoggingASTTransformation implements ASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
def methods = sourceUnit.AST.methods
methods.each { method ->
def startMessage = createPrintlnAst("Starting $method.name")
def endMessage = createPrintlnAst("Ending $method.name")
            def existingStatements = ((BlockStatement) method.code).statements
            existingStatements.add(0, startMessage)
            existingStatements.add(endMessage)
        }
    }
    // createPrintlnAst(String) is the same helper as shown in the local transformation example
}
the sourceUnit parameter gives access to the source being compiled, so we get the AST of
the current source and retrieve the list of methods from this file
create a statement that will print a message when we enter the method
create a statement that will print a message when we exit the method
add the enter method message before the first statement of existing code
append the exit method message after the last statement of existing code
ClassCodeExpressionTransformer
Suppose we want a @Shout transformation that converts, at compile time, the String
constants used as method call arguments to upper case. Applying it to the following code:
@Shout
def greet() {
println "Hello World"
}
greet()
should print:
HELLO WORLD
Then the code for the transformation can use
the ClassCodeExpressionTransformer to make this easier:
@CompileStatic
@GroovyASTTransformation(phase=CompilePhase.SEMANTIC_ANALYSIS)
class ShoutASTTransformation implements ASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit sourceUnit) {
        ClassCodeExpressionTransformer trn = new ClassCodeExpressionTransformer() {
private boolean inArgList = false
@Override
protected SourceUnit getSourceUnit() {
sourceUnit
}
@Override
            Expression transform(final Expression exp) {
                if (exp instanceof ArgumentListExpression) {
                    inArgList = true
                } else if (inArgList &&
                        exp instanceof ConstantExpression && exp.value instanceof String) {
                    return new ConstantExpression(exp.value.toUpperCase())
                }
                def trn = super.transform(exp)
                inArgList = false
                trn
            }
}
trn.visitMethod((MethodNode)nodes[1])
}
}
Internally the transformation creates a ClassCodeExpressionTransformer
if a constant expression of type string is detected inside an argument list, transform it into its
upper case version
AST Nodes
Writing an AST transformation requires a deep knowledge of the internal Groovy API, in particular of the
AST classes. Since those classes are internal, there is a chance that the API will change in the future, meaning
that your transformations could break. Despite that warning, the AST has been very stable over time and such
breakage rarely happens.
Macros
Introduction
Until version 2.5.0, developers writing AST transformations needed a deep knowledge of
how the AST (Abstract Syntax Tree) is built by the compiler in order to add new
expressions or statements at compile time. Macros remove much of that burden: they let
you write plain Groovy code and have it converted into the corresponding AST nodes for you.
@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.TYPE])
@GroovyASTTransformationClass(["metaprogramming.AddMethodASTTransformation"
])
@interface AddMethod { }
What would the AST transformation look like without the use of a macro? Something
like this:
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class AddMethodASTTransformation extends AbstractASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit source) {
ClassNode classNode = (ClassNode) nodes[1]
ReturnStatement code =
new ReturnStatement(
new ConstantExpression("42"))
MethodNode methodNode =
new MethodNode(
"getMessage",
ACC_PUBLIC,
ClassHelper.make(String),
[] as Parameter[],
[] as ClassNode[],
code)
classNode.addMethod(methodNode)
}
}
Create a return statement
If you’re not used to the AST API, that definitely doesn’t look like the code you had in
mind. Now look how the previous code simplifies with the use of macros.
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class AddMethodWithMacrosASTTransformation extends
AbstractASTTransformation {
@Override
    void visit(ASTNode[] nodes, SourceUnit source) {
        ClassNode classNode = (ClassNode) nodes[1]
        // the macro method converts the plain code in the closure into the corresponding AST,
        // here a ReturnStatement returning the constant "42"
        ReturnStatement simplestCode = macro { return "42" }
MethodNode methodNode =
new MethodNode(
"getMessage",
ACC_PUBLIC,
ClassHelper.make(String),
[] as Parameter[],
[] as ClassNode[],
simplestCode)
classNode.addMethod(methodNode)
}
}
Much simpler. You wanted to add a return statement that returned "42" and that’s exactly what
you can read inside the macro utility method. Your plain code will be translated for you to
a org.codehaus.groovy.ast.stmt.ReturnStatement
• macro(Closure) : Create a given statement with the code inside the closure.
Sometimes we are only interested in creating an expression rather than a whole statement.
To do that, we can use any of the macro invocations with a boolean parameter:
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class AddGetTwoASTTransformation extends AbstractASTTransformation {
BinaryExpression onePlusOne() {
return macro(false) { 1 + 1 }
}
@Override
void visit(ASTNode[] nodes, SourceUnit source) {
ClassNode classNode = nodes[1]
BinaryExpression expression = onePlusOne()
ReturnStatement returnStatement = GeneralUtils.returnS(expression)
MethodNode methodNode =
new MethodNode("getTwo",
ACC_PUBLIC,
ClassHelper.Integer_TYPE,
[] as Parameter[],
[] as ClassNode[],
returnStatement
)
classNode.addMethod(methodNode)
}
}
We’re telling macro not to wrap the expression in a statement, we’re only interested in the
expression
Assigning the expression
Variable substitution
Macros are great, but we couldn’t create anything useful or reusable if they weren’t able
to receive parameters or resolve surrounding variables.
In the following example we’re creating an AST transformation @MD5 that when applied
to a given String field will add a method returning the MD5 value of that field.
@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.FIELD])
@GroovyASTTransformationClass(["metaprogramming.MD5ASTTransformation"])
@interface MD5 { }
And the transformation:
@GroovyASTTransformation(phase = CompilePhase.CANONICALIZATION)
class MD5ASTTransformation extends AbstractASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit source) {
FieldNode fieldNode = nodes[1]
ClassNode classNode = fieldNode.declaringClass
String capitalizedName = fieldNode.name.capitalize()
MethodNode methodNode = new MethodNode(
"get${capitalizedName}MD5",
ACC_PUBLIC,
ClassHelper.STRING_TYPE,
[] as Parameter[],
[] as ClassNode[],
buildMD5MethodCode(fieldNode))
classNode.addMethod(methodNode)
}
If using a class outside the standard packages, we should add any needed imports or use the
fully qualified name. When using the qualified name of a given static method, you need to make sure
it’s resolved in the proper compile phase. In this particular case, we’re instructing the macro to
resolve it at the SEMANTIC_ANALYSIS phase, which is the first compile phase with type
information.
MacroClass
The next example is a local transformation, @Statistics . When applied to a given class,
it will add two methods, getMethodCount() and getFieldCount() , which return how
many methods and fields the class has, respectively. Here is the marker annotation.
@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.TYPE])
@GroovyASTTransformationClass(["metaprogramming.StatisticsASTTransformation
"])
@interface Statistics {}
And the AST transformation:
@CompileStatic
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class StatisticsASTTransformation extends AbstractASTTransformation {
@Override
void visit(ASTNode[] nodes, SourceUnit source) {
ClassNode classNode = (ClassNode) nodes[1]
        ClassNode templateClass = buildTemplateClass(classNode)
        templateClass.methods.each { MethodNode node ->
            classNode.addMethod(node) // copy each method from the template class to the annotated class
        }
    }
}
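A sketch of what the buildTemplateClass helper could look like, using MacroClass and
the constX helper from org.codehaus.groovy.ast.tool.GeneralUtils (statically imported
here); the exact original body may differ:
@CompileDynamic
ClassNode buildTemplateClass(ClassNode reference) {
    def methodCount = constX(reference.methods.size()) // constant expression holding the method count
    def fieldCount = constX(reference.fields.size())   // constant expression holding the field count
    return new MacroClass() {
        class Statistics {
            java.lang.Integer getMethodCount() {
                return $v { methodCount }
            }
            java.lang.Integer getFieldCount() {
                return $v { fieldCount }
            }
        }
    }
}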
Creating a template class
Building the getMethodCount() method using reference’s method count value expression
Building the getFieldCount() method using reference’s field count value expression
Basically we’ve created the Statistics class as a template to avoid writing low level AST
API, then we copied methods created in the template class to their final destination.
Types inside the MacroClass implementation should be resolved inside it; that’s why we had to
write java.lang.Integer instead of simply writing Integer .
Notice that we’re using @CompileDynamic . That’s because the way we use MacroClass is as if we were actually
implementing it. So if you were using @CompileStatic , it would complain because an implementation of an abstract
class can’t be another different class.
@Macro methods
You have seen that by using macro you can save yourself a lot of work but you might
wonder where that method came from. You didn’t declare it or static import it. You can
think of it as a special global method (or if you prefer, a method on every Object ). This
is much like how the println extension method is defined. But unlike println which
becomes a method selected for execution later in the compilation
process, macro expansion is done early in the compilation process. The declaration
of macro as one of the available methods for this early expansion is done by annotating
a macro method definition with the @Macro annotation and making that method
available using a similar mechanism for extension modules. Such methods are known
as macro methods and the good news is you can define your own.
To define your own macro method, create a class in a similar way to an extension
module and add a method such as:
@Macro
public static Expression safe(MacroContext macroContext,
MethodCallExpression callExpression) {
return ternaryX(
notNullX(callExpression.getObjectExpression()),
callExpression,
constX(null)
);
}
...
}
Now you would register this as an extension module using
a org.codehaus.groovy.runtime.ExtensionModule file within the META-
INF/groovy directory.
Now, assuming that the class and meta info file are on your classpath, you can use the
macro method in the following way:
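A usage sketch, assuming the extension module above is precompiled and on the classpath
(variable names are illustrative):
def person = null
// expands at compile time to: person != null ? person.toString() : null
assert safe(person.toString()) == null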
Testing AST transformations
This section is about good practices with regard to testing AST transformations.
Previous sections highlighted the fact that to be able to execute an AST transformation, it
has to be precompiled. It might sound obvious but a lot of people get caught on this,
trying to use an AST transformation in the same source tree as where it is defined.
The first tip for testing an AST transformation is therefore to separate the test sources from
the sources of the transform. Again, this is nothing but best practice, but you must make
sure that your build tool does actually compile them separately. This is the case by
default with both Apache Maven and Gradle.
For example, if you want to debug the transformation with a breakpoint in your IDE, instead of writing:
void testMyTransform() {
def c = new Subject()
c.methodToBeTested()
}
You should write:
void testMyTransformWithBreakpoint() {
assertScript '''
import metaprogramming.MyTransformToDebug
class Subject {
@MyTransformToDebug
void methodToBeTested() {}
}
def c = new Subject()
c.methodToBeTested()
'''
}
The difference is that when you use assertScript , the code in
the assertScript block is compiled when the unit test is executed. That is to say that
this time, the Subject class will be compiled with debugging active, and the breakpoint
is going to be hit.
ASTMatcher
Sometimes you may want to make assertions over AST nodes; perhaps to filter the
nodes, or to make sure a given transformation has built the expected AST node.
Filtering nodes
For instance if you would like to apply a given transformation only to a specific set of
AST nodes, you could use ASTMatcher to filter these nodes. The following example
shows how to transform a given expression to another. Using ASTMatcher it looks for a
specific expression 1 + 1 and it transforms it to 3 . That’s why we called it
the @Joking example.
First we create the @Joking annotation, which can only be applied to methods:
@Retention(RetentionPolicy.SOURCE)
@Target([ElementType.METHOD])
@GroovyASTTransformationClass(["metaprogramming.JokingASTTransformation"])
@interface Joking { }
Then the transformation, that only applies an instance
of org.codehaus.groovy.ast.ClassCodeExpressionTransformer to all the
expressions within the method code block.
@CompileStatic
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class JokingASTTransformation extends AbstractASTTransformation {
    @Override
    void visit(ASTNode[] nodes, SourceUnit source) {
        MethodNode methodNode = (MethodNode) nodes[1]

        methodNode
            .getCode()
            .visit(new ConvertOnePlusOneToThree(source))
    }
}
Get the method’s code statement and apply the expression transformer
And this is where the ASTMatcher is used to apply the transformation only to those
expressions matching the expression 1 + 1 :
class ConvertOnePlusOneToThree extends ClassCodeExpressionTransformer {
    SourceUnit sourceUnit

    ConvertOnePlusOneToThree(SourceUnit sourceUnit) {
        this.sourceUnit = sourceUnit
    }

    @Override
    protected SourceUnit getSourceUnit() {
        sourceUnit
    }

    @Override
    Expression transform(Expression exp) {
        Expression ref = macro { 1 + 1 }
        if (ASTMatcher.matches(ref, exp)) {
            return macro { 3 }
        }
        return super.transform(exp)
    }
}
Builds the expression used as reference pattern
Checks the current expression evaluated matches the reference expression
If it matches then replaces the current expression with the expression built with macro
package metaprogramming

class Something {
    @Joking
    Integer getResult() {
        return 1 + 1
    }
}
Normally we test AST transformations just checking that the final use of the
transformation does what we expect. But it would be great if we could have an easy way
to check, for example, that the nodes the transformation adds are what we expected
from the beginning.
@GroovyASTTransformation(phase = CompilePhase.INSTRUCTION_SELECTION)
class TwiceASTTransformation extends AbstractASTTransformation {
    @Override
    void visit(ASTNode[] nodes, SourceUnit source) {
        ClassNode classNode = (ClassNode) nodes[1]
        MethodNode giveMeTwo = getTemplateClass(sumExpression)
            .getDeclaredMethods('giveMeTwo')
            .first()

        classNode.addMethod(giveMeTwo)
    }

    BinaryExpression getSumExpression() {
        return macro {
            $v{ varX(VAR_X) } +
            $v{ varX(VAR_X) }
        }
    }

    // getTemplateClass(Expression) builds the template class holding the giveMeTwo method (elided here)
}
Building a binary expression. The binary expression uses the same variable expression on both
sides of the + token (check the varX method in org.codehaus.groovy.ast.tool.GeneralUtils).
Builds a new ClassNode with a method called giveMeTwo which returns the result of the
expression passed as a parameter.
Now, instead of creating a test that executes the transformation over a given sample code, I
would like to check that the construction of the binary expression is done properly:
void testTestingSumExpression() {
    use(ASTMatcher) {
        TwiceASTTransformation sample = new TwiceASTTransformation()
        Expression referenceNode = macro {
            a + a
        }.withConstraints {
            placeholder 'a'
        }

        assert sample
            .sumExpression
            .matches(referenceNode)
    }
}
Using ASTMatcher as a category
void testASTBehavior() {
    assertScript '''
        package metaprogramming

        @Twice
        class AAA {
        }

        // the transformation added a giveMeTwo(x) method returning x + x
        assert new AAA().giveMeTwo(1) == 2
    '''
}
Last but not least, testing an AST transformation is also about testing the state of the
AST during compilation. Groovy provides a tool named @ASTTest for this: it is an
annotation that will let you add assertions on an abstract syntax tree. Please check
the documentation for ASTTest for more details.
External references
If you are interested in a step-by-step tutorial about writing AST transformations, you
can follow this workshop.
3.5.1. Quick start
Add a Dependency
@Grab(group='org.springframework', module='spring-orm', version='3.2.5.RELEASE')
import org.springframework.jdbc.core.JdbcTemplate
@Grab also supports a shorthand notation:
@Grab('org.springframework:spring-orm:3.2.5.RELEASE')
import org.springframework.jdbc.core.JdbcTemplate
Note that we are using an annotated import here, which is the recommended way. You
can also search for dependencies on mvnrepository.com and it will provide you
the @Grab annotation form of the pom.xml entry.
Specify Additional Repositories
@GrabResolver(name='restlet', root='http://maven.restlet.org/')
@Grab(group='org.restlet', module='org.restlet', version='1.1.6')
Maven Classifiers
Some maven dependencies need classifiers in order to be able to resolve. You can fix that
like this:
@Grab(group='net.sf.json-lib', module='json-lib', version='2.2.3',
classifier='jdk15')
Excluding Transitive Dependencies
Sometimes you may want to exclude a transitive dependency, which you can do with @GrabExclude:
@Grab('net.sourceforge.htmlunit:htmlunit:2.8')
@GrabExclude('xml-apis:xml-apis')
JDBC Drivers
Because of the way JDBC drivers are loaded, you'll need to configure Grape to attach
JDBC driver dependencies to the system class loader, e.g.:
@GrabConfig(systemClassLoader=true)
@Grab(group='mysql', module='mysql-connector-java', version='5.1.6')
Using Grape From the Groovy Shell
From groovysh, use the method call variant, for example:
groovy.grape.Grape.grab(group:'org.springframework', module:'spring', version:'2.5.6')
Proxy settings
If you are behind a firewall and/or need to use Groovy/Grape through a proxy server,
you can specify those settings on the command line via
the http.proxyHost and http.proxyPort system properties:
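For example (the proxy host name and port are placeholders):
groovy -Dhttp.proxyHost=yourproxy -Dhttp.proxyPort=8080 yourscript.groovy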
Logging
If you want to see what Grape is doing, set the system
property groovy.grape.report.downloads to true (e.g. add
-Dgroovy.grape.report.downloads=true to the invocation or to JAVA_OPTS) and Grape will
print download information to System.err.
3.5.2. Detail
Grape (The Groovy Adaptable Packaging Engine or Groovy Advanced Packaging Engine) is
the infrastructure enabling the grab() calls in Groovy, a set of classes leveraging Ivy to
allow for a repository driven module system for Groovy. This allows a developer to write
a script with an essentially arbitrary library requirement, and ship just the script. Grape
will, at runtime, download as needed and link the named libraries and all dependencies
forming a transitive closure when the script is run from existing repositories such as
JCenter, Ibiblio and java.net.
Grape follows the Ivy conventions for module version identification, with a naming
change.
• group - Which module group the module comes from. Translates directly to a Maven
groupId or an Ivy Organization. Any group matching /groovy[x][\..*]^/ is
reserved and may have special meaning to the groovy endorsed modules.
• module - The name of the module to load. Translated directly to a Maven artifactId
or an Ivy artifact.
• version - The version of the module to use. Either a literal version such as `1.1-RC3' or an
Ivy range such as `[2.2.1,)' (meaning 2.2.1 or any greater version).
• classifier - The optional classifier to use (for example, jdk15)
The downloaded modules will be stored according to Ivy’s standard mechanism with a
cache root of ~/.groovy/grapes
3.5.3. Usage
Annotation
One or more groovy.lang.Grab annotations can be added at any place that
annotations are accepted to tell the compiler that this code relies on the specific library.
This will have the effect of adding the library to the classloader of the groovy compiler.
This annotation is detected and evaluated before any other resolution of classes in the
script, so imported classes can be properly resolved by a @Grab annotation.
import com.jidesoft.swing.JideSplitButton

@Grab(group='com.jidesoft', module='jide-oss', version='[2.2.1,2.3.0)')
public class TestClassAnnotation {
    public static String testMethod() {
        return JideSplitButton.class.name
    }
}
An appropriate grab(…) call will be added to the static initializer of the containing class
(or the script class in the case of an annotated script element).
To use a Grab annotation multiple times on the same node you must use the @Grapes annotation, e.g.:
@Grapes([
    @Grab(group='commons-primitives', module='commons-primitives', version='1.0'),
    @Grab(group='org.ccil.cowan.tagsoup', module='tagsoup', version='0.9.7')])
class Example {
    // ...
}
Otherwise you’d encounter the following error:
Technical notes:
• Originally, Groovy stored the Grab annotations for access at runtime and duplicates
aren’t allowed in the bytecode. In current versions, @Grab has only SOURCE
retention, so the multiple occurrences aren’t an issue.
• Future versions of Grape may support using the Grapes annotation to provide a level
of structuring, e.g. allowing a GrabExclude or GrabResolver annotation to apply to
only a subset of the Grab annotations.
Method call
Typically a call to grab will occur early in the script or in class initialization. This is to
insure that the libraries are made available to the ClassLoader before the groovy code
relies on the code. A couple of typical calls may appear as follows:
import groovy.grape.Grape
// random maven library
Grape.grab(group:'com.jidesoft', module:'jide-oss', version:'[2.2.0,)')
Grape.grab([group:'org.apache.ivy', module:'ivy', version:'2.0.0-beta1',
            conf:['default', 'optional']],
           [group:'org.apache.ant', module:'ant', version:'1.7.0'])
• Multiple calls to grab in the same context with the same parameters should be
idempotent. However, if the same code is called with a
different ClassLoader context then resolution may be re-run.
• If the args map passed into the grab call has an attribute noExceptions that
evaluates true no exceptions will be thrown.
• grab requires that a RootLoader or GroovyClassLoader be specified or be in
the ClassLoader chain of the calling class. By default, failure to have such
a ClassLoader available will result in module resolution failing and an exception being
thrown. The ClassLoaders searched are:
o The ClassLoader passed in via the classLoader: argument and its parent
classloaders.
o The ClassLoader of the object passed in as the referenceObject: argument, and
its parent classloaders.
o The ClassLoader of the class issuing the call to grab
grab(HashMap) Parameters
• group: - <String> - Which module group the module comes from. Translates
directly to a Maven groupId. Any group matching /groovy(|\..|x|x\..)/ is
reserved and may have special meaning to the groovy endorsed modules.
• module: - <String> - The name of the module to load. Translated directly to a Maven
artifactId.
• version: - <String> and possibly <Range> - The version of the module to use.
Either a literal version such as `1.1-RC3' or an Ivy range such as `[2.2.1,)' (meaning 2.2.1 or any
greater version).
• classifier: - <String> - The Maven classifier to resolve by.
• force: - <boolean>, defaults true - Used to indicate that this revision must be used
in case of conflicts, independently of conflict managers.
• changing: - <boolean>, default false - Whether the artifact can change without its
version designation changing.
• transitive: - <boolean>, default true - Whether to resolve other dependencies this
module has or not.
There are two principal variants of grab , one with a single Map and one with an
arguments Map and multiple dependencies map. A call to the single map grab is the
same as calling grab with the same map passed in twice, so grab arguments and
dependencies can be mixed in the same map, and grab can be called as a single method
with named parameters.
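A sketch of the single-map form with named parameters (the classLoader: argument and the dependency are illustrative):
groovy.grape.Grape.grab(classLoader: this.class.classLoader,
                        group: 'commons-lang', module: 'commons-lang', version: '2.4')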
There are synonyms for these parameters. Submitting more than one is a runtime
exception.
• validate: - <boolean>, default false - Should poms or ivy files be validated (true),
or should we trust the cache (false).
• noExceptions: - <boolean>, default false - If ClassLoader resolution or repository
querying fails, should we throw an exception (false) or fail silently (true).
grape list
Lists locally installed modules (with their full maven name in the case of groovy
modules) and versions.
Advanced configuration
Repository Directory
If you need to change the directory grape uses for downloading libraries you can specify
the grape.root system property to change the default (which is ~/.groovy/grapes)
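For example (the directory is a placeholder):
groovy -Dgrape.root=/repo/grape yourscript.groovy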
For more information on how to customize these settings, please refer to the Ivy
documentation.
More Examples
Using Google Collections:
import com.google.common.collect.HashBiMap

@Grab(group='com.google.code.google-collections', module='google-collect', version='snapshot-20080530')
def getFruit() { [grape:'purple', lemon:'yellow', orange:'orange'] as HashBiMap }
assert fruit.lemon == 'yellow'
assert fruit.inverse().yellow == 'lemon'
Launching a Jetty server to serve Groovy templates:
@Grab('org.eclipse.jetty.aggregate:jetty-server:8.1.19.v20160209')
@Grab('org.eclipse.jetty.aggregate:jetty-servlet:8.1.19.v20160209')
@Grab('javax.servlet:javax.servlet-api:3.0.1')
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.servlet.ServletContextHandler
import groovy.servlet.TemplateServlet
def runServer(duration) {
    def server = new Server(8080)
    def context = new ServletContextHandler(server, "/", ServletContextHandler.SESSIONS)
    context.resourceBase = "."
    context.addServlet(TemplateServlet, "*.gsp")
    server.start()
    sleep duration
    server.stop()
}
runServer(10000)
Grape will download Jetty and its dependencies on first launch of this script, and cache
them. We create a new Jetty Server on port 8080, then expose Groovy's TemplateServlet
at the root of the context — Groovy comes with its own powerful template engine
mechanism. We start the server and let it run for a certain duration. Each time someone
hits http://localhost:8080/somepage.gsp, it will display the somepage.gsp template
to the user — those template pages should be situated in the same directory as this
server script.
This chapter will start with language specific testing features and continue with a closer
look at JUnit integration, Spock for specifications, and Geb for functional tests. Finally,
we’ll do an overview of other testing libraries known to be working with Groovy.
3.6.2. Language Features
Besides integrated support for JUnit, the Groovy programming language comes with
features that have proven to be very valuable for test-driven development. This section
gives insight on them.
Power Assertions
Writing tests means formulating assumptions by using assertions. In Java this can be
done by using the assert keyword that has been added in J2SE 1.4. In
Java, assert statements can be enabled via the JVM parameters -ea (or -
enableassertions ) and -da (or -disableassertions ). Assertion statements in Java
are disabled by default.
Groovy comes with a rather powerful variant of assert also known as the power assertion
statement. Groovy's power assert differs from the Java version in its output when the
boolean expression evaluates to false :
def x = 1
assert x == 2
// Output:
//
// Assertion failed:
// assert x == 2
// | |
// 1 false
This section shows the std-err output
The power assertion statement's true power unleashes in complex Boolean statements,
or statements with collections or other toString -enabled classes:
def x = [1,2,3,4,5]
assert (x << 6) == [6,7,8,9,10]
// Output:
//
// Assertion failed:
// assert (x << 6) == [6,7,8,9,10]
// | | |
// | | false
// | [1, 2, 3, 4, 5, 6]
// [1, 2, 3, 4, 5, 6]
Another important difference from Java is that in Groovy assertions are enabled by
default. It has been a language design decision to remove the possibility to deactivate
assertions. Or, as Bertrand Meyer stated, it makes no sense to take off your
swim ring if you put your feet into real water .
One thing to be aware of are methods with side effects inside Boolean expressions in
power assertion statements. As the internal error message construction mechanism
only stores references to the instances under target, the error message
text can be invalid at rendering time when side-effecting methods are involved:
// Output:
//
// Assertion failed:
// assert [[1,2,3,3,3,3,4]].first().unique() == [1,2,3]
// | | |
// | | false
// | [1, 2, 3, 4]
// [1, 2, 3, 4]
The error message shows the actual state of the collection, not the state before
the unique method was applied
If you choose to provide a custom assertion error message, this can be done by
using the Java syntax assert expression1 : expression2, where expression1 is the Boolean
expression and expression2 is the custom error message. Be aware though that this will
disable the power assert and will fully fall back to custom error messages on
assertion errors.
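For example:
def x = 1
assert x == 2 : "x should be 2"   // only the custom message is printed, no power assert output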
The following sections show ways to create mocks and stubs with Groovy language
features only.
Map Coercion
By using maps or expandos, we can incorporate desired behaviour of a collaborator very
easily as shown here:
class TranslationService {
    String convert(String key) {
        return "test"
    }
}
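A minimal sketch of such a map coercion, replacing the collaborator's convert method (the map key must match the method name):
def service = [convert: { String key -> 'some text' }] as TranslationService
assert 'some text' == service.convert('key')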
Be aware that map coercion can get in the way if you deal with custom java.util.Map descendant classes in
combination with the as operator. The map coercion mechanism is targeted directly at certain collection classes;
it doesn't take custom classes into account.
Closure Coercion
The 'as' operator can be used with closures in a neat way which is great for developer
testing in simple scenarios. We haven't found this technique to be so powerful that we
want to do away with dynamic mocking, but it can be very useful in simple cases
nonetheless.
Classes or interfaces holding a single method, including SAM (single abstract method)
classes, can be used to coerce a closure block to be an object of the given type. Be aware
that for doing this, Groovy internally creates a proxy object descending from the given
class. So the object will not be a direct instance of the given class. This is important if, for
example, the generated proxy object's meta-class is altered afterwards.
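A minimal sketch, reusing the single-method TranslationService class from above:
def service = { String key -> 'some text' } as TranslationService
assert 'some text' == service.convert('key')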
The MockFor class supports (typically unit) testing of classes in isolation by allowing
a strictly ordered expectation of the behavior of collaborators to be defined. A typical test
scenario involves a class under test and one or more collaborators. In such a scenario it
is often desirable to just test the business logic of the class under test. One strategy for
doing that is to replace the collaborator instances with simplified mock objects to help
isolate out the logic in the test target. MockFor allows such mocks to be created using
meta-programming. The desired behavior of collaborators is defined as a behavior
specification. The behavior is enforced and checked automatically.
class Person {
String first, last
}
class Family {
Person father, mother
def nameOfMother() { "$mother.first $mother.last" }
}
With MockFor , a mock expectation is always sequence dependent and its use
automatically ends with a call to verify :
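A minimal sketch using the Person and Family classes above (imports and demand names follow groovy.mock.interceptor.MockFor):
import groovy.mock.interceptor.MockFor

def mock = new MockFor(Person)          // the collaborator to be mocked
mock.demand.getFirst{ 'dummy' }         // expected calls, in order
mock.demand.getLast{ 'name' }
mock.use {                              // inside this block Person is mocked
    def mary = new Person(first: 'Mary', last: 'Smith')
    def f = new Family(mother: mary)
    assert f.nameOfMother() == 'dummy name'
}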
a call to verify checks whether the sequence and number of method calls is as expected
The StubFor class supports (typically unit) testing of classes in isolation by allowing
a loosely-ordered expectation of the behavior of collaborators to be defined. A typical test
scenario involves a class under test and one or more collaborators. In such a scenario it
is often desirable to just test the business logic of the class under test. One strategy for doing that is
to replace the collaborator instances with simplified stub objects to help isolate out the
logic in the target class. StubFor allows such stubs to be created using meta-
programming. The desired behavior of collaborators is defined as a behavior
specification.
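A minimal sketch of the same scenario with StubFor (loosely ordered, so the demand order does not have to match the call order):
import groovy.mock.interceptor.StubFor

def stub = new StubFor(Person)
stub.demand.with {                      // delegate the demand calls to the StubFor instance
    getLast{ 'name' }
    getFirst{ 'dummy' }
}
stub.use {                              // inside this block Person is stubbed
    def mary = new Person(first: 'Mary', last: 'Smith')
    def f = new Family(mother: mary)
    assert f.nameOfMother() == 'dummy name'
}
stub.expect.verify()                    // optional check of the number of calls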
the with method is used for delegating all calls inside the closure to the StubFor instance
a call to verify (optional) checks whether the number of method calls is as expected
MockFor and StubFor cannot be used to test statically compiled classes, e.g. Java
classes or Groovy classes that make use of @CompileStatic . To stub and/or mock these
classes you can use Spock or one of the Java mocking libraries.
Every java.lang.Class is supplied with a special metaClass property that will give a
reference to an ExpandoMetaClass instance. The expando meta-class is not restricted to
custom classes, it can be used for JDK classes like for example java.lang.String as
well:
String.metaClass.swapCase = { ->
    def sb = new StringBuffer()
    delegate.each {
        sb << (Character.isUpperCase(it as char) ?
                Character.toLowerCase(it as char) :
                Character.toUpperCase(it as char))
    }
    sb.toString()
}
class Book {
String title
}
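A minimal sketch of extending the custom Book class through its expando meta-class (the added method name is illustrative):
Book.metaClass.titleInUpperCase = { -> title.toUpperCase() }

def book = new Book(title: 'Groovy in Action')
assert book.titleInUpperCase() == 'GROOVY IN ACTION'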
If you want to change the metaClass property on a per test method level you need to
remove the changes that were done to the meta-class, otherwise those changes would be
persistent across test method calls. Changes are removed by replacing the meta-class in
the GroovyMetaClassRegistry :
GroovySystem.metaClassRegistry.setMetaClass(java.lang.String, null)
Another alternative is to register a MetaClassRegistryChangeEventListener , track
the changed classes and remove the changes in the cleanup method of your chosen
testing runtime. A good example can be found in the Grails web development
framework.
Besides using the ExpandoMetaClass on a class-level, there is also support for using the
meta-class on a per-object level:
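A minimal sketch of a per-object meta-class change; only this particular instance receives the extra method (the method name is illustrative):
def str = 'sample'
str.metaClass.shout = { -> delegate.toUpperCase() + '!' }

assert str.shout() == 'SAMPLE!'
// other String instances are unaffected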
Iterable#combinations
The combinations method that is added on java.lang.Iterable compliant classes
can be used to get a list of combinations from a list containing two or more sub-lists:
void testCombinations() {
def combinations = [[2, 3],[4, 5, 6]].combinations()
assert combinations == [[2, 4], [3, 4], [2, 5], [3, 5], [2, 6], [3, 6]]
}
The method could be used in test case scenarios to generate all possible argument
combinations for a specific method call.
Iterable#eachCombination
The eachCombination method that is added on java.lang.Iterable can be used to
apply a function (or in this case a groovy.lang.Closure ) to each of the combinations
that have been built by the combinations method:
void testEachCombination() {
[[2, 3],[4, 5, 6]].eachCombination { println it[0] + it[1] }
}
The method could be used in the testing context to call methods with each of the
generated combinations.
Tool Support
Test Code Coverage
Code coverage is a useful measure of the effectiveness of (unit) tests. A program with
high code coverage has a lower chance to hold critical bugs than a program with no or
low coverage. To get code coverage metrics, the generated byte-code usually needs to be
instrumented before the tests are executed. One tool with Groovy support for this task
is Cobertura.
Various frameworks and build tools come with Cobertura integration. For Grails, there is
the code coverage plugin based on Cobertura, for Gradle there is the gradle-cobertura
plugin, to name only two of them.
The following code listing shows an example on how to enable Cobertura test coverage
reports in a Gradle build script from a Groovy project:
def pluginVersion = '<plugin version>'
def groovyVersion = '<groovy version>'
def junitVersion = '<junit version>'

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath "com.eriwen:gradle-cobertura-plugin:${pluginVersion}"
    }
}

repositories {
    mavenCentral()
}

dependencies {
    compile "org.codehaus.groovy:groovy-all:${groovyVersion}"
    testCompile "junit:junit:${junitVersion}"
}

cobertura {
    format = 'html'
    includes = ['**/*.java', '**/*.groovy']
    excludes = ['com/thirdparty/**/*.*']
}
Several output formats can be chosen for Cobertura coverage reports and test code
coverage reports can be added to continuous integration build tasks.
• You use the same overall practices as you would when testing with Java but you can
adopt much of Groovy’s concise syntax in your tests making them succinct. You can
even use the capabilities for writing testing domain specific languages (DSLs) if you
feel so inclined.
• There are numerous helper classes that simplify many testing activities. The details
differ in some cases depending on the version of JUnit you are using. We’ll cover
those details shortly.
• Groovy’s PowerAssert mechanism is wonderful to use in your tests
• Groovy deems that tests are so important you should be able to run them as easily as
scripts or classes. This is why Groovy includes an automatic test runner when using
the groovy command or the GroovyConsole. This gives you some additional options
over and above running your tests
In the following sections we will have a closer look at JUnit 3, 4 and 5 Groovy
integration.
JUnit 3
Maybe one of the most prominent Groovy classes supporting JUnit 3 tests is
the GroovyTestCase class. Being derived from junit.framework.TestCase it offers a
bunch of additional methods that make testing in Groovy a breeze.
Although GroovyTestCase inherits from TestCase, that doesn't mean you can't use JUnit 4 features in your project. In
fact, the most recent Groovy versions come with a bundled JUnit 4 and a backwards-compatible
TestCase implementation. There have been some discussions on the Groovy mailing-list on whether to
use GroovyTestCase or JUnit 4, with the result that it is mostly a matter of taste, but with GroovyTestCase you
get a bunch of methods for free that make certain types of tests easier to write.
Assertion Methods
GroovyTestCase is inherited from junit.framework.TestCase therefore it inherits a
large number of assertion methods being available to be called in every test method:
class MyTestCase extends GroovyTestCase {
    void testAssertions() {
        assertTrue(1 == 1)
        assertEquals("test", "test")

        def x = "42"
        assertNotNull "x must not be null", x
        assertNull null

        assertSame x, x
    }
}
As can be seen above, in contrast to Java it is possible to leave out the parentheses in
most situations, which leads to even more readable JUnit assertion method call
expressions.
GroovyTestCase also adds an assertScript method that ensures the given script source runs without failure:
void testScriptAssertions() {
    assertScript '''
        def x = 1
        def y = 2

        assert x + y == 3
    '''
}
shouldFail Methods
shouldFail can be used to check whether the given code block fails or not. In case it
fails, the assertion does hold, otherwise the assertion fails:
void testInvalidIndexAccess1() {
    def numbers = [1,2,3,4]
    shouldFail {
        numbers.get(4)
    }
}
The example above uses the basic shouldFail method interface that takes
a groovy.lang.Closure as a single argument. The Closure instance holds the code
that is supposed to be breaking during run-time.
A variant of shouldFail takes the expected exception class as its first argument:
void testInvalidIndexAccess2() {
    def numbers = [1,2,3,4]
    shouldFail IndexOutOfBoundsException, {
        numbers.get(4)
    }
}
If anything other than IndexOutOfBoundsException (or a descendant class of it) is
thrown, the test case will fail.
A pretty nice feature of shouldFail hasn’t been visible so far: it returns the exception
message. This is really useful if you want to assert on the exception error message:
void testInvalidIndexAccess3() {
    def numbers = [1,2,3,4]
    def msg = shouldFail IndexOutOfBoundsException, {
        numbers.get(4)
    }
    assert msg.contains('Index: 4, Size: 4') ||
           msg.contains('Index 4 out-of-bounds for length 4') ||
           msg.contains('Index 4 out of bounds for length 4')
}
notYetImplemented Method
The notYetImplemented method has been greatly influenced by HtmlUnit. It allows you to
write a test method but mark it as not yet implemented. As long as the test method fails
and is marked with notYetImplemented, the test goes green:
void testNotYetImplemented1() {
if (notYetImplemented()) return
assert 1 == 2
}
a call to notYetImplemented is necessary for GroovyTestCase to get the current method
stack.
as long as the test evaluates to false the test execution will be successful.
An alternative to calling the notYetImplemented method is the @NotYetImplemented annotation,
which marks the annotated test method as not yet implemented:
@NotYetImplemented
void testNotYetImplemented2() {
    assert 1 == 2
}
JUnit 4
Groovy can be used to write JUnit 4 test cases without any restrictions.
The groovy.test.GroovyAssert holds various static methods that can be used as
replacement for the GroovyTestCase methods in JUnit 4 tests:
import org.junit.Test
import static groovy.test.GroovyAssert.shouldFail

class JUnit4ExampleTests {
    @Test
    void indexOutOfBoundsAccess() {
        def numbers = [1,2,3,4]
        shouldFail {
            numbers.get(4)
        }
    }
}
As can be seen in the example above, the static methods found in GroovyAssert are
imported at the beginning of the class definition thus shouldFail can be used the same
way it can be used in a GroovyTestCase .
@Test
void shouldFailReturn() {
    def e = shouldFail {
        throw new RuntimeException('foo',
                                   new RuntimeException('bar'))
    }
    assert e instanceof RuntimeException
    assert e.message == 'foo'
    assert e.cause.message == 'bar'
}
JUnit 5
Much of the approach and helper classes described under JUnit4 apply when using
JUnit5 however JUnit5 uses some slightly different class annotations when writing your
tests. See the JUnit5 documentation for more details.
Create your test classes as per normal JUnit5 guidelines as shown in this example:
import java.util.stream.Stream

import org.junit.jupiter.api.Test
import org.junit.jupiter.api.TestFactory
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource
import static org.junit.jupiter.api.DynamicTest.dynamicTest

class MyTest {
    @Test
    void streamSum() {
        assert Stream.of(1, 2, 3).mapToInt{ i -> i }.sum() > 5
    }

    @ParameterizedTest
    @ValueSource(strings = [ "racecar", "radar", "able was I ere I saw elba" ])
    void palindromes(String candidate) {
        assert isPalindrome(candidate)   // isPalindrome is a helper assumed to be defined elsewhere in the class
    }

    @TestFactory
    def dynamicTestCollection() {[
        dynamicTest("Add test") { -> assert 1 + 1 == 2 },
        dynamicTest("Multiply Test") { -> assert 2 * 3 == 6 }
    ]}
}
This test requires the additional org.junit.jupiter:junit-jupiter-
params dependency if not already in your project.
You can run the tests in your IDE or build tool if it supports and is configured for JUnit5.
If you run the above test in the GroovyConsole or via the groovy command, you will see
a short text summary of the result of running the test:
More detailed information is available at the FINE logging level, which you can enable programmatically, for example:
@BeforeAll
static void init() {
    def logger = Logger.getLogger(LoggingListener.name)
    logger.level = Level.FINE
    logger.addHandler(new ConsoleHandler(level: Level.FINE))
}
Beside these awesome features, Spock is a good example of how to leverage advanced Groovy programming language
features in third party libraries, for example, by using Groovy AST transformations.
This section is not meant to serve as a detailed guide on how to use Spock; it should rather give an impression of what Spock is
about and how it can be leveraged for unit, integration, functional or any other type of testing.
In the next section we will have a first look at the anatomy of a Spock specification. It
should give a pretty good feeling on what Spock is up to.
Specifications
Spock lets you write specifications that describe features (properties, aspects) exhibited
by a system of interest. The "system" can be anything between a single class and an
entire application; a more advanced term for it is system under specification. The feature
description starts from a specific snapshot of the system and its collaborators; this
snapshot is called the feature's fixture.
Let’s have a look at a simple specification with a single feature method for an
imaginary Stack class:
class StackSpec extends Specification {
    def "adding an element leads to size increase"() {
        setup: "a new stack instance is created"
        def stack = new Stack()
        when:
        stack.push 42
        then:
        stack.size() == 1
    }
}
Feature method, is by convention named with a String literal.
Setup block, here is where any setup work for this feature needs to be done.
When block describes a stimulus, a certain action under target by this feature specification.
Then block any expressions that can be used to validate the result of the code that was triggered
by the when block.
More Spock
Spock provides many more features, like data tables or advanced mocking capabilities.
Feel free to consult the Spock GitHub page for more documentation and download
information.
Geb has great features that make it a good fit for a functional testing library, such as
jQuery-like DOM access via the $ function (shown below).
This section is not meant to serve as a detailed guide on how to use Geb; it should rather give an impression of what Geb is about
and how it can be leveraged for functional testing.
The next section will give an example on how Geb can be used to write a functional test
for a simple web page with a single search field.
A Geb Script
Although Geb can be used standalone in a Groovy script, in many scenarios it’s used in
combination with other testing frameworks. Geb comes with various base classes that
can be used in JUnit 3, 4, TestNG or Spock tests. The base classes are part of additional
Geb modules that need to be added as a dependency.
For example, the following @Grab dependencies can be used to run Geb with the
Selenium Firefox driver in JUnit4 tests. The module that is needed for JUnit 3/4 support
is geb-junit4 :
@Grab('org.gebish:geb-core:0.9.2')
@Grab('org.gebish:geb-junit4:0.9.2')
@Grab('org.seleniumhq.selenium:selenium-firefox-driver:2.26.0')
@Grab('org.seleniumhq.selenium:selenium-support:2.26.0')
The central class in Geb is the geb.Browser class. As its name implies it is used to
browse pages and access DOM elements:
import geb.Browser
import org.openqa.selenium.firefox.FirefoxDriver

// the browser configuration (driver, base URL, ...) can also be externalized in a GebConfig.groovy file
Browser.drive(driver: new FirefoxDriver()) {
    $("#username").text = 'John'
    $("#password").text = 'Doe'
    $("#loginButton").click()
}
$ together with CSS selectors is used to access the username and password DOM fields.
The Browser class comes with a drive method that delegates all method/property
calls to the current browser instance. The Browser configuration does not have to be done
inline; it can also be externalized in a GebConfig.groovy configuration file, for example.
In practice, the usage of the Browser class is mostly hidden by Geb test base classes.
They delegate all missing properties and method calls to the current browser instance
that exists in the background:
class SearchTests extends geb.junit4.GebTest {

    @Test
    void executeSearch() {
        go 'http://somehost/mayapp/search'
        $('#searchField').text = 'John Doe'
        $('#searchButton').click()
    }
}
Browser#$ is used to access DOM content. Any CSS selectors supported by the underlying
Selenium drivers are allowed
The example above shows a simple Geb web test with the JUnit 4 base
class geb.junit4.GebTest . Note that in this case the Browser configuration is
externalized. GebTest delegates methods like go and $ to the
underlying browser instance.
More Geb
In the previous section we only scratched the surface of the available Geb features. More
information on Geb can be found at the project homepage.
3.7.1. JsonSlurper
JsonSlurper is a class that parses JSON text or reader content into Groovy data
structures (objects) such as maps, lists and primitive types
like Integer , Double , Boolean and String .
The class comes with a bunch of overloaded parse methods plus some special methods
such as parseText , parseFile and others. For the next example we will use
the parseText method. It parses a JSON String and recursively converts it to a list or
map of objects. The other parse* methods are similar in that they also return Groovy
lists and maps of objects, but accept different input types.
In addition to maps JsonSlurper supports JSON arrays which are converted to lists.
For more details please have a look at the section on GPath expressions.
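A minimal sketch of parseText for both a JSON object and a JSON array:
import groovy.json.JsonSlurper

def jsonSlurper = new JsonSlurper()

def object = jsonSlurper.parseText('{ "name": "John Doe", "age": 42 }')
assert object instanceof Map
assert object.name == 'John Doe'

def list = jsonSlurper.parseText('[ 1, 2, 3 ]')
assert list instanceof List
assert list == [1, 2, 3]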
The following table gives an overview of the JSON types and the corresponding Groovy
data types:
JSON Groovy
string java.lang.String
object java.util.LinkedHashMap
array java.util.ArrayList
true true
false false
null null
Parser Variants
JsonSlurper comes with a couple of parser implementations. Each parser fits different
requirements, and the default parser may well not be the best choice for all
situations. Here is an overview of the shipped parser
implementations:
• The JsonParserCharArray parser basically takes a JSON string and operates on the
underlying character array. During value conversion it copies character sub-arrays
(a mechanism known as "chopping") and operates on them.
• The JsonFastParser is a special variant of the JsonParserCharArray and is the
fastest parser. However, it is not the default parser for a reason. JsonFastParser is
a so-called index-overlay parser. During parsing of the given JSON String it tries as
hard as possible to avoid creating new char arrays or String instances. It keeps
pointers to the underlying original character array only. In addition, it defers object
creation as late as possible. If parsed maps are put into long-term caches care must
be taken as the map objects might not be created and still consist of pointer to the
original char buffer only. However, JsonFastParser comes with a special chop
mode which dices up the char buffer early to keep a small copy of the original buffer.
Recommendation is to use the JsonFastParser for JSON buffers under 2MB and
keeping the long-term cache restriction in mind.
• The JsonParserLax is a special variant of the JsonParserCharArray parser. It has
similar performance characteristics as JsonFastParser but differs in that it isn’t
exclusively relying on the ECMA-404 JSON grammar. For example it allows for
comments, no quote strings etc.
• The JsonParserUsingCharacterSource is a special parser for very large files. It
uses a technique called "character windowing" to parse large JSON files (large means
files over 2MB size in this case) with constant performance characteristics.
The default parser implementation for JsonSlurper is JsonParserCharArray .
The JsonParserType enumeration contains constants for the parser implementations
described above:
Implementation Constant
JsonParserCharArray JsonParserType#CHAR_BUFFER
JsonFastParser JsonParserType#INDEX_OVERLAY
JsonParserLax JsonParserType#LAX
JsonParserUsingCharacterSource JsonParserType#CHARACTER_SOURCE
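To select a specific parser implementation you can use setType, as in this short sketch:
import groovy.json.JsonSlurper
import groovy.json.JsonParserType

def parser = new JsonSlurper().setType(JsonParserType.INDEX_OVERLAY)
def result = parser.parseText('{ "answer": 42 }')

assert result.answer == 42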
3.7.2. JsonOutput
JsonOutput is responsible for serialising Groovy objects into JSON strings. It can be
seen as the companion to JsonSlurper, which is a JSON parser.
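A minimal sketch of the central toJson method:
import groovy.json.JsonOutput

def json = JsonOutput.toJson([name: 'John Doe', age: 42])

assert json == '{"name":"John Doe","age":42}'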
Customizing Output
If you need control over the serialized output you can use a JsonGenerator .
The JsonGenerator.Options builder can be used to create a customized generator.
One or more options can be set on this builder in order to alter the resulting output.
When you are done setting the options simply call the build() method in order to get a
fully configured instance that will generate output based on the options selected.
class Person {
String name
String title
int age
String password
Date dob
URL favoriteUrl
}
Person person = new Person(name: 'John', title: null, age: 21, password: 'secret',
                           dob: Date.parse('yyyy-MM-dd', '1984-12-15'),
                           favoriteUrl: new URL('http://groovy-lang.org/'))
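A sketch of a customized generator for this class, using a few of the JsonGenerator.Options builder methods (the selected options are just examples):
import groovy.json.JsonGenerator

def generator = new JsonGenerator.Options()
        .excludeNulls()                          // drop properties whose value is null (title above)
        .excludeFieldsByName('age', 'password')  // never serialize these fields
        .dateFormat('yyyy-MM-dd')                // format java.util.Date values
        .build()

println generator.toJson(person)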
class Person {
String name
URL favoriteUrl
}
// First parameter to the converter must match the type for which it is registered
shouldFail(IllegalArgumentException) {
    new JsonGenerator.Options()
        .addConverter(Date) { Calendar cal -> }
}
Formatted Output
As we saw in previous examples, the JSON output is not pretty printed by default.
However, the prettyPrint method in JsonOutput comes to the rescue for this task.
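For example:
import groovy.json.JsonOutput

def json = JsonOutput.toJson([name: 'John Doe', aliases: ['JD', 'Johnny']])
println JsonOutput.prettyPrint(json)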
Builders
Another way to create JSON from Groovy is to
use JsonBuilder or StreamingJsonBuilder . Both builders provide a DSL which
allows you to formulate an object graph which is then converted to JSON.
For more details on builders, have a look at the builders chapter which covers
both JsonBuilder and StreamingJsonBuilder.
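A minimal JsonBuilder sketch:
import groovy.json.JsonBuilder

def builder = new JsonBuilder()
builder.person {
    name 'John Doe'
    age 42
}

assert builder.toString() == '{"person":{"name":"John Doe","age":42}}'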
Property Value
url jdbc:hsqldb:mem:yourdb
password yourPassword
driver org.hsqldb.jdbcDriver
Consult the documentation for the JDBC driver that you plan to use to determine the
correct values for your situation.
The Sql class has a newInstance factory method which takes these parameters. You
would typically use it as follows:
Connecting to HSQLDB
import groovy.sql.Sql

// ... create and use the sql instance (see the sketch below) ...
sql.close()
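A minimal sketch using the connection properties from the table above (the 'sa' user is an assumption based on HSQLDB's default account):
import groovy.sql.Sql

def url      = 'jdbc:hsqldb:mem:yourdb'
def user     = 'sa'                      // assumed default HSQLDB user
def password = 'yourPassword'
def driver   = 'org.hsqldb.jdbcDriver'

def sql = Sql.newInstance(url, user, password, driver)
// ... use the sql instance ...
sql.close()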
If you don’t want to have to handle resource handling yourself (i.e.
call close() manually) then you can use the withInstance variation as shown here:
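A sketch of the withInstance variant, which closes the connection automatically at the end of the closure:
import groovy.sql.Sql

Sql.withInstance('jdbc:hsqldb:mem:yourdb', 'sa', 'yourPassword', 'org.hsqldb.jdbcDriver') { sql ->
    // use the sql instance here; it is closed for you afterwards
}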
import groovy.sql.Sql
import org.hsqldb.jdbc.JDBCDataSource
@Grab('commons-dbcp:commons-dbcp:1.4')
import groovy.sql.Sql
import org.apache.commons.dbcp.BasicDataSource
@Grab('org.hsqldb:hsqldb:2.3.3')
@GrabConfig(systemClassLoader=true)
// create, use, and then close sql instance ...
The @GrabConfig statement is necessary to make sure the system classloader is used.
This ensures that the driver classes and system classes
like java.sql.DriverManager are in the same classloader.
Creating a table
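A sketch of creating the Author table used in the following examples (the column definitions are assumptions consistent with the queries shown below):
sql.execute '''
    CREATE TABLE Author (
        id        INTEGER GENERATED BY DEFAULT AS IDENTITY,
        firstname VARCHAR(64),
        lastname  VARCHAR(64)
    )
'''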
Creating/Inserting data
You can use the same execute() statement we saw earlier, but this time to insert a row by using
a SQL insert statement, as follows:
Inserting a row
Inserting a row using executeInsert with a GString and specifying key names
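Two sketches matching the captions above: a plain insert with execute, and executeInsert with a GString plus explicit key column names (the author data is illustrative):
sql.execute "INSERT INTO Author (firstname, lastname) VALUES ('Dierk', 'Koenig')"

def first = 'Guillaume'
def last = 'Laforge'
def keys = sql.executeInsert("INSERT INTO Author (firstname, lastname) VALUES (${first}, ${last})", ['ID'])
println keys   // the generated key(s) for the inserted row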
Reading rows
Reading rows of data from the database is accomplished using one of several available
methods: query , eachRow , firstRow and rows .
Use the query method if you want to iterate through the ResultSet returned by the
underlying JDBC API as shown here:
def rowNum = 0
sql.query('SELECT firstname, lastname FROM Author') { resultSet ->
while (resultSet.next()) {
def first = resultSet.getString(1)
def last = resultSet.getString('lastname')
assert expected[rowNum++] == "$first $last"
}
}
Use the eachRow method if you want a slightly higher-level abstraction which provides
a Groovy friendly map-like abstraction for the ResultSet as shown here:
rowNum = 0
sql.eachRow('SELECT firstname, lastname FROM Author') { row ->
def first = row[0]
def last = row.lastname
assert expected[rowNum++] == "$first $last"
}
Note that you can use Groovy list-style and map-style notations when accessing the row
of data.
Use the firstRow method if you want similar functionality to eachRow but returning only
one row of data, as shown here:
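For example:
def row = sql.firstRow('SELECT firstname, lastname FROM Author')
println "First author: $row.firstname $row.lastname"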
You can also use any of the above methods to return scalar values, though
typically firstRow is all that is required in such cases. An example returning the count
of rows is shown here:
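For example:
def count = sql.firstRow('SELECT COUNT(*) AS num FROM Author').num
println "Found $count authors"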
Updating rows
Updating rows can again be done using the execute() method. Just use a SQL update
statement as the argument to the method. You can insert an author with just a lastname
and then update the row to also have a firstname as follows:
Updating a row
Using executeUpdate
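Sketches matching the two captions above (the author data is illustrative):
sql.execute "INSERT INTO Author (lastname) VALUES ('Thorvaldsson')"
sql.execute "UPDATE Author SET firstname = 'Erik' WHERE lastname = 'Thorvaldsson'"

def updateCount = sql.executeUpdate "UPDATE Author SET lastname = 'Pragt' WHERE lastname = 'Thorvaldsson'"
assert updateCount == 1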
Deleting rows
A successful transaction
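Sketches for deleting rows and for a successful withTransaction block (inside the transaction both inserts are committed together):
// deleting rows uses execute with a SQL delete statement
sql.execute "DELETE FROM Author WHERE lastname = 'Pragt'"

// a successful transaction
sql.withTransaction {
    sql.execute "INSERT INTO Author (firstname, lastname) VALUES ('Dierk', 'Koenig')"
    sql.execute "INSERT INTO Author (firstname, lastname) VALUES ('Jon', 'Skeet')"
}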
If something goes wrong, any earlier operations within the withTransaction block are
rolled back. We can see that in operation in the following example where we use
database metadata (more details coming up shortly) to find the maximum allowable size
of the firstname column and then attempt to enter a firstname one larger than that
maximum value as shown here:
def maxFirstnameLength
def metaClosure = { meta -> maxFirstnameLength = meta.getPrecision(1) }
def rowClosure = {}
def rowCountBefore = sql.firstRow('SELECT COUNT(*) as num FROM Author').num
try {
sql.withTransaction {
sql.execute "INSERT INTO Author (firstname) VALUES ('Dierk')"
sql.eachRow "SELECT firstname FROM Author WHERE firstname = 'Dierk'",
metaClosure, rowClosure
sql.execute "INSERT INTO Author (firstname) VALUES (?)", 'X' *
(maxFirstnameLength + 1)
}
} catch(ignore) { println ignore.message }
def rowCountAfter = sql.firstRow('SELECT COUNT(*) as num FROM Author').num
assert rowCountBefore == rowCountAfter
Even though the first sql execute succeeds initially, it will be rolled back and the number
of rows will remain the same.
Using batches
When dealing with large volumes of data, particularly when inserting such data, it can be
more efficient to chunk the data into batches. This is done using
the withBatch statement as shown in the following example:
import java.util.logging.*
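A minimal withBatch sketch (the batch size and statements are illustrative; the java.util.logging import above can be used to configure logging and watch the batching at work):
sql.withBatch(3) { stmt ->
    stmt.addBatch "INSERT INTO Author (firstname, lastname) VALUES ('Dierk', 'Koenig')"
    stmt.addBatch "INSERT INTO Author (firstname, lastname) VALUES ('Paul', 'King')"
    stmt.addBatch "INSERT INTO Author (firstname, lastname) VALUES ('Guillaume', 'Laforge')"
}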
We noted earlier that to avoid SQL injection, we encourage you to use prepared
statements; this is achieved using the variants of methods which take GStrings or a list of
extra parameters. Prepared statements can be used in combination with batches as
shown in the following example:
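A sketch combining a prepared statement with batching; withBatch takes the batch size and the parameterized SQL:
sql.withBatch(2, 'INSERT INTO Author (firstname, lastname) VALUES (?, ?)') { ps ->
    ps.addBatch('Dierk', 'Koenig')
    ps.addBatch('Paul', 'King')
    ps.addBatch('Guillaume', 'Laforge')
}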
Performing pagination
When presenting large tables of data to a user, it is often convenient to present
information a page at a time. Many of Groovy’s SQL retrieval methods have extra
parameters which can be used to select a particular page of interest. The starting
position and page size are specified as integers as shown in the following example
using rows :
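For example, fetching a page of at most 5 rows starting at row 6:
def page = sql.rows('SELECT firstname, lastname FROM Author ORDER BY lastname', 6, 5)
page.each { println "$it.firstname $it.lastname" }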
Fetching metadata
JDBC metadata can be retrieved in numerous ways. Perhaps the most basic approach is
to extract the metadata from any row as shown in the following example which
examines the tablename, column names and column type names:
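A sketch using the metadata closure variant of eachRow (the same two-closure form used in the transaction example above):
sql.eachRow('SELECT firstname, lastname FROM Author',
        { meta ->
            println "table: ${meta.getTableName(1)}"
            (1..meta.columnCount).each { i ->
                println "  ${meta.getColumnName(i)} (${meta.getColumnTypeName(i)})"
            }
        }) { row ->
    // row processing is not needed for this metadata example
}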
Finally, JDBC also provides metadata per connection (not just for rows). You can also
access such metadata from Groovy as shown in this example:
def md = sql.connection.metaData
assert md.driverName == 'HSQL Database Engine Driver'
assert md.databaseProductVersion == '2.3.3'
assert ['JDBCMajorVersion', 'JDBCMinorVersion'].collect{ md[it] } == [4, 0]
assert md.stringFunctions.tokenize(',').contains('CONCAT')
def rs = md.getTables(null, null, 'AUTH%', null)
assert rs.next()
assert rs.getString('TABLE_NAME') == 'AUTHOR'
Consult the JavaDoc for your driver to find out what metadata information is available
for you to access.
Named-ordinal parameters
Stored procedures
The exact syntax for creating a stored procedure or function varies slightly between
different databases. For the HSQLDB database we are using, we can create a stored
function which returns the initials of all authors in a table as follows:
Creating a stored procedure or function
sql.execute """
CREATE FUNCTION SELECT_AUTHOR_INITIALS()
RETURNS TABLE (firstInitial VARCHAR(1), lastInitial VARCHAR(1))
READS SQL DATA
RETURN TABLE (
SELECT LEFT(Author.firstname, 1) as firstInitial, LEFT(Author.lastname,
1) as lastInitial
FROM Author
)
"""
We can use a SQL CALL statement to invoke the function using Groovy’s normal SQL
retrieval methods. Here is an example using eachRow .
Using eachRow with a SQL CALL statement
def result = []
sql.eachRow('CALL SELECT_AUTHOR_INITIALS()') {
result << "$it.firstInitial$it.lastInitial"
}
assert result == ['DK', 'JS', 'GL']
Here is the code for creating another stored function, this one taking the lastname as a
parameter:
sql.execute """
CREATE FUNCTION FULL_NAME (p_lastname VARCHAR(64))
RETURNS VARCHAR(100)
READS SQL DATA
BEGIN ATOMIC
DECLARE ans VARCHAR(100);
SELECT CONCAT(firstname, ' ', lastname) INTO ans
FROM Author WHERE lastname = p_lastname;
RETURN ans;
END
"""
We can use the placeholder syntax to specify where the parameter belongs and note the
special placeholder position to indicate the result:
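A sketch of calling the FULL_NAME function with the JDBC placeholder syntax, where the leading placeholder receives the result:
def fullName = sql.firstRow("{? = call FULL_NAME(?)}", ['Koenig'])[0]
println fullName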
sql.execute """
CREATE PROCEDURE CONCAT_NAME (OUT fullname VARCHAR(100),
IN first VARCHAR(50), IN last VARCHAR(50))
BEGIN ATOMIC
SET fullname = CONCAT(first, ' ', last);
END
"""
To use the CONCAT_NAME stored procedure parameter, we make use of a
special call method. Any input parameters are simply provided as parameters to the
method call. For output parameters, the resulting type must be specified as shown here:
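A sketch of invoking CONCAT_NAME with sql.call, declaring the OUT parameter type with Sql.VARCHAR; the expected value follows directly from the procedure definition above:
sql.call("{call CONCAT_NAME(?, ?, ?)}", [Sql.VARCHAR, 'Dierk', 'Koenig']) { fullname ->
    assert fullname == 'Dierk Koenig'
}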
sql.execute """
CREATE PROCEDURE CHECK_ID_POSITIVE_IN_OUT ( INOUT p_err VARCHAR(64), IN
pparam INTEGER, OUT re VARCHAR(15))
BEGIN ATOMIC
IF pparam > 0 THEN
set p_err = p_err || '_OK';
set re = 'RET_OK';
ELSE
set p_err = p_err || '_ERROR';
set re = 'RET_ERROR';
END IF;
END;
"""
Using a stored procedure with an input/output parameter
• groovy.util.XmlParser
• groovy.util.XmlSlurper
Both have the same approach to parsing XML. Both come with a bunch of overloaded
parse methods plus some special methods such as parseText , parseFile and others. For
the next example we will use the parseText method. It parses an XML String and
recursively converts it to a list or map of objects.
XmlSlurper
XmlParser
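A minimal sketch of parsing the same document with both classes (the XML snippet is illustrative):
def text = '''
    <list>
        <technology>
            <name>Groovy</name>
        </technology>
    </list>
'''

def listFromSlurper = new XmlSlurper().parseText(text)   // returns a GPathResult
def listFromParser  = new XmlParser().parseText(text)    // returns a Node

assert listFromSlurper.technology.name == 'Groovy'
assert listFromParser.technology.name.text() == 'Groovy'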
• Both are based on SAX, so they both have a low memory footprint
• Both can update/transform the XML
But they have key differences:
• XmlSlurper evaluates the structure lazily. So if you update the xml you’ll have to
evaluate the whole tree again.
• XmlSlurper returns GPathResult instances when parsing XML
There is a discussion at StackOverflow. The conclusions written here are based partially on this entry.
• "If you just have to read a few nodes XmlSlurper should be your choice, since it
will not have to create a complete structure in memory"
In general both classes perform in a similar way. Even the way of using GPath expressions
with them is the same (both use breadthFirst() and depthFirst() expressions).
So I guess it depends on the write/read frequency.
DOMCategory
There is another way of parsing XML documents with Groovy with the use
of groovy.xml.dom.DOMCategory , which is a category class which adds GPath-style
operations to Java's DOM classes.
Java has in-built support for DOM processing of XML using classes representing the various parts of XML documents,
e.g. Document , Element , NodeList , Attr etc. For more information about these classes, refer to the
JavaDocs.
use(DOMCategory) {
assert records.car.size() == 3
}
Parsing the XML
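A minimal sketch of parsing a document and then using DOMCategory for GPath-style access (the XML content is illustrative but matches the assertion above):
import groovy.xml.DOMBuilder
import groovy.xml.dom.DOMCategory

def CAR_RECORDS = '''
    <records>
        <car name='HSV Maloo' make='Holden' year='2006'/>
        <car name='P50' make='Peel' year='1962'/>
        <car name='Royale' make='Bugatti' year='1931'/>
    </records>
'''

def doc = DOMBuilder.parse(new StringReader(CAR_RECORDS))
def records = doc.documentElement

use(DOMCategory) {
    assert records.car.size() == 3
}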
3.9.2. GPath
The most common way of querying XML in Groovy is using GPath :
GPath is a path expression language integrated into Groovy which allows parts of nested
structured data to be identified. In this sense, it has similar aims and scope as XPath does
for XML. The two main places where you use GPath expressions is when dealing with nested
POJOs or when dealing with XML
It is similar to XPath expressions and you can use it not only with XML but also with
POJO classes. As an example, you can specify a path to an object or element of interest:
• a.b.c → for XML, yields all the <c> elements inside <b> inside <a>
• a.b.c → for POJOs, yields the c properties for all the b properties of a (sort
of like a.getB().getC() in JavaBeans)
For XML, you can also specify attributes, e.g. a['@href'] (map-like notation) or [email protected] (property
notation), both of which yield the href attribute of all the a elements.
In the end we’ll have the instance of the author node and because we wanted the text
inside that node we should be calling the text() method. The author node is an
instance of GPathResult type and text() a method giving us the content of that node
as a String.
When using GPath with an xml parsed with XmlSlurper we’ll have as a result
a GPathResult object. GPathResult has many other convenient methods to convert
the text inside a node to any other type such as:
• toInteger()
• toFloat()
• toBigInteger()
• …
All these methods try to convert a String to the appropriate type.
If we were using an XML document parsed with XmlParser we would be dealing with instances of
type Node . But still, all the actions applied to GPathResult in these examples can be
applied to a Node as well. The creators of both parsers took GPath compatibility
into account.
The next step is to get some values from a given node's attribute. In the following sample
we want to get the first book's author's id. We'll be using two different approaches. Let's
see the code first:
As you can see there are two types of notations to get attributes: the direct notation
with @nameoftheattribute and the map notation using ['@nameoftheattribute']. GPath also
offers two wildcards: * matches the direct children of a node, while ** looks everywhere
in the tree from the current node down.
The first example shows a simple use of * , which only iterates over the direct children
of the node.
Using *
def response = new XmlSlurper().parseText(books)
This operation roughly corresponds to the breadthFirst() method, except that it only
stops at one level instead of continuing to the inner levels.
What if we would like to look for a given value without having to know exactly where it
is? Let's say that the only thing we know is the id of the author "Lewis Carroll" . How are
we going to be able to find that book? Using ** is the solution:
Using **
assert bookId == 3
** is the same as looking for something everywhere in the tree from this point down. In
this case, we’ve used the method find(Closure cl) to find just the first occurrence.
What if we want to collect all book’s titles? That’s easy, just use findAll :
assert titles.size() == 4
In the last two examples, ** is used as a shortcut for the depthFirst() method. It goes
as far down each branch of the tree as it can before moving on to the next branch.
The breadthFirst() method finishes off all nodes on a given level before traversing
down to the next level.
The following example shows the difference between these two methods:
depthFirst() vs breadthFirst()
def response = new XmlSlurper().parseText(books)

def nodeName = { node -> node.name() }
def withId2or3 = { node -> node.@id in [2, 3] }

assert ['book', 'author', 'book', 'author'] ==
    response.value.books.depthFirst().findAll(withId2or3).collect(nodeName)
assert ['book', 'book', 'author', 'author'] ==
    response.value.books.breadthFirst().findAll(withId2or3).collect(nodeName)
In this example, we search for any nodes with an id attribute with value 2 or 3. There are
both book and author nodes that match that criteria. The different traversal orders
will find the same nodes in each case but in different orders corresponding to how the
tree was traversed.
It is worth mentioning again that there are some useful methods converting a node’s
value to an integer, float, etc. Those methods could be convenient when doing
comparisons like this:
helpers
assert titles.size() == 2
In this case the number 2 has been hardcoded but imagine that value could have come
from any other source (database… etc.).
• groovy.xml.MarkupBuilder
• groovy.xml.StreamingMarkupBuilder
MarkupBuilder
Here is an example of using Groovy's MarkupBuilder to create a new XML file:
import groovy.xml.MarkupBuilder

def writer = new FileWriter('records.xml')   // any java.io.Writer will do
def xml = new MarkupBuilder(writer)

xml.records() {
    car(name: 'HSV Maloo', make: 'Holden', year: 2006) {
        country('Australia')
        record(type: 'speed', 'Production Pickup Truck with speed of 271kph')
    }
    car(name: 'Royale', make: 'Bugatti', year: 1931) {
        country('France')
        record(type: 'price', 'Most Valuable Car at $15 million')
    }
}
xmlMarkup.movie("the godfather")
The xmlMarkup.movie(…) call will create a XML node with a tag called movie and with
content the godfather .
xmlMarkup.movie(id: 2) {
name("the godfather")
}
assert movie.@id == 2
assert movie.name.text() == 'the godfather'
A closure represents the children elements of a given node. Notice this time instead of using a
String for the attribute we’re using a number.
Sometimes you may want to use a specific namespace in your xml documents:
Namespace aware
xmlMarkup
.'x:movies'('xmlns:x': 'http://www.groovy-lang.org') {
'x:movie'(id: 1, 'the godfather')
'x:movie'(id: 2, 'ronin')
}
def movies =
new XmlSlurper()
.parseText(xmlWriter.toString())
.declareNamespace(x: 'http://www.groovy-lang.org')
assert movies.'x:movie'.last().@id == 2
assert movies.'x:movie'.last().text() == 'ronin'
Creating a node with a given namespace xmlns:x
Creating a XmlSlurper registering the namespace to be able to test the XML we just created
What about a more meaningful example? We may want to generate more
elements and have some logic when creating our XML:
def xmlWriter = new StringWriter()
def xmlMarkup = new MarkupBuilder(xmlWriter)
xmlMarkup
.'x:movies'('xmlns:x': 'http://www.groovy-lang.org') {
(1..3).each { n ->
'x:movie'(id: n, "the godfather $n")
if (n % 2 == 0) {
'x:movie'(id: n, "the godfather $n (Extended)")
}
}
}
def movies =
new XmlSlurper()
.parseText(xmlWriter.toString())
.declareNamespace(x: 'http://www.groovy-lang.org')
assert movies.'x:movie'.size() == 4
assert movies.'x:movie'*.text().every { name -> name.startsWith('the') }
Generating elements from a range
Mix code
def xmlWriter = new StringWriter()
def xmlMarkup = new MarkupBuilder(xmlWriter)

Closure<MarkupBuilder> buildMovieList = { MarkupBuilder builder ->
    (1..3).each { n ->
        builder.'x:movie'(id: n, "the godfather $n")
        if (n % 2 == 0) {
            builder.'x:movie'(id: n, "the godfather $n (Extended)")
        }
    }

    return builder
}
xmlMarkup.'x:movies'('xmlns:x': 'http://www.groovy-lang.org') {
buildMovieList(xmlMarkup)
}
def movies =
new XmlSlurper()
.parseText(xmlWriter.toString())
.declareNamespace(x: 'http://www.groovy-lang.org')
assert movies.'x:movie'.size() == 4
assert movies.'x:movie'*.text().every { name -> name.startsWith('the') }
In this case we’ve created a Closure to handle the creation of a list of movies
StreamingMarkupBuilder
The class groovy.xml.StreamingMarkupBuilder is a builder class for creating XML
markup. This implementation uses
a groovy.xml.streamingmarkupsupport.StreamingMarkupWriter to handle output.
Using StreamingMarkupBuilder
assert records.car.size() == 3
assert records.car.find { it.@name == 'P50' }.country.text() == 'Isle of
Man'
Note that StreamingMarkupBuilder.bind returns a Writable instance that may be
used to stream the markup to a Writer
We’re capturing the output in a String to parse it again an check the structure of the generated
XML with XmlSlurper .
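A self-contained sketch of the pattern described above (the car data mirrors the assertions):
import groovy.xml.StreamingMarkupBuilder

def xml = new StreamingMarkupBuilder().bind {
    records {
        car(name: 'HSV Maloo', make: 'Holden', year: 2006)
        car(name: 'P50', make: 'Peel', year: 1962) {
            country('Isle of Man')
        }
        car(name: 'Royale', make: 'Bugatti', year: 1931)
    }
}

def records = new XmlSlurper().parseText(xml.toString())

assert records.car.size() == 3
assert records.car.find { it.@name == 'P50' }.country.text() == 'Isle of Man'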
MarkupBuilderHelper
The groovy.xml.MarkupBuilderHelper is, as its name reflects, a helper
for groovy.xml.MarkupBuilder .
This helper normally can be accessed from within an instance of
class groovy.xml.MarkupBuilder or an instance
of groovy.xml.StreamingMarkupBuilder .
This helper could be handy when you want to produce things like comments, processing
instructions, an XML declaration, or escaped/unescaped text in the generated markup.
Here is another example to show the use of mkp property accessible from within
the bind method scope when using StreamingMarkupBuilder :
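A minimal sketch using a couple of the mkp helper methods listed later in this section (comment and xmlDeclaration):
import groovy.xml.StreamingMarkupBuilder

def xml = new StreamingMarkupBuilder().bind {
    mkp.xmlDeclaration()
    mkp.comment('a comment produced through the mkp helper')
    records {
        car(name: 'HSV Maloo')
    }
}

println xml.toString()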
DOMToGroovy
Suppose we have an existing XML document and we want to automate generation of the
markup without having to type it all in. We just need to
use org.codehaus.groovy.tools.xml.DomToGroovy as shown in the following
example:
def builder = javax.xml.parsers.DocumentBuilderFactory.newInstance().newDocumentBuilder()
def document = builder.parse(inputStream)                 // inputStream reads the existing XML document
def output = new StringWriter()
def converter = new DomToGroovy(new PrintWriter(output))  // org.codehaus.groovy.tools.xml.DomToGroovy

converter.print(document)
String xmlRecovered =
new GroovyShell()
.evaluate("""
def writer = new StringWriter()
def builder = new groovy.xml.MarkupBuilder(writer)
builder.${output}
return writer.toString()
""")
Converts the XML to MarkupBuilder calls which are available in the output StringWriter
Using output variable to create the whole MarkupBuilder
Adding nodes
The main difference between XmlSlurper and XmlParser is that when the former creates
nodes they won't be available until the document has been evaluated again, so you
should parse the transformed document again in order to be able to see the new nodes.
So keep that in mind when choosing either approach.
If you needed to see a node right after creating it then XmlParser should be your
choice, but if you’re planning to do many changes to the XML and send the result to
another process maybe XmlSlurper would be more efficient.
You can’t create a new node directly using the XmlSlurper instance, but you can
with XmlParser . The way of creating a new node from XmlParser is through its
method createNode(..)
def numberOfResults = parser.createNode(response, new QName("numberOfResults"), [:])   // parser is the XmlParser, response the parsed document
numberOfResults.value = "1"
assert response.numberOfResults.text() == "1"
The createNode() method receives the following parameters:
• parent node (could be null)
• The qualified name for the tag (In this case we only use the local part without any
namespace). We’re using an instance of groovy.xml.QName
• A map with the tag’s attributes (None in this particular case)
Anyway you won’t normally be creating a node from the parser instance but from the
parsed XML instance. That is from a Node or a GPathResult instance.
Take a look at the next example. We are parsing the xml with XmlParser and then
creating a new node from the parsed document’s instance (Notice the method here is
slightly different in the way it receives the parameters):
response.appendNode(
    new QName("numberOfResults"),
    [:],
    "1"
)

assert response.numberOfResults.text() == "1"
When using XmlSlurper , GPathResult instances don't have a createNode() method.
/* mkp is a special namespace used to escape away from the normal building mode
   of the builder and get access to helper markup methods
   'yield', 'pi', 'comment', 'out', 'namespaces', 'xmlDeclaration' and 'yieldUnescaped' */
def result = new StreamingMarkupBuilder().bind { mkp.yield response }.toString()
def changedResponse = new XmlSlurper().parseText(result)
Finally, both parsers also use the same approach for adding a new attribute to a given node. This time again the difference is whether you want the new attribute to be available right away or not. First, XmlParser:
response.@numberOfResults = "1"
Printing XML
XmlUtil
Sometimes it is useful to get not only the value of a given node but the node itself (for instance to add this node to another XML).
For that you can use the groovy.xml.XmlUtil class. It has several static methods to serialize the XML fragment from several types of sources (Node, GPathResult, String…).
assert nodeAsText == XmlUtil.serialize('<?xml version="1.0" encoding="UTF-8"?><author id="1">Miguel de Cervantes</author>')
<taskdef name="groovy"
classname="org.codehaus.groovy.ant.Groovy"
classpathref="my.classpath"/>
Now use the task like this:
<groovy>
...
</groovy>
You might need to use the contextClassLoader attribute (see below) if any of your
modules load services via the classpath, e.g. groovy-json .
3.11.2. <groovy> attributes
Attribute Description Required
<arg>
Arguments can be set via one or more nested <arg> elements using the standard
Ant command line conventions.
Name Description
ant an instance of AntBuilder that knows about the current ant project
3.11.5. Examples
Hello world, version 1:
<groovy>
println "Hello World"
</groovy>
Hello world, version 2:
<groovy>
ant.echo "Hello World"
</groovy>
List all xml files in the current directory:
<groovy>
xmlfiles = new File(".").listFiles().findAll{ it =~ "\.xml$" }
xmlfiles.sort().each { println it.toString() }
</groovy>
List all xml files within a jar:
<groovy src="/some/directory/some/file.groovy">
<classpath>
<pathelement location="/my/groovy/classes/directory"/>
</classpath>
</groovy>
Find all Builder classes having an org.* package within a directory of jars:
<groovy>
import java.util.jar.JarFile
def resourceNamePattern = /org\/.*\/.*Builder.class/
def candidates = ant.fileScanner {
fileset(dir: '${local.target}/lib') {
include(name: '*beta*.jar')
include(name: '*commons*.jar')
}
}
def classes = candidates.collect {
new JarFile(it).entries().collect { it.name }.findAll {
it ==~ resourceNamePattern
}
}.flatten()
properties["builder-classes"] = classes.join(' ')
</groovy>
Calling out to a web service from your Ant script:
main:
...
[echo] I'm freezing at 0 degrees Celsius
results:
[echo] I'm freezing at 32 degrees Fahrenheit
BUILD SUCCESSFUL
Setting arguments:
<target name="run">
<groovy>
<arg line="1 2 3"/>
<arg value="4 5"/>
println args.size()
println args[2]
args.each{ ant.echo(message:it) }
</groovy>
</target>
Output:
Buildfile: build.xml
run:
[groovy] 4
[groovy] 3
[echo] 1
[echo] 2
[echo] 3
[echo] 4 5
BUILD SUCCESSFUL
3.12.3. SimpleTemplateEngine
Shown here is the SimpleTemplateEngine that allows you to use JSP-like scriptlets
(see example below), script, and EL expressions in your template in order to generate
parametrized text. Here is an example of using the system:
def text = 'Dear "$firstname $lastname",\nSo nice to meet you in <% print city %>.\nSee you in ${month},\n${signed}'
$firstname
to this (assuming we have set up a static import for capitalize inside the template):
${firstname.capitalize()}
or this:
<% print city == "New York" ? "The Big Apple" : city %>
<% print city == "New York" ? "\\"The Big Apple\\"" : city %>
Similarly, if we wanted a newline, we would use:
\\n
in any GString expression or scriptlet 'code' that appears inside a Groovy script. A
normal “\n” is fine within the static template text itself or if the entire template itself is
in an external template file. Similarly, to represent an actual backslash in your text you
would need
\\\\
in an external file or
\\\\
in any GString expression or scriptlet 'code'. (Note: the necessity to have this extra slash
may go away in a future version of Groovy if we can find an easy way to support such a
change.)
3.12.4. StreamingTemplateEngine
The StreamingTemplateEngine engine is functionally equivalent to
the SimpleTemplateEngine , but creates the template using writable closures making it
more scalable for large templates. Specifically this template engine can handle strings
larger than 64k.
It uses JSP style <% %> script and <%= %> expression syntax or GString style
expressions. The variable 'out' is bound to the writer that the template is being written
to.
Frequently, the template source will be a file but here we show a simple example
providing the template as a string:
3.12.5. GStringTemplateEngine
As an example of using the GStringTemplateEngine , here is the example above done
again (with a few changes to show some other options). First we will store the template
in a file this time:
test.template
3.12.6. XmlTemplateEngine
The XmlTemplateEngine is for use in templating scenarios where both the template source and the expected output are intended to be XML. Templates may use the
normal ${expression} and $variable notations to insert an arbitrary expression into
the template. In addition, support is also provided for special
tags: <gsp:scriptlet> (for inserting code fragments) and <gsp:expression> (for
code fragments which produce output).
Comments and processing instructions will be removed as part of processing and special
XML characters such as < , > , " and ' will be escaped using the respective XML
notation. The output will also be indented using standard XML pretty printing.
The xmlns namespace definition for gsp: tags will be removed but other namespace
definitions will be preserved (but may change to an equivalent position within the XML
tree).
Normally, the template source will be in a file but here is a simple example providing the
XML template as a string:
<document type='letter'>
Dearest
<foo:to xmlns:foo='baz'>
Jochen "blackdrag" Theodorou
</foo:to>
How are you today?
</document>
xmlDeclaration()
cars {
cars.each {
car(make: it.make, model: it.model)
}
}
If you feed it with the following model:
<?xml version='1.0'?>
<cars><car make='Peugeot' model='508'/><car make='Toyota' model='Prius'/></cars>
The key features of this template engine are:
xmlDeclaration()
cars {
cars.each {
car(make: it.make, model: it.model)
}
}
renders the XML declaration string.
cars is a variable found in the template model, which is a list of Car instances
for each item, we create a car tag with the attributes from the Car instance
As you can see, regular Groovy code can be used in the template. Here, we are
calling each on a list (retrieved from the model), allowing us to render one car tag per
entry.
In a similar fashion, rendering HTML code is as simple as this:
renders a p tag
Support methods
In the previous example, the doctype declaration was rendered using
the yieldUnescaped method. We have also seen the xmlDeclaration method. The
template engine provides several support methods that will help you render contents
appropriately:
<?xml version='1.0'?>
If TemplateConfiguration#getDeclarationEncoding is not null:
Output:
<?xml version='1.0' encoding='UTF-8'?>
Output:
<!--This is <a href="foo.html">commented out</a>-->
<p>text</p>
<p>text on new line</p>
pi("xml-
stylesheet":[href:"my
style.css",
type:"text/css"])
Output:
<?xml-stylesheet href='mystyle.css' type='text/css'?>
Includes
The MarkupTemplateEngine supports inclusion of contents from another file. Included
contents may be:
• another template
• raw contents
• contents to be escaped
Including another template can be done using:
Calling those methods instead of the include xxx: syntax can be useful if the name of
the file to be included is dynamic (stored in a variable for example). Files to be included
(independently of their type, template or text) are found on classpath. This is one of the
reasons why the MarkupTemplateEngine takes an optional ClassLoader as
constructor argument (the other reason being that you can include code referencing
other classes in a template).
Fragments
Fragments are nested templates. They can be used to provide improved composition in a
single template. A fragment consists of a string, the inner template, and a model, used to
render this template. Consider the following template:
ul {
pages.each {
fragment "li(line)", line:it
}
}
The fragment element creates a nested template, and renders it with a model which is
specific to this template. Here, we have the li(line) fragment, where line is bound
to it . Since it corresponds to the iteration of pages , we will generate a
single li element for each page in our model:
<ul><li>Page 1</li><li>Page 2</li></ul>
Fragments are interesting to factorize template elements. They come at the price of the
compilation of a fragment per template, and they cannot be externalized.
Layouts
Layouts, unlike fragments, refer to other templates. They can be used to compose
templates and share common structures. This is often interesting if you have, for
example, a common HTML page setup, and that you only want to replace the body. This
can be done easily with a layout. First of all, you need to create a layout template:
layout-main.tpl
html {
head {
title(title)
}
body {
bodyContents()
}
}
the title variable (inside the title tag) is a layout variable
layout 'layout-main.tpl',
title: 'Layout example',
bodyContents: contents { p('This is the body') }
use the layout-main.tpl layout file

As you can see, bodyContents will be rendered inside the layout, thanks to
the bodyContents() call in the layout file. As a result, the template will be rendered as
this:
Layouts use, by default, a model which is independent from the model of the page where
they are used. It is however possible to make them inherit from the parent model.
Imagine that the model is defined like this:
then it is not necessary to pass the title value to the layout as in the previous example.
The result will be:
Rendering contents
Creation of a template engine
On the server side, rendering templates require an instance
of groovy.text.markup.MarkupTemplateEngine and
a groovy.text.markup.TemplateConfiguration :
render output
Configuration options
The behavior of the engine can be tweaked with several configuration options accessible
through the TemplateConfiguration class:
<?xml version='1.0' encoding='UTF-8'?>
Output:
<p></p>
Output:
<tag attr="value"/>
Output:
<p>foo</p>BAR<p>baz</p>
… automatically escaped before rendering.
autoIndentString (String, default: four (4) spaces): the string to be used as indent. See the automatic formatting section.
Once the template engine has been created, it is unsafe to change the configuration.
Automatic formatting
By default, the template engine will render output without any specific formatting. Some configuration options can improve the situation:
• autoNewLine is responsible for automatically inserting new lines based on the original formatting of the template source
• autoIndent is responsible for auto-indenting after a new line is inserted
config.setAutoNewLine(true);
config.setAutoIndent(true);
Using the following template:
html {
head {
title('Title')
}
}
The output will now be:
<html>
<head>
<title>Title</title>
</head>
</html>
We can slightly change the template so that the title instruction is found on the same
line as the head one:
html {
head { title('Title')
}
}
And the output will reflect that:
<html>
<head><title>Title</title>
</head>
</html>
New lines are only inserted where curly braces for tags are found, and the insertion
corresponds to where the nested content is found. This means that tags in the body of
another tag will not trigger new lines unless they use curly braces themselves:
html {
head {
meta(attr:'value')
title('Title')
newLine()
meta(attr:'value2')
}
}
a new line is inserted because meta is not on the same line as head
no new line is inserted, because we’re on the same depth as the previous tag
<html>
<head>
<meta attr='value'/><title>Title</title>
<meta attr='value2'/>
</head>
</html>
By default, the renderer uses four (4) spaces as indent, but you can change it by setting the TemplateConfiguration#autoIndentString property.
Automatic escaping
By default, content which is read from the model is rendered as is. If this content comes from user input, it may be sensitive, and you might want to escape it by default, for example to avoid XSS injection. For that, the template configuration provides an option which will automatically escape objects from the model, as long as they inherit from CharSequence (typically Strings).
config.setAutoEscape(false);
model = new HashMap<String,Object>();
model.put("unsafeContents", "I am an <html> hacker.");
and the following template:
html {
body {
div(unsafeContents)
}
}
Then you wouldn’t want the HTML from unsafeContents to be rendered as is, because
of potential security issues:
config.setAutoEscape(true);
And now the output is properly escaped:
html {
body {
div(unescaped.unsafeContents)
}
}
Common gotchas
Strings containing markup
Say that you want to generate a <p> tag which contains a string containing markup:
p {
yield "This is a "
a(href:'target.html', "link")
yield " to another page"
}
and generates:
p {
yield "This is a ${a(href:'target.html', "link")} to another page"
}
but the result will not look as expected:
It is worth noting that using stringOf or the special $tag notation triggers the creation of a distinct string writer which is then used to render the markup. It is slower than using the version with calls to yield which does direct streaming of the markup instead.
Internationalization
The template engine has native support for internationalization. For that, when you
create the TemplateConfiguration , you can provide a Locale which is the default
locale to be used for templates. Each template may have different versions, one for each
locale. The name of the template makes the difference:
• …
When a template is rendered or included, then:
• if the template name or include name explicitly sets a locale, the specific version is
included, or the default version if not found
• if the template name doesn’t include a locale, the version for
the TemplateConfiguration locale is used, or the default version if not found
For example, imagine the default locale is set to Locale.ENGLISH and that the main
template includes:
Texte en français
Using an include without specifying a locale will make the template engine look for a
template with the configured locale, and if not, fallback to the default, like here:
Default text
However, changing the default locale of the template engine to Locale.FRANCE will
change the output, because the template engine will now look for a file with
the fr_FR locale:
Don’t fallback to the default template because a locale specific template was found
Texte en français
This strategy lets you translate your templates one by one, by relying on default
templates, for which no locale is set in the file name.
config.setBaseTemplateClass(MyTemplate.class);
The custom base class has to extend BaseTemplate like in this example:
List<Module> getModules() {
return modules
}
Page.groovy
Long id
String title
String body
}
Then a list of pages can be exposed in the model, like this:
page.title is valid
Runtime error
declare the type of the pages variables (note the use of a string for the type)
This time, when the template is compiled at the last line, an error occurs:
modelTypes = {
List<Page> pages
}
This feature will automatically compile your .groovy source files, turn them into
bytecode, load the Class and cache it until you change the source file.
Here’s a simple example to show you the kind of things you can do from a Groovlet.
Notice the use of implicit variables to access the session, output and request. Also notice
that this is more like a script as it does not have a class wrapper.
if (!session) {
session = request.getSession(true)
}
if (!session.counter) {
session.counter = 1
}
println """
<html>
<head>
<title>Groovy Servlet</title>
</head>
<body>
<p>
Hello, ${request.remoteHost}: ${session.counter}! ${new Date()}
</p>
</body>
</html>
"""
session.counter = session.counter + 1
Or, do the same thing using MarkupBuilder:
if (!session) {
session = request.getSession(true)
}
if (!session.counter) {
session.counter = 1
}
request ServletRequest -
response ServletResponse -
context ServletContext -
application ServletContext -
1. The session variable is only set if there was already a session object. See the if (!session) checks in the examples above.
2. These variables cannot be re-assigned inside a Groovlet. They are bound on first access, allowing you to e.g. call methods on the response object before using out.
<servlet>
<servlet-name>Groovy</servlet-name>
<servlet-class>groovy.servlet.GroovyServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Groovy</servlet-name>
<url-pattern>*.groovy</url-pattern>
</servlet-mapping>
Then put the required groovy jar files into WEB-INF/lib .
Now put the .groovy files in, say, the root directory (i.e. where you would put your html
files). The GroovyServlet takes care of compiling the .groovy files.
So for example using tomcat you could edit tomcat/conf/server.xml like this:
Eval
The groovy.util.Eval class is the simplest way to execute Groovy dynamically at
runtime. This can be done by calling the me method:
import groovy.util.Eval
assert Eval.me('33*3') == 99
assert Eval.me('"foo".toUpperCase()') == 'FOO'
Eval supports multiple variants that accept parameters for simple evaluation:
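The variant listings are not reproduced above; as a quick sketch, Eval.x, Eval.xy and Eval.xyz bind one, two or three parameters named x, y and z, and Eval.me can bind a single parameter under a name of your choosing:
assert Eval.x(4, '2*x') == 8
assert Eval.xy(4, 5, 'x+y') == 9
assert Eval.xyz(4, 5, 6, 'x*y+z') == 26
assert Eval.me('k', 10, '2*k') == 20   // bind a single, arbitrarily named parameter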
The Eval class makes it very easy to evaluate simple scripts, but it doesn’t scale: there is no caching of the script, and it isn’t meant to evaluate anything more than one-liners.
GroovyShell
Multiple sources
The groovy.lang.GroovyShell class is the preferred way to evaluate scripts with the
ability to cache the resulting script instance. Although the Eval class returns the result
of the execution of the compiled script, the GroovyShell class offers more options.
add a date to the binding (you are not limited to simple types)
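The listing the callout refers to is not shown here; a minimal sketch of sharing data through a Binding (the variable names are illustrative) could look like this:
def sharedData = new Binding()
def shell = new GroovyShell(sharedData)
def now = new Date()                                   // a date, not just a simple type
sharedData.setProperty('text', 'I am shared data!')
sharedData.setProperty('date', now)
String result = shell.evaluate('"At $date, $text"')    // the script reads both binding variables
assert result == "At $now, I am shared data!"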
Note that it is also possible to write from the script into the binding:
shell.evaluate('foo=123')
It is important to understand that you need to use an undeclared variable if you want to
write into the binding. Using def or an explicit type like in the example below would
fail because you would then create a local variable:
shell.evaluate('int foo=123')
try {
assert sharedData.getProperty('foo')
} catch (MissingPropertyException e) {
println "foo is defined as a local variable"
}
You must be very careful when using shared data in a multithreaded environment. The Binding instance that you pass to GroovyShell is not thread safe, and is shared by all scripts.
However, you must be aware that you are still sharing the same instance of a script. So
this technique cannot be used if you have two threads working on the same script. In
that case, you must make sure of creating two distinct script instances:
In case you need thread safety like here, it is more advisable to use
the GroovyClassLoader directly instead.
abstract class MyScript extends Script {
    String name
    String greet() {
        "Hello, $name!"
    }
}
The custom class defines a property called name and a new method called greet . This
class can be used as the script base class by using a custom configuration:
import org.codehaus.groovy.control.CompilerConfiguration
then use the compiler configuration when you create the shell
You are not limited to the sole scriptBaseClass configuration. You can use any of
the compiler configuration tweaks, including the compilation customizers.
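A minimal sketch, assuming the custom class above is called MyScript:
import org.codehaus.groovy.control.CompilerConfiguration

def config = new CompilerConfiguration()
config.scriptBaseClass = 'MyScript'                        // use the custom base class
def shell = new GroovyShell(this.class.classLoader, new Binding(), config)
def script = shell.parse('greet()')                        // the parsed script now extends MyScript
script.name = 'Michel'
assert script.run() == 'Hello, Michel!'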
GroovyClassLoader
In the previous section, we have shown that GroovyShell was an easy tool to execute
scripts, but it makes it complicated to compile anything but scripts. Internally, it makes
use of the groovy.lang.GroovyClassLoader , which is at the heart of the compilation
and loading of classes at runtime.
import groovy.lang.GroovyClassLoader
you can check that the class which is returned is really the one defined in the script
and you can create a new instance of the class, which is not a script
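The listings referenced by the callouts above are not reproduced; a minimal sketch (the Foo class is made up) could be:
import groovy.lang.GroovyClassLoader

def gcl = new GroovyClassLoader()
def clazz = gcl.parseClass('class Foo { void doIt() { println "ok" } }')
assert clazz.name == 'Foo'          // the class returned really is the one defined in the script
def o = clazz.newInstance()         // a new instance, which is not a script
o.doIt()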
import groovy.lang.GroovyClassLoader
The reason is that a GroovyClassLoader doesn’t keep track of the source text. If you
want to have the same instance, then the source must be a file, like in this example:
parse a class from a distinct file instance, but pointing to the same physical file
GroovyScriptEngine
The groovy.util.GroovyScriptEngine class provides a flexible foundation for
applications which rely on script reloading and script dependencies.
While GroovyShell focuses on standalone scripts and GroovyClassLoader handles dynamic compilation and loading of any Groovy class, the GroovyScriptEngine will add a layer on top of GroovyClassLoader to handle both script dependencies and reloading.
To illustrate this, we will create a script engine and execute code in an infinite loop. First
of all, you need to create a directory with the following script inside:
ReloadingTest.groovy
class Greeter {
String sayHello() {
def greet = "Hello, world!"
greet
}
}
new Greeter()
then you can execute this code using a GroovyScriptEngine :
while (true) {
def greeter = engine.run('ReloadingTest.groovy', binding)
println greeter.sayHello()
Thread.sleep(1000)
}
create a script engine which will look for sources into our source directory
Hello, world!
Hello, world!
...
Without interrupting the script execution, now replace the contents of
the ReloadingTest file with:
ReloadingTest.groovy
class Greeter {
String sayHello() {
def greet = "Hello, Groovy!"
greet
}
}
new Greeter()
And the message should change to:
Hello, world!
...
Hello, Groovy!
Hello, Groovy!
...
But it is also possible to have a dependency on another script. To illustrate this, create
the following file into the same directory, without interrupting the executing script:
Dependency.groovy
class Dependency {
String message = 'Hello, dependency 1'
}
and update the ReloadingTest script like this:
ReloadingTest.groovy
import Dependency
class Greeter {
String sayHello() {
def greet = new Dependency().message
greet
}
}
new Greeter()
And this time, the message should change to:
Hello, Groovy!
...
Hello, dependency 1
Hello, dependency 1
...
And as a last test, you can update the Dependency.groovy file without touching
the ReloadingTest file:
Dependency.groovy
class Dependency {
String message = 'Hello, dependency 2'
}
And you should observe that the dependent file was reloaded:
Hello, dependency 1
...
Hello, dependency 2
Hello, dependency 2
CompilationUnit
Ultimately, it is possible to perform more operations during compilation by relying
directly on the org.codehaus.groovy.control.CompilationUnit class. This class is
responsible for determining the various steps of compilation and would let you
introduce new steps or even stop compilation at various phases. This is for example how
stub generation is done, for the joint compiler.
However, overriding CompilationUnit is not recommended and should only be done if
no other standard solution works.
Since Groovy has its own native support for integration with Java, you only need to worry about BSF if you also want to be able to call other languages, e.g. JRuby, or if you want to remain very loosely coupled from your scripting language.
Getting started
Provided you have Groovy and BSF jars in your classpath, you can use the following Java
code to run a sample Groovy script:
Passing in variables
BSF lets you pass beans between Java and your scripting language. You
can register/unregister beans which makes them known to BSF. You can then use BSF
methods to lookup beans as required. Alternatively, you can declare/undeclare beans.
This will register them but also make them available for use directly in your scripting
language. This second approach is the normal approach used with Groovy. Here is an
example:
Here is how you need to initialize the JSR-223 engine to talk to Groovy from Java:
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;
...
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine engine = factory.getEngineByName("groovy");
Then you can execute Groovy scripts easily:
engine.put("first", "HELLO");
engine.put("second", "world");
String result = (String) engine.eval("first.toLowerCase() + ' ' + second.toUpperCase()");
assertEquals("hello WORLD", result);
This next example illustrates calling an invokable function:
import javax.script.Invocable;
...
ScriptEngineManager factory = new ScriptEngineManager();
ScriptEngine engine = factory.getEngineByName("groovy");
String fact = "def factorial(n) { n == 1 ? 1 : n * factorial(n - 1) }";
engine.eval(fact);
Invocable inv = (Invocable) engine;
Object[] params = {5};
Object result = inv.invokeFunction("factorial", params);
assertEquals(new Integer(120), result);
By default, the engine keeps hard references to the script functions. To change this, you should set an engine-level scoped attribute in the script context named #jsr223.groovy.engine.keep.globals with a String value of phantom to use phantom references, weak to use weak references, or soft to use soft references; casing is ignored. Any other string will cause the use of hard references.
The above examples illustrate using a command chain based DSL but not how to create
one. There are various strategies that you can use, but to illustrate creating such a DSL,
we will show a couple of examples - first using maps and Closures:
show = { println it }
square_root = { Math.sqrt(it) }
def please(action) {
[the: { what ->
[of: { n -> action(what(n)) }]
}]
}
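With those definitions in place, the command chain form reads naturally; each word/argument pair maps to a method call:
// equivalent to: please(show).the(square_root).of(100)
please show the square_root of 100     // prints 10.0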
@Grab('com.google.guava:guava:r09')
import com.google.common.base.*
def result = Splitter.on(',').trimResults(CharMatcher.is('_' as char)).split("_a ,_b_ ,c__").iterator().toList()
It reads fairly well for a Java developer but if that is not your target audience or you have
many such statements to write, it could be considered a little verbose. Again, there are
many options for writing a DSL. We’ll keep it simple with Maps and Closures. We’ll first
write a helper method:
@Grab('com.google.guava:guava:r09')
import com.google.common.base.*
def split(string) {
[on: { sep ->
[trimming: { trimChar ->
Splitter.on(sep).trimResults(CharMatcher.is(trimChar as char)).split(string).iterator().toList()
}]
}]
}
now instead of this line from our original example:
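we can write the command chain form shown below; it is equivalent to split('_a ,_b_ ,c__').on(',').trimming('_') and produces the same result as the direct Splitter call above:
def result = split "_a ,_b_ ,c__" on ',' trimming '_'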
This allows you to provide your own Java or Groovy objects which can take advantage of
operator overloading. The following table describes the operators supported in Groovy
and the methods they map to.
Operator Method
a + b a.plus(b)
a - b a.minus(b)
a * b a.multiply(b)
a ** b a.power(b)
a / b a.div(b)
a % b a.mod(b)
a | b a.or(b)
a & b a.and(b)
a ^ b a.xor(b)
a[b] a.getAt(b)
a[b] = c a.putAt(b, c)
a << b a.leftShift(b)
a >> b a.rightShift(b)
a >>> b a.rightShiftUnsigned(b)
if(a) a.asBoolean()
~a a.bitwiseNegate()
-a a.negative()
+a a.positive()
a as b a.asType(b)
a == b a.equals(b)
a != b ! a.equals(b)
a <=> b a.compareTo(b)
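As a hypothetical illustration (the Money class below is not part of Groovy), providing a plus method is all that is needed for + to work on your own type:
class Money {
    int amount
    Money plus(Money other) { new Money(amount: amount + other.amount) }
}

assert (new Money(amount: 10) + new Money(amount: 5)).amount == 15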
input variables are set from the calling class inside the binding
This is a very practical way to share data between the caller and the script, however it
may be insufficient or not practical in some cases. For that purpose, Groovy allows you
to set your own base script class. A base script class has to extend groovy.lang.Script and
be a single abstract method type:
set the base script class to our custom base script class
then create a GroovyShell using that configuration
the script will then extend the base script class, giving direct access to the name property
and greet method
import groovy.transform.BaseScript
@BaseScript(MyBaseClass)
import groovy.transform.BaseScript
setName 'Judith'
greet()
the run method can be overridden and perform a task before executing the script body
run calls the abstract scriptBody method which will delegate to the user script
then it can return something other than the value from the script
use(TimeCategory) {
println 1.minute.from.now
println 10.hours.ago
Categories are lexically bound, making them a great fit for internal DSLs.
3.15.5. @DelegatesTo
Explaining delegation strategy at compile time
@groovy.lang.DelegatesTo is a documentation and compile-time annotation aimed
at:
email {
from '[email protected]'
to '[email protected]'
subject 'The pope has resigned!'
body {
p 'Really, the pope has resigned!'
}
}
One way of implementing this is using the builder strategy, which implies a method,
named email which accepts a closure as an argument. The method may delegate
subsequent calls to an object that implements
the from , to , subject and body methods. Again, body is a method which accepts a
closure as an argument and that uses the builder strategy.
class EmailSpec {
void from(String from) { println "From: $from"}
void to(String... to) { println "To: $to"}
void subject(String subject) { println "Subject: $subject"}
void body(Closure body) {
def bodySpec = new BodySpec()
def code = body.rehydrate(bodySpec, this, this)
code.resolveStrategy = Closure.DELEGATE_ONLY
code()
}
}
The EmailSpec class has itself a body method accepting a closure that is cloned and
executed. This is what we call the builder pattern in Groovy.
One of the problems with the code that we’ve shown is that the user of
the email method doesn’t have any information about the methods that he’s allowed to
call inside the closure. The only possible information is from the method documentation.
There are two issues with this: first of all, documentation is not always written, and if it
is, it’s not always available (javadoc not downloaded, for example). Second, it doesn’t
help IDEs. What would be really interesting, here, is for IDEs to help the developer by
suggesting, once they are in the closure body, methods that exist on the email class.
Moreover, if the user calls a method in the closure which is not defined by
the EmailSpec class, the IDE should at least issue a warning (because it’s very likely
that it will break at runtime).
One more problem with the code above is that it is not compatible with static type
checking. Type checking would let the user know if a method call is authorized at
compile time instead of runtime, but if you try to perform type checking on this code:
email {
from '[email protected]'
to '[email protected]'
subject 'The pope has resigned!'
body {
p 'Really, the pope has resigned!'
}
}
Then the type checker will know that there’s an email method accepting a Closure ,
but it will complain for every method call inside the closure, because from , for example,
is not a method which is defined in the class. Indeed, it’s defined in the EmailSpec class
and it has absolutely no hint to help it knowing that the closure delegate will, at runtime,
be of type EmailSpec :
@groovy.transform.TypeChecked
void sendEmail() {
email {
from '[email protected]'
to '[email protected]'
subject 'The pope has resigned!'
body {
p 'Really, the pope has resigned!'
}
}
}
will fail compilation with errors like this one:
[Static type checking] - Cannot find matching method
MyScript#from(java.lang.String). Please check if the declared type is right
and if the method exists.
@ line 31, column 21.
from '[email protected]'
@DelegatesTo
For those reasons, Groovy 2.1 introduced a new annotation named @DelegatesTo . The
goal of this annotation is to solve both the documentation issue, that will let your IDE
know about the expected methods in the closure body, and it will also solve the type
checking issue, by giving hints to the compiler about what are the potential receivers of
method calls in the closure body.
@TypeChecked
void doEmail() {
email {
from '[email protected]'
to '[email protected]'
subject 'The pope has resigned!'
body {
p 'Really, the pope has resigned!'
}
}
}
DelegatesTo modes
@DelegatesTo supports multiple modes that we will describe with examples in this
section.
Simple delegation
In this mode, the only mandatory parameter is the value which says to which class we
delegate calls. Nothing more. We’re telling the compiler that the type of the delegate
will always be of the type documented by @DelegatesTo (note that it can be a subclass,
but if it is, the methods defined by the subclass will not be visible to the type checker).
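A minimal sketch of the email method from the earlier example using simple delegation (the method body is an assumption):
def email(@DelegatesTo(EmailSpec) Closure cl) {
    def spec = new EmailSpec()
    def code = cl.rehydrate(spec, this, this)   // make the EmailSpec instance the delegate
    code()
}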
Delegation strategy
In this mode, you must specify both the delegate class and a delegation strategy. This
must be used if the closure will not be called with the default delegation strategy, which
is Closure.OWNER_FIRST .
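A sketch of the same method declaring both the delegate type and the strategy:
def email(@DelegatesTo(strategy = Closure.DELEGATE_ONLY, value = EmailSpec) Closure cl) {
    def spec = new EmailSpec()
    def code = cl.rehydrate(spec, this, this)
    code.resolveStrategy = Closure.DELEGATE_ONLY   // runtime strategy matches the annotation
    code()
}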
Delegate to parameter
In this variant, we will tell the compiler that we are delegating to another parameter of
the method. Take the following code:
class Greeter {
void sayHello() { println 'Hello' }
}
def greeter = new Greeter()
exec(greeter) {
sayHello()
}
Remember that this works out of the box without having to annotate
with @DelegatesTo . However, to make the IDE aware of the delegate type, or the type
checker aware of it, we need to add @DelegatesTo . And in this case, it will know that
the Greeter variable is of type Greeter , so it will not report errors on
the sayHello method even if the exec method doesn’t explicitly define the target as of type Greeter. This is a very powerful feature, because it prevents you from writing multiple versions of the same exec method for different receiver types!
In this mode, the @DelegatesTo annotation also supports the strategy parameter that we’ve described above.
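The exec method used above might be declared like this (a sketch):
def exec(@DelegatesTo.Target Object target, @DelegatesTo Closure code) {
    code.delegate = target      // delegate calls inside the closure to the annotated target parameter
    code()
}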
Multiple closures
In the previous example, the exec method accepted only one closure, but you may have
methods that take multiple closures:
void fooBarBaz(
    @DelegatesTo.Target('foo') foo,
    @DelegatesTo.Target('bar') bar,
    @DelegatesTo.Target('baz') baz,
    @DelegatesTo(target = 'foo') Closure cl1,
    @DelegatesTo(target = 'bar') Closure cl2,
    @DelegatesTo(target = 'baz') Closure cl3) { /* ... */ }
@groovy.transform.ToString
class Realm {
String name
}
List<Realm> list = []
3.times { list << new Realm() }
configure(list) {
name = 'My Realm'
}
assert list.every { it.name == 'My Realm' }
To let the type checker and the IDE know that the configure method calls the closure
on each element of the list, you need to use @DelegatesTo differently:
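A sketch of such a configure method, using the genericTypeIndex member of @DelegatesTo to point at the first generic type of the annotated List parameter (here Realm):
def configure(@DelegatesTo.Target List<Realm> elements,
              @DelegatesTo(strategy = Closure.DELEGATE_FIRST, genericTypeIndex = 0) Closure configuration) {
    elements.each {
        def clone = configuration.rehydrate(it, this, this)   // each list element becomes the delegate
        clone()
    }
}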
class Mapper<T,U> {
final T value
Mapper(T value) { this.value = value }
U map(Closure<U> producer) {
producer.delegate = value
producer()
}
}
The mapper class takes two generic type arguments: the source type and the target type
The map method asks to convert the source object to a target object
As you can see, the method signature from map does not give any information about
what object will be manipulated by the closure. Reading the method body, we know that
it will be the value which is of type T , but T is not found in the method signature, so
we are facing a case where none of the available options for @DelegatesTo is suitable.
For example, if we try to statically compile this code:
def mapper = new Mapper<String,Integer>('Hello')
assert mapper.map { length() } == 5
Then the compiler will fail with:
class Mapper<T,U> {
final T value
Mapper(T value) { this.value = value }
U map(@DelegatesTo(type="T") Closure<U> producer) {
producer.delegate = value
producer()
}
}
The @DelegatesTo annotation references a generic type which is not found in the method
signature
Note that you are not limited to generic type tokens. The type member can be used to
represent complex types, such as List<T> or Map<T,List<U>> . The reason why you
should use that in last resort is that the type is only checked when the type checker finds
usage of @DelegatesTo , not when the annotated method itself is compiled. This means
that type safety is only ensured at the call site. Additionally, compilation will be slower
(though probably unnoticeable for most cases).
The goal of compilation customizers is to make those common tasks easy to implement.
For that, the CompilerConfiguration class is the entry point. The general schema will
always be based on the following code:
import org.codehaus.groovy.control.CompilerConfiguration
// create a configuration
def config = new CompilerConfiguration()
// tweak the configuration
config.addCompilationCustomizers(...)
// run your script
def shell = new GroovyShell(config)
shell.evaluate(script)
Compilation customizers must extend the org.codehaus.groovy.control.customizers.CompilationCustomizer class. A customizer works:
Import customizer
Using this compilation customizer, your code will have imports added transparently.
This is in particular useful for scripts implementing a DSL where you want to avoid
users from having to write imports. The import customizer will let you add all the
variants of imports the Groovy language allows, that is:
As an example, let’s say you want to be able to use @Log in a script. The problem is
that @Log is normally applied on a class node and a script, by definition, doesn’t require
one. But implementation wise, scripts are classes, it’s just that you cannot annotate this
implicit class node with @Log . Using the AST customizer, you have a workaround to do
it:
import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer
import groovy.util.logging.Log
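The listing itself is missing above; a minimal sketch of registering the customizer (the log message is illustrative) could be:
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.ASTTransformationCustomizer
import groovy.util.logging.Log

def config = new CompilerConfiguration()
config.addCompilationCustomizers(new ASTTransformationCustomizer(Log))   // apply @Log to the script class
def shell = new GroovyShell(config)
shell.evaluate("log.info 'Logging from a plain script!'")                // the script now has a 'log' field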
If the AST transformation that you are using accepts parameters, you can use
parameters in the constructor too:
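For example, to pass the value attribute of @Log (equivalent to annotating with @Log(value = 'LOGGER')), reusing the config instance from the sketch above:
config.addCompilationCustomizers(new ASTTransformationCustomizer(Log, value: 'LOGGER'))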
import org.codehaus.groovy.control.customizers.SecureASTCustomizer
import static org.codehaus.groovy.syntax.Types.*
If what the secure AST customizer provides out of the box isn’t enough for your needs,
before creating your own compilation customizer, you might be interested in the
expression and statement checkers that the AST customizer supports. Basically, it allows
you to add custom checks on the AST tree, on expressions (expression checkers) or
statements (statement checkers). For this, you must
implement org.codehaus.groovy.control.customizers.SecureASTCustomizer.StatementChecker or org.codehaus.groovy.control.customizers.SecureASTCustomizer.ExpressionChecker.
Those interfaces define a single method called isAuthorized , returning a boolean, and
taking a Statement (or Expression ) as a parameter. It allows you to perform complex
logic over expressions or statements to tell if a user is allowed to do it or not.
For example, there’s no predefined configuration flag in the customizer which will let
you prevent people from using an attribute expression. Using a custom checker, it is
trivial:
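A sketch of such a checker, rejecting attribute expressions such as foo.@bar:
import org.codehaus.groovy.ast.expr.AttributeExpression
import org.codehaus.groovy.control.customizers.SecureASTCustomizer

def scz = new SecureASTCustomizer()
scz.addExpressionCheckers({ expr ->
    !(expr instanceof AttributeExpression)    // disallow attribute expressions
} as SecureASTCustomizer.ExpressionChecker)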
SourceUnit gives you access to multiple things but in particular the file being compiled (if compiling from a file, of course). It gives you the potential to perform operations based on the file name, for example. Here is how you would create a source aware customizer:
import org.codehaus.groovy.control.customizers.SourceAwareCustomizer
import org.codehaus.groovy.control.customizers.ImportCustomizer
// class validation
// the customizer will only be applied to classes ending with 'Bean'
sac.classValidator = { ClassNode cn -> cn.name.endsWith('Bean') }
Customizer builder
If you are using compilation customizers in Groovy code (like the examples above) then
you can use an alternative syntax to customize compilation. A
builder ( org.codehaus.groovy.control.customizers.builder.CompilerCustomizationBuilder ) simplifies the creation of customizers using a hierarchical DSL.
import org.codehaus.groovy.control.CompilerConfiguration
import static org.codehaus.groovy.control.customizers.builder.CompilerCustomizationBuilder.withConfig
The code sample above shows how to use the builder. A static method, withConfig, takes
a closure corresponding to the builder code, and automatically registers compilation
customizers to the configuration. Every compilation customizer available in the
distribution can be configured this way:
Import customizer
withConfig(configuration) {
imports { // imports customizer
normal 'my.package.MyClass' // a normal import
alias 'AI', 'java.util.concurrent.atomic.AtomicInteger' // an aliased import
star 'java.util.concurrent' // star imports
staticMember 'java.lang.Math', 'PI' // static import
staticMember 'pi', 'java.lang.Math', 'PI' // aliased static import
}
}
AST transformation customizer
withConfig(conf) {
ast(Log)
}
withConfig(conf) {
ast(Log, value: 'LOGGER')
}
apply @Log transparently
withConfig(configuration){
source(extensions: ['sgroovy','sg']) {
ast(CompileStatic)
}
}
withConfig(configuration) {
source(extensionValidator: { it.name in ['sgroovy','sg']}) {
ast(CompileStatic)
}
}
withConfig(configuration) {
source(basename: 'foo') {
ast(CompileStatic)
}
}
withConfig(configuration) {
source(basenames: ['foo', 'bar']) {
ast(CompileStatic)
}
}
withConfig(configuration) {
source(basenameValidator: { it in ['foo', 'bar'] }) {
ast(CompileStatic)
}
}
withConfig(configuration) {
source(unitValidator: { unit -> !unit.AST.classes.any { it.name == 'Baz' } }) {
ast(CompileStatic)
}
}
apply CompileStatic AST annotation on .sgroovy files
apply CompileStatic AST annotation on files that do not contain a class named 'Baz'
Inlining a customizer
Inlined customizer allows you to write a compilation customizer directly, without
having to create a class for it.
withConfig(configuration) {
inline(phase:'CONVERSION') { source, context, classNode ->
println "visiting $classNode"
}
}
define an inlined customizer which will execute at the CONVERSION phase
Multiple customizers
Of course, the builder allows you to define multiple customizers at once:
withConfig(configuration) {
ast(ToString)
ast(EqualsAndHashCode)
}
If you want it to be applied on the classes you compile with the normal Groovy compiler
(that is to say with groovyc , ant or gradle , for example), it is possible to use a
compilation flag named configscript that takes a Groovy configuration script as
argument.
This script gives you access to the CompilerConfiguration instance before the files
are compiled (exposed into the configuration script as a variable
named configuration ), so that you can tweak it.
withConfig(configuration) {
ast(groovy.transform.CompileStatic)
}
configuration references a CompilerConfiguration instance
That is actually all you need. You don’t have to import the builder, it’s automatically
exposed in the script. Then, compile your files using the following command line:
AST transformations
If:
3.15.8. Builders
(TBD)
Creating a builder
(TBD)
BuilderSupport
(TBD)
FactoryBuilderSupport
(TBD)
Existing builders
(TBD)
MarkupBuilder
See Creating Xml - MarkupBuilder.
StreamingMarkupBuilder
See Creating Xml - StreamingMarkupBuilder.
SaxBuilder
A builder for generating Simple API for XML (SAX) events.
builder.root() {
helloWorld()
}
And then check that everything worked as expected:
StaxBuilder
A Groovy builder that works with Streaming API for XML (StAX) processors.
Here is a simple example using the StAX implementation of Java to generate XML:
builder.root(attribute:1) {
elem1('hello')
elem2('world')
}
@Grab('org.codehaus.jettison:jettison:1.3.3')
@GrabExclude('stax:stax-api') // part of Java 6 and later
import org.codehaus.jettison.mapped.*
builder.root(attribute:1) {
elem1('hello')
elem2('world')
}
assert writer.toString() == '{"root":{"@attribute":"1","elem1":"hello","elem2":"world"}}'
DOMBuilder
A builder for parsing HTML, XHTML and XML into a W3C DOM tree.
NodeBuilder
NodeBuilder is used for creating nested trees of Node objects for handling arbitrary
data. To create a simple user list you use a NodeBuilder like this:
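The listing is not reproduced above; a minimal sketch (the user data is made up) could be:
def nodeBuilder = new NodeBuilder()
def userlist = nodeBuilder.userlist {
    user(id: '1', firstname: 'John', lastname: 'Smith') {
        address(type: 'home', street: '1 Main St.', city: 'Springfield')
    }
    user(id: '2', firstname: 'Alice', lastname: 'Doe')
}
assert userlist.user.size() == 2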
JsonBuilder
Groovy’s JsonBuilder makes it easy to create Json. For example to create this Json
string:
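The target JSON string and the original car-records listing are not reproduced above; as a small hedged sketch of the general pattern:
import groovy.json.JsonBuilder
import groovy.json.JsonSlurper

def builder = new JsonBuilder()
builder.records {
    car {
        name 'HSV Maloo'
        make 'Holden'
        year 2006
    }
}
def json = builder.toString()
assert new JsonSlurper().parseText(json).records.car.name == 'HSV Maloo'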
JsonAssert.assertJsonEquals(json, carRecords)
If you need to customize the generated output you can pass a JsonGenerator instance
when creating a JsonBuilder :
import groovy.json.*
StreamingJsonBuilder
Unlike JsonBuilder which creates a data structure in memory, which is handy in those
situations where you want to alter the structure programmatically before
output, StreamingJsonBuilder directly streams to a writer without any intermediate
memory data structure. If you do not need to modify the structure and want a more
memory-efficient approach, use StreamingJsonBuilder .
JsonAssert.assertJsonEquals(json, carRecords)
If you need to customize the generated output you can pass a JsonGenerator instance
when creating a StreamingJsonBuilder :
builder.records {
car {
name 'HSV Maloo'
make 'Holden'
year 2006
country 'Australia'
homepage new URL('http://example.org')
record {
type 'speed'
description 'production pickup truck with speed of 271kph'
}
}
}
assert writer.toString() == '{"records":{"car":{"name":"HSV Maloo","homepage":"http://groovy-lang.org"}}}'
SwingBuilder
SwingBuilder allows you to create full-fledged Swing GUIs in a declarative and concise
fashion. It accomplishes this by employing a common idiom in Groovy, builders. Builders
handle the busywork of creating complex objects for you, such as instantiating children,
calling Swing methods, and attaching these children to their parents. As a consequence,
your code is much more readable and maintainable, while still allowing you access to the
full range of Swing components.
import groovy.swing.SwingBuilder
import java.awt.BorderLayout as BL
count = 0
new SwingBuilder().edt {
frame(title: 'Frame', size: [300, 300], show: true) {
borderLayout()
textlabel = label(text: 'Click the button!', constraints: BL.NORTH)
button(text:'Click Me',
actionPerformed: {count++; textlabel.text = "Clicked ${count} time(s)."; println "clicked"}, constraints:BL.SOUTH)
}
}
Here is what it will look like:
The flexibility shown here is made possible by leveraging the many programming
features built-in to Groovy, such as closures, implicit constructor calling, import aliasing,
and string interpolation. Of course, these do not have to be fully understood in order to
use SwingBuilder ; as you can see from the code above, their uses are intuitive.
Here is a slightly more involved example, with an example of SwingBuilder code re-use
via a closure.
import groovy.swing.SwingBuilder
import javax.swing.*
import java.awt.*
count = 0
swing.edt {
frame(title: 'Frame', defaultCloseOperation: JFrame.EXIT_ON_CLOSE,
pack: true, show: true) {
vbox {
textlabel = label('Click the button!')
button(
text: 'Click Me',
actionPerformed: {
count++
textlabel.text = "Clicked ${count} time(s)."
println "Clicked!"
}
)
widget(sharedPanel())
widget(sharedPanel())
}
}
}
Here’s another variation that relies on observable beans and binding:
import groovy.swing.SwingBuilder
import groovy.beans.Bindable
class MyModel {
@Bindable int count = 0
}
Groovy has a helper class called AntBuilder which makes the scripting of Ant tasks
really easy; allowing a real scripting language to be used for programming constructs
(variables, methods, loops, logical branching, classes etc). It still looks like a neat concise
version of Ant’s XML without all those pointy brackets; though you can mix and match
this markup inside your script. Ant itself is a collection of jar files. By adding them to
your classpath, you can easily use them within Groovy as is. We believe
using AntBuilder leads to more concise and readily understood syntax.
AntBuilder exposes Ant tasks directly using the convenient builder notation that we
are used to in Groovy. Here is the most basic example, which is printing a message on
the standard output:
Imagine that you need to create a ZIP file. It can be as simple as:
ant.junit {
classpath { pathelement(path: '.') }
test(name:'some.pkg.MyTest')
}
We can even go further by compiling and executing a Java file directly from Groovy:
ant.echo(file:'Temp.java', '''
class Temp {
public static void main(String[] args) {
System.out.println("Hello");
}
}
''')
ant.javac(srcdir:'.', includes:'Temp.java', fork:'true')
ant.java(classpath:'.', classname:'Temp', fork:'true')
ant.echo('Done')
It is worth mentioning that AntBuilder is included in Gradle, so you can use it in Gradle
just like you would in Groovy. Additional documentation can be found in the Gradle
manual.
CliBuilder
CliBuilder provides a compact way to specify the available options for a commandline
application and then automatically parse the application’s commandline parameters
according to that specification. By convention, a distinction is made
between option commandline parameters and any remaining parameters which are
passed to an application as its arguments. Typically, several types of options might be
supported such as -V or --tabsize=4 . CliBuilder removes the burden of developing
lots of code for commandline processing. Instead, it supports a somewhat declarative
approach to declaring your options and then provides a single call to parse the
commandline parameters with a simple mechanism to interrogate the options (you can
think of this as a simple model for your options).
Even though the details of each commandline you create could be quite different, the
same main steps are followed each time. First, a CliBuilder instance is created. Then,
allowed commandline options are defined. This can be done using a dynamic api style or
an annotation style. The commandline parameters are then parsed according to the
options specification resulting in a collection of options which are then interrogated.
specify a -a option taking a single argument with an optional long variant --audience
Hello Groovologist
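The Greeter script referenced by the callout above is not reproduced; it might look roughly like this (a sketch, option names taken from the callout):
// Greeter.groovy
def cli = new CliBuilder(usage: 'groovy Greeter [option]')
cli.a(longOpt: 'audience', args: 1, 'greeting audience')
cli.h(longOpt: 'help', 'display usage')

def options = cli.parse(args)
if (!options) return
if (options.h) { cli.usage(); return }
println "Hello ${options.a ?: 'World'}"
// groovy Greeter -a Groovologist   =>   Hello Groovologist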
When creating the CliBuilder instance in the above example, we set the
optional usage property within the constructor call. This follows Groovy’s normal
ability to set additional properties of the instance during construction. There are
numerous other properties which can be set such as header and footer . For the
complete set of available properties, see the available properties for the CliBuilder class.
When defining an allowed commandline option, both a short name (e.g. "h" for
the help option shown previously) and a short description (e.g. "display usage" for
the help option) must be supplied. In our example above, we also set some additional
properties such as longOpt and args . The following additional properties are
supported when specifying an allowed commandline option:
If you have an option with only a longOpt variant, you can use the special shortname of
'_' to specify the option, e.g. : cli._(longOpt: 'verbose', 'enable verbose
logging') . Some of the remaining named parameters should be fairly self-explanatory
while others deserve a bit more explanation. But before further explanations, let’s look
at ways of using CliBuilder with annotations.
Rather than making a series of method calls (albeit in a very declarative mini-DSL form)
to specify the allowable options, you can provide an interface specification of the
allowable options where annotations are used to indicate and provide details for those
options and for how unprocessed parameters are handled. Two annotations are
used: groovy.cli.Option and groovy.cli.Unparsed.
interface GreeterI {
@Option(shortName='h', description='display usage') Boolean help()
@Option(shortName='a', description='greeting audience') String audience()
@Unparsed(description = "positional parameters") List remaining()
}
Specify a Boolean option set using -h or --help
Note how the long name is automatically determined from the interface method name.
You can use the longName annotation attribute to override that behavior and specify a
custom long name if you wish or use a longName of '_' to indicate that no long name is to
be provided. You will need to specify a shortName in such a case.
Alternatively, perhaps you already have a domain class containing the option
information. You can simply annotate properties or setters from that class to
enable CliBuilder to appropriately populate your domain object. Each annotation
both describes that option’s properties through the annotation attributes and indicates
the setter the CliBuilder will use to populate that option in your domain object.
class GreeterC {
@Option(shortName='h', description='display usage')
Boolean help
Finally, there are two additional convenience annotation aliases specifically for scripts.
They simply combine the previously mentioned annotations and groovy.transform.Field.
The groovydoc for those annotations reveals the
details: groovy.cli.OptionField and groovy.cli.UnparsedField.
We saw in our initial example that some options act like flags, e.g. Greeter -h but
others take an argument, e.g. Greeter --audience Groovologist . The simplest cases
involve options which act like flags or have a single (potentially optional) argument.
Here is an example involving those cases:
An option with an optional argument; it acts like a flag if the option is left out
An example using this spec where an argument is supplied to the 'c' option
An example using this spec where no argument is supplied to the 'c' option; it’s just a flag
Option arguments may also be specified using the annotation style. Here is an interface
option specification illustrating such a definition:
interface WithArgsI {
@Option boolean a()
@Option String b()
@Option(optionalArg=true) String[] c()
@Unparsed List remaining()
}
And here is how it is used:
Specifying a type
Arguments on the commandline are by nature Strings (or arguably can be considered
Booleans for flags) but can be converted to richer types automatically by supplying
additional typing information. For the annotation-based argument definition style, these
types are supplied using the field types for annotation properties or return types of
annotated methods (or the setter argument type for setter methods). For the dynamic
method style of argument definition a special 'type' property is supported which allows
you to specify a Class name.
If the supported types aren’t sufficient, you can supply a closure to handle the String to
rich type conversion for you. Here is a sample using the dynamic api style:
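A sketch of the dynamic api style (the conversions mirror the annotation interface below; each convert option is assumed to take a single argument):
def cli = new CliBuilder()
cli.a(convert: { it.toLowerCase() }, 'a option')
cli.b(convert: { it.toUpperCase() }, 'b option')
cli.d(convert: { Date.parse('yyyy-MM-dd', it) }, 'd option')

def options = cli.parse('-a John -b Mary -d 2016-01-01'.split())
assert options.a == 'john'
assert options.b == 'MARY'
assert options.d.format('yyyy-MM-dd') == '2016-01-01'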
interface WithConvertI {
@Option(convert={ it.toLowerCase() }) String a()
@Option(convert={ it.toUpperCase() }) String b()
@Option(convert={ Date.parse("yyyy-MM-dd", it) }) Date d()
@Unparsed List remaining()
}
And an example using that specification:
Multiple arguments are also supported using an args value greater than 1. There is a
special named parameter, valueSeparator , which can also be optionally used when
processing multiple arguments. It allows some additional flexibility in the syntax
supported when supplying such argument lists on the commandline. For example,
supplying a value separator of ',' allows a comma-delimited list of values to be passed on
the commandline.
The args value is normally an integer. It can be optionally supplied as a String. There
are two special String symbols: + and * . The * value means 0 or more. The + value
means 1 or more. The * value is the same as using + and also setting
the optionalArg value to true.
Accessing the multiple arguments follows a special convention. Simply add an 's' to the
normal property you would use to access the argument option and you will retrieve all
the supplied arguments as a list. So, for a short option named 'a', you access the first 'a'
argument using options.a and the list of all arguments using options.as . It’s fine to
have a shortname or longname ending in 's' so long as you don’t also have the singular
variant without the 's'. So, if name is one of your options with multiple arguments
and guess is another with a single argument, there will be no confusion
using options.names and options.guess .
options = cli.parse(['-b1,2'])
assert options.bs == ['1', '2']
options = cli.parse(['-c1'])
assert options.cs == ['1']
options = cli.parse(['-c1,2,3'])
assert options.cs == ['1', '2', '3']
Args value supplied as a String and comma value separator specified
Two commandline parameters will be supplied as the 'b' option’s list of arguments
Access the 'a' option’s first argument
An alternative syntax for specifying two arguments for the 'a' option
As an alternative to accessing multiple arguments using the plural name approach, you
can use an array-based type for the option. In this case, all options will always be
returned via the array which is accessed via the normal singular name. We’ll see an
example of this next when discussing types.
Multiple arguments are also supported using the annotation style of option definition by
using an array type for the annotated class member (method or property) as this
example shows:
interface ValSepI {
@Option(numberOfArguments=2) String[] a()
@Option(numberOfArgumentsString='2', valueSeparator=',') String[] b()
@Option(numberOfArgumentsString='+', valueSeparator=',') String[] c()
@Unparsed remaining()
}
And used as follows:
Here is an example using types and multiple arguments with the dynamic api argument
definition style:
def argz = '''-j 3 4 5 -k1.5,2.5,3.5 and some more'''.split()
def cli = new CliBuilder()
cli.j(args: 3, type: int[], 'j-arg')
cli.k(args: '+', valueSeparator: ',', type: BigDecimal[], 'k-arg')
def options = cli.parse(argz)
assert options.js == [3, 4, 5]
assert options.j == [3, 4, 5]
assert options.k == [1.5, 2.5, 3.5]
assert options.arguments() == ['and', 'some', 'more']
For an array type, the trailing 's' can be used but isn’t needed
Groovy makes it easy using the Elvis operator to provide a default value at the point of
usage of some variable, e.g. String x = someVariable ?: 'some default' . But
sometimes you wish to make such a default part of the options specification to minimise the interrogator’s work in later stages. CliBuilder supports the defaultValue property to cater for this scenario.
Here is how you could use it using the dynamic api style:
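A sketch of that dynamic api style:
def cli = new CliBuilder()
cli.f(longOpt: 'from', args: 1, defaultValue: 'one', 'f option')
cli.t(longOpt: 'to', args: 1, defaultValue: '35', type: int, 't option')

def options = cli.parse('-f two'.split())
assert options.f == 'two'
assert options.t == 35        // falls back to the default value, converted to int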
interface WithDefaultValueI {
@Option(shortName='f', defaultValue='one') String from()
@Option(shortName='t', defaultValue='35') int to()
}
Which would be used like this:
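For example (a sketch):
def cli = new CliBuilder()
def options = cli.parseFromSpec(WithDefaultValueI, '-f two'.split())
assert options.from() == 'two'
assert options.to() == 35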
The dynamic api style of using CliBuilder is inherently dynamic but you have a few
options should you want to make use of Groovy’s static type checking capabilities.
Firstly, consider using the annotation style, for example, here is an interface option
specification:
interface TypeCheckedI{
@Option String name()
@Option int age()
@Unparsed List remaining()
}
And it can be used in combination with @TypeChecked as shown here:
@TypeChecked
void testTypeCheckedInterface() {
def argz = "--name John --age 21 and some more".split()
def cli = new CliBuilder()
def options = cli.parseFromSpec(TypeCheckedI, argz)
String n = options.name()
int a = options.age()
assert n == 'John' && a == 21
assert options.remaining() == ['and', 'some', 'more']
}
Secondly, there is a feature of the dynamic api style which offers some support. The
definition statements are inherently dynamic but actually return a value which we have
ignored in earlier examples. The returned value is in fact a TypedOption<Type> and
special getAt support allows the options to be interrogated using the typed option,
e.g. options[savedTypeOption] . So, if you have statements similar to these in a non
type checked part of your code:
import groovy.cli.TypedOption
import groovy.transform.TypeChecked
@TypeChecked
void testTypeChecked() {
def cli = new CliBuilder()
TypedOption<String> name = cli.option(String, opt: 'n', longOpt: 'name', 'name option')
TypedOption<Integer> age = cli.option(Integer, longOpt: 'age', 'age option')
def argz = "--name John --age 21 and some more".split()
def options = cli.parse(argz)
String n = options[name]
int a = options[age]
assert n == 'John' && a == 21
assert options.arguments() == ['and', 'some', 'more']
}
Advanced CLI Usage
As an example, here is some code for making use of Apache Commons CLI’s grouping
mechanism:
import org.apache.commons.cli.*
Picocli
When users of your application give invalid command line arguments, CliBuilder writes
an error message and the usage help message to the stderr output stream. It doesn’t
use the stdout stream to prevent the error message from being parsed when your
program’s output is used as input for another process. You can customize the
destination by setting the errorWriter to a different value.
You can specify different writers for testing. Be aware that for backwards compatibility,
setting the writer property to a different value will set both the writer and
the errorWriter to the specified writer.
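A sketch of redirecting the error output, e.g. to capture it in a test (the unknown option and the
null return on a parse failure illustrate the default behaviour):
def cli = new CliBuilder(name: 'myapp')
def err = new StringWriter()
cli.errorWriter = new PrintWriter(err)       // error and usage messages now go here
def options = cli.parse(['--no-such-option'])
assert options == null                       // parsing failed
assert err.toString()                        // ... and the message went to our writer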
ANSI colors
The picocli version of CliBuilder renders the usage help message in ANSI colors on
supported platforms automatically. If desired you can customize this. (An example
follows below.)
As before, you can set the synopsis of the usage help message with the usage property.
You may be interested in a small improvement: if you only set the command name , a
synopsis will be generated automatically, with repeating elements followed by … and
optional elements surrounded with [ and ] . (An example follows below.)
The usageMessage property exposes a UsageMessageSpec object from the underlying picocli library,
which gives fine-grained control over various sections of the usage help message. For
example:
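A sketch of customizing a few of those sections (the heading and footer texts are illustrative):
def cli = new CliBuilder(name: 'myapp')
cli.usageMessage.with {                  // picocli UsageMessageSpec
    headerHeading('Header heading:%n')
    header('header 1', 'header 2')
    footerHeading('Footer heading:%n')
    footer('footer 1', 'footer 2')
}
cli.usage()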
The parser property gives access to the picocli ParserSpec object that can be used to
customize the parser behavior. See the documentation for details.
Map options
Finally, if your application has options that are key-value pairs, you may be interested in
picocli’s support for maps. For example:
import java.util.concurrent.TimeUnit
import static java.util.concurrent.TimeUnit.DAYS
import static java.util.concurrent.TimeUnit.HOURS
Picocli map support: simply specify Map as the type of the option
Previously, all key-value pairs ended up in a list and it was up to the application to work with this list
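A sketch contrasting the two approaches (option letters and values are illustrative):
def cli = new CliBuilder()
cli.D(args: 2, valueSeparator: '=', 'the old way')   // key-value pairs collected into a flat list
cli.X(type: Map, 'the new way')                      // picocli map support

def options = cli.parse('-Da=b -Dc=d -Xx=y -Xi=j'.split())
assert options.Ds == ['a', 'b', 'c', 'd']
assert options.Xs == [x: 'y', i: 'j']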
ObjectGraphBuilder
ObjectGraphBuilder is a builder for an arbitrary graph of beans that follow the
JavaBean convention. It is particularly useful for creating test data.
package com.acme
class Company {
String name
Address address
List employees = []
}
class Address {
String line1
String line2
int zip
String state
}
class Employee {
String name
int employeeId
Address address
Company company
}
Then using ObjectGraphBuilder building a Company with three employees is as easy
as:
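A sketch of such a build, assuming the com.acme classes above are on the classpath (names and
values are illustrative):
def builder = new ObjectGraphBuilder()
builder.classNameResolver = 'com.acme'      // node names resolve to classes in this package

def acme = builder.company(name: 'ACME') {
    address(line1: '123 Groovy Rd', zip: 12345, state: 'JV')
    3.times { i ->
        employee(name: "Drone $i", employeeId: i) {
            address(line1: "Post street $i")
        }
    }
}

assert acme.name == 'ACME'
assert acme.employees.size() == 3
assert acme.employees[0].name == 'Drone 0'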
• ChildPropertySetter will insert the child into the parent taking into account if
the child belongs to a Collection or not (in this case employees should be a list
of Employee instances in Company ).
All four strategies have a default implementation that works as expected if the code follows
the usual conventions for writing JavaBeans. In case any of your beans or objects do not
follow the convention, you may plug in your own implementation of each strategy. For
example, imagine that you need to build a class which is immutable:
@Immutable
class Person {
String name
int age
}
Then if you try to create a Person with the builder:
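it will fail, because the default NewInstanceResolver calls the no-argument constructor, which an
immutable class does not provide. A sketch of a replacement strategy (assuming the builder's
newInstanceResolver property accepts a closure and that a Map constructor is generated by @Immutable):
def builder = new ObjectGraphBuilder()
builder.classLoader = getClass().classLoader
builder.newInstanceResolver = { klazz, Map attributes ->
    // prefer a Map constructor if one exists (as generated by @Immutable)
    def mapCtor = klazz.constructors.find {
        it.parameterTypes.length == 1 && Map.isAssignableFrom(it.parameterTypes[0])
    }
    if (mapCtor) {
        def instance = mapCtor.newInstance(attributes)
        attributes.clear()       // already consumed by the constructor
        return instance
    }
    klazz.newInstance()
}

def person = builder.person(name: 'Jon', age: 17)
assert person.name == 'Jon' && person.age == 17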
It's worth mentioning that you cannot modify the properties of a referenced bean.
JmxBuilder
See Working with JMX - JmxBuilder for details.
FileTreeBuilder
FileTreeBuilder is a builder for generating a file directory structure from a specification.
For example, to create the following tree:
src/
|--- main
| |--- groovy
| |--- Foo.groovy
|--- test
|--- groovy
|--- FooTest.groovy
You can use a FileTreeBuilder like this:
tmpDir = File.createTempDir()
def fileTreeBuilder = new FileTreeBuilder(tmpDir)
fileTreeBuilder.dir('src') {
dir('main') {
dir('groovy') {
file('Foo.groovy', 'println "Hello"')
}
}
dir('test') {
dir('groovy') {
file('FooTest.groovy', 'class FooTest extends GroovyTestCase {}')
}
}
}
To check that everything worked as expected we use the following `assert`s:
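The expected checks follow directly from the specification above (a sketch):
assert new File(tmpDir, 'src/main/groovy/Foo.groovy').text == 'println "Hello"'
assert new File(tmpDir, 'src/test/groovy/FooTest.groovy').text == 'class FooTest extends GroovyTestCase {}'
FileTreeBuilder also supports a shorthand syntax, where directory and file names are used directly as node names: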
tmpDir = File.createTempDir()
def fileTreeBuilder = new FileTreeBuilder(tmpDir)
fileTreeBuilder.src {
main {
groovy {
'Foo.groovy'('println "Hello"')
}
}
test {
groovy {
'FooTest.groovy'('class FooTest extends GroovyTestCase {}')
}
}
}
This produces the same directory structure as above, as shown by these `assert`s:
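For example, the same checks as before still hold (a sketch):
assert new File(tmpDir, 'src/main/groovy/Foo.groovy').text == 'println "Hello"'
assert new File(tmpDir, 'src/test/groovy/FooTest.groovy').text == 'class FooTest extends GroovyTestCase {}'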
You can monitor the JVM through its platform MBeans with the following code:
import java.lang.management.*
def os = ManagementFactory.operatingSystemMXBean
println """OPERATING SYSTEM:
\tarchitecture = $os.arch
\tname = $os.name
\tversion = $os.version
\tprocessors = $os.availableProcessors
"""
def rt = ManagementFactory.runtimeMXBean
println """RUNTIME:
\tname = $rt.name
\tspec name = $rt.specName
\tvendor = $rt.specVendor
\tspec version = $rt.specVersion
\tmanagement spec version = $rt.managementSpecVersion
"""
def cl = ManagementFactory.classLoadingMXBean
println """CLASS LOADING SYSTEM:
\tisVerbose = ${cl.isVerbose()}
\tloadedClassCount = $cl.loadedClassCount
\ttotalLoadedClassCount = $cl.totalLoadedClassCount
\tunloadedClassCount = $cl.unloadedClassCount
"""
ManagementFactory.memoryPoolMXBeans.each { mp ->
println "\tname: " + mp.name
String[] mmnames = mp.memoryManagerNames
mmnames.each{ mmname ->
println "\t\tManager Name: $mmname"
}
println "\t\tmtype = $mp.type"
println "\t\tUsage threshold supported = " +
mp.isUsageThresholdSupported()
}
println()
def td = ManagementFactory.threadMXBean
println "THREADS:"
td.allThreadIds.each { tid ->
println "\tThread name = ${td.getThreadInfo(tid).threadName}"
}
println()
OPERATING SYSTEM:
architecture = x86
name = Windows XP
version = 5.1
processors = 2
RUNTIME:
name = 620@LYREBIRD
spec name = Java Virtual Machine Specification
vendor = Sun Microsystems Inc.
spec version = 1.0
management spec version = 1.0
COMPILATION:
totalCompilationTime = 91
MEMORY:
HEAP STORAGE:
committed = 3108864
init = 0
max = 66650112
used = 1994728
NON-HEAP STORAGE:
committed = 9240576
init = 8585216
max = 100663296
used = 5897880
THREADS:
Thread name = Monitor Ctrl-Break
Thread name = Signal Dispatcher
Thread name = Finalizer
Thread name = Reference Handler
Thread name = main
GARBAGE COLLECTION:
name = Copy
collection count = 60
collection time = 141
mpool name = Eden Space
mpool name = Survivor Space
name = MarkSweepCompact
collection count = 0
collection time = 0
mpool name = Eden Space
mpool name = Survivor Space
mpool name = Tenured Gen
mpool name = Perm Gen
set JAVA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9004 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
You can do this in your startup script and may choose any available port; we used 9004.
The following code uses JMX to discover the available MBeans in the running Tomcat,
determine which are web modules, extract the processing time for each web module and
display the result in a graph using JFreeChart:
import groovy.swing.SwingBuilder
import javax.management.ObjectName
import javax.management.remote.JMXConnectorFactory as JmxFactory
import javax.management.remote.JMXServiceURL as JmxUrl
import javax.swing.WindowConstants as WC
import org.jfree.chart.ChartFactory
import org.jfree.data.category.DefaultCategoryDataset as Dataset
import org.jfree.chart.plot.PlotOrientation as Orientation
modules.each { m ->
println m.name()
dataset.addValue m.processingTime, 0, m.path
}
Note: if you get errors running this script, see the Troubleshooting section below.
import javax.management.remote.*
import oracle.oc4j.admin.jmx.remote.api.JMXConnectorConstant
import org.jfree.chart.ChartFactory
import javax.swing.WindowConstants as WC
import javax.management.remote.*
import oracle.oc4j.admin.jmx.remote.api.JMXConnectorConstant
import javax.management.remote.*
import javax.management.*
import javax.naming.Context
import org.springframework.jmx.export.annotation.*
@ManagedResource(objectName="bean:name=calcMBean", description="Calculator MBean")
public class Calculator {
<bean id="exporter"
      class="org.springframework.jmx.export.MBeanExporter">
    <property name="assembler" ref="assembler"/>
    <property name="namingStrategy" ref="namingStrategy"/>
    <property name="beans">
        <map>
            <entry key="bean:name=defaultCalcName" value-ref="calcBean"/>
        </map>
    </property>
    <property name="server" ref="mbeanServer"/>
    <property name="autodetect" value="true"/>
</bean>
<bean id="jmxAttributeSource"
      class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>
<bean id="assembler"
      class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
    <property name="attributeSource" ref="jmxAttributeSource"/>
</bean>
<bean id="namingStrategy"
      class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
    <property name="attributeSource" ref="jmxAttributeSource"/>
</bean>
<bean id="calcBean"
      class="Calculator">
    <property name="base" value="10"/>
</bean>
</beans>
Here is a script which uses this bean and configuration:
import org.springframework.context.support.ClassPathXmlApplicationContext
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import javax.management.Attribute
Thread.start {
// access bean via JMX, use a separate thread just to
// show that we could access remotely if we wanted
def server = ManagementFactory.platformMBeanServer
def mbean = new GroovyMBean(server, 'bean:name=calcMBean')
sleep 1000
assert 8 == mbean.add(7, 1)
mbean.Base = 8
assert '10' == mbean.addStrings('7', '1')
mbean.Base = 16
sleep 2000
println "Number of invocations: $mbean.Invocations"
println mbean
}
assert 15 == calc.add(9, 6)
assert '11' == calc.addStrings('10', '1')
sleep 2000
assert '20' == calc.addStrings('1f', '1')
And here is the resulting output:
Number of invocations: 5
MBean Name:
bean:name=calcMBean
Attributes:
(rw) int Base
(r) int Invocations
Operations:
int add(int x, int y)
java.lang.String addStrings(java.lang.String x, java.lang.String y)
int getInvocations()
int getBase()
void setBase(int p1)
You can even attach to the process while it is running with jconsole. It will look
something like:
We started the Groovy application with the -Dcom.sun.management.jmxremote JVM
argument.
See also:
3.16.7. Troubleshooting
java.lang.SecurityException
If you get the following error, your container’s JMX access is password protected:
3.16.8. JmxBuilder
JmxBuilder is a Groovy-based domain specific language for the Java Management
Extension (JMX) API. It uses the builder pattern (FactoryBuilder) to create an internal
DSL that facilitates the exposure of POJO’s and Groovy beans as management
components via the MBean server. JmxBuilder hides the complexity of creating and
exporting management beans via the JMX API and provides a set of natural Groovy
constructs to interact with the JMX infrastructure.
Instantiating JmxBuilder
To start using JmxBuilder, simply make sure the jar file is on your class path. Then you
can do the following in your code:
def jmx = new JmxBuilder()
That’s it! You are now ready to use the JmxBuilder.
JMX Connectors
Remote connectivity is a crucial part of the JMX architecture. JmxBuilder facilitates the
creation of connector servers and connector clients with a minimal amount of coding.
Connector Server
JmxBuilder.connectorServer() supports the full Connector api syntax and will let you
specify properties, override the URL, specify your own host, etc.
Syntax
jmx.connectorServer(
protocol:"rmi",
host:"...",
port:1099,
url:"...",
properties:[
"authenticate":true|false,
"passwordFile":"...",
"accessFile":"...",
"sslEnabled" : true | false
// any valid connector property
]
)
Note that the serverConnector node will accept four ServerConnector property aliases
(authenticate, passwordFile, accessFile, and sslEnabled). You can use these aliases or
provide any of the RMI-supported properties.
jmx.connectorServer(port: 9000).start()
The snippet above returns an RMI connector that will start listening on port 9000. By
default, the builder will internally generate the
URL "service:jmx:rmi:///jndi/rmi://localhost:9000/jmxrmi".
NOTE: Sadly, you are just as likely to get an exception when attempting to run the previous
snippet of code, because the example is incomplete (see below).
The example above does not create the RMI registry. So, in order to export the connector, you
have to first create the RMI object registry (make sure to
import java.rmi.registry.LocateRegistry ).
import java.rmi.registry.LocateRegistry
//...
LocateRegistry.createRegistry(9000)
jmx.connectorServer(port: 9000).start()
Connector Client
JmxBuilder.connectorClient() node lets you create a JMX connector client object to
connect to a JMX MBean server.
Syntax
jmx.connectorClient (
protocol:"rmi",
host:"...",
port:1099,
url:"...",
)
Example - Client Connector
Creating a connector client can be done just as easily. With one line of code, you can
create an instance of a JMX Connector Client as shown below.
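A sketch, assuming a connector server is already listening on port 9000 as in the previous example:
def jmx = new JmxBuilder()
def client = jmx.connectorClient(port: 9000)
client.connect()
def connection = client.MBeanServerConnection   // ready to query the remote MBeanServer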
JmxBuilder.export() Syntax
JmxBuilder.export() node supports the registrationPolicy parameter to specify how
JmxBuilder will behave to resolve bean name collision during MBean registration:
jmx.export(policy:"replace|ignore|error")
or
jmx.export(regPolicy:"replace|ignore|error")
• replace - JmxBuilder.export() will replace any bean already registered with the
MBean during export.
• ignore - The bean being exported will be ignored if the same bean is already
registered.
• error - JmxBuilder.export() throws an error upon bean name collision during
registration.
Integration with GroovyMBean Class
When you export an MBean to the MBeanServer, JmxBuilder will return an instance
of GroovyMBean representing the management bean that has been exported by the
builder. Nodes such as bean() and timer() will return an instance of GroovyMBean
when they are invoked. The export() node returns an array, GroovyMBean[],
representing all managed objects exported to the MBean server.
RequestController
class RequestController {
// constructors
RequestController() { super() }
RequestController(Map resource) { }
// attributes
boolean isStarted() { true }
int getRequestCount() { 0 }
int getResourceCount() { 0 }
void setRequestLimit(int limit) { }
int getRequestLimit() { 0 }
// operations
void start() { }
void stop() { }
void putResource(String name, Object resource) { }
void makeRequest(String res) { }
void makeRequest() { }
}
Implicit Export
As mentioned earlier, you can use JmxBuilder’s flexible syntax to export any POJO/POGO
with no descriptor. The builder can automatically describe all aspects of the
management beans using implicit defaults. These default values can easily be overridden,
as we'll see in the next section.
jmx.export {
bean(new RequestController(resource: "Hello World"))
}
What this does:
JmxBuilder.bean() Syntax
The JmxBuilder.bean() node supports an extensive set of descriptors to describe your
bean for management. The JMX MBeanServer uses these descriptors to expose meta data
about the bean exposed for management.
jmx.export {
bean(
target:bean instance,
name:ObjectName,
desc:"...",
attributes:"*",
attributes:[]
attributes:[
"AttrubuteName1","AttributeName2",...,"AttributeName_n" ]
attributes:[
"AttributeName":"*",
"AttributeName":[
desc:"...",
defaultValue:value,
writable:true|false,
editable:true|false,
onChange:{event-> // event handler}
]
],
constructors:"*",
constructors:[
"Constructor Name":[],
"Constructor Name":[ "ParamType1","ParamType2,...,ParamType_n"
],
"Constructor Name":[
desc:"...",
params:[
"ParamType1":"*",
"ParamType2":[desc:"...", name:"..."],...,
"ParamType_n":[desc:"...", name:"..."]
]
]
],
operations:"*",
operations:[ "OperationName1",
"OperationName2",...,"OperationNameN" ],
operations:[
"OperationName1":"*",
"OperationName2":[ "type1","type2,"type3" ]
"OperationName3":[
desc:"...",
params:[
"ParamType1":"*"
"ParamType2":[desc:"...", name:"..."],...,
"ParamType_n":[desc:"...", name:"..."]
],
onInvoked:{event-> JmxBuilder.send(event:"", to:"")}
]
],
listeners:[
"ListenerName1":[event: "...", from:ObjectName, call:{event-
>}],
"ListenerName2":[event: "...", from:ObjectName,
call:&methodPointer]
]
)
}
Instead of describing the entire node, the following sections explore each attribute
separately.
• Operations start() and stop() are described by the "desc" key (this is enough since
there are no params).
• Operation setResource() uses a shorthand version of params: to describe the
parameters for the method.
• makeRequest() uses the extended descriptor syntax to describe all aspects of the
operation.
Embedding Descriptor
JmxBuilder supports the ability to embed descriptors directly in your Groovy class.
So, instead of wrapping your description around the declared object (as we’ve seen
here), you can embed your JMX descriptors directly in your class.
RequestControllerGroovy
class RequestControllerGroovy {
// attributes
boolean started
int requestCount
int resourceCount
int requestLimit
Map resources
// operations
void start() { }
void stop(){ }
void putResource(String name, Object resource) { }
void makeRequest(String res) { }
void makeRequest() { }
static descriptor = [
name: "jmx.builder:type=EmbeddedObject",
operations: ["start", "stop", "putResource"],
attributes: "*"
]
}
// export
jmx.export(
bean(new RequestControllerGroovy())
)
There are two things going on in the code above:
Timer Export
JMX standards mandate that the implementation of the API makes available a timer
service. Since JMX is a component-based architecture, timers provide an excellent
signalling mechanism to communicate to registered listener components in the
MBeanServer. JmxBuilder supports the creation and export of timers using the same
easy syntax we’ve seen so far.
Timer Node Syntax
timer(
name:ObjectName,
event:"...",
message:"...",
data:dataValue
startDate:"now"|dateValue
period:"99d"|"99h"|"99m"|"99s"|99
occurrences:long
)
The timer() node supports several attributes:
• name: - Required. The qualified JMX ObjectName instance (or String) for the timer.
• event: - The JMX event type string that will be broadcast with every timing signal
(default "jmx.builder.event").
• message: - An optional string value that can be sent to listeners.
• data: - An optional object that can be sent to listeners of the timing signal.
• startDate: - When to start the timer. Valid values are "now" or a date object. Default is
"now".
• period: - The timer's period expressed either as a number of milliseconds or as a time unit
(day, hour, minute, second). See description below.
• occurrences: - The number of times to repeat the timer. Default is
forever.
Exporting a Timer
def timer = jmx.timer(name: "jmx.builder:type=Timer", event: "heartbeat",
period: "1s")
timer.start()
This snippet above describes, creates, and exports a standard JMX
Timer component. Here, the timer() node returns a GroovyMBean that represents the
registered timer MBean in the MBeanServer.
Parameterless
callback = { ->
// event handling code here.
}
JmxBuilder executes the closure and passes no information about the event that was
captured on the bus.
jmx.export {
bean(
target: new RequestController(), name: "jmx.tutorial:type=Object",
attributes: [
"Resource": [
readable: true, writable: true,
onChange: { e ->
println e.oldValue
println e.newValue
}
]
]
)
}
The sample snippet above shows how to specify an "onChange" callback
closure when describing MBean attributes. In this sample code, whenever attribute
"Resource" is updated via the exported MBean, the onChange event will be executed.
class EventHandler {
void handleStart(e){
println e
}
}
def handler = new EventHandler()
Listener MBean
When you export an MBean with the bean() node, you can define events the MBean can
listen and react to. The bean() node provides a "listeners:" attribute that lets you define
event listeners that your bean can react to.
jmx.listener(
event: "...",
from: "object name" | ObjectName,
call: { event-> }
)
Here is the description of the listener() node attributes:
• event: An optional string that identifies the JMX event type to listen for.
• from (required): The JMX ObjectName of the component to listen to. This can be
specified as a string or an instance of ObjectName
• call: The closure to execute when the event is captured. This can also be specified as
a Groovy method pointer.
Here is an example of JmxBuilder’s listener node:
jmx.listener(
from: "jmx.builder:type=Timer",
call: { e ->
println "beep..."
}
)
This example shows how you can use a stand alone listener (outside of an MBean
export). Here, we export a timer with a 1 second resolution. Then, we specify a
listener to that timer that will print "beep" every second.
Emitter Syntax
jmx.emitter(name:"Object:Name", event:"type")
The attributes for the node Emitter() can be summarized as follows:
Broadcast Event
Once you have declared your emitter, you can broadcast your event.
emitter.send()
The sample above shows the emitter sending an event, once it has been declared. Any
JMX component registered in the MBeanServer can register to receive messages from this
emitter.
emitter.send("Hello!")
If you use an event listener closure (see above) that accepts a parameter, you can
access that value.
3.16.9. Further JMX Information
• Monitoring the Java Virtual Machine
• Using Groovy for System Management
• Groovier jconsole!
• JMX Scripts with Eclipse Monkey
• Using JMX to monitor Apache ActiveMQ
3.18. Security
(TBD)
3.19. Design Patterns in Groovy
Design patterns can also be used with Groovy. Here are some important points:
• some patterns carry over directly (and can make use of normal Groovy syntax
improvements for greater readability)
• some patterns are no longer required because they are built right into the language
or because Groovy supports a better way of achieving the intent of the pattern
• some patterns that have to be expressed at the design level in other languages can be
implemented directly in Groovy (due to the way Groovy can blur the distinction
between design and implementation)
3.19.1. Patterns
Abstract Factory Pattern
The Abstract Factory Pattern provides a way to encapsulate a group of individual
factories that have a common theme. It embodies the intent of a normal factory, i.e.
remove the need for code using an interface to know the concrete implementation
behind the interface, but applies to a set of interfaces and selects an entire family of
concrete classes which implement those interfaces.
As an example, I might have interfaces Button, TextField and Scrollbar. I might have
WindowsButton, MacButton, FlashButton as concrete classes for Button. I might have
WindowsScrollBar, MacScrollBar and FlashScrollBar as concrete implementations for
ScrollBar. Using the Abstract Factory Pattern should allow me to select which
windowing system (i.e. Windows, Mac, Flash) I want to use once and from then on
should be able to write code that references the interfaces but is always using the
appropriate concrete classes (all from the one windowing system) under the covers.
Example
Suppose we want to write a game system. We might note that many games have very
similar features and control.
We decide to try to split the common and game-specific code into separate classes.
class TwoupMessages {
def welcome = 'Welcome to the twoup game, you start with $1000'
def done = 'Sorry, you have no money left, goodbye'
}
class TwoupInputConverter {
def convert(input) { input.toInteger() }
}
class TwoupControl {
private money = 1000
private random = new Random()
private tossWasHead() {
def next = random.nextInt()
return next % 2 == 0
}
def moreTurns() {
if (money > 0) {
println "You have $money, how much would you like to bet?"
return true
}
false
}
def play(amount) {
def coin1 = tossWasHead()
def coin2 = tossWasHead()
if (coin1 && coin2) {
money += amount
println 'You win'
} else if (!coin1 && !coin2) {
money -= amount
println 'You lose'
} else {
println 'Draw'
}
}
}
Now, let’s look at the game-specific code for a number guessing game:
class GuessGameMessages {
def welcome = 'Welcome to the guessing game, my secret number is between 1 and 100'
def done = 'Correct'
}
class GuessGameInputConverter {
def convert(input) { input.toInteger() }
}
class GuessGameControl {
private lower = 1
private upper = 100
private guess = new Random().nextInt(upper - lower) + lower
def moreTurns() {
def done = (lower == guess || upper == guess)
if (!done) {
println "Enter a number between $lower and $upper"
}
!done
}
def play(nextGuess) {
if (nextGuess <= guess) {
lower = [lower, nextGuess].max()
}
if (nextGuess >= guess) {
upper = [upper, nextGuess].min()
}
}
}
Now, let’s write our factory code:
class GameFactory {
def static factory
def static getMessages() { return factory.messages.newInstance() }
def static getControl() { return factory.control.newInstance() }
def static getConverter() { return factory.converter.newInstance() }
}
The important aspect of this factory is that it allows selection of an entire family of
concrete classes.
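The concrete factories can be as simple as maps of the classes involved, matching the properties
GameFactory expects (a sketch):
def twoupFactory = [messages: TwoupMessages, control: TwoupControl, converter: TwoupInputConverter]
def guessFactory = [messages: GuessGameMessages, control: GuessGameControl, converter: GuessGameInputConverter]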
GameFactory.factory = twoupFactory
def messages = GameFactory.messages
def control = GameFactory.control
def converter = GameFactory.converter
println messages.welcome
def reader = new BufferedReader(new InputStreamReader(System.in))
while (control.moreTurns()) {
def input = reader.readLine().trim()
control.play(converter.convert(input))
}
println messages.done
Note that the first line configures which family of concrete game classes we will use. It’s
not important that we selected which family to use by using the factory property as
shown in the first line. Other ways would be equally valid examples of this pattern. For
example, we may have asked the user which game they wanted to play or determined
which game from an environment setting.
With the code as shown, the game might look like this when run:
Adapter Pattern
The Adapter Pattern (sometimes called the wrapper pattern) allows objects satisfying
one interface to be used where another type of interface is expected. There are two
typical flavours of the pattern: the delegation flavour and the inheritance flavour.
Delegation Example
Suppose we have the following classes:
class SquarePeg {
def width
}
class RoundPeg {
def radius
}
class RoundHole {
def radius
def pegFits(peg) {
peg.radius <= radius
}
String toString() { "RoundHole with radius $radius" }
}
We can ask the RoundHole class if a RoundPeg fits in it, but if we ask the same question
for a SquarePeg , then it will fail because the SquarePeg class doesn’t have
a radius property (i.e. doesn’t satisfy the required interface).
To get around this problem, we can create an adapter to make it appear to have the
correct interface. It would look like this:
class SquarePegAdapter {
def peg
def getRadius() {
Math.sqrt(((peg.width / 2) ** 2) * 2)
}
String toString() {
"SquarePegAdapter with peg width $peg.width (and notional radius
$radius)"
}
}
We can use the adapter like this:
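A sketch of such usage (the range of peg widths is illustrative):
def hole = new RoundHole(radius: 4.0)
(4..7).each { w ->
    def peg = new SquarePegAdapter(peg: new SquarePeg(width: w))
    if (hole.pegFits(peg)) {
        println "peg $peg fits in hole $hole"
    } else {
        println "peg $peg does not fit in hole $hole"
    }
}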
Inheritance Example
Let’s consider the same example again using inheritance. First, here are the original
classes (unchanged):
class SquarePeg {
def width
}
class RoundPeg {
def radius
}
class RoundHole {
def radius
def pegFits(peg) {
peg.radius <= radius
}
String toString() { "RoundHole with radius $radius" }
}
An adapter using inheritance:
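A minimal sketch of such an adapter, simply extending SquarePeg and exposing a notional radius:
class SquarePegAdapter extends SquarePeg {
    def getRadius() {
        Math.sqrt(((width / 2) ** 2) * 2)
    }
    String toString() {
        "SquarePegAdapter with width $width (and notional radius $radius)"
    }
}
Groovy's closure coercion offers yet another option: instead of writing an adapter class at all,
we first capture the required interface: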
interface RoundThing {
def getRadius()
}
We can then define an adapter as a closure as follows:
def adapter = {
p -> [getRadius: { Math.sqrt(((p.width / 2) ** 2) * 2) }] as RoundThing
}
And use it like this:
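A sketch of such usage:
def hole = new RoundHole(radius: 4.0)
(4..7).each { w ->
    def peg = adapter(new SquarePeg(width: w))
    println hole.pegFits(peg) ?
        "peg with width $w fits in $hole" :
        "peg with width $w does not fit in $hole"
}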
Bouncer Pattern
The Bouncer Pattern describes usage of a method whose sole purpose is to either throw
an exception (when particular conditions hold) or do nothing. Such methods are often
used to defensively guard pre-conditions of a method.
When writing utility methods, you should always guard against faulty input arguments.
When writing internal methods, you may be able to ensure that certain pre-conditions
always hold by having sufficient unit tests in place. Under such circumstances, you may
reduce the desirability to have guards on your methods.
Groovy differs from other languages in that you frequently use the assert method
within your methods rather than having a large number of utility checker methods or
classes.
class NullChecker {
static check(name, arg) {
if (arg == null) {
throw new IllegalArgumentException(name + ' is null')
}
}
}
And we would use it like this:
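For example (a sketch):
void doStuff(String name, Object value) {
    NullChecker.check('name', name)
    NullChecker.check('value', value)
    // ... normal processing of name and value ...
}

doStuff('maximum', 10)           // fine
// doStuff('maximum', null)      // would throw IllegalArgumentException: value is null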
Validation Example
As an alternative example, we might have this utility method:
class NumberChecker {
static final String NUMBER_PATTERN = "\\d+(\\.\\d+(E-?\\d+)?)?"
static isNumber(str) {
if (!(str ==~ NUMBER_PATTERN)) {
throw new IllegalArgumentException("Argument '$str' must be a number")
}
}
static isNotZero(number) {
if (number == 0) {
throw new IllegalArgumentException('Argument must not be 0')
}
}
}
And we would use it like this:
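For example (a sketch):
def stringAmount = '70.0'
NumberChecker.isNumber(stringAmount)               // passes silently
NumberChecker.isNotZero(stringAmount.toDouble())   // passes silently
// NumberChecker.isNotZero(0)                      // would throw IllegalArgumentException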
Chain of Responsibility Pattern
Example
In this example, the script sends requests to the lister object. The lister points to
a UnixLister object. If it can’t handle the request, it sends the request to
the WindowsLister . If it can’t handle the request, it sends the request to
the DefaultLister .
class UnixLister {
private nextInLine
UnixLister(next) { nextInLine = next }
def listFiles(dir) {
if (System.getProperty('os.name') == 'Linux') {
println "ls $dir".execute().text
} else {
nextInLine.listFiles(dir)
}
}
}
class WindowsLister {
private nextInLine
WindowsLister(next) { nextInLine = next }
def listFiles(dir) {
if (System.getProperty('os.name') == 'Windows XP') {
println "cmd.exe /c dir $dir".execute().text
} else {
nextInLine.listFiles(dir)
}
}
}
class DefaultLister {
def listFiles(dir) {
new File(dir).eachFile { f -> println f }
}
}
def lister = new UnixLister(new WindowsLister(new DefaultLister()))
lister.listFiles('Downloads')
The output will be a list of files (with slightly different format depending on the
operating system).
Composite Pattern
The Composite Pattern allows you to treat single instances of an object the same way as
a group of objects. The pattern is often used with hierarchies of objects. Typically, one or
more methods should be callable in the same way for either leaf or composite nodes
within the hierarchy. In such a case, composite nodes typically invoke the same named
method for each of their children nodes.
Example
Consider this usage of the composite pattern where we want to call toString() on
either Leaf or Composite objects.
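A sketch of such a hierarchy (class and node names are illustrative):
abstract class Component {
    def name
    def toString(indent) {
        ("-" * indent) + name
    }
}

class Composite extends Component {
    private children = []
    def toString(indent) {
        def s = super.toString(indent)
        children.each { child -> s += "\n" + child.toString(indent + 1) }
        s
    }
    def leftShift(component) { children << component }
}

class Leaf extends Component {}

def root = new Composite(name: 'root')
root << new Leaf(name: 'leaf A')
def comp = new Composite(name: 'comp B')
root << comp
comp << new Leaf(name: 'leaf B1')
println root.toString(0)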
In Java, the Component class is essential as it provides the type used for both leaf and
composite nodes. In Groovy, because of duck-typing, we don’t need it for that purpose,
however, it can still serve as a useful place to place common behaviour between the leaf
and composite nodes.
Decorator Pattern
The Decorator Pattern provides a mechanism to embellish the behaviour of an object
without changing its essential interface. A decorated object should be able to be
substituted wherever the original (non-decorated) object was expected. Decoration
typically does not involve modifying the source code of the original object and
decorators should be able to be combined in flexible ways to produce objects with
several embellishments.
Traditional Example
Suppose we have the following Logger class.
class Logger {
def log(String message) {
println message
}
}
There might be times when it is useful to timestamp a log message, or times when we
might want to change the case of the message. We could try to build all of this
functionality into our Logger class. If we did that, the Logger class would start to be
very complex. Also, everyone would obtain all of features even when they might not
want a small subset of the features. Finally, feature interaction would become quite
difficult to control.
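Instead, we can keep Logger simple and add embellishments with decorators. A sketch of two such
decorators and their combined use (the output shown in the comment is illustrative):
class TimeStampingLogger extends Logger {
    private Logger logger
    TimeStampingLogger(logger) { this.logger = logger }
    def log(String message) {
        def now = Calendar.instance
        logger.log("$now.time: $message")
    }
}

class UpperLogger extends Logger {
    private Logger logger
    UpperLogger(logger) { this.logger = logger }
    def log(String message) {
        logger.log(message.toUpperCase())
    }
}

def logger = new UpperLogger(new TimeStampingLogger(new Logger()))
logger.log("G'day Mate")
// => Sun Aug 10 09:05:00 2025: G'DAY MATE
A touch more dynamically, we can write a generic decorator using Groovy's invokeMethod: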
class GenericLowerDecorator {
private delegate
GenericLowerDecorator(delegate) {
this.delegate = delegate
}
def invokeMethod(String name, args) {
def newargs = args.collect { arg ->
if (arg instanceof String) {
return arg.toLowerCase()
} else {
return arg
}
}
delegate.invokeMethod(name, newargs)
}
}
It takes any class and decorates it so that any String method parameter will
automatically be changed to lower case.
class Calc {
def add(a, b) { a + b }
}
We might be interested in observing usage of the class over time. If it is buried deep
within our codebase, it might be hard to determine when it is being called and with what
parameters. Also, it might be hard to know if it is performing well. We can easily make a
generic tracing decorator that prints out tracing information whenever any method on
the Calc class is called and also provide timing information about how long it took to
execute. Here is the code for the tracing decorator:
class TracingDecorator {
private delegate
TracingDecorator(delegate) {
this.delegate = delegate
}
def invokeMethod(String name, args) {
println "Calling $name$args"
def before = System.currentTimeMillis()
def result = delegate.invokeMethod(name, args)
println "Got $result in ${System.currentTimeMillis()-before} ms"
result
}
}
Here is how to use the class in a script:
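A sketch of such a script:
def tracedCalc = new TracingDecorator(new Calc())
assert 15 == tracedCalc.add(3, 12)
// prints something like:
// Calling add[3, 12]
// Got 15 in 1 ms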
Groovy even comes with a built-in TracingInterceptor . We can extend the built-in
class like this:
before Calc.ctor()
after Calc.ctor()
Duration: 0 ms
before Calc.add(java.lang.Integer, java.lang.Integer)
after Calc.add(java.lang.Integer, java.lang.Integer)
Duration: 2 ms
First define a class that you want to decorate (we’ll also use an interface as is normal
Spring practice):
interface Calc {
def add(a, b)
}
Here’s the class:
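A minimal sketch of such a class (the util package placement assumed by the configuration below
is omitted here for brevity):
class CalcImpl implements Calc {      // registered below as the 'calc' bean (util.CalcImpl)
    def add(a, b) { a + b }
}
The Spring configuration then wraps this bean with a performance-monitoring interceptor: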
<bean id="performanceInterceptor"
      class="org.springframework.aop.interceptor.PerformanceMonitorInterceptor">
    <property name="loggerName" value="performance"/>
</bean>
<bean id="calc" class="util.CalcImpl"/>
<bean class="org.springframework.aop.framework.autoproxy.BeanNameAutoProxyCreator">
    <property name="beanNames" value="calc"/>
    <property name="interceptorNames" value="performanceInterceptor"/>
</bean>
</beans>
Now, our script looks like this:
@Grab('org.springframework:spring-context:3.2.2.RELEASE')
import org.springframework.context.support.ClassPathXmlApplicationContext
21/05/2007 23:02:35
org.springframework.aop.interceptor.PerformanceMonitorInterceptor
invokeUnderTrace
FINEST: StopWatch 'util.Calc.add': running time (millis) = 16
You may have to adjust your logging.properties file for messages at log
level FINEST to be displayed.
Asynchronous Decorators using GPars
@Grab('org.codehaus.gpars:gpars:0.10')
import static groovyx.gpars.GParsPool.withPool
interface Document {
void print()
String getText()
}
def avgWordLength = {
def words = words(it.text)
sprintf "Avg Word Length: %4.2f", words*.size().sum() / words.size()
}
def modeWord = {
def wordGroups = words(it.text).groupBy {it}.collectEntries { k, v ->
[k, v.size()] }
def maxSize = wordGroups*.value.max()
def maxWords = wordGroups.findAll { it.value == maxSize }
"Mode Word(s): ${maxWords*.key.join(', ')} ($maxSize occurrences)"
}
def wordCount = { d -> "Word Count: " + words(d.text).size() }
Document d = asyncDecorator(asyncDecorator(asyncDecorator(
new DocumentImpl(document: "This is the file with the words in it\n\t\nDo you see the words?\n"),
// new DocumentImpl(document: new File('AsyncDecorator.groovy').text),
wordCount), modeWord), avgWordLength)
d.print()
Delegation Pattern
The Delegation Pattern is a technique where an object’s behavior (public methods) is
implemented by delegating responsibility to one or more associated objects.
Groovy allows the traditional style of applying the delegation pattern, e.g. see Replace
Inheritance with Delegation.
Implement Delegation Pattern using ExpandoMetaClass
The groovy.lang.ExpandoMetaClass allows usage of this pattern to be encapsulated in a
library. This allows Groovy to emulate similar libraries available for the Ruby language.
class Delegator {
private targetClass
private delegate
Delegator(targetClass, delegate) {
this.targetClass = targetClass
this.delegate = delegate
}
def delegate(String methodName) {
delegate(methodName, methodName)
}
def delegate(String methodName, String asMethodName) {
targetClass.metaClass."$asMethodName" = delegate.&"$methodName"
}
def delegateAll(String[] names) {
names.each { delegate(it) }
}
def delegateAll(Map names) {
names.each { k, v -> delegate(k, v) }
}
def delegateAll() {
delegate.class.methods*.name.each { delegate(it) }
}
}
With this in your classpath, you can now apply the delegation pattern dynamically as
shown in the following examples. First, consider we have the following classes:
class Person {
String name
}
class MortgageLender {
def borrowAmount(amount) {
"borrow \\$$amount"
}
def borrowFor(thing) {
"buy \\$thing"
}
}
def lender = new MortgageLender()
def delegator = new Delegator(Person, lender)
delegator.delegateAll()
Which will make all the methods in the delegate object available in the Person class.
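A sketch of using the dynamically delegated methods (the return values follow from the
MortgageLender code above):
def p = new Person(name: 'Tom')
assert p.borrowAmount(50) == 'borrow $50'
println p.borrowFor('present')
Groovy also has a built-in @Delegate annotation which achieves a similar effect at compile time: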
class Person {
def name
@Delegate MortgageLender mortgageLender = new MortgageLender()
}
class MortgageLender {
def borrowAmount(amount) {
"borrow \\$$amount"
}
def borrowFor(thing) {
"buy $thing"
}
}
Flyweight Pattern
The Flyweight Pattern is useful when systems must handle very large numbers of objects that
are mostly the same, by sharing the common heavy-weight state between them. In such
circumstances, we call the state that is shared with many other things (e.g. the
character type) intrinsic state. It is captured within the heavy-weight class. The state
which distinguishes the physical character (maybe just its ASCII code or Unicode) is
called its extrinsic state.
Example
First we are going to model some complex aircraft (the first being a hoax competitor of
the second - not that is relevant to the example).
class Boeing797 {
def wingspan = '80.8 m'
def capacity = 1000
def speed = '1046 km/h'
def range = '14400 km'
// ...
}
class Airbus380 {
def wingspan = '79.8 m'
def capacity = 555
def speed = '912 km/h'
def range = '10370 km'
// ...
}
If we want to model our fleet, our first attempt might involve using many instances of
these heavy-weight objects. It turns out though that only a few small pieces of state (our
extrinsic state) change for each aircraft, so we will have singletons for the heavy-weight
objects and capture the extrinsic state (bought date and asset number in the code
below) separately.
class FlyweightFactory {
static instances = [797: new Boeing797(), 380: new Airbus380()]
}
class Aircraft {
private type // instrinsic state
private assetNumber // extrinsic state
private bought // extrinsic state
Aircraft(typeCode, assetNumber, bought) {
type = FlyweightFactory.instances[typeCode]
this.assetNumber = assetNumber
this.bought = bought
}
def describe() {
println """
Asset Number: $assetNumber
Capacity: $type.capacity people
Speed: $type.speed
Range: $type.range
Bought: $bought
"""
}
}
def fleet = [
new Aircraft(380, 1001, '10-May-2007'),
new Aircraft(380, 1002, '10-Nov-2007'),
new Aircraft(797, 1003, '10-May-2008'),
new Aircraft(797, 1004, '10-Nov-2008')
]
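Printing the fleet details is then just a matter of iterating over the aircraft:
fleet.each { plane -> plane.describe() }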
As a further efficiency measure, we might use lazy creation of the flyweight objects
rather than create the initial map up front as in the above example.
Iterator Pattern
The Iterator Pattern allows sequential access to the elements of an aggregate object
without exposing its underlying representation.
Groovy has the iterator pattern built right in to many of its closure operators,
e.g. each and eachWithIndex as well as the for .. in loop.
For example:
def printAll(container) {
for (item in container) { println item }
}
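The output below suggests driving printAll with a list of numbers, a map and a couple of colors,
e.g. (a sketch inferred from that output):
def numbers = [1, 2, 3, 4]
def months = [May: 31, Mar: 31, Apr: 30]
def colors = [java.awt.Color.BLACK, java.awt.Color.WHITE]
printAll numbers
printAll months
printAll colors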
1
2
3
4
May=31
Mar=31
Apr=30
java.awt.Color[r=0,g=0,b=0]
java.awt.Color[r=255,g=255,b=255]
Another example:
Loan my Resource Pattern
This pattern is built in to many Groovy helper methods. You should consider using it
yourself if you need to work with resources in ways beyond what Groovy supports.
Example
Consider the following code which works with a file. First we might write some lines to
the file and then print its size:
Sometimes however, you wish to do things slightly differently to what you can get for
free using Groovy’s built-in mechanisms. You should consider utilising this pattern
within your own resource-handling operations.
Consider how you might process the list of words on each line within the file. We could
actually do this one too using Groovy’s built-in functions, but bear with us and assume
we have to do some resource handling ourselves. Here is how we might write the code
without using this pattern:
Let’s now apply the loan pattern. First, we’ll write a helper method:
Null Object Pattern
Simple Example
Suppose we have the following system:
class Job {
def salary
}
class Person {
def name
def Job job
}
def people = [
new Person(name: 'Tom', job: new Job(salary: 1000)),
new Person(name: 'Dick', job: new Job(salary: 1200)),
]
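A sketch of the problem (assuming we add a person without a job and then query salaries):
people << new Person(name: 'Harry')                               // job is left as null
def biggestSalary = people.collect { p -> p.job.salary }.max()    // throws NullPointerException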
To overcome this problem, we can introduce a NullJob class and change the above
statement to become:
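A sketch of the fix, replacing the problematic lines above:
class NullJob extends Job { def salary = 0 }

people << new Person(name: 'Harry', job: new NullJob())
biggestSalary = people.collect { p -> p.job.salary }.max()
assert biggestSalary == 1200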
Tree Example
Consider the following example where we want to calculate size, cumulative sum and
cumulative product of all the values in a tree structure.
Our first attempt has special logic within the calculation methods to handle null values.
class NullHandlingTree {
def left, right, value
def size() {
1 + (left ? left.size() : 0) + (right ? right.size() : 0)
}
def sum() {
value + (left ? left.sum() : 0) + (right ? right.sum() : 0)
}
def product() {
value * (left ? left.product() : 1) * (right ? right.product() : 1)
}
}
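With values 2, 3, 4 and 5 (consistent with the results shown later), the tree might be built like
this (a sketch):
def root = new NullHandlingTree(
    value: 2,
    left: new NullHandlingTree(
        value: 3,
        left: new NullHandlingTree(value: 5),
        right: new NullHandlingTree(value: 4)
    )
)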
println root.size()
println root.sum()
println root.product()
If we introduce the null object pattern (here by defining the NullTree class), we can
now simplify the logic in the size() , sum() and product() methods. These methods
now much more clearly represent the logic for the normal (and now universal) case.
Each of the methods within NullTree returns a value which represents doing nothing.
class Tree {
def left = new NullTree(), right = new NullTree(), value
def size() {
1 + left.size() + right.size()
}
def sum() {
value + left.sum() + right.sum()
}
def product() {
value * left.product() * right.product()
}
}
class NullTree {
def size() { 0 }
def sum() { 0 }
def product() { 1 }
}
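Building the equivalent tree (a sketch; unspecified children default to NullTree):
def root = new Tree(
    value: 2,
    left: new Tree(
        value: 3,
        left: new Tree(value: 5),
        right: new Tree(value: 4)
    )
)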
println root.size()
println root.sum()
println root.product()
The result of running either of these examples is:
4
14
120
Note: a slight variation with the null object pattern is to combine it with the singleton
pattern. So, we wouldn’t write new NullTree() wherever we needed a null object as
shown above. Instead we would have a single null object instance which we would place
within our data structures as needed.
Pimp my Library Pattern
Example
Suppose we want to make use of the built-in Integer facilities in Groovy (which build
upon the features already in Java). Those libraries have nearly all of the features we
want but not quite everything. We may not have all of the source code to the Groovy and
Java libraries so we can’t just change the library. Instead we augment the library. Groovy
has a number of ways to do this. One way is to use a Category.
class EnhancedInteger {
static boolean greaterThanAll(Integer self, Object[] others) {
greaterThanAll(self, others.toList())
}
static boolean greaterThanAll(Integer self, others) {
others.every { self > it }
}
}
We have added two methods which augment the Integer methods by providing
the greaterThanAll method. Categories follow conventions where they are defined as
static methods with a special first parameter representing the class we wish to extend.
The greaterThanAll(Integer self, others) static method becomes
the greaterThanAll(other) instance method.
We defined two versions of greaterThanAll . One which works for collections, ranges
etc. The other which works with a variable number of Integer arguments.
use(EnhancedInteger) {
assert 4.greaterThanAll(1, 2, 3)
assert !5.greaterThanAll(2, 4, 6)
assert 5.greaterThanAll(-4..4)
assert 5.greaterThanAll([])
assert !5.greaterThanAll([4, 5])
}
As you can see, using this technique you can effectively enrich an original class without
having access to its source code. Moreover, you can apply different enrichments in
different parts of the system as well as work with un-enriched objects if we need to.
Proxy Pattern
The Proxy Pattern allows one object to act as a pretend replacement for some other
object. In general, whoever is using the proxy, doesn’t realise that they are not using the
real thing. The pattern is useful when the real object is hard to create or use: it may exist
over a network connection, or be a large object in memory, or be a file, database or some
other resource that is expensive or impossible to duplicate.
Example
One common use of the proxy pattern is when talking to remote objects in a different
JVM. Here is the client code for creating a proxy that talks via sockets to a server object
as well as an example usage:
class AccumulatorProxy {
def accumulate(args) {
def result
def s = new Socket("localhost", 54321)
s.withObjectStreams { ois, oos ->
oos << args
result = ois.readObject()
}
s.close()
return result
}
}
class Accumulator {
def accumulate(args) {
args.inject(0) { total, arg -> total += arg }
}
}
Singleton Pattern
The Singleton Pattern is used to make sure only one object of a particular class is ever
created. This can be useful when exactly one object is needed to coordinate
actions across a system; perhaps for efficiency where creating lots of identical objects
would be wasteful, perhaps because a particular algorithm needing a single point of
control is required or perhaps when an object is used to interact with a non-shareable
resource.
• It can reduce reuse. For instance, there are issues if you want to use inheritance with
Singletons. If SingletonB extends SingletonA , should there be exactly (at most)
one instance of each or should the creation of an object from one of the classes
prohibit creation from the other. Also, if you decide both classes can have an
instance, how do you override the getInstance() method which is static?
• It is also hard to test singletons in general because of the static methods but Groovy
can support that if required.
class VoteCollector {
def votes = 0
private static final INSTANCE = new VoteCollector()
static getInstance() { return INSTANCE }
private VoteCollector() { }
def display() { println "Collector:${hashCode()}, Votes:$votes" }
}
Some points of interest about this code:
• it has a private constructor, so no VoteCollector objects can be created in our
system (except for the INSTANCE we create)
Thread.start{
def collector2 = VoteCollector.instance
collector2.display()
collector2.votes++
collector2 = null
}.join()
Collector:15959960, Votes:0
Collector:15959960, Votes:1
Collector:15959960, Votes:2
Variations to this pattern:
First we define some base classes. A Calculator class which performs calculations and
records how many such calculations it performs and a Client class which acts as a
facade to the calculator.
class Calculator {
private total = 0
def add(a, b) { total++; a + b }
def getTotalCalculations() { 'Total Calculations: ' + total }
String toString() { 'Calc: ' + hashCode() }
}
class Client {
def calc = new Calculator()
def executeCalc(a, b) { calc.add(a, b) }
String toString() { 'Client: ' + hashCode() }
}
Now we can define and register a MetaClass which intercepts all attempts to create
a Calculator object and always provides a pre-created instance instead. We also
register this MetaClass with the Groovy system:
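A sketch of such a MetaClass, based on MetaClassImpl and intercepting constructor calls:
class CalculatorMetaClass extends MetaClassImpl {
    private static final INSTANCE = new Calculator()
    CalculatorMetaClass() { super(Calculator) }
    def invokeConstructor(Object[] arguments) { return INSTANCE }
}

def registry = GroovySystem.metaClassRegistry
registry.setMetaClass(Calculator, new CalculatorMetaClass())

def c1 = new Calculator()
def c2 = new Calculator()
assert c1.is(c2)      // both 'new' calls yield the single pre-created instance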
Guice Example
We can also implement the Singleton Pattern using Guice.
Consider the Calculator example again.
@Grapes([@Grab('aopalliance:aopalliance:1.0'),
@Grab('com.google.code.guice:guice:1.0')])
import com.google.inject.*
interface Calculator {
def add(a, b)
}
class Client {
@Inject Calculator calc
def executeCalc(a, b) { calc.add(a, b) }
String toString() { 'Client: ' + hashCode() }
}
client = injector.getInstance(Client)
assert 4 == client.executeCalc(2, 2)
println "$client, $client.calc, $client.calc.totalCalculations"
Note the @Inject annotation in the Client class. We can always tell right in the source
code which fields will be injected.
In this example we chose to use an explicit binding. All of our dependencies (ok, only one
in this example at the moment) are configured in the binding. The Guice injector knows
about the binding and injects the dependencies as required when we create objects. For
the singleton pattern to hold, you must always use Guice to create your instances.
Nothing shown so far would stop you creating another instance of the calculator
manually using new CalculatorImpl() which would of course violate the desired
singleton behaviour.
In other scenarios (though probably not in large systems), we could choose to express
dependencies using annotations, such as the following example shows:
@Grapes([@Grab('aopalliance:aopalliance:1.0'),
@Grab('com.google.code.guice:guice:1.0')])
import com.google.inject.*
@ImplementedBy(CalculatorImpl)
interface Calculator {
// as before ...
}
@Singleton
class CalculatorImpl implements Calculator {
// as before ...
}
class Client {
// as before ...
}
// ...
Note the @Singleton annotation on the CalculatorImpl class and
the @ImplementedBy annotation in the Calculator interface.
When run, the above example (using either approach) yields (your hashcode values will
vary):
Spring Example
We can do the Calculator example again using Spring as follows:
@Grapes([@Grab('org.springframework:spring-core:3.2.2.RELEASE'),
@Grab('org.springframework:spring-beans:3.2.2.RELEASE')])
import org.springframework.beans.factory.support.*
interface Calculator {
def add(a, b)
}
class CalculatorImpl implements Calculator {
private total = 0
def add(a, b) { total++; a + b }
def getTotalCalculations() { 'Total Calculations: ' + total }
String toString() { 'Calc: ' + hashCode() }
}
class Client {
Client(Calculator calc) { this.calc = calc }
def calc
def executeCalc(a, b) { calc.add(a, b) }
String toString() { 'Client: ' + hashCode() }
}
client = factory.getBean('client')
assert 4 == client.executeCalc(2, 2)
println "$client, $client.calc, $client.calc.totalCalculations"
And here is the result (your hashcode values will vary):
Further information
• Simply Singleton
• Use your singletons wisely
• Double-checked locking and the Singleton pattern
• Lazy Loading Singletons
• Implementing the Singleton Pattern in C#
State Pattern
The State Pattern provides a structured approach to partitioning the behaviour within
complex systems. The overall behaviour of a system is partitioned into well-defined
states. Typically, each state is implemented by a class. The overall system behaviour can
be determined firstly by knowing the current state of the system; secondly, by
understanding the behaviour possible while in that state (as embodied in the methods of
the class corresponding to that state).
Example
Here is an example:
class Client {
def context = new Context()
def connect() {
context.state.connect()
}
def disconnect() {
context.state.disconnect()
}
def send_message(message) {
context.state.send_message(message)
}
def receive_message() {
context.state.receive_message()
}
}
class Context {
def state = new Offline(this)
}
class ClientState {
def context
ClientState(context) {
this.context = context
inform()
}
}
offline
error: not connected
connected
"Hello" sent
error: already connected
message received
offline
One of the great things about a dynamic language like Groovy though is that we can take
this example and express it in many different ways depending on our particular needs.
Some potential variations for this example are shown below.
interface State {
def connect()
def disconnect()
def send_message(message)
def receive_message()
}
Then our Client , Online and Offline classes could be modified to implement that
interface, e.g.:
We don’t have to use interfaces for this, but it helps express the intent of this particular
style of partitioning and it helps reduce the size of our unit tests (we would have to have
additional tests in place to express this intent in languages which have less support for
interface-oriented design).
def transitionTo(nextState) {
client.setContext(InstanceProvider.create(nextState, client))
}
}
This is all quite generic and can be used wherever we want to introduce the state
pattern. Here is what our code would look like now:
We can define the following generic helper functions (first discussed here):
class Grammar {
def fsm
def event
def fromState
def toState
Grammar(a_fsm) {
fsm = a_fsm
}
def on(a_event) {
event = a_event
this
}
def from(a_fromState) {
fromState = a_fromState
this
}
def to(a_toState) {
assert a_toState, "Invalid toState: $a_toState"
toState = a_toState
fsm.registerTransition(this)
this
}
def isValid() {
event && fromState && toState
}
}
class FiniteStateMachine {
def transitions = [:]
def initialState
def currentState
FiniteStateMachine(a_initialState) {
assert a_initialState, "You need to provide an initial state"
initialState = a_initialState
currentState = a_initialState
}
def record() {
Grammar.newInstance(this)
}
def reset() {
currentState = initialState
}
def isState(a_state) {
currentState == a_state
}
def registerTransition(a_grammar) {
assert a_grammar.isValid(), "Invalid transition ($a_grammar)"
def transition
def event = a_grammar.event
def fromState = a_grammar.fromState
def toState = a_grammar.toState
if (!transitions[event]) {
transitions[event] = [:]
}
transition = transitions[event]
assert !transition[fromState], "Duplicate fromState $fromState for transition $a_grammar"
transition[fromState] = toState
}
def fire(a_event) {
assert currentState, "Invalid current state '$currentState': passed into constructor"
assert transitions.containsKey(a_event), "Invalid event '$a_event', should be one of ${transitions.keySet()}"
def transition = transitions[a_event]
def nextState = transition[currentState]
assert nextState, "There is no transition from '$currentState' to any other state"
currentState = nextState
currentState
}
}
Now we can define and test our state machine like this:
void testInitialState() {
assert fsm.isState('offline')
}
void testOfflineState() {
shouldFail{
fsm.fire('send_message')
}
shouldFail{
fsm.fire('receive_message')
}
shouldFail{
fsm.fire('disconnect')
}
assert 'online' == fsm.fire('connect')
}
void testOnlineState() {
fsm.fire('connect')
fsm.fire('send_message')
fsm.fire('receive_message')
shouldFail{
fsm.fire('connect')
}
assert 'offline' == fsm.fire('disconnect')
}
}
This example isn’t an exact equivalent of the others. It doesn’t use
predefined Online and Offline classes. Instead it defines the entire state machine on
the fly as needed. See the previous reference for more elaborate examples of this style.
Strategy Pattern
The Strategy Pattern allows you to abstract away particular algorithms from their usage.
This allows you to easily swap the algorithm being used without having to change the
calling code. The general form of the pattern is:
In Groovy, because of its ability to treat code as a first class object using anonymous
methods (which we loosely call Closures), the need for the strategy pattern is greatly
reduced. You can simply place algorithms inside Closures.
Example
First let’s look at the traditional way of encapsulating the Strategy Pattern.
interface Calc {
def execute(n, m)
}
class CalcByMult implements Calc {
def execute(n, m) { n * m }
}
class CalcByManyAdds implements Calc {
def execute(n, m) {
def result = 0
n.times {
result += m
}
result
}
}
def sampleData = [
[3, 4, 12],
[5, -5, -25]
]
Calc[] multiplicationStrategies = [
new CalcByMult(),
new CalcByManyAdds()
]
sampleData.each{ data ->
multiplicationStrategies.each { calc ->
assert data[2] == calc.execute(data[0], data[1])
}
}
Here we have defined an interface Calc which our concrete strategy classes will
implement (we could also have used an abstract class). We then defined two algorithms
for doing simple multiplication: CalcByMult the normal way, and CalcByManyAdds
using only addition (don’t try this one using negative numbers - yes we could fix this but
it would just make the example longer). We then use normal polymorphism to invoke
the algorithms.
Here is the Groovier way to achieve the same thing using Closures:
def multiplicationStrategies = [
{ n, m -> n * m },
{ n, m -> def result = 0; n.times{ result += m }; result }
]
def sampleData = [
[3, 4, 12],
[5, -5, -25]
]
sampleData.each { data ->
multiplicationStrategies.each { calc ->
assert data[2] == calc(data[0], data[1])
}
}
Template Method Pattern
The Template Method Pattern abstracts away the details of several algorithms. The
generic part of an algorithm is contained within a base class, while particular
implementation details are captured within subclasses.
Example
In this example, the base class Accumulator captures the essence of the accumulation
algorithm. The subclasses Sum and Product provide particular customised ways to use
the generic accumulation algorithm.
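The corresponding code might look like the following sketch; the method names accumulate and doAccumulate and the sample input [1, 2, 3, 4] are assumptions, chosen so that the printed results match the output shown below:
abstract class Accumulator {
    protected initial
    abstract doAccumulate(total, v)          // the step each subclass customises
    def accumulate(values) {                 // the template method: the generic skeleton
        def total = initial
        values.each { v -> total = doAccumulate(total, v) }
        total
    }
}

class Sum extends Accumulator {
    Sum() { initial = 0 }
    def doAccumulate(total, v) { total + v }
}

class Product extends Accumulator {
    Product() { initial = 1 }
    def doAccumulate(total, v) { total * v }
}

println new Sum().accumulate([1, 2, 3, 4])
println new Product().accumulate([1, 2, 3, 4])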
The resulting output is:
10
24
In this particular case, you could use Groovy’s inject method to achieve a similar result
using Closures:
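A minimal sketch of that approach (the closures and the sample list are illustrative, chosen to reproduce the 10 and 24 from above):
// inject threads an accumulator through the list: the closure plays the role
// of doAccumulate, and the first argument plays the role of the initial value
def sum = [1, 2, 3, 4].inject(0) { total, item -> total + item }
def product = [1, 2, 3, 4].inject(1) { total, item -> total * item }
assert sum == 10
assert product == 24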
Visitor Pattern
The Visitor Pattern is one of those well-known but not often used patterns. I find this
strange, as it is really quite a nice pattern.
The goal of the pattern is to separate an algorithm from an object structure. A practical
result of this separation is the ability to add new operations to existing object structures
without modifying those structures.
Simple Example
This example considers how to calculate the bounds of shapes (or collections of shapes).
Our first attempt uses the traditional visitor pattern. We will see a more Groovy way to
do this shortly.
class Rectangle {
def x, y, width, height
Rectangle(x, y, width, height) {
this.x = x; this.y = y; this.width = width; this.height = height
}
def union(rect) {
if (!rect) return this
def minx = [rect.x, x].min()
def maxx = [rect.x + rect.width, x + width].max()
def miny = [rect.y, y].min()
def maxy = [rect.y + rect.height, y + height].max()
new Rectangle(minx, miny, maxx - minx, maxy - miny)
}
def accept(visitor) {
visitor.visit_rectangle(this)
}
}
class Line {
def x1, y1, x2, y2
Line(x1, y1, x2, y2) {
this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2
}
def accept(visitor){
visitor.visit_line(this)
}
}
class Group {
def shapes = []
def add(shape) { shapes += shape }
def accept(visitor) {
visitor.visit_group(this)
}
}
class BoundingRectangleVisitor {
def bounds
def visit_rectangle(rectangle) {
if (bounds)
bounds = bounds.union(rectangle)
else
bounds = rectangle
}
def visit_line(line) {
def line_bounds = new Rectangle(line.x1, line.y1, line.x2 - line.x1, line.y2 - line.y1)
if (bounds)
bounds = bounds.union(line_bounds)
else
bounds = line_bounds
}
def visit_group(group) {
group.shapes.each { shape -> shape.accept(this) }
}
}
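A usage sketch, assuming the Rectangle, Line, Group and BoundingRectangleVisitor classes shown above (the shapes and coordinates are only illustrative):
def group = new Group()
group.add(new Rectangle(100, 40, 10, 5))
group.add(new Rectangle(100, 70, 10, 5))
group.add(new Line(90, 30, 160, 35))

def visitor = new BoundingRectangleVisitor()
group.accept(visitor)               // the visitor accumulates the overall bounds
println visitor.bounds.dump()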
We can improve the clarity of our code (and make it about half the size) by making use
of Groovy Closures as follows:
class Group {
def shapes = []
def leftShift(shape) { shapes += shape }
def accept(Closure yield) { shapes.each{it.accept(yield)} }
}
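Each shape then no longer needs to know about any visitor type; it simply yields itself to whatever closure it is given. A sketch of what the matching shape classes and their use might look like in this style (the accept signature mirrors the Group above; everything else is illustrative):
class Rectangle {
    def x, y, width, height
    def accept(Closure yield) { yield(this) }   // just hand ourselves to the closure
}

class Line {
    def x1, y1, x2, y2
    def accept(Closure yield) { yield(this) }
}

// the "visitor" is now just a closure passed through the structure
def group = new Group()
group << new Rectangle(x: 0, y: 0, width: 10, height: 5)
group << new Line(x1: 2, y1: 2, x2: 6, y2: 8)

def names = []
group.accept { shape -> names << shape.class.simpleName }
assert names == ['Rectangle', 'Line']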
Advanced Example
interface Visitor {
void visit(NodeType1 n1)
void visit(NodeType2 n2)
}
interface Visitable {
void accept(Visitor visitor)
}
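The discussion below assumes an object tree built from these two interfaces and a visitor with its own state walking that tree. A sketch of what such classes might look like (NodeType1Counter is the visitor referred to later in the summary; the field and method details are assumptions):
class NodeType1 implements Visitable {
    Visitable[] children = []
    void accept(Visitor visitor) {
        visitor.visit(this)                              // double dispatch back to the visitor
        children.each { child -> child.accept(visitor) } // the node decides how to iterate
    }
}

class NodeType2 implements Visitable {
    Visitable[] children = []
    void accept(Visitor visitor) {
        visitor.visit(this)
        children.each { child -> child.accept(visitor) }
    }
}

// a visitor that keeps state: it counts the NodeType1 nodes it sees
class NodeType1Counter implements Visitor {
    int count = 0
    void visit(NodeType1 n1) { count++ }
    void visit(NodeType2 n2) { }
}
Usage is then a matter of creating a NodeType1Counter, passing it to the root node's accept method, and reading its count afterwards.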
As you can see, the visitor keeps its own state while the tree of objects itself is not
changed. That’s pretty useful in different areas: for example, you could have a visitor
counting all nodes of a certain type, or counting how many different types are used, or
you could use methods specific to each node to gather information about the tree, and
much more.
But what happens when we want to add a new node type? In that case we have quite a
bit of work to do: we have to change Visitor to accept the new type, we have to write the
new type itself of course, and we have to change every Visitor we have already
implemented. After very few such changes you will end up having all your Visitors
extend a default implementation of the interface, so that you don’t need to change every
Visitor each time you add a new type.
Then there is another problem: since the node itself describes how to iterate, you have
no way to stop the iteration at a certain point or to change the order. So maybe we
should change this a little, to something like this:
interface Visitor {
void visit(NodeType1 n1)
void visit(NodeType2 n2)
}
interface Visitable {
void accept(Visitor visitor)
}
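The interfaces stay the same; the idea is that accept now shrinks to the bare double-dispatch call, and a default visitor walks the children itself, so a visitor can stop the iteration early or change the order. A sketch of this variant (class names as above, details assumed):
class NodeType1 implements Visitable {
    Visitable[] children = []
    void accept(Visitor visitor) { visitor.visit(this) }   // no iteration here any more
}

class NodeType2 implements Visitable {
    Visitable[] children = []
    void accept(Visitor visitor) { visitor.visit(this) }
}

class DefaultVisitor implements Visitor {
    // subclasses can override these to skip subtrees or visit children in another order
    void visit(NodeType1 n1) { n1.children.each { it.accept(this) } }
    void visit(NodeType2 n2) { n2.children.each { it.accept(this) } }
}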
Make it Groovy
The question now is how to make that a bit more Groovy. Didn’t you find
this visitor.visit(this) strange? Why is it there? The answer is that it simulates double
dispatch. In Java the compile-time type is used, so if I wrote
visitor.visit(children[i]) the compiler would not be able to find the correct
method, because Visitor does not contain a method visit(Visitable). And even if it
did, we would still want to dispatch to the more specific methods taking NodeType1 or NodeType2.
Now, Groovy does not use the static type; Groovy uses the runtime type. This means I
could call visitor.visit(children[i]) directly. Hmm... since we minimized the
accept method to just the double-dispatch part, and since Groovy’s runtime type system
already covers that, do we still need the accept method? I think you can guess that I
would answer no. But we can do more. We had the disadvantage of not knowing how to
handle unknown tree elements: we had to extend the Visitor interface for that, resulting
in changes to DefaultVisitor, and then we had the task of providing a useful default,
such as iterating the node or doing nothing at all. With Groovy we can catch that case by
adding a visit(Visitable) method that does nothing. That would work the same way
in Java, by the way.
But let’s not stop here: do we still need the Visitor interface? If we no longer have the
accept method, then nothing refers to the Visitor type any more, so we don’t need the
interface at all. The new code would be:
class DefaultVisitor {
void visit(NodeType1 n1) {
n1.children.each { visit(it) }
}
void visit(NodeType2 n2) {
n2.children.each { visit(it) }
}
void visit(Visitable v) { }
}
interface Visitable { }
If we want to keep the iteration logic in one reusable place, we can pull it out into its own
method and require the nodes to expose their children:
class DefaultVisitor {
void visit(Visitable v) {
doIteration(v)
}
void doIteration(Visitable v) {
v.children.each {
visit(it)
}
}
}
interface Visitable {
Visitable[] getChildren()
}
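A concrete visitor for this final version might look like the following sketch; NodeType1 here is an assumed node class implementing Visitable, and NodeType1Counter is the visitor mentioned in the summary below:
// an assumed node type: any class exposing a children array satisfies Visitable
class NodeType1 implements Visitable {
    Visitable[] children = []
}

// counts NodeType1 nodes; works because Groovy dispatches visit calls on the
// runtime type, so this overload is preferred over visit(Visitable) for NodeType1
class NodeType1Counter extends DefaultVisitor {
    int count = 0
    void visit(NodeType1 n1) {
        count++              // react to the node type we care about
        super.visit(n1)      // then let DefaultVisitor keep iterating the children
    }
}
Visiting a tree is then simply a matter of calling visit(root) on a NodeType1Counter and reading its count afterwards.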
Summary
In the end we got roughly 40% less code, a robust and stable architecture, and we
completely removed the Visitor from the Visitable. I have heard of visitor
implementations based on reflection that aim to be more generic; with this approach you
can see there is really no need for such a thing. If we add new types, we don’t need to
change anything. It is said that the visitor pattern doesn’t fit extreme programming
techniques very well because you need to make changes to so many classes all the time.
I think I have shown that this is a limitation of Java, not of the pattern itself.
There are variants of the Visitor pattern, like the acyclic visitor pattern, that try to
solve the problem of adding new node types with special visitors. I don’t like that
approach very much: it relies on casts, on catching ClassCastException, and other
nasty things. In the end it tries to solve a problem we don’t even have with the Groovy
version.
One more thing. NodeType1Counter could be implemented in Java as well. Groovy will
recognize the visit methods and call them as needed because DefaultVisitor is still
Groovy and does all the magic.
Further Information
• Componentization: the Visitor example
3.19.2. References
1. Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (1995). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. ISBN 0-201-63361-2.
o The canonical reference of design patterns.
2. Martin Fowler (1999). Refactoring: Improving the Design of Existing Code. Addison-Wesley. ISBN 0-201-48567-2.
3. Joshua Kerievsky (2004). Refactoring To Patterns. Addison-Wesley. ISBN 0-321-21335-1.
4. Eric Freeman, Elisabeth Freeman, Kathy Sierra, Bert Bates (2004). Head First Design Patterns. O’Reilly. ISBN 0-596-00712-4.
o A great book to read, informative as well as amusing.
5. Dierk Koenig with Andrew Glover, Paul King, Guillaume Laforge and Jon Skeet
(2007). Groovy in Action. Manning. ISBN 1-932394-84-2.
o Discusses Visitor, Builder and other Patterns.
6. Brad Appleton (1999). Pizza Inversion - a Pattern for Efficient Resource
Consumption.
o One of the most frequently used patterns by many software engineers!
7. Design Patterns in Dynamic Languages by Neal Ford. Houston Java User’s Group. Examples in Groovy and Ruby. http://www.oracle.com/technetwork/server-storage/ts-4961-159222.pdf