Modern Java - A Guide to Java 8
Introduction
Modern Java - A Guide to Java 8
Java 8 Stream Tutorial
Java 8 Nashorn Tutorial
Java 8 Concurrency Tutorial: Threads and Executors
Java 8 Concurrency Tutorial: Synchronization and Locks
Java 8 Concurrency Tutorial: Atomic Variables and ConcurrentMap
Java 8 API by Example: Strings, Numbers, Math and Files
Avoiding Null Checks in Java 8
Fixing Java 8 Stream Gotchas with IntelliJ IDEA
Using Backbone.js with Nashorn
Modern Java - A Guide to Java 8
Author: winterbe
From: java8-tutorial
interface Formula {
    double calculate(int a);
    default double sqrt(int a) { return Math.sqrt(a); }
}

Formula formula = new Formula() {
    @Override
    public double calculate(int a) {
        return sqrt(a * 100);
    }
};

formula.calculate(100); // 100.0
formula.sqrt(16);       // 4.0
The formula is implemented as an anonymous object. The code is quite
verbose: six lines of code for such a simple calculation of sqrt(a * 100).
As we'll see in the next section, there's a much nicer way of implementing
single method objects in Java 8.
Lambda expressions
Let's start with a simple example of how to sort a list of strings in prior
versions of Java:
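The code example itself was lost in this copy, but it presumably resembled the classic pre-Java-8 idiom: passing an anonymous Comparator to Collections.sort. A self-contained sketch (list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class LegacySort {
    // Sorts the list in reverse alphabetical order using an anonymous
    // Comparator - the pre-Java-8 way of passing behavior as an argument.
    static List<String> sortDescending(List<String> names) {
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return b.compareTo(a);
            }
        });
        return names;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("peter", "anna", "mike", "xenia"));
        System.out.println(sortDescending(names)); // [xenia, peter, mike, anna]
    }
}
```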
Instead of creating anonymous objects all day long, Java 8 comes with a
much shorter syntax, lambda expressions:
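The lambda version of the sort, again reconstructed as a sketch with illustrative list contents:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class LambdaSort {
    // Comparator is a functional interface, so the anonymous class above
    // can be replaced by a lambda expression with explicit parameter types.
    static List<String> sortDescending(List<String> names) {
        Collections.sort(names, (String a, String b) -> b.compareTo(a));
        return names;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("peter", "anna", "mike", "xenia");
        System.out.println(sortDescending(names)); // [xenia, peter, mike, anna]
    }
}
```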
As you can see the code is much shorter and easier to read. But it gets
even shorter:
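The shortest form, a sketch assuming the same sample names, uses the new List.sort method and lets the compiler infer the parameter types:

```java
import java.util.Arrays;
import java.util.List;

class ShortestSort {
    static List<String> sorted() {
        List<String> names = Arrays.asList("peter", "anna", "mike", "xenia");
        // List.sort (new in Java 8) plus inferred parameter types:
        names.sort((a, b) -> b.compareTo(a));
        return names;
    }

    public static void main(String[] args) {
        System.out.println(sorted()); // [xenia, peter, mike, anna]
    }
}
```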
List now has a sort method. Also, the Java compiler is aware of the
parameter types, so you can skip them as well. Let's dive deeper into how
lambda expressions can be used in the wild.
Functional Interfaces
How do lambda expressions fit into Java's type system? Each lambda
corresponds to a given type, specified by an interface. A so-called
functional interface must contain exactly one abstract method
declaration. Each lambda expression of that type will be matched to this
abstract method. Since default methods are not abstract, you're free to
add default methods to your functional interface.
Example:
@FunctionalInterface
interface Converter<F, T> {
T convert(F from);
}
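A Converter instance can then be created from a lambda or, via the :: operator, from a method reference. A self-contained sketch (the Converter interface is re-declared locally):

```java
class ConverterDemo {
    @FunctionalInterface
    interface Converter<F, T> {
        T convert(F from);
    }

    static Integer convert(String s) {
        // a lambda...
        Converter<String, Integer> lambda = (from) -> Integer.valueOf(from);
        // ...or, equivalently, a method reference
        Converter<String, Integer> methodRef = Integer::valueOf;
        return methodRef.convert(s);
    }

    public static void main(String[] args) {
        System.out.println(convert("123")); // 123
    }
}
```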
class Something {
String startsWith(String s) {
return String.valueOf(s.charAt(0));
}
}
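With the Something class in place, its startsWith method can be referenced on a particular object instance. A sketch (Converter re-declared locally for self-containment):

```java
class MethodRefDemo {
    @FunctionalInterface
    interface Converter<F, T> {
        T convert(F from);
    }

    static class Something {
        String startsWith(String s) {
            return String.valueOf(s.charAt(0));
        }
    }

    static String first(String s) {
        Something something = new Something();
        // reference an instance method of a particular object via ::
        Converter<String, String> converter = something::startsWith;
        return converter.convert(s);
    }

    public static void main(String[] args) {
        System.out.println(first("Java")); // "J"
    }
}
```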
Let's see how the :: operator works for constructors. First we define an
example bean with different constructors:
class Person {
String firstName;
String lastName;
Person() {}
Person(String firstName, String lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
}
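The original then presumably defined a factory interface and created instances via a constructor reference. A sketch (Person and the factory re-declared locally; names follow the surrounding text but are assumptions):

```java
class ConstructorRefDemo {
    static class Person {
        String firstName;
        String lastName;

        Person() {}

        Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    @FunctionalInterface
    interface PersonFactory<P extends Person> {
        P create(String firstName, String lastName);
    }

    public static void main(String[] args) {
        // Person::new references a constructor; the compiler picks
        // Person(String, String) because it matches the factory signature.
        PersonFactory<Person> personFactory = Person::new;
        Person person = personFactory.create("Peter", "Parker");
        System.out.println(person.firstName + " " + person.lastName); // Peter Parker
    }
}
```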
Lambdas can read final local variables from the outer scope:

final int num = 1;
Converter<Integer, String> stringConverter =
    (from) -> String.valueOf(from + num);

stringConverter.convert(2); // 3

But unlike anonymous objects, the variable num does not have to be
declared final. This code is also valid:
int num = 1;
Converter<Integer, String> stringConverter =
(from) -> String.valueOf(from + num);
stringConverter.convert(2); // 3
However, num must be effectively final for the code to compile. The
following code does not compile:
int num = 1;
Converter<Integer, String> stringConverter =
(from) -> String.valueOf(from + num);
num = 3;
In contrast to local variables, lambdas have both read and write access to
instance fields and static variables:

class Lambda4 {
    static int outerStaticNum;
    int outerNum;

    void testScopes() {
        Converter<Integer, String> stringConverter1 = (from) -> {
            outerNum = 23;
            return String.valueOf(from);
        };
    }
}
But the Java 8 API is also full of new functional interfaces to make your
life easier. Some of those new interfaces are well known from the Google
Guava library. Even if you're familiar with this library you should keep a
close eye on how those interfaces are extended by some useful method
extensions.
Predicates
Predicates are boolean-valued functions of one argument. The interface
contains various default methods for composing predicates into complex
logical terms (and, or, negate):

Predicate<String> predicate = (s) -> s.length() > 0;

predicate.test("foo");          // true
predicate.negate().test("foo"); // false
Functions
Functions accept one argument and produce a result. Default methods
can be used to chain multiple functions together (compose, andThen):

Function<String, Integer> toInteger = Integer::valueOf;
Function<String, String> backToString = toInteger.andThen(String::valueOf);

backToString.apply("123"); // "123"
Suppliers
Suppliers produce a result of a given generic type. Unlike Functions,
Suppliers don't accept arguments.
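A minimal sketch of a Supplier, using StringBuilder::new as an illustrative target (the original example likely used a different class):

```java
import java.util.function.Supplier;

class SupplierDemo {
    static int emptyLength() {
        // the Supplier defers object creation until get() is called
        Supplier<StringBuilder> builderSupplier = StringBuilder::new;
        StringBuilder sb = builderSupplier.get(); // new empty StringBuilder
        return sb.length();
    }

    public static void main(String[] args) {
        System.out.println(emptyLength()); // 0
    }
}
```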
Consumers
Consumers represent operations to be performed on a single input
argument.
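A small sketch of a Consumer; the greeting text and names are illustrative:

```java
import java.util.function.Consumer;

class ConsumerDemo {
    static String greet(String name) {
        StringBuilder out = new StringBuilder();
        // a Consumer accepts a single argument and returns no result
        Consumer<String> greeter = (n) -> out.append("Hello, ").append(n);
        greeter.accept(name);
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(greet("Luke")); // Hello, Luke
    }
}
```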
Comparators
Comparators are well known from older versions of Java. Java 8 adds
various default methods to the interface.
Comparator<Person> comparator = (p1, p2) ->
p1.firstName.compareTo(p2.firstName);
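The comparator above can then be applied directly or reversed via the new default method. A self-contained sketch (Person reduced to the one field the comparator uses):

```java
import java.util.Comparator;

class ComparatorDemo {
    static class Person {
        String firstName;
        Person(String firstName) { this.firstName = firstName; }
    }

    static Comparator<Person> byFirstName() {
        return (p1, p2) -> p1.firstName.compareTo(p2.firstName);
    }

    public static void main(String[] args) {
        Person john = new Person("John");
        Person alice = new Person("Alice");

        System.out.println(byFirstName().compare(john, alice) > 0);            // true
        System.out.println(byFirstName().reversed().compare(john, alice) < 0); // true
    }
}
```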
Optionals

Optionals are not functional interfaces but nifty utilities to prevent
NullPointerException. Optional is a simple container for a value which
may be null or non-null:

Optional<String> optional = Optional.of("bam");

optional.isPresent();        // true
optional.get();              // "bam"
optional.orElse("fallback"); // "bam"
Let's first look at how sequential streams work. First we create a sample
source in the form of a list of strings:
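The sample collection itself was lost in this copy, but its contents can be recovered from the output printed later (ddd2, aaa2, bbb1, aaa1, bbb3, ccc, bbb2, ddd1):

```java
import java.util.ArrayList;
import java.util.List;

class SampleSource {
    static List<String> stringCollection() {
        List<String> stringCollection = new ArrayList<>();
        stringCollection.add("ddd2");
        stringCollection.add("aaa2");
        stringCollection.add("bbb1");
        stringCollection.add("aaa1");
        stringCollection.add("bbb3");
        stringCollection.add("ccc");
        stringCollection.add("bbb2");
        stringCollection.add("ddd1");
        return stringCollection;
    }

    public static void main(String[] args) {
        System.out.println(stringCollection());
        // [ddd2, aaa2, bbb1, aaa1, bbb3, ccc, bbb2, ddd1]
    }
}
```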
stringCollection
.stream()
.filter((s) -> s.startsWith("a"))
.forEach(System.out::println);
// "aaa2", "aaa1"
Sorted
Sorted is an intermediate operation which returns a sorted view of the
stream. The elements are sorted in natural order unless you pass a
custom Comparator .
stringCollection
.stream()
.sorted()
.filter((s) -> s.startsWith("a"))
.forEach(System.out::println);
// "aaa1", "aaa2"
Keep in mind that sorted only creates a sorted view of the stream
without manipulating the ordering of the backing collection. The ordering
of stringCollection is untouched:
System.out.println(stringCollection);
// ddd2, aaa2, bbb1, aaa1, bbb3, ccc, bbb2, ddd1
Map
The intermediate operation map converts each element into another
object via the given function. The following example converts each string
into an upper-cased string. But you can also use map to transform each
object into another type. The generic type of the resulting stream
depends on the generic type of the function you pass to map .
stringCollection
.stream()
.map(String::toUpperCase)
.sorted((a, b) -> b.compareTo(a))
.forEach(System.out::println);
Match
Various matching operations can be used to check whether a certain
predicate matches the stream. All of those operations are terminal and
return a boolean result.
boolean anyStartsWithA =
stringCollection
.stream()
.anyMatch((s) -> s.startsWith("a"));
System.out.println(anyStartsWithA); // true
boolean allStartsWithA =
stringCollection
.stream()
.allMatch((s) -> s.startsWith("a"));
System.out.println(allStartsWithA); // false
boolean noneStartsWithZ =
stringCollection
.stream()
.noneMatch((s) -> s.startsWith("z"));
System.out.println(noneStartsWithZ); // true
Count

Count is a terminal operation returning the number of elements in the
stream as a long:
long startsWithB =
stringCollection
.stream()
.filter((s) -> s.startsWith("b"))
.count();
System.out.println(startsWithB); // 3
Reduce
This terminal operation performs a reduction on the elements of the
stream with the given function. The result is an Optional holding the
reduced value.
Optional<String> reduced =
stringCollection
.stream()
.sorted()
.reduce((s1, s2) -> s1 + "#" + s2);
reduced.ifPresent(System.out::println);
// "aaa1#aaa2#bbb1#bbb2#bbb3#ccc#ddd1#ddd2"
Parallel Streams
As mentioned above streams can be either sequential or parallel.
Operations on sequential streams are performed on a single thread while
operations on parallel streams are performed concurrently on multiple
threads.
Sequential Sort

First we create a large list of unique elements:

int max = 1000000;
List<String> values = new ArrayList<>(max);
for (int i = 0; i < max; i++) {
    values.add(UUID.randomUUID().toString());
}

Now we measure the time it takes to sort a sequential stream of this
collection:

long t0 = System.nanoTime();
long count = values.stream().sorted().count();
long t1 = System.nanoTime();
System.out.println(String.format("sequential sort took: %d ms",
    TimeUnit.NANOSECONDS.toMillis(t1 - t0)));

Parallel Sort

long t0 = System.nanoTime();
long count = values.parallelStream().sorted().count();
long t1 = System.nanoTime();
System.out.println(String.format("parallel sort took: %d ms",
    TimeUnit.NANOSECONDS.toMillis(t1 - t0)));
As you can see both code snippets are almost identical but the parallel
sort is roughly 50% faster. All you have to do is change stream() to
parallelStream() .
Maps
As already mentioned maps do not directly support streams. There's no
stream() method available on the Map interface itself, however you can
create specialized streams upon the keys, values or entries of a map via
map.keySet().stream() , map.values().stream() and
map.entrySet().stream() .
Furthermore maps support various new and useful methods for doing
common tasks.
Next, we learn how to remove entries for a given key, only if it's currently
mapped to a given value:
map.remove(3, "val3");
map.get(3); // val33
map.remove(3, "val33");
map.get(3); // null
Merge either puts the key/value pair into the map if no entry for the key
exists, or calls the merging function to change the existing value.
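A minimal sketch of both merge cases:

```java
import java.util.HashMap;
import java.util.Map;

class MergeDemo {
    static Map<Integer, String> demo() {
        Map<Integer, String> map = new HashMap<>();
        // no entry for key 9 yet, so merge simply puts "val9"
        map.merge(9, "val9", (value, newValue) -> value.concat(newValue));
        // an entry exists now, so the merging function concatenates
        map.merge(9, "concat", (value, newValue) -> value.concat(newValue));
        return map;
    }

    public static void main(String[] args) {
        System.out.println(demo().get(9)); // val9concat
    }
}
```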
Date API
Java 8 contains a brand new date and time API under the package
java.time . The new Date API is comparable with the Joda-Time library,
however it's not the same. The following examples cover the most
important parts of this new API.
Clock
Clock provides access to the current date and time. Clocks are aware of
a timezone and may be used instead of System.currentTimeMillis() to
retrieve the current time in milliseconds since Unix EPOCH. Such an
instantaneous point on the time-line is also represented by the class
Instant . Instants can be used to create legacy java.util.Date objects.
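The paragraph above maps to a few standard java.time calls; a sketch:

```java
import java.time.Clock;
import java.time.Instant;
import java.util.Date;

class ClockDemo {
    public static void main(String[] args) {
        Clock clock = Clock.systemDefaultZone();
        long millis = clock.millis();         // like System.currentTimeMillis()

        Instant instant = clock.instant();    // instantaneous point on the time-line
        Date legacyDate = Date.from(instant); // bridge to legacy java.util.Date

        System.out.println(millis > 0);
        System.out.println(legacyDate);
    }
}
```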
Timezones
Timezones are represented by a ZoneId . They can easily be accessed
via static factory methods. Timezones define the offsets which are
important to convert between instants and local dates and times.
System.out.println(ZoneId.getAvailableZoneIds());
// prints all available timezone ids
ZoneId zone1 = ZoneId.of("Europe/Berlin");
ZoneId zone2 = ZoneId.of("Brazil/East");
System.out.println(zone1.getRules());
System.out.println(zone2.getRules());
// ZoneRules[currentStandardOffset=+01:00]
// ZoneRules[currentStandardOffset=-03:00]
LocalTime
LocalTime represents a time without a timezone, e.g. 10pm or 17:30:15.
The following example creates two local times for the timezones defined
above. Then we compare both times and calculate the difference in hours
and minutes between both times.
LocalTime now1 = LocalTime.now(zone1);
LocalTime now2 = LocalTime.now(zone2);

System.out.println(now1.isBefore(now2)); // false

long hoursBetween = ChronoUnit.HOURS.between(now1, now2);
long minutesBetween = ChronoUnit.MINUTES.between(now1, now2);

System.out.println(hoursBetween);   // -3
System.out.println(minutesBetween); // -239
LocalTime also comes with various factory methods to simplify the
creation of new instances, including parsing of time strings:

DateTimeFormatter germanFormatter =
    DateTimeFormatter
        .ofLocalizedTime(FormatStyle.SHORT)
        .withLocale(Locale.GERMAN);

LocalTime leetTime = LocalTime.parse("13:37", germanFormatter);
System.out.println(leetTime); // 13:37
LocalDate

LocalDate represents a distinct date, e.g. 2014-03-11. It's immutable and
works analogously to LocalTime. Parsing a LocalDate from a string is just
as simple as parsing a LocalTime:

DateTimeFormatter germanFormatter =
    DateTimeFormatter
        .ofLocalizedDate(FormatStyle.MEDIUM)
        .withLocale(Locale.GERMAN);

LocalDate xmas = LocalDate.parse("24.12.2014", germanFormatter);
System.out.println(xmas); // 2014-12-24
LocalDateTime
LocalDateTime represents a date-time. It combines date and time as
seen in the above sections into one instance. LocalDateTime is
immutable and works similar to LocalTime and LocalDate. We can utilize
methods for retrieving certain fields from a date-time:
LocalDateTime sylvester = LocalDateTime.of(2014, Month.DECEMBER,
31, 23, 59, 59);
DateTimeFormatter formatter =
DateTimeFormatter
.ofPattern("MMM dd, yyyy - HH:mm");
Annotations

Annotations in Java 8 are repeatable. Let's dive directly into an example
to figure that out. First, we define a wrapper annotation which holds an
array of the actual annotations:

@interface Hints {
    Hint[] value();
}
@Repeatable(Hints.class)
@interface Hint {
String value();
}
Variant 1: Using the container annotation:

@Hints({@Hint("hint1"), @Hint("hint2")})
class Person {}
Variant 2: Using repeatable annotations:

@Hint("hint1")
@Hint("hint2")
class Person {}

Using variant 2, the Java compiler implicitly sets up the @Hints
annotation under the hood.
Furthermore, annotations in Java 8 can be used on two new targets:

@Target({ElementType.TYPE_PARAMETER, ElementType.TYPE_USE})
@interface MyAnnotation {}
Where to go from here?
My programming guide to Java 8 ends here. If you want to learn more
about all the new classes and features of the JDK 8 API, check out my
JDK8 API Explorer. It helps you figuring out all the new classes and
hidden gems of JDK 8, like Arrays.parallelSort , StampedLock and
CompletableFuture - just to name a few.
This guide teaches you how to work with Java 8 streams and how to use
the different kinds of available stream operations. You'll learn about the
processing order and how the ordering of stream operations affects
runtime performance. The more powerful stream operations reduce ,
collect and flatMap are covered in detail. The tutorial ends with an in-
depth look at parallel streams.
List<String> myList =
Arrays.asList("a1", "a2", "b1", "c2", "c1");
myList
.stream()
.filter(s -> s.startsWith("c"))
.map(String::toUpperCase)
.sorted()
.forEach(System.out::println);
// C1
// C2
Besides regular object streams, Java 8 ships with special kinds of
streams for working with the primitive data types int, long and double.
For example, IntStream can replace the regular for-loop:

IntStream.range(1, 4)
    .forEach(System.out::println);
// 1
// 2
// 3
All those primitive streams work just like regular object streams with the
following differences: primitive streams use specialized lambda
expressions, e.g. IntFunction instead of Function or IntPredicate
instead of Predicate. And primitive streams support the additional
terminal aggregate operations sum() and average().
Sometimes it's useful to transform a regular object stream into a primitive
stream or vice versa. For that purpose, primitive streams can be
transformed to object streams via mapToObj():

IntStream.range(1, 4)
    .mapToObj(i -> "a" + i)
    .forEach(System.out::println);
// a1
// a2
// a3
Processing Order
Now that we've learned how to create and work with different kinds of
streams, let's dive deeper into how stream operations are processed
under the hood.
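Intermediate operations are lazy. The snippet discussed next presumably resembled this sketch: a filter with a debug print but no terminal operation (the counter is added here to make the laziness observable):

```java
import java.util.stream.Stream;

class LazyDemo {
    static int filterCalls = 0;

    public static void main(String[] args) {
        Stream.of("d2", "a2", "b1", "b3", "c")
            .filter(s -> {
                filterCalls++;
                System.out.println("filter: " + s);
                return true;
            });
        // no terminal operation -> the filter lambda is never executed
        System.out.println("filter calls: " + filterCalls); // filter calls: 0
    }
}
```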
When executing this code snippet, nothing is printed to the console. That
is because intermediate operations will only be executed when a terminal
operation is present.
Executing this code snippet results in the desired output on the console:
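A reconstruction of that snippet, extending the chain with a forEach terminal operation (the trace list is added here so the order of calls can be inspected):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class EagerDemo {
    static List<String> trace = new ArrayList<>();

    public static void main(String[] args) {
        Stream.of("d2", "a2", "b1", "b3", "c")
            .filter(s -> {
                trace.add("filter: " + s);
                return true;
            })
            .forEach(s -> trace.add("forEach: " + s));

        // each element moves down the whole chain before the next one starts
        trace.forEach(System.out::println);
    }
}
```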
filter: d2
forEach: d2
filter: a2
forEach: a2
filter: b1
forEach: b1
filter: b3
forEach: b3
filter: c
forEach: c
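The next trace matches a chain where map is followed by a short-circuiting anyMatch, which stops processing as soon as the predicate matches. A reconstruction, assuming the same sample strings:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class AnyMatchDemo {
    static List<String> trace = new ArrayList<>();

    static boolean run() {
        return Stream.of("d2", "a2", "b1", "b3", "c")
            .map(s -> {
                trace.add("map: " + s);
                return s.toUpperCase();
            })
            .anyMatch(s -> {
                trace.add("anyMatch: " + s);
                return s.startsWith("A"); // matches at the second element, "A2"
            });
    }

    public static void main(String[] args) {
        boolean result = run();
        trace.forEach(System.out::println);
        System.out.println(result); // true
    }
}
```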
// map: d2
// anyMatch: D2
// map: a2
// anyMatch: A2
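The trace that follows corresponds to a chain of map, filter and forEach. A reconstruction under the same assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class MapFilterDemo {
    static List<String> trace = new ArrayList<>();

    public static void main(String[] args) {
        Stream.of("d2", "a2", "b1", "b3", "c")
            .map(s -> {
                trace.add("map: " + s);
                return s.toUpperCase();
            })
            .filter(s -> {
                trace.add("filter: " + s);
                return s.startsWith("A");
            })
            .forEach(s -> trace.add("forEach: " + s));

        trace.forEach(System.out::println);
    }
}
```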
// map: d2
// filter: D2
// map: a2
// filter: A2
// forEach: A2
// map: b1
// filter: B1
// map: b3
// filter: B3
// map: c
// filter: C
As you might have guessed, both map and filter are called five times,
once for every string in the underlying collection, whereas forEach is
only called once.
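We can greatly reduce the number of executions by reordering the chain so that filter comes first. A reconstruction matching the trace below:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class ReorderDemo {
    static List<String> trace = new ArrayList<>();

    public static void main(String[] args) {
        Stream.of("d2", "a2", "b1", "b3", "c")
            .filter(s -> {
                trace.add("filter: " + s);
                return s.startsWith("a"); // only "a2" passes
            })
            .map(s -> {
                trace.add("map: " + s);
                return s.toUpperCase();
            })
            .forEach(s -> trace.add("forEach: " + s));

        trace.forEach(System.out::println);
    }
}
```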
// filter: d2
// filter: a2
// map: a2
// forEach: A2
// filter: b1
// filter: b3
// filter: c
Now, map is only called once so the operation pipeline performs much
faster for larger numbers of input elements. Keep that in mind when
composing complex method chains.
Sorting is a special kind of intermediate operation. It's a so-called stateful
operation, since sorting a collection of elements requires maintaining
state during the operation. First, the sort operation is executed on the
entire input collection. In other words, sorted is executed horizontally.
So in this case sorted is called eight times for multiple combinations of
elements in the input collection.
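Moving filter in front of sorted removes that cost entirely: after filtering, only one element remains, so the comparator is never invoked. A reconstruction matching the trace below:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

class SortedLastDemo {
    static List<String> trace = new ArrayList<>();

    public static void main(String[] args) {
        Stream.of("d2", "a2", "b1", "b3", "c")
            .filter(s -> {
                trace.add("filter: " + s);
                return s.startsWith("a"); // leaves a single element
            })
            .sorted((s1, s2) -> {
                trace.add("sort: " + s1 + " <> " + s2);
                return s1.compareTo(s2);
            })
            .map(s -> {
                trace.add("map: " + s);
                return s.toUpperCase();
            })
            .forEach(s -> trace.add("forEach: " + s));

        trace.forEach(System.out::println);
        // sort is never called: one element needs no comparisons
    }
}
```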
// filter: d2
// filter: a2
// filter: b1
// filter: b3
// filter: c
// map: a2
// forEach: A2
Reusing Streams
Java 8 streams cannot be reused. As soon as you call any terminal
operation the stream is closed:
Stream<String> stream =
Stream.of("d2", "a2", "b1", "b3", "c")
.filter(s -> s.startsWith("a"));
stream.anyMatch(s -> true);  // ok
stream.noneMatch(s -> true); // exception

java.lang.IllegalStateException: stream has already been operated upon or closed
    at java.util.stream.ReferencePipeline.noneMatch(ReferencePipeline.java:459)
    at com.winterbe.java8.Streams5.test7(Streams5.java:38)
    at com.winterbe.java8.Streams5.main(Streams5.java:28)
To overcome this limitation we have to create a new stream chain for
every terminal operation we want to execute, e.g. we could create a
stream supplier to construct a new stream with all intermediate
operations already set up:
Supplier<Stream<String>> streamSupplier =
() -> Stream.of("d2", "a2", "b1", "b3", "c")
.filter(s -> s.startsWith("a"));
Each call to get() constructs a new stream on which we are safe to call
the desired terminal operation:

streamSupplier.get().anyMatch(s -> true);  // ok
streamSupplier.get().noneMatch(s -> true); // ok
Advanced Operations
Streams support plenty of different operations. We've already learned
about the most important operations like filter or map . I leave it up to
you to discover all other available operations (see Stream Javadoc).
Instead let's dive deeper into the more complex operations collect ,
flatMap and reduce .
Most code samples from this section use the following list of persons for
demonstration purposes:
class Person {
    String name;
    int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public String toString() {
        return name;
    }
}
List<Person> persons =
Arrays.asList(
new Person("Max", 18),
new Person("Peter", 23),
new Person("Pamela", 23),
new Person("David", 12));
Collect

Collect is an extremely useful terminal operation to transform the
elements of the stream into a different kind of result, e.g. a List, Set or
Map. Java 8 supports various built-in collectors via the Collectors class,
so for the most common operations you don't have to implement a
collector yourself. Let's start with a very common use case:
List<Person> filtered =
persons
.stream()
.filter(p -> p.name.startsWith("P"))
.collect(Collectors.toList());
As you can see it's very simple to construct a list from the elements of a
stream. Need a set instead of list - just use Collectors.toSet() .
Next, the following example groups all persons by age:

Map<Integer, List<Person>> personsByAge = persons
    .stream()
    .collect(Collectors.groupingBy(p -> p.age));

personsByAge
    .forEach((age, p) -> System.out.format("age %s: %s\n", age, p));

// age 18: [Max]
// age 23: [Peter, Pamela]
// age 12: [David]
Collectors can also create aggregations on the elements of the stream,
e.g. determining the average age of all persons:

Double averageAge = persons
    .stream()
    .collect(Collectors.averagingInt(p -> p.age));

System.out.println(averageAge); // 19.0
If you're interested in more comprehensive statistics, the summarizing
collectors return a special built-in summary statistics object:

IntSummaryStatistics ageSummary =
persons
.stream()
.collect(Collectors.summarizingInt(p -> p.age));
System.out.println(ageSummary);
// IntSummaryStatistics{count=4, sum=76, min=12, average=19.000000, max=23}
The next example joins all persons into a single string:

String phrase = persons
    .stream()
    .filter(p -> p.age >= 18)
    .map(p -> p.name)
    .collect(Collectors.joining(" and ", "In Germany ", " are of legal age."));

System.out.println(phrase);
// In Germany Max and Peter and Pamela are of legal age.
In order to transform the stream elements into a map, we have to specify
how both the keys and the values should be mapped. Keep in mind that
the mapped keys must be unique, otherwise an IllegalStateException is
thrown. You can optionally pass a merge function as an additional
parameter to bypass that exception:

Map<Integer, String> map = persons
    .stream()
    .collect(Collectors.toMap(
        p -> p.age,
        p -> p.name,
        (name1, name2) -> name1 + ";" + name2));

System.out.println(map);
// {18=Max, 23=Peter;Pamela, 12=David}
Now that we know some of the most powerful built-in collectors, let's try
to build our own special collector. We want to transform all persons of the
stream into a single string consisting of all names in upper-case letters
separated by the | pipe character. In order to achieve this we create a
new collector via Collector.of() . We have to pass the four ingredients
of a collector: a supplier, an accumulator, a combiner and a finisher.
Collector<Person, StringJoiner, String> personNameCollector =
Collector.of(
() -> new StringJoiner(" | "), // supplier
(j, p) -> j.add(p.name.toUpperCase()), // accumulator
(j1, j2) -> j1.merge(j2), // combiner
StringJoiner::toString); // finisher
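Applying the custom collector to the persons list, a self-contained sketch (Person reduced to the name field the collector uses):

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;
import java.util.stream.Collector;

class CustomCollectorDemo {
    static class Person {
        String name;
        Person(String name) { this.name = name; }
    }

    static String joinNames(List<Person> persons) {
        Collector<Person, StringJoiner, String> personNameCollector =
            Collector.of(
                () -> new StringJoiner(" | "),          // supplier
                (j, p) -> j.add(p.name.toUpperCase()),  // accumulator
                (j1, j2) -> j1.merge(j2),               // combiner
                StringJoiner::toString);                // finisher
        return persons.stream().collect(personNameCollector);
    }

    public static void main(String[] args) {
        List<Person> persons = Arrays.asList(
            new Person("Max"), new Person("Peter"),
            new Person("Pamela"), new Person("David"));
        System.out.println(joinNames(persons)); // MAX | PETER | PAMELA | DAVID
    }
}
```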
FlatMap

Map is somewhat limited because every object can only be mapped to
exactly one other object. But what if we want to transform one object into
multiple others or none at all? This is where flatMap comes to the
rescue. FlatMap transforms each element of the stream into a stream of
other objects, so each object will be transformed into zero, one or
multiple other objects backed by streams. Before we see flatMap in
action we need an appropriate type hierarchy:
class Foo {
String name;
List<Bar> bars = new ArrayList<>();
Foo(String name) {
this.name = name;
}
}
class Bar {
String name;
Bar(String name) {
this.name = name;
}
}
// create foos
List<Foo> foos = new ArrayList<>();
IntStream
    .range(1, 4)
    .forEach(i -> foos.add(new Foo("Foo" + i)));
// create bars
foos.forEach(f ->
IntStream
.range(1, 4)
.forEach(i -> f.bars.add(new Bar("Bar" + i + " <- " +
f.name))));
Now we have a list of three foos each consisting of three bars. FlatMap
accepts a function which has to return a stream of objects. So, in order to
resolve the bar objects of each foo, we just pass the appropriate function:

foos.stream()
    .flatMap(f -> f.bars.stream())
    .forEach(b -> System.out.println(b.name));
As you can see, we've successfully transformed the stream of three foo
objects into a stream of nine bar objects.
Finally, the above code example can be simplified into a single pipeline of
stream operations:
IntStream.range(1, 4)
    .mapToObj(i -> new Foo("Foo" + i))
    .peek(f -> IntStream.range(1, 4)
        .mapToObj(i -> new Bar("Bar" + i + " <- " + f.name))
        .forEach(f.bars::add))
    .flatMap(f -> f.bars.stream())
    .forEach(b -> System.out.println(b.name));
FlatMap is also available for the Optional class introduced in Java 8.
Optional's flatMap returns an Optional of another type, which can be
utilized to prevent nasty null checks. Consider a hierarchy of nested
objects:

class Outer {
    Nested nested;
}

class Nested {
    Inner inner;
}

class Inner {
    String foo;
}
In order to resolve the inner string foo of an outer instance, you have to
add multiple null checks to prevent possible NullPointerExceptions:

Outer outer = new Outer();
if (outer != null && outer.nested != null && outer.nested.inner != null) {
    System.out.println(outer.nested.inner.foo);
}

The same behavior can be obtained by utilizing Optional's flatMap
operation:
Optional.of(new Outer())
.flatMap(o -> Optional.ofNullable(o.nested))
.flatMap(n -> Optional.ofNullable(n.inner))
.flatMap(i -> Optional.ofNullable(i.foo))
.ifPresent(System.out::println);
Reduce
The reduction operation combines all elements of the stream into a single
result. Java 8 supports three different kinds of reduce methods. The first
one reduces a stream of elements to exactly one element of the stream.
Let's see how we can use this method to determine the oldest person:
persons
.stream()
.reduce((p1, p2) -> p1.age > p2.age ? p1 : p2)
.ifPresent(System.out::println); // Pamela
The second reduce method accepts both an identity value and a
BinaryOperator accumulator. This method can be utilized to construct a
new Person with the aggregated names and ages of all other persons in
the stream:

Person result =
persons
.stream()
.reduce(new Person("", 0), (p1, p2) -> {
p1.age += p2.age;
p1.name += p2.name;
return p1;
});
The third reduce method accepts three parameters: an identity value, a
BiFunction accumulator and a combiner function of type BinaryOperator.
Since the identity value's type is not restricted to the Person type, we can
utilize this reduction to determine the sum of ages of all persons:

Integer ageSum = persons
    .stream()
    .reduce(0, (sum, p) -> sum += p.age, (sum1, sum2) -> sum1 + sum2);

System.out.println(ageSum); // 76
As you can see the result is 76, but what's happening exactly under the
hood? Let's extend the above code by some debug output:
As you can see the accumulator function does all the work. It first gets
called with the initial identity value 0 and the first person, Max. In the next
three steps, sum continually increases by the age of the previous step's
person, up to a total age of 76.
Wait, what? The combiner never gets called? Executing the same stream
in parallel will reveal the secret:
Parallel Streams
Streams can be executed in parallel to increase runtime performance on
large amounts of input elements. Parallel streams use a common
ForkJoinPool available via the static ForkJoinPool.commonPool()
method. The size of the underlying thread-pool depends on the number
of available physical CPU cores, but it can be adjusted by setting the
following JVM parameter:

-Djava.util.concurrent.ForkJoinPool.common.parallelism=5
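The thread-name trace below presumably came from a snippet like this (sample strings recovered from the output):

```java
import java.util.Arrays;

class ParallelTraceDemo {
    public static void main(String[] args) {
        Arrays.asList("a1", "a2", "b1", "c2", "c1")
            .parallelStream()
            .filter(s -> {
                System.out.format("filter: %s [%s]\n", s, Thread.currentThread().getName());
                return true;
            })
            .map(s -> {
                System.out.format("map: %s [%s]\n", s, Thread.currentThread().getName());
                return s.toUpperCase();
            })
            .forEach(s -> System.out.format("forEach: %s [%s]\n",
                s, Thread.currentThread().getName()));
    }
}
```

The interleaving of thread names differs between runs, but the transformed elements are always the same.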
filter: b1 [main]
filter: a2 [ForkJoinPool.commonPool-worker-1]
map: a2 [ForkJoinPool.commonPool-worker-1]
filter: c2 [ForkJoinPool.commonPool-worker-3]
map: c2 [ForkJoinPool.commonPool-worker-3]
filter: c1 [ForkJoinPool.commonPool-worker-2]
map: c1 [ForkJoinPool.commonPool-worker-2]
forEach: C2 [ForkJoinPool.commonPool-worker-3]
forEach: A2 [ForkJoinPool.commonPool-worker-1]
map: b1 [main]
forEach: B1 [main]
filter: a1 [ForkJoinPool.commonPool-worker-3]
map: a1 [ForkJoinPool.commonPool-worker-3]
forEach: A1 [ForkJoinPool.commonPool-worker-3]
forEach: C1 [ForkJoinPool.commonPool-worker-2]
As you can see, the parallel stream utilizes all available threads from the
common ForkJoinPool for executing the stream operations. The output
may differ in consecutive runs because the assignment of work to
particular threads is non-deterministic. Let's extend the example by an
additional stream operation, sorted:
filter: c2 [ForkJoinPool.commonPool-worker-3]
filter: c1 [ForkJoinPool.commonPool-worker-2]
map: c1 [ForkJoinPool.commonPool-worker-2]
filter: a2 [ForkJoinPool.commonPool-worker-1]
map: a2 [ForkJoinPool.commonPool-worker-1]
filter: b1 [main]
map: b1 [main]
filter: a1 [ForkJoinPool.commonPool-worker-2]
map: a1 [ForkJoinPool.commonPool-worker-2]
map: c2 [ForkJoinPool.commonPool-worker-3]
sort: A2 <> A1 [main]
sort: B1 <> A2 [main]
sort: C2 <> B1 [main]
sort: C1 <> C2 [main]
sort: C1 <> B1 [main]
sort: C1 <> C2 [main]
forEach: A1 [ForkJoinPool.commonPool-worker-1]
forEach: C2 [ForkJoinPool.commonPool-worker-3]
forEach: B1 [main]
forEach: A2 [ForkJoinPool.commonPool-worker-2]
forEach: C1 [ForkJoinPool.commonPool-worker-1]
At first glance it seems that sort is executed sequentially on the main
thread only. Under the hood, sort on a parallel stream uses the new Java
8 method Arrays.parallelSort(), which decides based on the length of
the data whether sorting will be performed sequentially or in parallel.
Coming back to the reduce example from the last section: we already
found out that the combiner function is only called in parallel, not in
sequential streams. Let's see which threads are actually involved:
persons
.parallelStream()
.reduce(0,
(sum, p) -> {
System.out.format("accumulator: sum=%s; person=%s [%s]\n",
sum, p, Thread.currentThread().getName());
return sum += p.age;
},
(sum1, sum2) -> {
System.out.format("combiner: sum1=%s; sum2=%s [%s]\n",
sum1, sum2, Thread.currentThread().getName());
return sum1 + sum2;
});
The console output reveals that both the accumulator and the combiner
functions are executed in parallel on all available threads:
Furthermore, we've learned that all parallel stream operations share the
same JVM-wide common ForkJoinPool . So you probably want to avoid
implementing slow, blocking stream operations, since those could
potentially slow down other parts of your application which rely heavily
on parallel streams.
That's it
My programming guide to Java 8 streams ends here. If you're interested
in learning more about Java 8 streams, I recommend the Stream
Javadoc package documentation. If you want to learn more about the
underlying mechanisms, you probably want to read Martin Fowler's article
about Collection Pipelines.
Happy coding!
Java 8 Nashorn Tutorial
April 05, 2014
Learn all about the Nashorn Javascript Engine with easily understood
code examples. The Nashorn Javascript Engine is part of Java SE 8 and
competes with other standalone engines like Google V8 (the engine that
powers Google Chrome and Node.js). Nashorn extends Java's
capabilities by running dynamic javascript code natively on the JVM.
In the next ~15 minutes you'll learn how to evaluate javascript on the JVM
dynamically at runtime. The most recent Nashorn language features
are demonstrated with small code examples. You'll learn how to call
javascript functions from java code and vice versa. At the end you'll be
ready to integrate dynamic scripts into your daily java business.
UPDATE - I'm currently working on a JavaScript implementation of the
Java 8 Streams API for the browser. If I've drawn your interest check out
Stream.js on GitHub. Your Feedback is highly appreciated.
Using Nashorn
The Nashorn javascript engine can either be used programmatically from
java programs or by utilizing the command line tool jjs , which is
located in $JAVA_HOME/bin . If you plan to work with jjs you might want
to put a symbolic link for simple access:
$ cd /usr/bin
$ ln -s $JAVA_HOME/bin/jjs jjs
$ jjs
jjs> print('Hello World');
This tutorial focuses on using Nashorn from java code, so let's skip jjs
for now. In order to evaluate javascript code from java, you first create a
nashorn script engine by utilizing the javax.script package already
known from Rhino (Java's legacy js engine from Mozilla).
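The engine-creation snippet presumably resembled the following sketch. Note that Nashorn was removed in JDK 15, so the lookup is guarded here; on JDK 8-14 the engine is found and the script runs:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

class NashornHello {
    public static void main(String[] args) throws ScriptException {
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("nashorn");
        if (engine == null) {
            // Nashorn is absent on JDK 15+; guard added for portability
            System.out.println("Nashorn engine not available");
            return;
        }
        engine.eval("print('Hello World!');");
    }
}
```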
The following javascript functions will later be called from the java side:
Executing the code results in three lines written to the console. Calling
the function print pipes the result to System.out , so we see the
javascript message first.
Now let's call the second function by passing arbitrary java objects:
invocable.invokeFunction("fun2", LocalDateTime.now());
// [object java.time.LocalDateTime]
Java classes can be referenced from javascript via the Java.type API
extension. It's similar to importing classes in java code. As soon as the
java type is defined we naturally call the static method fun1() and print
the result to sout . Since the method is static, we don't have to create an
instance first.
How does Nashorn handle type conversion when calling java methods
with native javascript types? Let's find out with a simple example.
The following java method simply prints the actual class type of the
method parameter:
static void fun2(Object object) {
System.out.println(object.getClass());
}
MyJavaClass.fun2(123);
// class java.lang.Integer
MyJavaClass.fun2(49.99);
// class java.lang.Double
MyJavaClass.fun2(true);
// class java.lang.Boolean
MyJavaClass.fun2("hi there")
// class java.lang.String
MyJavaClass.fun2(new Number(23));
// class jdk.nashorn.internal.objects.NativeNumber
MyJavaClass.fun2(new Date());
// class jdk.nashorn.internal.objects.NativeDate
MyJavaClass.fun2(new RegExp());
// class jdk.nashorn.internal.objects.NativeRegExp
MyJavaClass.fun2({foo: 'bar'});
// class jdk.nashorn.internal.scripts.JO4
> Anything marked internal will likely change out from underneath you.
ScriptObjectMirror
When passing native javascript objects to java you can utilize the class
ScriptObjectMirror , which is actually a java representation of the
underlying javascript object. ScriptObjectMirror implements the Map
interface and resides in the package jdk.nashorn.api . Classes from
this package are intended to be used in client code.
MyJavaClass.fun3({
foo: 'bar',
bar: 'foo'
});
We can also call member functions on javascript object from java. Let's
first define a javascript type Person with properties firstName and
lastName and method getFullName .
When passing a new person to the java method, we see the desired
result on the console:
Language Extensions
Nashorn defines various language and API extensions to the
ECMAScript standard. Let's head right into the most recent features:
Typed Arrays
Native javascript arrays are untyped. Nashorn enables you to use typed
java arrays in javascript:
try {
array[5] = 23;
} catch (e) {
print(e.message); // Array index out of range: 5
}
array[0] = "17";
print(array[0]); // 17
array[0] = "17.3";
print(array[0]); // 17
The int[] array behaves like a real java int array. But additionally
Nashorn performs implicit type conversions under the hood when we're
trying to add non-integer values to the array. Strings will be auto-
converted to int which is quite handy.
Instead of messing around with arrays we can use any java collection.
First define the java type via Java.type , then create new instances on
demand.
list2
.stream()
.filter(function(el) {
return el.startsWith("aaa");
})
.sorted()
.forEach(function(el) {
print(el);
});
// aaa1, aaa2
Extending classes
new Thread(function() {
print('printed from another thread');
}).start();
Parameter overloading
Methods and functions can either be called with the point notation or with
the square braces notation.
Instead of explicitly working with getters and setters you can just use
simple property names both for getting or setting values from a java
bean.
Function Literals
For simple one line functions we can skip the curly braces:
function sqr(x) x * x;
print(sqr(3)); // 9
Binding properties
var o1 = {};
var o2 = { foo: 'bar'};
Object.bindProperties(o1, o2);
print(o1.foo); // bar
o1.foo = 'BAM';
print(o2.foo); // BAM
Trimming strings
Whereis
Import Scopes
Sometimes it's useful to import many java packages at once. We can use
the class JavaImporter to be used in conjunction with the with
statement. All class files from the imported packages are accessible
within the local scope of the with statement:
Convert arrays
Calling Super
// super run
// on my run
Loading scripts
load('http://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.6.0/underscore-
min.js');
print(odds); // 1, 3, 5
loadWithNewGlobal('script.js');
Command-line scripts
If you're interested in writing command-line (shell) scripts with Java, give
Nake a try. Nake is a simplified Make for Java 8 Nashorn. You define
tasks in a project-specific Nakefile , then run those tasks by typing
nake -- myTask into the command line. Tasks are written in javascript
and run in Nashorn's scripting mode, so you can utilize the full power of
your terminal as well as the JDK8 API and any java library.
That's it
I hope this guide was helpful to you and you enjoyed our journey to the
Nashorn Javascript Engine. For further information about Nashorn read
here, here and here. A guide to coding shell scripts with Nashorn can be
found here.
Keep on coding!
Java 8 Concurrency Tutorial:
Threads and Executors
April 07, 2015
The Concurrency API was first introduced with the release of Java 5 and
then progressively enhanced with every new Java release. The majority
of concepts shown in this article also work in older versions of Java.
However my code samples focus on Java 8 and make heavy use of
lambda expressions and other new features. If you're not yet familiar with
lambdas I recommend reading my Java 8 Tutorial first.
Java has supported threads since JDK 1.0. Before starting a new thread you
have to specify the code to be executed by this thread, often called the
task. This is done by implementing Runnable - a functional interface
defining a single void no-args method run() as demonstrated in the
following example:
Runnable task = () -> {
    String threadName = Thread.currentThread().getName();
    System.out.println("Hello " + threadName);
};

task.run();

Thread thread = new Thread(task);
thread.start();

System.out.println("Done!");

Since the runnable is executed both directly on the main thread and on a
new thread, a possible output is:
Hello main
Hello Thread-0
Done!
Due to concurrent execution we cannot predict if the runnable will be
invoked before or after "Done!" is printed. The order is non-deterministic,
so on subsequent runs the output may also look like this:
Hello main
Done!
Hello Thread-0
Threads can be put to sleep for a certain duration. This is quite handy to
simulate long running tasks in the subsequent code samples of this
article:
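The sleep example presumably looked like this sketch (print messages are illustrative):

```java
import java.util.concurrent.TimeUnit;

class SleepDemo {
    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                String name = Thread.currentThread().getName();
                System.out.println("Foo " + name);
                TimeUnit.SECONDS.sleep(1); // simulate a long running computation
                System.out.println("Bar " + name);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        new Thread(task).start();
    }
}
```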
When you run the above code you'll notice the one second delay
between the first and the second print statement. TimeUnit is a useful
enum for working with units of time. Alternatively you can achieve the
same by calling Thread.sleep(1000) .
Working with the Thread class can be very tedious and error-prone. Due
to that reason the Concurrency API has been introduced back in 2004
with the release of Java 5. The API is located in package
java.util.concurrent and contains many useful classes for handling
concurrent programming. Since that time the Concurrency API has been
enhanced with every new Java release and even Java 8 provides new
classes and methods for dealing with concurrency.
Now let's take a deeper look at one of the most important parts of the
Concurrency API - the executor services.
Executors
The Concurrency API introduces the concept of an ExecutorService as a
higher level replacement for working with threads directly. Executors are
capable of running asynchronous tasks and typically manage a pool of
threads, so we don't have to create new threads manually. All threads of
the internal pool will be reused under the hood for recurring tasks, so we
can run as many concurrent tasks as we want throughout the life-cycle of
our application with a single executor service.
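A first executor example presumably resembled this sketch. The original likely omitted the shutdown (that's the point made next about the process never stopping); it is added here only so the sketch terminates on its own:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ExecutorDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            String threadName = Thread.currentThread().getName();
            System.out.println("Hello " + threadName); // e.g. "Hello pool-1-thread-1"
        });

        // added so this sketch terminates; see the note on shutdown below
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```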
The result looks similar to the above sample but when running the code
you'll notice an important difference: the java process never stops!
Executors have to be stopped explicitly - otherwise they keep listening for
new tasks.
try {
System.out.println("attempt to shutdown executor");
executor.shutdown();
executor.awaitTermination(5, TimeUnit.SECONDS);
}
catch (InterruptedException e) {
System.err.println("tasks interrupted");
}
finally {
if (!executor.isTerminated()) {
System.err.println("cancel non-finished tasks");
}
executor.shutdownNow();
System.out.println("shutdown finished");
}
The executor shuts down softly by waiting a certain amount of time for
termination of currently running tasks. After a maximum of five seconds
the executor finally shuts down by interrupting all running tasks.
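Executors also support tasks that return a value via the Callable interface and Future handles. The snippet discussed next presumably resembled this sketch (sleep duration and return value recovered from the surrounding text):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(1);

        Callable<Integer> task = () -> {
            TimeUnit.SECONDS.sleep(1); // simulate some work
            return 123;
        };

        Future<Integer> future = executor.submit(task);
        System.out.println("future done? " + future.isDone());

        Integer result = future.get(); // blocks until the callable completes
        System.out.println("future done? " + future.isDone()); // true
        System.out.println("result: " + result);               // 123

        executor.shutdown();
    }
}
```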
Calling the method get() blocks the current thread and waits until the
callable completes before returning the actual result 123 . Now the future
is finally done and we see the following result on the console:
Futures are tightly coupled to the underlying executor service. Keep in
mind that every non-terminated future will throw exceptions if you shut
down the executor:

executor.shutdownNow();
future.get();
You might have noticed that the creation of the executor slightly differs
from the previous example. We use newFixedThreadPool(1) to create an
executor service backed by a thread-pool of size one. This is equivalent
to newSingleThreadExecutor() but we could later increase the pool size
by simply passing a value larger than one.
Timeouts
Any call to future.get() will block and wait until the underlying callable
has been terminated. In the worst case a callable runs forever - thus
making your application unresponsive. You can simply counteract those
scenarios by passing a timeout:
future.get(1, TimeUnit.SECONDS);
InvokeAll

Executors support batch submitting of multiple callables at once via
invokeAll() . This method accepts a collection of callables and returns a
list of futures:

ExecutorService executor = Executors.newWorkStealingPool();

List<Callable<String>> callables = Arrays.asList(
    () -> "task1",
    () -> "task2",
    () -> "task3");

executor.invokeAll(callables)
.stream()
.map(future -> {
try {
return future.get();
}
catch (Exception e) {
throw new IllegalStateException(e);
}
})
.forEach(System.out::println);
InvokeAny
Another way of batch-submitting callables is invokeAny() which works slightly differently from invokeAll(): instead of returning future objects it blocks until the first callable terminates and returns the result of that callable:
// => task2
The above example uses yet another type of executor created via
newWorkStealingPool(). This factory method is part of Java 8 and returns
an executor of type ForkJoinPool which works slightly differently from
normal executors. Instead of using a fixed-size thread pool, ForkJoinPools
are created for a given parallelism size, which by default is the number of
available cores of the host's CPU.
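The invokeAny() code itself is not included in this excerpt. A sketch, under the assumption that each callable sleeps a different amount of time so the fastest one wins:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class InvokeAnyDemo {
    static Callable<String> callable(String result, long sleepSeconds) {
        return () -> {
            TimeUnit.SECONDS.sleep(sleepSeconds);
            return result;
        };
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newWorkStealingPool();

        List<Callable<String>> callables = Arrays.asList(
            callable("task1", 2),
            callable("task2", 1),   // shortest sleep -> first to terminate
            callable("task3", 3));

        // invokeAny() blocks until the fastest callable returns
        // and cancels the remaining ones
        String result = executor.invokeAny(callables);
        System.out.println(result);
    }
}
```

Since task2 sleeps the shortest amount of time, it wins the race and its result is printed.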
Scheduled Executors
We've already learned how to submit and run tasks once on an executor.
In order to periodically run common tasks multiple times, we can utilize
scheduled thread pools.
This code sample schedules a task to run after an initial delay of three
seconds has passed:
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
Runnable task = () -> System.out.println("Scheduling: " + System.nanoTime());
ScheduledFuture<?> future = executor.schedule(task, 3, TimeUnit.SECONDS);
TimeUnit.MILLISECONDS.sleep(1337);
long remainingDelay = future.getDelay(TimeUnit.MILLISECONDS);
System.out.printf("Remaining Delay: %sms%n", remainingDelay);
The method scheduleAtFixedRate() is capable of executing tasks with a fixed time rate, e.g. once every second, as demonstrated in this example:
ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
Runnable task = () -> System.out.println("Scheduling: " + System.nanoTime());
int initialDelay = 0;
int period = 1;
executor.scheduleAtFixedRate(task, initialDelay, period, TimeUnit.SECONDS);
Keep in mind that scheduleAtFixedRate() doesn't take into account the actual duration of the task. If you cannot guarantee that a task finishes within the period, you should consider scheduleWithFixedDelay() instead. This method works just like the counterpart described above. The difference is that the wait time period applies between the end of a task and the start of the next task. For example:
ScheduledExecutorService executor =
Executors.newScheduledThreadPool(1);
executor.scheduleWithFixedDelay(task, 0, 1, TimeUnit.SECONDS);
This example schedules a task with a fixed delay of one second between
the end of an execution and the start of the next execution. The initial
delay is zero and the task's duration is two seconds. So we end up with
an execution interval of 0s, 3s, 6s, 9s and so on. As you can see,
scheduleWithFixedDelay() is handy if you cannot predict the duration of
the scheduled tasks.
I hope you've enjoyed this article. If you have any further questions send
me your feedback in the comments below or via Twitter.
The majority of concepts shown in this article also work in older versions
of Java. However the code samples focus on Java 8 and make heavy
use of lambda expressions and new concurrency features. If you're not
yet familiar with lambdas I recommend reading my Java 8 Tutorial first.
For simplicity the code samples of this tutorial make use of the two helper
methods sleep(seconds) and stop(executor) as defined here.
Synchronized
In the previous tutorial we've learned how to execute code in parallel via
executor services. When writing such multi-threaded code you have to
pay particular attention when accessing shared mutable variables
concurrently from multiple threads. Let's say we want to increment an
integer which is accessible simultaneously from multiple threads.
int count = 0;

void increment() {
    count = count + 1;
}

ExecutorService executor = Executors.newFixedThreadPool(2);

IntStream.range(0, 10000)
    .forEach(i -> executor.submit(this::increment));
stop(executor);

System.out.println(count); // 9965
Instead of a constant result of 10000 the actual count varies with every
execution of the above code. The reason is that we share a mutable
variable among different threads without synchronizing access to it,
which results in a race condition.
Luckily Java supports thread-synchronization since the early days via the
synchronized keyword. We can utilize synchronized to fix the above
race conditions when incrementing the count:
IntStream.range(0, 10000)
    .forEach(i -> executor.submit(this::incrementSync));
stop(executor);

System.out.println(count); // 10000

void incrementSync() {
    synchronized (this) {
        count = count + 1;
    }
}
Internally Java uses a so-called monitor, also known as a monitor lock or
intrinsic lock, to manage synchronization. This monitor is bound
to an object, e.g. when using synchronized methods each method shares
the same monitor of the corresponding object.
Locks
Instead of using implicit locking via the synchronized keyword the
Concurrency API supports various explicit locks specified by the Lock
interface. Locks support various methods for finer-grained lock control
and are thus more expressive than implicit monitors.
ReentrantLock
The class ReentrantLock is a mutual exclusion lock with the same basic
behavior as the implicit monitors accessed via the synchronized
keyword but with extended capabilities. As the name suggests this lock
implements reentrant characteristics just as implicit monitors.
Let's see how the above sample looks like using ReentrantLock :
ReentrantLock lock = new ReentrantLock();
int count = 0;

void increment() {
    lock.lock();
    try {
        count++;
    } finally {
        lock.unlock();
    }
}
A lock is acquired via lock() and released via unlock() . It's important
to wrap your code into a try/finally block to ensure unlocking in case
of exceptions. This method is thread-safe just like the synchronized
counterpart. If another thread has already acquired the lock subsequent
calls to lock() pause the current thread until the lock has been
unlocked. Only one thread can hold the lock at any given time.
Locks support various methods for fine grained control as seen in the
next sample:
executor.submit(() -> {
    lock.lock();
    try {
        sleep(1);
    } finally {
        lock.unlock();
    }
});

executor.submit(() -> {
    System.out.println("Locked: " + lock.isLocked());
    System.out.println("Held by me: " + lock.isHeldByCurrentThread());
    boolean locked = lock.tryLock();
    System.out.println("Lock acquired: " + locked);
});

stop(executor);
While the first task holds the lock for one second the second task obtains
different information about the current state of the lock:
Locked: true
Held by me: false
Lock acquired: false
ReadWriteLock
The interface ReadWriteLock specifies another type of lock maintaining a pair of locks for read and write access. The idea behind read-write locks is that it's usually safe to read mutable variables concurrently as long as nobody is writing to them. So the read-lock can be held simultaneously by multiple threads as long as no thread holds the write-lock. This can improve performance and throughput when reads are more frequent than writes.
executor.submit(() -> {
    lock.writeLock().lock();
    try {
        sleep(1);
        map.put("foo", "bar");
    } finally {
        lock.writeLock().unlock();
    }
});
The above example first acquires a write-lock in order to put a new value
to the map after sleeping for one second. Before this task has finished
two other tasks are being submitted trying to read the entry from the map
and sleep for one second:
executor.submit(readTask);
executor.submit(readTask);
stop(executor);
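The map, the lock and readTask are not defined in this excerpt. A self-contained sketch of what they might look like, assuming a ReentrantReadWriteLock guarding a plain HashMap (a short pause after submitting the write task makes the lock ordering deterministic for this demo):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        Map<String, String> map = new HashMap<>();
        ReadWriteLock lock = new ReentrantReadWriteLock();

        // the write task: holds the write lock for one second
        executor.submit(() -> {
            lock.writeLock().lock();
            try {
                TimeUnit.SECONDS.sleep(1);
                map.put("foo", "bar");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.writeLock().unlock();
            }
        });

        // give the write task a moment to actually acquire the lock
        TimeUnit.MILLISECONDS.sleep(100);

        // a plausible readTask: blocks until the write lock is free,
        // then holds a read lock while reading and sleeping
        Runnable readTask = () -> {
            lock.readLock().lock();
            try {
                System.out.println(map.get("foo"));
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        };

        executor.submit(readTask);
        executor.submit(readTask);

        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Both read tasks print "bar" roughly at the same time, about one second after start, because read locks can be held concurrently.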
When you execute this code sample you'll notice that both read tasks
have to wait the whole second until the write task has finished. After the
write lock has been released both read tasks are executed in parallel and
print the result simultaneously to the console. They don't have to wait for
each other to finish because read-locks can safely be acquired
concurrently as long as no write-lock is held by another thread.
StampedLock
Java 8 ships with a new kind of lock called StampedLock which also
supports read and write locks just like in the example above. In contrast to
ReadWriteLock the locking methods of a StampedLock return a stamp
represented by a long value. You can use these stamps to either
release a lock or to check if the lock is still valid. Additionally stamped
locks support another lock mode called optimistic locking.
executor.submit(() -> {
    long stamp = lock.writeLock();
    try {
        sleep(1);
        map.put("foo", "bar");
    } finally {
        lock.unlockWrite(stamp);
    }
});

executor.submit(readTask);
executor.submit(readTask);

stop(executor);
Obtaining a read or write lock via readLock() or writeLock() returns a
stamp which is later used for unlocking within the finally block. Keep in
mind that stamped locks don't implement reentrant characteristics. Each
call to lock returns a new stamp and blocks if no lock is available even if
the same thread already holds a lock. So you have to pay particular
attention not to run into deadlocks.
Just like in the previous ReadWriteLock example both read tasks have to
wait until the write lock has been released. Then both read tasks print to
the console simultaneously because multiple reads don't block each
other as long as no write-lock is held. The next example demonstrates an
optimistic read lock:
executor.submit(() -> {
    long stamp = lock.tryOptimisticRead();
    try {
        System.out.println("Optimistic Lock Valid: " + lock.validate(stamp));
        sleep(1);
        System.out.println("Optimistic Lock Valid: " + lock.validate(stamp));
        sleep(2);
        System.out.println("Optimistic Lock Valid: " + lock.validate(stamp));
    } finally {
        lock.unlock(stamp);
    }
});
executor.submit(() -> {
    long stamp = lock.writeLock();
    try {
        System.out.println("Write Lock acquired");
        sleep(2);
    } finally {
        lock.unlock(stamp);
        System.out.println("Write done");
    }
});

stop(executor);
The optimistic lock is valid right after acquiring it. In contrast to
normal read locks an optimistic lock doesn't prevent other threads from
obtaining a write lock instantaneously. After sending the first thread to sleep
for one second the second thread obtains a write lock without waiting for
the optimistic read lock to be released. From this point on the optimistic read
lock is no longer valid. Even when the write lock is released the optimistic
read lock stays invalid.
So when working with optimistic locks you have to validate the lock every
time after accessing any shared mutable variable to make sure the read
was still valid.
Sometimes it's useful to convert a read lock into a write lock without
unlocking and locking again. StampedLock provides the method
tryConvertToWriteLock() for that purpose as seen in the next sample:
executor.submit(() -> {
    long stamp = lock.readLock();
    try {
        if (count == 0) {
            stamp = lock.tryConvertToWriteLock(stamp);
            if (stamp == 0L) {
                System.out.println("Could not convert to write lock");
                stamp = lock.writeLock();
            }
            count = 23;
        }
        System.out.println(count);
    } finally {
        lock.unlock(stamp);
    }
});

stop(executor);
The task first obtains a read lock and prints the current value of field
count to the console. But if the current value is zero we want to assign a
new value of 23 . We first have to convert the read lock into a write lock
to not break potential concurrent access by other threads. Calling
tryConvertToWriteLock() doesn't block but may return a zero stamp
indicating that no write lock is currently available. In that case we call
writeLock() to block the current thread until a write lock is available.
Semaphores
In addition to locks the Concurrency API also supports counting
semaphores. Whereas locks usually grant exclusive access to variables
or resources, a semaphore is capable of maintaining whole sets of
permits. This is useful in different scenarios where you have to limit the
amount of concurrent access to certain parts of your application.
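The semaphore and longRunningTask used below are not shown in this excerpt. A plausible sketch, assuming a semaphore with five permits, ten worker threads and illustrative durations chosen so that exactly five tasks acquire a permit (matching the output that follows):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        Semaphore semaphore = new Semaphore(5);

        Runnable longRunningTask = () -> {
            boolean permit = false;
            try {
                // give up if no permit becomes available within 500ms;
                // permit holders keep their permit for two seconds
                permit = semaphore.tryAcquire(500, TimeUnit.MILLISECONDS);
                if (permit) {
                    System.out.println("Semaphore acquired");
                    TimeUnit.SECONDS.sleep(2);
                } else {
                    System.out.println("Could not acquire semaphore");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (permit) {
                    semaphore.release();
                }
            }
        };

        IntStream.range(0, 10)
            .forEach(i -> executor.submit(longRunningTask));

        executor.shutdown();
        executor.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```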
IntStream.range(0, 10)
.forEach(i -> executor.submit(longRunningTask));
stop(executor);
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
This was the second part of my series of concurrency tutorials. More
parts will be released in the near future, so stay tuned. As usual you'll find
all code samples from this article on GitHub, so feel free to fork the repo
and try them on your own.
I hope you've enjoyed this article. If you have any further questions send
me your feedback in the comments below. You should also follow me on
Twitter for more dev-related stuff!
For simplicity the code samples of this tutorial make use of the two helper
methods sleep(seconds) and stop(executor) as defined here.
The package java.util.concurrent.atomic contains many useful classes to
perform atomic operations. An operation is atomic when you can safely
perform it in parallel on multiple threads without using the
synchronized keyword or locks as shown in my previous tutorial.
Internally, the atomic classes make heavy use of compare-and-swap
(CAS), an atomic instruction directly supported by most modern CPUs.
Those instructions usually are much faster than synchronizing via locks.
So my advice is to prefer atomic classes over locks in case you just have
to change a single mutable variable concurrently.
Now let's pick one of the atomic classes for a few examples:
AtomicInteger
AtomicInteger atomicInt = new AtomicInteger(0);

IntStream.range(0, 1000)
    .forEach(i -> executor.submit(atomicInt::incrementAndGet));
stop(executor);

System.out.println(atomicInt.get()); // => 1000
IntStream.range(0, 1000)
    .forEach(i -> {
        Runnable task = () -> atomicInt.updateAndGet(n -> n + 2);
        executor.submit(task);
    });
stop(executor);

IntStream.range(0, 1000)
    .forEach(i -> {
        Runnable task = () -> atomicInt.accumulateAndGet(i, (n, m) -> n + m);
        executor.submit(task);
    });
stop(executor);
LongAdder
The class LongAdder can be used as an alternative to AtomicLong to
consecutively add values to a number.
stop(executor);
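The adder sample is truncated in this excerpt (only the stop() call survived). A minimal sketch, assuming we increment the adder from many tasks and read the sum afterwards:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder adder = new LongAdder();
        ExecutorService executor = Executors.newFixedThreadPool(2);

        IntStream.range(0, 1000)
            .forEach(i -> executor.submit(adder::increment));

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);

        // sumThenReset() returns the total and resets the adder to zero
        System.out.println(adder.sumThenReset());
    }
}
```

Unlike AtomicLong, the adder maintains a set of internal counters to reduce contention, which is why reads like sum() are comparatively expensive.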
This class is usually preferable over atomic numbers when updates from
multiple threads are more common than reads. This is often the case
when capturing statistical data, e.g. you want to count the number of
requests served on a web server. The drawback of LongAdder is higher
memory consumption because a set of variables is held in-memory.
LongAccumulator
LongAccumulator is a more generalized version of LongAdder. Instead of
performing simple add operations the class LongAccumulator is built
around a lambda expression of type LongBinaryOperator, as
demonstrated in this code sample:
LongBinaryOperator op = (x, y) -> 2 * x + y;
LongAccumulator accumulator = new LongAccumulator(op, 1L);

IntStream.range(0, 10)
    .forEach(i -> executor.submit(() -> accumulator.accumulate(i)));
stop(executor);
ConcurrentMap
The interface ConcurrentMap extends the map interface and defines one
of the most useful concurrent collection types. Java 8 introduces
functional programming by adding new methods to this interface.
The method putIfAbsent() puts a new value into the map only if no
value exists for the given key. At least for the ConcurrentHashMap
implementation this operation is atomic, so there's no need for additional
synchronization when accessing the map from different threads.
The method getOrDefault() returns the value for the given key. In case
no entry exists for this key the passed default value is returned:
Finally, the method merge() can be utilized to unify a new value with an
existing value in the map. Merge accepts a key, the new value to be
merged into the existing entry and a bi-function to specify the merging
behavior of both values:
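The individual code samples for these methods are not included in this excerpt. A combined sketch of putIfAbsent(), getOrDefault() and merge(); the keys and values are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        ConcurrentMap<String, String> map = new ConcurrentHashMap<>();
        map.put("foo", "bar");

        // putIfAbsent: only writes when no mapping exists yet
        map.putIfAbsent("foo", "baz");
        System.out.println(map.get("foo"));                  // bar

        // getOrDefault: falls back to the given default value
        System.out.println(map.getOrDefault("hi", "there")); // there

        // merge: combines the new value with the existing one
        map.merge("foo", "boo", (oldVal, newVal) -> oldVal + newVal);
        System.out.println(map.get("foo"));                  // barboo
    }
}
```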
ConcurrentHashMap
All those methods above are part of the ConcurrentMap interface, thereby
available to all implementations of that interface. In addition the most
important implementation ConcurrentHashMap has been further enhanced
with a couple of new methods to perform parallel operations upon the
map.
We use the same example map for demonstrating purposes, but this time
we work upon the concrete implementation ConcurrentHashMap instead of
the interface ConcurrentMap , so we can access all public methods from
this class. The parallel operations utilize the common ForkJoinPool, whose
desired parallelism can be adjusted with this JVM parameter:
-Djava.util.concurrent.ForkJoinPool.common.parallelism=5
Search
// ForkJoinPool.commonPool-worker-2
// main
// ForkJoinPool.commonPool-worker-3
// Result: bar
// ForkJoinPool.commonPool-worker-2
// main
// main
// ForkJoinPool.commonPool-worker-1
// Result: solo
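The code producing the output above is not part of this excerpt. A sketch of search() and searchValues(), assuming an example map containing entries such as foo=bar and han=solo; the first argument is the parallelism threshold, and thread names will vary between runs:

```java
import java.util.concurrent.ConcurrentHashMap;

public class SearchDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("foo", "bar");
        map.put("han", "solo");
        map.put("r2", "d2");
        map.put("c3", "p0");

        // search() returns the first non-null result of the search function
        String result = map.search(1, (key, value) -> {
            System.out.println(Thread.currentThread().getName());
            if ("foo".equals(key)) {
                return value;
            }
            return null;
        });
        System.out.println("Result: " + result);

        // searchValues() searches on values only
        String result2 = map.searchValues(1, value -> {
            System.out.println(Thread.currentThread().getName());
            if (value.length() > 3) {
                return value;
            }
            return null;
        });
        System.out.println("Result: " + result2);
    }
}
```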
Reduce
The method reduce(), already known from Java 8 Streams, accepts two
lambda expressions of type BiFunction. The first function transforms
each key-value pair into a single value of any type. The second function
combines all those transformed values into a single result, ignoring any
possible null values.
// Transform: ForkJoinPool.commonPool-worker-2
// Transform: main
// Transform: ForkJoinPool.commonPool-worker-3
// Reduce: ForkJoinPool.commonPool-worker-3
// Transform: main
// Reduce: main
// Reduce: main
// Result: r2=d2, c3=p0, han=solo, foo=bar
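The reduce() invocation producing the output above is not included in this excerpt. A sketch, assuming the same example map; the order of entries in the result and the thread names will vary between runs:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReduceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        map.put("foo", "bar");
        map.put("han", "solo");
        map.put("r2", "d2");
        map.put("c3", "p0");

        // first lambda transforms each entry, second one combines
        // the transformed values; 1 is the parallelism threshold
        String result = map.reduce(1,
            (key, value) -> {
                System.out.println("Transform: " + Thread.currentThread().getName());
                return key + "=" + value;
            },
            (s1, s2) -> {
                System.out.println("Reduce: " + Thread.currentThread().getName());
                return s1 + ", " + s2;
            });

        System.out.println("Result: " + result);
    }
}
```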
I hope you've enjoyed reading the third part of my tutorial series about
Java 8 Concurrency. The code samples from this tutorial are hosted on
GitHub along with many other Java 8 code snippets. You're welcome to
fork the repo and try them on your own.
If you want to support my work, please share this tutorial with your
friends. You should also follow me on Twitter as I constantly tweet about
Java and programming related stuff.
Plenty of tutorials and articles cover the most important changes in Java
8 like lambda expressions and functional streams. But beyond that, many
existing classes have also been enhanced in the JDK 8 API with useful
features and methods.
This article covers some of those smaller changes in the Java 8 API -
each described with easily understood code samples. Let's take a deeper
look into Strings, Numbers, Math and Files.
Slicing Strings
Two new methods are available on the String class: join and chars .
The first method joins any number of strings into a single string with the
given delimiter:
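The join example itself is missing from this excerpt; a minimal sketch:

```java
public class JoinDemo {
    public static void main(String[] args) {
        // join any number of strings with a given delimiter
        System.out.println(String.join(":", "foobar", "foo", "bar"));
        // => foobar:foo:bar
    }
}
```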
The second method chars creates a stream for all characters of the
string, so you can use stream operations upon those characters:
"foobar:foo:bar"
.chars()
.distinct()
.mapToObj(c -> String.valueOf((char)c))
.sorted()
.collect(Collectors.joining());
// => :abfor
Not only strings but also regex patterns now benefit from streams.
Instead of splitting strings into streams for each character we can split
strings for any pattern and create a stream to work upon as shown in this
example:
Pattern.compile(":")
.splitAsStream("foobar:foo:bar")
.filter(s -> s.contains("bar"))
.sorted()
.collect(Collectors.joining(":"));
// => bar:foobar
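The pattern example the next paragraph refers to is absent from this excerpt. A sketch using Pattern.asPredicate(), with made-up email addresses:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PatternPredicateDemo {
    public static void main(String[] args) {
        List<String> emails = Arrays.asList(
            "bob@gmail.com", "alice@hotmail.com", "carol@gmail.com");

        // asPredicate() turns a compiled pattern into a Predicate<String>
        List<String> gmails = emails.stream()
            .filter(Pattern.compile(".*@gmail\\.com").asPredicate())
            .collect(Collectors.toList());

        System.out.println(gmails);
    }
}
```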
The above pattern accepts any string which ends with @gmail.com and is
then used as a Java 8 Predicate to filter a stream of email addresses.
Crunching Numbers
Java 8 adds additional support for working with unsigned numbers.
Numbers in Java had always been signed. Let's look at Integer for
example:
System.out.println(Integer.MAX_VALUE); // 2147483647
System.out.println(Integer.MAX_VALUE + 1); // -2147483648
Java 8 adds support for parsing unsigned ints. Let's see how this works:
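The parsing sample is missing from this excerpt. A sketch of parseUnsignedInt() and toUnsignedString(); the variable names are my own:

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        long maxUnsignedInt = (1L << 32) - 1;     // 4294967295
        String string = String.valueOf(maxUnsignedInt);

        // parse into an int whose bit pattern represents the unsigned value
        int unsignedInt = Integer.parseUnsignedInt(string, 10);

        // and convert it back into its unsigned string representation
        String string2 = Integer.toUnsignedString(unsignedInt, 10);
        System.out.println(string2);
    }
}
```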
As you can see it's now possible to parse the maximum possible
unsigned number 2³² - 1 into an integer. And you can also convert this
number back into a string representing the unsigned number.
try {
    Integer.parseInt(string, 10);
}
catch (NumberFormatException e) {
    System.err.println("could not parse signed int of " + maxUnsignedInt);
}
Do the Math
The utility class Math has been enhanced by a couple of new methods
for handling number overflows. What does that mean? We've already
seen that all number types have a maximum value. So what happens
when the result of an arithmetic operation doesn't fit into its size?
System.out.println(Integer.MAX_VALUE); // 2147483647
System.out.println(Integer.MAX_VALUE + 1); // -2147483648
Java 8 adds support for strict math to handle this problem. Math has
been extended by a couple of methods which all end with Exact, e.g.
addExact. Those methods handle overflows properly by throwing an
ArithmeticException when the result of the operation doesn't fit into the
number type:
try {
    Math.addExact(Integer.MAX_VALUE, 1);
}
catch (ArithmeticException e) {
    System.err.println(e.getMessage());
    // => integer overflow
}
The same exception is thrown when trying to convert a long into an int
via toIntExact:

try {
    Math.toIntExact(Long.MAX_VALUE);
}
catch (ArithmeticException e) {
    System.err.println(e.getMessage());
    // => integer overflow
}
Listing files
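The listing example the following paragraph refers to is missing from this excerpt. A sketch, assuming we list the contents of the current working directory (the sorting is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ListFilesDemo {
    public static void main(String[] args) throws IOException {
        // Files.list returns a lazily populated stream backed by IO,
        // so it must be closed explicitly - hence try-with-resources
        try (Stream<Path> stream = Files.list(Paths.get(""))) {
            List<String> filenames = stream
                .map(Path::getFileName)
                .map(Path::toString)
                .sorted()
                .collect(Collectors.toList());
            System.out.println(filenames);
        }
    }
}
```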
You might have noticed that the creation of the stream is wrapped into a
try-with-resources statement. Streams implement AutoCloseable and in
this case we really have to close the stream explicitly since it's backed
by IO operations.
Finding files
The next example demonstrates how to find files in a directory or its
sub-directories.
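The find call itself is not shown in this excerpt. A sketch, assuming a search under the current directory with a maximum depth of three:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FindFilesDemo {
    public static void main(String[] args) throws IOException {
        Path start = Paths.get("");
        int maxDepth = 3;
        // the BiPredicate receives each path plus its file attributes
        try (Stream<Path> stream = Files.find(start, maxDepth,
                (path, attr) -> String.valueOf(path).endsWith(".js"))) {
            String joined = stream
                .sorted()
                .map(String::valueOf)
                .collect(Collectors.joining("; "));
            System.out.println("Found: " + joined);
        }
    }
}
```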
The first argument is the initial starting point and maxDepth defines the
maximum folder depth to be searched. The third argument is a matching
predicate and defines the search logic. In the above example we search
for all JavaScript files (filename ends with .js).
Reading text files into memory and writing strings into a text file in Java 8
is finally a simple task. No messing around with readers and writers. The
method Files.readAllLines reads all lines of a given file into a list of
strings. You can simply modify this list and write the lines into another file
via Files.write :
List<String> lines =
Files.readAllLines(Paths.get("res/nashorn1.js"));
lines.add("print('foobar');");
Files.write(Paths.get("res/nashorn1-modified.js"), lines);
Please keep in mind that those methods are not very memory-efficient
because the whole file will be read into memory. The larger the file the
more heap-size will be used.
If you need more fine-grained control you can instead construct a new
buffered reader:
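The buffered-reader example is not included in this excerpt. A sketch; the sample file is created on the fly so the snippet is self-contained:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedReaderDemo {
    public static void main(String[] args) throws IOException {
        // create a small sample file so the example is runnable as-is
        Path path = Files.createTempFile("nashorn", ".js");
        Files.write(path, "print('Hello World');\n".getBytes(StandardCharsets.UTF_8));

        // newBufferedReader reads the file line by line instead of
        // loading the whole file into memory at once
        try (BufferedReader reader =
                 Files.newBufferedReader(path, StandardCharsets.UTF_8)) {
            System.out.println(reader.readLine());
        }

        Files.delete(path);
    }
}
```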
So as you can see Java 8 provides three simple ways to read the lines of
a text file, making text file handling quite convenient.
I hope you've enjoyed this article. All code samples are hosted on GitHub
along with plenty of other code snippets from all the Java 8 articles of my
blog. If this post was kinda useful to you feel free to star the repo and
follow me on Twitter.
Keep on coding!
Avoiding Null Checks in Java 8
March 15, 2015
Tony Hoare, the inventor of the null reference, apologized in 2009 and
calls this kind of error his billion-dollar mistake.
class Outer {
Nested nested;
Nested getNested() {
return nested;
}
}
class Nested {
Inner inner;
Inner getInner() {
return inner;
}
}
class Inner {
String foo;
String getFoo() {
return foo;
}
}
Resolving a deep nested path in this structure can be kinda awkward. We
have to write a bunch of null checks to make sure not to raise a
NullPointerException.
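The chain of null checks this paragraph refers to is omitted from the excerpt. A self-contained sketch; the nested classes are minimal copies of the ones above:

```java
public class NullCheckDemo {
    // minimal copies of the classes from the text
    static class Inner { String foo; }
    static class Nested { Inner inner; }
    static class Outer { Nested nested; }

    public static void main(String[] args) {
        Outer outer = new Outer();

        // the classic chain of null checks
        if (outer != null && outer.nested != null && outer.nested.inner != null) {
            System.out.println(outer.nested.inner.foo);
        } else {
            System.out.println("some field on the path is null");
        }
    }
}
```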
We can get rid of all those null checks by utilizing the Java 8 Optional type:
Optional.of(new Outer())
.map(Outer::getNested)
.map(Nested::getInner)
.map(Inner::getFoo)
.ifPresent(System.out::println);
Please keep in mind that both solutions are probably not as performant
as traditional null checks. In most cases that shouldn't be much of an
issue.
Happy coding!
> UPDATE: I've updated the code samples thanks to a hint from
Zukhramm on Reddit.
Fixing Java 8 Stream Gotchas with
IntelliJ IDEA
March 05, 2015
Java 8 has been released almost one year ago in March 2014. At
Pondus we've managed to update all of our production servers to this
new version back in May 2014. Since then we've migrated major parts of
our code base to lambda expressions, streams and the new Date API.
We also use Nashorn to dynamically script parts of our application which
may change during runtime.
The most used feature besides lambdas is the new Stream API.
Collection operations are all over the place in almost any codebase
I've ever seen. And streams are a great way to improve the readability
of all that collection crunching.
But one thing about streams really bothers me: Streams only provide a
few terminal operations like reduce and findFirst directly while others
are only accessible via collect . There's a utility class Collectors,
providing a bunch of convenient collectors like toList , toSet ,
joining and groupingBy .
For example this code filters over a collection of strings and creates a
new list:
stringCollection
.stream()
.filter(e -> e.startsWith("a"))
.collect(Collectors.toList());
After migrating a project with 300k lines of code to streams I can say that
toList , toSet and groupingBy are by far the most used terminal
operations in our project. So I really cannot understand the design
decision not to integrate those methods directly into the Stream interface
so you could just write:
stringCollection
.stream()
.filter(e -> e.startsWith("a"))
.toList();
This might look like a minor imperfection at first but it gets really annoying
if you have to use this kind of stuff over and over again.
Anyways. IntelliJ IDEA claims to be the most intelligent Java IDE. So let's
see how we can utilize IDEA to solve this problem for us.
How do Live Templates help with the problem described above?
Actually we can simply create our own Live Templates for all the
commonly used default Stream collectors. E.g. we can create a Live
Template with the abbreviation .toList to insert the appropriate
collector .collect(Collectors.toList()) automatically.
This part is important: After adding a new Live Template you have to
specify the applicable context at the bottom of the dialog. You have to
choose Java → Other. Afterwards you define the abbreviation, a
description and the actual template code.
// Abbreviation: .toList
.collect(Collectors.toList())
// Abbreviation: .toSet
.collect(Collectors.toSet())
// Abbreviation: .join
.collect(Collectors.joining("$END$"))
// Abbreviation: .groupBy
.collect(Collectors.groupingBy(e -> $END$))
The special variable $END$ determines the cursors position after using
the template, so you can directly start typing at this position, e.g. to define
the joining delimiter.
> Hint: You should enable the option "Add unambiguous imports on the
fly" so IDEA automatically adds an import statement to
java.util.stream.Collectors . The option is located in: Editor → General
→ Auto Import
Join
GroupBy
Live Templates in IntelliJ IDEA are an extremely versatile and powerful
tool. You can greatly increase your coding productivity with them. Do you
know other examples where Live Templates can save your life? Let me
know!
Still not satisfied? Learn everything you ever wanted to know about Java
8 Streams in my Streams Tutorial.
Happy coding.
Using Backbone.js with Nashorn
April 07, 2014
class Product {
String name;
double price;
int stock;
double valueOfGoods;
}
The backbone model also implements the business logic: The method
getValueOfGoods calculates the value of all products by multiplying
stock with price . Each time stock or price changes the property
valueOfGoods must be re-calculated.
var Product = Backbone.Model.extend({

    defaults: {
        name: '',
        stock: 0,
        price: 0.0,
        valueOfGoods: 0.0
    },

    initialize: function() {
        this.on('change:stock change:price', function() {
            var stock = this.get('stock');
            var price = this.get('price');
            var valueOfGoods = this.getValueOfGoods(stock, price);
            this.set('valueOfGoods', valueOfGoods);
        });
    },

    getValueOfGoods: function(stock, price) {
        return stock * price;
    }

});
load('http://cdnjs.cloudflare.com/ajax/libs/underscore.js/1.6.0/underscore-min.js');
load('http://cdnjs.cloudflare.com/ajax/libs/backbone.js/1.1.2/backbone-min.js');
load('product-backbone-model.js');
The script first loads the relevant external scripts Underscore and
Backbone (Underscore is a pre-requirement for Backbone) and our
Product backbone model as defined above.
Conclusion
Reusing existing JavaScript libraries on the Nashorn engine is quite easy.
Backbone is great for building complex HTML5 front-ends. In my opinion
Nashorn and the JVM are now a great alternative to Node.js, since you can
make use of the whole Java eco-system in your Nashorn codebase, such
as the whole JDK API and all available libraries and tools. Keep in mind
that you're not tied to the Java language when working with Nashorn -
think Scala, Groovy, Clojure or even pure JavaScript via jjs .
The runnable source code from this article is hosted on GitHub (see this
file). Feel free to fork the repository or send me your feedback via Twitter.