Java vs. C
http://scribblethink.org
Jan. 2003
updated 2004
This article surveys a number of benchmarks and finds that Java performance on
numerical code is comparable to that of C++, with hints that Java's relative
performance is continuing to improve. We then describe clear theoretical
reasons why these benchmark results should be expected.
Benchmarks
The five composite benchmarks listed below show that modern Java has acceptable
performance: nearly equal to, and in many cases faster than, C/C++.
1. Numerical Kernels
The authors test some real numerical codes (FFT, matrix factorization,
SOR, fluid solver, N-body) on several architectures and compilers. On Intel
they found that the Java performance was very reasonable compared to C
(e.g., 20% slower), and that Java was faster than at least one C compiler
(the KAI compiler on Linux).
The authors conclude, "On Intel Pentium hardware, especially with Linux,
the performance gap is small enough to be of little or no concern to
programmers."
Several years ago these benchmarks showed Java performance at the time
to be somewhere in the middle of C compiler performance: faster than
the worst C compilers, slower than the best. These are
"microbenchmarks", but they do have the advantage that they were run
across a number of different problem sizes, so the results do not merely
reflect a lucky cache interaction (see more details on this issue in the
next section).
Note that these benchmarks are on Intel architecture machines. Java compilers
on some other processors are less developed at present.
1) Pointers

Pointers make optimization hard for a C compiler: an assignment through a
pointer may modify almost any variable, so the compiler often cannot keep
values in registers across such an assignment. In the fragment below, for
example, x cannot safely be kept in a register across the *p and arr[j]
assignments, because either one might alias x:

    x = y + 2 * (...)
    *p = ...
    arr[j] = ...
    z = x + ...

This pointer problem in C resembles the array bounds checking issue in Java:
in both cases, if the compiler can determine the array (or pointer) index at
compile time, it can avoid the issue.
In the loop below, for example, a Java compiler can trivially avoid testing the
lower array bound because the loop counter is only incremented, never
decremented. A single test before starting the loop handles the upper bound test,
provided 'len' is not modified inside the loop (and Java has no pointers, so
simply looking for an assignment to 'len' is enough to determine this):
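A minimal sketch of such a loop (the method name and exact form here are
illustrative, not the article's original listing):

    // Illustrative: i starts at 0 and is only incremented, so the
    // lower-bound check can be dropped entirely. One comparison of
    // len against arr.length before the loop covers the upper bound,
    // as long as len is not reassigned in the loop body.
    static float sum(float[] arr, int len) {
        float total = 0.0f;
        for (int i = 0; i < len; i++) {
            total += arr[i];
        }
        return total;
    }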
2) Garbage collection

Consider what happens when you do a new/malloc: (a) the allocator looks for an
empty slot of the right size, then returns you a pointer; (b) this pointer
points to some fairly random place in memory.

With a compacting garbage collector: (a) the allocator doesn't need to search
for free memory, it knows where it is; (b) the memory it returns is adjacent to
the last bit of memory you requested. The searching happens not on every
allocation but only at garbage collection time, and then (depending on the GC
algorithm) objects get moved as well.
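A toy sketch of the bump-pointer allocation that compacting collectors use (the
class name and sizes are illustrative, not any real JVM's allocator):

    // Toy sketch, not a real allocator: a free-list malloc searches for a
    // hole of the right size, while a bump-pointer allocator just advances
    // an index, so consecutive allocations land next to each other.
    class BumpAllocator {
        private final byte[] heap = new byte[1 << 20];
        private int top = 0;

        // Returns the offset of a freshly "allocated" block, or -1 if the
        // region is exhausted (a real collector would trigger a GC here).
        int allocate(int size) {
            if (top + size > heap.length) return -1;
            int result = top;
            top += size;   // allocation is just a pointer bump
            return result; // adjacent to the previous allocation
        }
    }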
How much of an effect is this? One rather dated (1993) example shows that
missing the cache can be a big cost: changing an array size in a small C program
from 1023 to 1024 results in a slowdown of a factor of 17 (not 17%). This is like
switching from C to VB! This particular program stumbled across what was
probably the worst possible cache interaction for that particular processor
(MIPS); the effect isn't that bad in general... but with processor speeds
increasing faster than memory speeds, missing the cache is probably an even
bigger cost now than it was then.
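A hedged sketch of how one might observe this kind of cache interaction today
(the class name, sizes, and the magnitude of any slowdown are illustrative and
depend entirely on the machine's cache organization):

    // Sum a "matrix" column by column. With a power-of-two row length,
    // the elements of one column are separated by a power-of-two number
    // of bytes and can pile up in the same cache sets, causing conflict
    // misses; a row length of 1025 avoids that alignment.
    public class CacheDemo {
        static double columnSum(float[] a, int rows, int rowLen) {
            double sum = 0;
            for (int col = 0; col < rows; col++)
                for (int row = 0; row < rows; row++)
                    sum += a[row * rowLen + col];
            return sum;
        }

        static void time(String label, int rows, int rowLen) {
            float[] a = new float[rows * rowLen];
            long t0 = System.nanoTime();
            double s = columnSum(a, rows, rowLen);
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println(label + ": " + ms + " ms (checksum " + s + ")");
        }

        public static void main(String[] args) {
            time("row length 1024", 1024, 1024);
            time("row length 1025", 1024, 1025);
        }
    }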
(It's easy to find other research studies demonstrating this; here's one from
Princeton: they found that (garbage-collected) ML programs translated from the
SPEC92 benchmarks have lower cache miss rates than the equivalent C and
Fortran programs.)
This is theory; what about practice? In a well-known paper [2] several widely
used programs (including perl and ghostscript) were adapted to use several
different allocators including a garbage collector masquerading as malloc (with a
dummy free()). The garbage collector was as fast as a typical malloc/free; perl
was one of several programs that ran faster when converted to use a
garbage collector. Another interesting fact is that the cost of malloc/free is
significant: both perl and ghostscript spent roughly 25-30% of their time in
these calls.
Besides the improved cache behavior, also note that automatic memory
management allows escape analysis, which identifies local allocations that can
be placed on the stack. (Stack allocations are clearly cheaper than heap
allocation of either sort).
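A minimal example of the kind of allocation that escape analysis can target (the
class and method names are illustrative):

    // The temporary Point never escapes the method: it is not stored,
    // returned, or passed elsewhere, so a JIT with escape analysis may
    // place it on the stack or eliminate it via scalar replacement.
    final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
        double dist2() { return x * x + y * y; }
    }

    class EscapeDemo {
        static double nearestDist2(double[] xs, double[] ys) {
            double best = Double.MAX_VALUE;
            for (int i = 0; i < xs.length; i++) {
                Point p = new Point(xs[i], ys[i]); // candidate for stack allocation
                best = Math.min(best, p.dist2());
            }
            return best;
        }
    }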
3) Run-time compilation
The JIT compiler knows more than a conventional "pre-compiler", and it may be
able to do a better job given the extra information:
The compiler knows what processor it is running on, and can generate
code specifically for that processor. It knows whether (for example) the
processor is a PIII or P4, if SSE2 is present, and how big the caches are. A
pre-compiler, on the other hand, has to target the least-common-denominator
processor, at least in the case of commercial software.
Because the compiler knows which classes are actually loaded and being
called, it knows which methods can be de-virtualized and inlined; a sketch of
such a call site appears after these points. (Remarkably, modern Java compilers
also know how to "uncompile" inlined calls in the case where an overriding
method is loaded after the JIT compilation happens.)
A dynamic compiler may also get the branch prediction hints right more
often than a static compiler.
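A hedged sketch of a call site that a JIT can de-virtualize and inline while
only one implementation is loaded (the class names are illustrative):

    // While Circle is the only loaded subclass of Shape, the call
    // s.area() is effectively monomorphic: the JIT can de-virtualize
    // and inline it. If another subclass is loaded later, the compiled
    // code is deoptimized ("uncompiled") and recompiled.
    abstract class Shape {
        abstract double area();
    }

    class Circle extends Shape {
        final double r;
        Circle(double r) { this.r = r; }
        double area() { return Math.PI * r * r; }
    }

    class Areas {
        static double totalArea(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes)
                sum += s.area(); // virtual call, inlinable while monomorphic
            return sum;
        }
    }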
It might also be noted that Microsoft has some similar comments regarding C#
performance [5]:
What is slow?
The notion of "slow" in popular discussions is often poorly calibrated. If you write
a number of small benchmarks in several different types of programming
language, the broad view of performance is roughly tiered: compiled languages
such as C, C++, Java, and Fortran cluster within a small factor of one another,
while interpreted languages tend to be an order of magnitude or more slower.
Despite this big picture, performance differences of less than a factor of two are
often held up as evidence in speed debates. As we describe next, differences of
2x-4x or more are often just noise.
For one, it's common sense: the compiler may happen to do particularly well or
particularly poorly on the inner loop of a given program, and this doesn't
generalize. The fourth set of benchmarks above shows Java as being faster than C
by a factor of two on an FFT of an array of one particular size. Should you now
proclaim that Java is always twice as fast as C? No, it's just one program.
There is a more important issue than the code quality on the particular
benchmark, however:
Cache/Memory effects.
The person who posted [3] demonstrated the fragility of his own benchmark in a
followup post, writing that "Java now performs as well as gcc on many tests"
after changing something (note that it was not the Java language that changed).
Nevertheless, the idea that "Java is slow" is widely believed. Why this is so is
perhaps the most interesting aspect of this article.
Java circa 1995 was slow. The first incarnations of Java did not have a JIT
compiler, and hence were bytecode-interpreted (like Python, for example).
This explanation is implausible. Most "computer folk" are able to rattle off
the exact speed in GHz of the latest processors, and they track this
information as it changes each month (and have done so for years). Yet
this explanation asks us to believe that they are not able to remember
that a single and rather important language speed change occurred in
1996.
Java can still be slow. For example, programs written with the thread-safe
Vector class are necessarily slower (on a single processor, at least) than
those written with the equivalent thread-unsafe ArrayList class; a small
illustration appears after these points.
Java program startup is slow. As a Java program starts, it unzips the Java
libraries and compiles parts of itself, so an interactive program can be
sluggish for the first couple of seconds of use.
This approaches being a reasonable explanation for the speed myth. But
while it might explain users' impressions, it does not explain why many
programmers (who can easily understand the idea of an interpreted
program being compiled) share the belief.
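Below is a quick (if noisy) sketch of the Vector-versus-ArrayList cost mentioned
above; the class name, element count, and timing approach are illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Vector;

    // Vector's methods are synchronized, ArrayList's are not. A crude
    // timing like this only shows the direction of the effect, not a
    // precise ratio (JIT warm-up and GC add noise).
    public class VectorVsArrayList {
        static long fillNanos(List<Integer> list, int n) {
            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) list.add(i);
            return System.nanoTime() - t0;
        }

        public static void main(String[] args) {
            int n = 5_000_000;
            System.out.println("Vector:    " + fillNanos(new Vector<>(), n) + " ns");
            System.out.println("ArrayList: " + fillNanos(new ArrayList<>(), n) + " ns");
        }
    }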
Two of the most interesting observations regarding this issue are that:
Together these suggest that it is possible that no amount of data will alter
people's beliefs, and that in actuality these "speed beliefs" probably have little to
do with Java, garbage collection, or the otherwise stated subject. The answer
probably lies somewhere in sociology or psychology. Programmers, despite their
professed appreciation of logical thought, are not immune to a kind of
mythology.
Acknowledgements
Ian Rogers, Curt Fischer, and Bill Bogstad provided input and clarification of
some points.
References
[1] K. Reinholtz, "Java will be faster than C++," ACM SIGPLAN Notices,
35(2):25-28, Feb. 2000.