JVM Performance Engineering: Inside OpenJDK and the HotSpot Java Virtual Machine, by Monica Beckwith
Preface
Intended Audience
How to Use This Book
Acknowledgments
About the Author
For over 20 years, I have been immersed in the JVM and its associated
runtime, constantly in awe of its transformative evolution. This detailed and
insightful journey has provided me with invaluable knowledge and
perspectives that I am excited to share in this book.
As a performance engineer and a Java Champion, I have had the honor of
sharing my knowledge at various forums. Time and again, I’ve been
approached with questions about Java and JVM performance, the nuances
of distributed and cloud performance, and the advanced techniques that
elevate the JVM to a marvel.
In this book, I have endeavored to distill my expertise into a cohesive
narrative that sheds light on Java’s history, its innovative type system, and
its performance prowess. This book reflects my passion for Java and its
runtime. As you navigate these pages, you’ll uncover problem statements,
solutions, and the unique nuances of Java. The JVM, with its robust
runtime, stands as the bedrock of today’s advanced software architectures,
powering some of the most state-of-the-art applications and fortifying
developers with the tools needed to build resilient distributed systems. From
the granularity of microservices to the vast expanse of cloud-native
architectures, Java’s reliability and efficiency have cemented its position as
the go-to language for distributed computing.
The future of JVM performance engineering beckons, and it’s brighter than
ever. As we stand at this juncture, there’s a call to action. The next chapter
of JVM’s evolution awaits, and it’s up to us, the community, to pen this
narrative. Let’s come together, innovate, and shape the trajectory of JVM
for generations to come.
Intended Audience
This book is primarily written for Java developers and software engineers
who are keen to enhance their understanding of JVM internals and
performance tuning. It will also greatly benefit system architects and
designers, providing them with insights into JVM’s impact on system
performance. Performance engineers and JVM tuners will find advanced
techniques for optimizing JVM performance. Additionally, computer
science and engineering students and educators will gain a comprehensive
understanding of JVM’s complexities and advanced features.
With the hope of furthering education in performance engineering,
particularly with a focus on the JVM, this text also aligns with advanced
courses on programming languages, algorithms, systems, computer
architectures, and software engineering. I am passionate about fostering a
deeper understanding of these concepts and excited about contributing to
coursework that integrates the principles of JVM performance engineering
and prepares the next generation of engineers with the knowledge and skills
to excel in this critical area of technology.
How to Use This Book
Focusing on the intricacies and strengths of the language and runtime, this book offers a thorough dissection of Java's capabilities in concurrency and multithreading, and of the sophisticated memory management mechanisms that drive peak performance across varied environments.
In Chapter 1, we trace Java's timeline from its inception in the mid-1990s. Java's groundbreaking runtime environment, complete with the Java VM, expansive class libraries, and a formidable set of tools, has set the stage with creative advancements and flexibility.
We spotlight Java’s achievements, from the transformative garbage
collector to the streamlined Java bytecode. The Java HotSpot VM, with its
advanced JIT compilation and avant-garde optimization techniques,
exemplifies Java’s commitment to performance. Its intricate compilation
methodologies, harmonious synergy between the “client” compiler (C1) and
“server” compiler (C2), and dynamic optimization capabilities ensure Java
applications remain agile and efficient.
The brilliance of Java extends to memory management with the HotSpot
Garbage Collector. Embracing generational garbage collection and the weak
generational hypothesis, it efficiently employs parallel and concurrent GC
threads, ensuring peak memory optimization and application
responsiveness.
From Java 1.1's foundational features to the trailblazing innovations of Java 17, Java's trajectory has been one of progress and continuous enhancement. Java's legacy emerges as one of perpetual innovation and excellence.
In Chapter 2, we delve into the heart of Java: its type system. This system,
integral to any programming language, has seen a remarkable evolution in
Java, with innovations that have continually refined its structure. We begin
by exploring Java’s foundational elements—primitive and reference types,
interfaces, classes, and arrays—that anchored Java programming prior to
Java SE 5.0.
The narrative continues with the transformative enhancements from Java
SE 5.0 to Java SE 8, where enumerations and annotations emerged,
amplifying Java’s adaptability. Subsequent versions, Java 9 to Java 10,
brought forth the Variable Handle Typed Reference, further enriching the
language. And as we transition to the latest iterations, Java 11 to Java 17,
we spotlight the advent of Switch Expressions, Sealed Classes, and the
eagerly awaited Records.
We then venture into the realms of Project Valhalla, examining the
performance nuances of the existing type system and the potential of future
value classes. This chapter offers insights into Project Valhalla’s ongoing
endeavors, from refined generics to the conceptualization of classes for
basic primitives.
Java’s type system is more than just a set of types—it’s a reflection of
Java’s commitment to versatility, efficiency, and innovation. The goal of
this chapter is to illuminate the type system’s past, present, and promising
future, fostering a profound understanding of its intricacies.
Chapter 3 extensively covers the Java Platform Module System (JPMS),
showcasing its breakthrough impact on modular programming. As we step
into the modular era, Java, with JPMS, has taken a giant leap into this
future. For those new to this domain, we start by unraveling the essence of
modules, complemented by hands-on examples that guide you through
module creation, compilation, and execution.
Java’s transition from a monolithic JDK to a modular one demonstrates its
dedication to evolving needs and creative progress. A standout section of
this chapter is the practical implementation of modular services using JDK
17. We navigate the intricacies of module interactions, from service
providers to consumers, enriched by working examples. Key concepts like
encapsulation of implementation details and the challenges of Jar Hell
versioning are addressed, with the introduction of Jigsaw layers offering
solutions in the modular landscape. A hands-on segment further clarifies
these concepts, providing readers with tangible insights.
For a broader perspective, we draw comparisons with OSGi, spotlighting
the parallels and distinctions, to give readers a comprehensive
understanding of Java’s modular systems. Essential tools such as Jdeps,
Jlink, Jdeprscan, and Jmod are introduced, each integral to the modular
ecosystem. Through in-depth explanations and examples, we aim to
empower readers to effectively utilize these tools. As we wrap up, we
contemplate the performance nuances of JPMS and look ahead, speculating
on the future trajectories of Java’s modular evolution.
Logs are the unsung heroes of software development, providing invaluable
insights and aiding debugging. Chapter 4 highlights Java’s Unified
Logging System, guiding you through its proficiencies and best practices.
We commence by acknowledging the need for unified logging, highlighting
the challenges of disparate logging systems and the advantages of a unified
approach. The chapter then highlights the unification and infrastructure,
shedding light on the pivotal performance metrics for monitoring and
optimization.
We explore the vast array of log tags available, diving into their specific
roles and importance. Ensuring logs are both comprehensive and insightful,
we tackle the challenge of discerning any missing information. The
intricacies of log levels, outputs, and decorators are meticulously examined,
providing readers with a lucid understanding of how to classify, format, and
direct their logs. Practical examples further illuminate the workings of the
unified logging system, empowering readers to implement their newfound
knowledge in tangible scenarios.
Benchmarking and performance evaluation stand as pillars of any logging
system. This chapter equips readers with the tools and methodologies to
gauge and refine their logging endeavors effectively. We also touch upon
the optimization and management of the unified logging system, ensuring
its sustained efficiency. With continuous advancements, notably in JDK 11
and JDK 17, we ensure readers remain abreast of the latest in Java logging.
Concluding this chapter, we emphasize the importance of logs as a
diagnostic tool, shedding light on their role in proactive system monitoring
and reactive problem-solving. Chapter 4 highlights the power of effective
logging in Java, underscoring its significance in building and maintaining
robust applications.
Chapter 5 focuses on the essence of performance engineering within the
Java ecosystem, emphasizing that performance transcends mere speed—it’s
about crafting an unparalleled user experience. Our voyage commences
with a formative exploration of performance engineering’s pivotal role
within the broader software development realm. By unraveling the
multifaceted layers of software engineering, we accentuate performance’s
stature as a paramount quality attribute.
With precision, we delineate the metrics pivotal to gauging Java’s
performance, encompassing aspects from footprint to the nuances of
availability, ensuring readers grasp the full spectrum of performance
dynamics. Stepping in further, we explore the intricacies of response time
and its symbiotic relationship with availability. This inspection provides
insights into the mechanics of application timelines, intricately weaving the
narrative of response time, throughput, and the inevitable pauses that
punctuate them.
Yet, the performance narrative is complete only when we acknowledge the profound influence of hardware. This chapter decodes the symbiotic
relationship between hardware and software, emphasizing the harmonious
symphony that arises from the confluence of languages, processors, and
memory models. From the subtleties of memory models and their bearing
on thread dynamics to the Java Memory Model’s foundational principles,
we journey through the maze of concurrent hardware, shedding light on the
order mechanisms pivotal to concurrent computing. Transitioning to the
realm of methodology, we introduce readers to the dynamic world of
performance engineering methodology. This section offers a panoramic
view, from the intricacies of experimental design to formulating a
comprehensive statement of work, championing a top-down approach that
guarantees a holistic perspective on the performance engineering process.
Benchmarking, the cornerstone of performance engineering, receives its
due spotlight. We underscore its indispensable role, guiding the reader
through the labyrinth of the benchmarking regime. This encompasses
everything from its inception in planning to the culmination in analysis. The
chapter provides a view into the art and science of JVM memory
management benchmarking, serving as a compass for those passionate
about performance optimization.
Finally, the Java Microbenchmark Harness (JMH) emerges as the pièce de résistance. From its foundational setup to the intricacies of its myriad features, the journey spans writing benchmarks and executing them, enriched with insights into benchmarking modes, profilers, and JMH's pivotal annotations. This chapter should inspire a fervor for relentless optimization and arm readers with the arsenal required to unlock Java's unparalleled performance potential.
Memory management is the silent guardian of Java applications, often
operating behind the scenes but crucial to their success. Chapter 6 offers a
leap into the world of garbage collection, unraveling the techniques and
innovations that ensure Java applications run efficiently and effectively. Our
journey begins with an overview of the garbage collection in Java, setting
the stage for the intricate details that follow. We then venture into Thread-
Local Allocation Buffers (TLABs) and Promotion Local Allocation Buffers
(PLABs), elucidating their pivotal roles in memory management. As we
progress, the chapter sheds light on optimizing memory access,
emphasizing the significance of NUMA-aware garbage collection and its impact on performance.
The highlight of this chapter lies in its exploration of advanced garbage
collection techniques. We review the G1 Garbage Collector (G1 GC),
unraveling its revolutionary approach to heap management. From grasping
the advantages of a regionalized heap to optimizing G1 GC parameters for
peak performance, this section promises a holistic cognizance of one of
Java’s most advanced garbage collectors. But the exploration doesn’t end
there. The Z Garbage Collector (ZGC) stands as a pinnacle of technological
advancement, offering unparalleled scalability and low latency for
managing multi-terabyte heaps. We look into the origins of ZGC, its
adaptive optimization techniques, and the advancements that make it a
game-changer in real-time applications.
This chapter also offers insights into the emerging trends in garbage
collection, setting the stage for what lies ahead. Practicality remains at the
forefront, with a dedicated section offering invaluable tips for evaluating
GC performance. From sympathizing with various workloads, such as
Online Analytical Processing (OLAP) to Online Transaction Processing
(OLTP) and Hybrid Transactional/Analytical Processing (HTAP), to
synthesizing live data set pressure and data lifespan patterns, the chapter
equips readers with the tools and knowledge to optimize memory
management effectively. This chapter is an accessible guide to the advanced garbage collection techniques that Java professionals need to navigate the topography of memory management.
The ability to efficiently manage concurrent tasks and optimize string operations stands as a testament to the language's evolution and adaptability. Chapter 7 covers the intricacies of Java's concurrency
mechanisms and string optimizations, offering readers a comprehensive
exploration of advanced techniques and best practices. We commence our
journey with an extensive review of the string optimizations. From
mastering the nuances of literal and interned string optimization in the
HotSpot VM to the innovative string deduplication optimization introduced
in Java 8, the chapter sheds light on techniques to reduce string footprint.
We take a further look into the “Indy-fication” of string concatenation and
the introduction of compact strings, ensuring a holistic conceptualization of
string operations in Java.
Next, the chapter focuses on enhanced multithreading performance,
highlighting Java’s thread synchronization mechanisms. We study the role
of monitor locks, the various lock types in OpenJDK’s HotSpot VM, and
the dynamics of lock contention. The evolution of Java’s locking
mechanisms is meticulously detailed, offering insights into the
improvements in contended locks and monitor operations. To tap into our
learnings from Chapter 5, with the help of practical testing and performance
analysis, we visualize contended lock optimization, harnessing the power of
JMH and Async-Profiler.
As we navigate the world of concurrency, the transition from the thread-per-
task model to the scalable thread-per-request model is highlighted. The
examination of Java’s Executor Service, ThreadPools, ForkJoinPool
framework, and CompletableFuture ensures a robust comprehension of
Java’s concurrency mechanisms.
Our journey in this chapter concludes with a glimpse into the future of
concurrency in Java as we reimagine concurrency with virtual threads.
From understanding virtual threads and their carriers to discussing
parallelism and integration with existing APIs, the chapter is a practical
guide to advanced concurrency mechanisms and string optimizations in
Java.
In Chapter 8, the journey from startup to steady-state performance is explored in depth. This chapter ventures deep into the modulation of JVM
start-up and warm-up, covering techniques and best practices that ensure
peak performance. We begin by distinguishing between the often-confused
concepts of warm-up and ramp-up, setting the stage for fully understanding
JVM’s start-up dynamics. The chapter emphasizes the importance of JVM
start-up and warm-up performance, dissecting the phases of JVM startup
and the journey to an application’s steady state. As we navigate the
application’s lifecycle, the significance of managing the state during start-
up and ramp-up becomes evident, highlighting the benefits of efficient state
management.
The study of Class Data Sharing offers insights into the anatomy of shared
archive files, memory mapping, and the benefits of multi-instance setups.
Moving on to Ahead-Of-Time (AOT) compilation, the contrast between
AOT and JIT compilation is meticulously highlighted, with GraalVM
heralding a paradigm shift in Java’s performance landscape and with
HotSpot VM’s up-and-coming Project Leyden and its forecasted ability to
manage states via CDS and AOT. The chapter also addresses the unique
challenges and opportunities of serverless computing and containerized
environments. The emphasis on ensuring swift startups and efficient scaling
in these environments underscores the evolving nature of Java performance
optimization.
Our journey then transitions to boosting warm-up performance with
OpenJDK HotSpot VM. The chapter offers a holistic view of warm-up
optimizations, from compiler enhancements to segmented code cache and
Project Leyden enhancements in the near future. The evolution from
PermGen to Metaspace is also highlighted to showcase start-up, warm-up,
and steady-state implications.
The chapter culminates with a survey of various OpenJDK projects, such as CRIU and CRaC, that are revolutionizing Java's time to steady state by introducing groundbreaking checkpoint/restore functionality.
Our final chapter (Chapter 9) focuses on the intersection of exotic
hardware and the Java Virtual Machine (JVM). This chapter offers readers a
considered exploration of the world of exotic hardware, its integration with
the JVM, and its galvanizing impact on performance engineering. We start
with an introduction to exotic hardware and its growing prominence in
cloud environments.
The pivotal role of language design and toolchains quickly becomes
evident, setting the stage for case studies showcasing the real-world
applications and challenges of integrating exotic hardware with the JVM.
From the Lightweight Java Game Library (LWJGL), a baseline example
that offers insights into the intricacies of working with the JVM, to Aparapi,
which bridges the gap between Java and OpenCL, each case study is
carefully detailed, demonstrating the challenges, limitations, and successes
of each integration. The chapter then shifts to Project Sumatra, a significant
effort in JVM performance optimization, followed by TornadoVM, a
specialized JVM tailored for hardware accelerators.
Through these case studies, the symbiotic potential of integrating exotic
hardware with the JVM becomes increasingly evident, leading up to an
overview of Project Panama, a new horizon in JVM performance
engineering. At the heart of Project Panama lies the Vector API, a symbol of
innovation designed for vector computations. But it’s not just about
computations—it’s about ensuring they are efficiently vectorized and
tailored for hardware that thrives on vector operations. This API is an
example of Java’s commitment to evolving, ensuring that developers have
the tools to express parallel computations optimized for diverse hardware
architectures. But Panama isn’t just about vectors. The Foreign Function
and Memory API emerges as a pivotal tool, a bridge that allows Java to
converse seamlessly with native libraries. This is Java’s answer to the age-
old challenge of interoperability, ensuring Java applications can interface
effortlessly with native code, breaking language barriers.
Yet, every innovation comes with its set of challenges. Integrating exotic
hardware with the JVM is no walk in the park. From managing intricate
memory access patterns to deciphering hardware-specific behaviors, the
path to optimization is laden with complexities. But these challenges drive
innovation, pushing the boundaries of what’s possible. Looking to the
future, we envision Project Panama as the gold standard for JVM
interoperability. The horizon looks promising, with Panama poised to
redefine performance and efficiency for Java applications.
This isn’t just about the present or the imminent future. The world of JVM
performance engineering is on the cusp of a revolution. Innovations are
knocking at our door, waiting to be embraced—with TornadoVM's Hybrid APIs, and with the HAT toolkit and Project Babylon on the horizon.
—Monica Beckwith
More than three decades ago, the programming languages landscape was
largely defined by C and its object-oriented extension, C++. In this period,
the world of computing was undergoing a significant shift from large,
cumbersome mainframes to smaller, more efficient minicomputers. C, with
its suitability for Unix systems, and C++, with its innovative introduction of
classes for object-oriented design, were at the forefront of this technological
evolution.
However, as the industry started to shift toward more specialized and cost-
effective systems, such as microcontrollers and microcomputers, a new set
of challenges emerged. Applications were ballooning in terms of lines of
code, and the need to “port” software to various platforms became an
increasingly pressing concern. This often necessitated rewriting or heavily
modifying the application for each specific target, a labor-intensive and
error-prone process. Developers also faced the complexities of managing
numerous static library dependencies and the demand for lightweight
software on embedded systems—areas where C++ fell short.
It was against this backdrop that Java emerged in the mid-1990s. Its
creators aimed to fill this niche by offering a “write once, run anywhere”
solution. But Java was more than just a programming language. It
introduced its own runtime environment, complete with a virtual machine
(Java Virtual Machine [JVM]), class libraries, and a comprehensive set of
tools. This all-encompassing ecosystem, known as the Java Development
Kit (JDK), was designed to tackle the challenges of the era and set the stage
for the future of programming. Today, more than a quarter of a century later,
Java’s influence in the world of programming languages remains strong, a
testament to its adaptability and the robustness of its design.
The performance of applications emerged as a critical factor during this
time, especially with the rise of large-scale, data-intensive applications. The
evolution of Java’s type system has played a pivotal role in addressing these
performance challenges. Thanks to the introduction of generics, autoboxing
and unboxing, and enhancements to the concurrency utilities, Java
applications have seen significant improvements in both performance and
scalability. Moreover, the changes in the type system have had far-reaching
implications for the performance of the JVM itself. In particular, the JVM
has had to adapt and optimize its execution strategies to efficiently handle
these new language features. As you read this book, bear in mind the
historical context and the driving forces that led to Java’s inception. The
evolution of Java and its virtual machine has profoundly influenced the way developers write and optimize software for various platforms.
In this chapter, we will thoroughly examine the history of Java and JVM,
highlighting the technological advancements and key milestones that have
significantly shaped its development. From its early days as a solution for
platform independence, through the introduction of new language features,
to the ongoing improvements to the JVM, Java has evolved into a powerful
and versatile tool in the arsenal of modern software development.
Note
In this book, the acronym GC is used to refer to both garbage collection,
the process of automatic memory management, and garbage collector, the
module within the JVM that performs this process. The specific meaning
will be clear based on the context in which GC is used.
Note
HotSpot VM also provides an interpreter that doesn't need a template, called the C++ interpreter. Some OpenJDK ports[1] choose this route to simplify porting of the VM to non-x86 platforms.
[1] https://wiki.openjdk.org/pages/viewpage.action?pageId=13729802
Print Compilation
A very handy command-line option that can help us better understand adaptive optimization in the HotSpot VM is -XX:+PrintCompilation. This option also returns information on different optimized compilation levels, which are provided by an adaptive optimization called tiered compilation (discussed in the next subsection).
The output of the -XX:+PrintCompilation option is a log of the HotSpot VM's compilation tasks. Each line of the log represents a single compilation task and includes several pieces of information:
The timestamp, in milliseconds since the JVM started, at which this compilation task was logged.
The unique identifier for this compilation task.
Flags indicating certain properties of the method being compiled, such
as whether it’s an OSR method (%), whether it’s synchronized (s),
whether it has an exception handler (!), whether it’s blocking (b), or
whether it’s native (n).
The tiered compilation level, indicating the level of optimization
applied to this method.
The fully qualified name of the method being compiled.
For OSR methods, the bytecode index where the compilation started.
This is usually the start of a loop.
The size of the method in the bytecode, in bytes.
Here are a few examples of the output of the -XX:+PrintCompilation option:
567 693 % ! 3 org.h2.command.dml.Insert::insertRows @ 76 (51
656 797 n 0 java.lang.Object::clone (native)
779 835 s 4 java.lang.StringBuffer::append (13 bytes)
These logs provide valuable insights into the behavior of the HotSpot VM’s
adaptive optimization, helping us understand how our Java applications are
optimized at runtime.
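To make the field layout concrete, here is a minimal, hypothetical parser for such a log line. It is my own illustrative sketch, not code from the book or the JDK; it recognizes only the fields described above (timestamp, compile id, flag characters, tier level, and method name) and ignores the trailing size information.

```java
// Illustrative parser for one -XX:+PrintCompilation log line.
// Field order follows the description in the text.
public class PrintCompilationLine {
    public final long timestampMs;  // ms since JVM start
    public final int compileId;     // unique id of this compilation task
    public final String flags;      // concatenation of % s ! b n, "" if none
    public final int tier;          // tiered compilation level, -1 if absent
    public final String method;     // fully qualified method name

    private PrintCompilationLine(long ts, int id, String flags, int tier, String method) {
        this.timestampMs = ts;
        this.compileId = id;
        this.flags = flags;
        this.tier = tier;
        this.method = method;
    }

    public static PrintCompilationLine parse(String line) {
        String[] tok = line.trim().split("\\s+");
        int i = 0;
        long ts = Long.parseLong(tok[i++]);       // timestamp
        int id = Integer.parseInt(tok[i++]);      // compilation task id
        // Flag tokens are single characters: % (OSR), s (synchronized),
        // ! (has exception handler), b (blocking), n (native).
        StringBuilder flags = new StringBuilder();
        while (i < tok.length && tok[i].length() == 1
                && "%s!bn".indexOf(tok[i].charAt(0)) >= 0) {
            flags.append(tok[i++]);
        }
        int tier = -1;
        if (i < tok.length && tok[i].matches("\\d")) {
            tier = Integer.parseInt(tok[i++]);    // tiered compilation level
        }
        String method = tok[i];                   // fully qualified method name
        return new PrintCompilationLine(ts, id, flags.toString(), tier, method);
    }
}
```

Feeding it the StringBuffer line above would yield timestamp 779, id 835, flag "s", tier 4, and the method name; the OSR line would additionally carry the "%" and "!" flags.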
Tiered Compilation
Tiered compilation, which was introduced in Java 7, provides multiple
levels of optimized compilations, ranging from T0 to T4:
1. T0: Interpreted code, devoid of compilation. This is where the code
starts and then moves on to the T1, T2, or T3 level.
2. T1–T3: Client-compiled mode. T1 is the first step where the method
invocation counters and loop-back branch counters are used. At T2,
the client compiler includes profiling information, referred to as
profile-guided optimization; it may be familiar to readers who are
conversant in static compiler optimizations. At the T3 compilation
level, completely profiled code can be generated.
3. T4: The highest level of optimization, provided by the HotSpot VM's “server” compiler (C2).
Prior to tiered compilation, the server compiler would employ the
interpreter to collect such profiling information. With the introduction of
tiered compilation, the code reaches client compilation levels faster, and
now the profiling information is generated by client-compiled methods
themselves, providing better start-up times.
Note
Tiered compilation has been enabled by default since Java 8.
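As a hedged illustration, a tiny program like the following (my own example, not from the book) gives the invocation and loop back-branch counters something to work on; running it with -XX:+PrintCompilation on a HotSpot JDK shows the hot method climbing from interpreted execution through the C1 tiers toward T4.

```java
// A small workload whose hot method should climb the compilation tiers
// when run as: java -XX:+PrintCompilation HotLoop
// (Tiered compilation is on by default since Java 8.)
public class HotLoop {
    // Hot method: repeated invocations and loop back-branches drive it
    // from the interpreter (T0) through C1 levels (T1-T3) to C2 (T4).
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 20_000; i++) total += sum(1_000);
        System.out.println(total); // keep the result live so the loop isn't eliminated
    }
}
```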
Going forward, the hope is that the segmented code caches can accommodate additional code regions for heterogeneous code such as ahead-of-time (AOT)-compiled code and code for hardware accelerators.[3] There's also the expectation that the fixed sizing thresholds can be upgraded to utilize adaptive resizing, thereby avoiding wastage of memory.
[3] JEP 197: Segmented Code Cache. https://openjdk.org/jeps/197
Deoptimization Scenarios
Deoptimization can occur in several scenarios when working with Java
applications. In this section, we’ll explore two of these scenarios.
class DriverLicense {
    private boolean isTeenDriver;
    private boolean isAdult;
    private boolean isLearner;
}
their longevity, thereby creating a “teenage wasteland,” as Charlie Hunt[4] would explain.
[4] Charlie Hunt is my mentor, the author of Java Performance (https://ptgmedia.pearsoncmg.com/images/9780137142521/samplepages/0137142528.pdf), and my co-author for Java Performance Companion (www.pearson.com/en-us/subject-catalog/p/java-performance-companion/P200000009127/9780133796827).
Generational garbage collection is based on two main characteristics related to the weak generational hypothesis:
1. Most objects die young: This means that we promote only long-lived
objects. If the generational GC is efficient, we don’t promote
transients, nor do we promote medium-lived objects. This usually
results in smaller long-lived data sets, keeping premature promotions,
fragmentation, evacuation failures, and similar degenerative issues at
bay.
2. Maintenance of generations: The generational algorithm has proven
to be a great help to OpenJDK GCs, but it comes with a cost. Because
the young-generation collector works separately and more often than
the old-generation collector, it ends up moving live data. Therefore,
generational GCs incur maintenance/bookkeeping overhead to ensure
that they mark all reachable objects—a feat achieved through the use
of “write barriers” that track cross-generational references.
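The write-barrier bookkeeping can be sketched schematically. The following card-table model is my own illustrative simplification of the idea, not HotSpot's actual implementation: every store of a reference into an old-generation object dirties the "card" covering that object, so a young-generation collection need only scan dirty cards for old-to-young references instead of the whole old generation.

```java
// Schematic card-table write barrier (illustration only, not HotSpot code).
public class CardTable {
    static final int CARD_SHIFT = 9;  // each card covers 512 bytes of heap
    private final byte[] cards;       // one dirty/clean byte per card

    public CardTable(int heapBytes) {
        cards = new byte[(heapBytes >> CARD_SHIFT) + 1];
    }

    // Conceptually invoked on every reference store into an old-gen object.
    public void writeBarrier(int objAddress) {
        cards[objAddress >> CARD_SHIFT] = 1;  // mark the covering card dirty
    }

    // A young collection scans only dirty cards for cross-generational refs.
    public boolean isDirty(int address) {
        return cards[address >> CARD_SHIFT] == 1;
    }
}
```

The design trade-off this sketches is precisely the "maintenance cost" described above: a tiny amount of work on every reference store buys a much cheaper young-generation collection.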
Language Features
Generics introduced two major changes: (1) a change in syntax and (2)
modifications to the core API. Generics allow you to reuse your code for
different data types, meaning you can write just a single class—there is no
need to rewrite it for different inputs.
To compile the generics-enriched Java 5.0 code, you would need to use the
Java compiler javac, which was packaged with the Java 5.0 JDK. (Any
version prior to Java 5.0 did not have the core API changes.) The new Java
compiler would produce errors if any type safety violations were detected at
compile time. Hence, generics introduced type safety into Java. Also,
generics eliminated the need for explicit casting, as casting became implicit.
Here’s an example of how to create a generic class named
FreshmenAdmissions in Java 5.0:
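A minimal sketch of such a class follows; the field and method names here are my own illustrative assumptions, not the book's original listing, but the two type parameters K and V match the surrounding description.

```java
// Hypothetical sketch of a generic class with two type parameters K and V.
public class FreshmenAdmissions<K, V> {
    private K applicantId;      // e.g., an application number
    private V applicantRecord;  // e.g., a score or a record object

    // Store an applicant; the compiler enforces the declared K and V types.
    public void admit(K id, V record) {
        this.applicantId = id;
        this.applicantRecord = record;
    }

    public K getApplicantId() { return applicantId; }

    // No explicit cast is needed at the call site: the return type is V.
    public V getRecord() { return applicantRecord; }
}
```

Declared as FreshmenAdmissions<String, Integer>, any attempt to admit a non-String id or a non-Integer record is rejected at compile time, and getRecord() returns an Integer without a cast.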
In this example, K and V are placeholders for the actual types of objects. The
class FreshmenAdmissions is a generic type. If we declare an instance of this
generic type without specifying the actual types for K and V, then it is treated as a raw type.