
Oracle® Reference Architecture

Application Infrastructure Foundation


Release 3.0
E14479-03

September 2010
ORA Application Infrastructure Foundation, Release 3.0

E14479-03

Copyright © 2009, 2010, Oracle and/or its affiliates. All rights reserved.

Primary Author: Anbu Krishnaswamy

Contributing Authors: Stephen Bennett, Dave Chappelle, Bob Hensle, Mark Wilkins, Jeff McDaniel, Cliff
Booth

Warranty Disclaimer

THIS DOCUMENT AND ALL INFORMATION PROVIDED HEREIN (THE "INFORMATION") IS
PROVIDED ON AN "AS IS" BASIS AND FOR GENERAL INFORMATION PURPOSES ONLY. ORACLE
EXPRESSLY DISCLAIMS ALL WARRANTIES OF ANY KIND, WHETHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. ORACLE MAKES NO WARRANTY THAT
THE INFORMATION IS ERROR-FREE, ACCURATE OR RELIABLE. ORACLE RESERVES THE RIGHT TO
MAKE CHANGES OR UPDATES AT ANY TIME WITHOUT NOTICE.

As individual requirements are dependent upon a number of factors and may vary significantly, you should
perform your own tests and evaluations when making technology infrastructure decisions. This document
is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle
Corporation or its affiliates. If you find any errors, please report them to us in writing.

Third Party Content, Products, and Services Disclaimer

This document may provide information on content, products, and services from third parties. Oracle is not
responsible for and expressly disclaims all warranties of any kind with respect to third-party content,
products, and services. Oracle will not be responsible for any loss, costs, or damages incurred due to your
access to or use of third-party content, products, or services.

Limitation of Liability

IN NO EVENT SHALL ORACLE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR
CONSEQUENTIAL DAMAGES, OR DAMAGES FOR LOSS OF PROFITS, REVENUE, DATA OR USE,
INCURRED BY YOU OR ANY THIRD PARTY, WHETHER IN AN ACTION IN CONTRACT OR TORT,
ARISING FROM YOUR ACCESS TO, OR USE OF, THIS DOCUMENT OR THE INFORMATION.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
Contents

Send Us Your Comments ....................................................................................................................... vii

Preface ................................................................................................................................................................. ix
Document Purpose...................................................................................................................................... ix
Audience....................................................................................................................................................... ix
Document Structure .................................................................................................................................... x
How to Use This Document....................................................................................................................... x
Related Documents ..................................................................................................................................... x
Conventions ................................................................................................................................................. xi

1 Introduction
1.1 RASP Defined .............................................................................................................................. 1-2
1.1.1 Reliability .............................................................................................................................. 1-2
1.1.1.1 Message Reliability....................................................................................................... 1-2
1.1.1.2 Transaction Reliability ................................................................................................. 1-3
1.1.2 Availability ........................................................................................................................... 1-3
1.1.3 Scalability .............................................................................................................................. 1-3
1.1.4 Performance.......................................................................................................................... 1-4
1.2 RASP and Foundation Infrastructure ...................................................................................... 1-4

2 Computing Foundation
2.1 Distributed Computing.............................................................................................................. 2-2
2.2 On-Demand Computing ............................................................................................................ 2-2
2.3 Utility Computing....................................................................................................................... 2-2
2.4 Grid Computing.......................................................................................................................... 2-2
2.5 Cloud Computing ....................................................................................................................... 2-3
2.6 Elastic Computing....................................................................................................................... 2-3
2.7 Virtualization............................................................................................................................... 2-3

3 Distributed Computing
3.1 Choosing the right architecture ................................................................................................ 3-3
3.2 The Fallacies of Distributed Computing ................................................................................. 3-4
3.3 Distributed Computing and Java Enterprise Edition (JEE) .................................................. 3-4
3.4 Web Services Standards ............................................................................................................. 3-5

3.5 Distributed Computing Principles ........................................................................................... 3-5

4 Grid Computing
4.1 Drivers for Grid Computing ..................................................................................................... 4-2
4.2 Grid Computing Capabilities.................................................................................................... 4-2
4.3 Grid computing and SOA.......................................................................................................... 4-3
4.4 Enterprise Grid ............................................................................................................................ 4-4
4.5 Application Grid ......................................................................................................................... 4-5
4.5.1 Drivers for Application Grid.............................................................................................. 4-6
4.5.2 Components of Application Grid...................................................................................... 4-6
4.5.3 Clustering.............................................................................................................................. 4-7
4.5.4 Architectural issues addressed .......................................................................................... 4-8
4.5.5 Data Grid............................................................................................................................... 4-8
4.5.5.1 Architectural issues addressed ................................................................................... 4-9
4.6 Database Grid ........................................................................................................................... 4-10
4.6.1 Service Grid Pattern ......................................................................................................... 4-12
4.7 Evolution of Grid Architecture .............................................................................................. 4-13
4.8 Grid Management .................................................................................................................... 4-15
4.9 Grid Computing Principles .................................................................................................... 4-16
4.10 Cloud Computing .................................................................................................................... 4-16

5 Virtualization
5.1 Server Virtualization .................................................................................................................. 5-2
5.1.1 Software Level Virtualization ............................................................................................ 5-3
5.1.2 Hardware Level ................................................................................................................... 5-3
5.1.3 Operating System Subsets .................................................................................................. 5-3
5.1.4 Paravirtualization ................................................................................................................ 5-3
5.2 Service Virtualization ................................................................................................................. 5-3
5.3 Virtual Machine (VM) Images or Templates .......................................................................... 5-4
5.4 Virtualization Principles ............................................................................................................ 5-4
5.5 Cloud Computing ....................................................................................................................... 5-4

6 Containers
6.1 Principles and best practices ..................................................................................................... 6-2

7 Data Caching Infrastructure


7.1 Data Caching Infrastructure...................................................................................................... 7-1
7.1.1 Caching concepts ................................................................................................................. 7-1
7.1.2 Caching Modes..................................................................................................................... 7-1
7.1.2.1 Read-Through Cache ................................................................................................... 7-2
7.1.2.2 Refresh-Ahead Cache................................................................................................... 7-2
7.1.2.3 Write-Through Cache .................................................................................... 7-3
7.1.2.4 Write-Behind Cache ..................................................................................................... 7-4
7.1.3 Cache Topologies................................................................................................................. 7-4
7.1.4 Caching Data Access Patterns............................................................................................ 7-4
7.1.4.1 Data Access Distribution ............................................................................................. 7-4

7.1.4.2 Cluster-Node Affinity .................................................................................................. 7-5
7.1.4.3 Read/Write Ratio and Data Sizes .............................................................................. 7-5
7.1.4.4 Interleaving Cache Reads and Writes........................................................................ 7-5

8 Product Mapping View


8.1 Grid Computing with Oracle Products ................................................................................... 8-2
8.2 Oracle Application Grid (OAG)................................................................................................ 8-3
8.3 Oracle WebLogic Server (OWLS) ............................................................................................. 8-4
8.4 Oracle JRockit Real Time (JRRT)............................................................................................... 8-4
8.4.1 Deterministic Garbage Collection (GC)............................................................................ 8-5
8.5 Oracle Coherence ........................................................................................................................ 8-6
8.5.1 Coherence and RASP .......................................................................................................... 8-6
8.5.1.1 Availability .................................................................................................................... 8-6
8.5.1.2 Reliability ....................................................................................................................... 8-7
8.5.1.3 Scalability....................................................................................................................... 8-7
8.5.1.4 Performance .................................................................................................................. 8-8
8.5.2 Data Grids Using Coherence.............................................................................................. 8-8
8.6 Oracle TimesTen ......................................................................................................................... 8-9
8.6.1 Oracle In-Memory Database Cache ............................................................................... 8-10
8.7 Oracle TimesTen and Coherence........................................................................................... 8-10
8.8 Oracle Exadata Storage Server ............................................................................................... 8-11
8.8.1 Building Storage Grids with Exadata ............................................................................ 8-11
8.9 Oracle Enterprise Manager (OEM) Grid Control................................................................ 8-11
8.10 Oracle TUXEDO and Service Architecture Leveraging Tuxedo (SALT) ......................... 8-12
8.11 Oracle VM ................................................................................................................................. 8-14
8.11.1 Oracle VM Templates....................................................................................................... 8-16
8.12 Oracle database and Oracle Real Application Clusters (RAC) ......................................... 8-16
8.12.1 Oracle Automatic Storage Management (ASM)........................................................... 8-16

9 Summary

A Further Reading
A.1 Related Documents.................................................................................................................... A-1
A.2 Other Resources and References.............................................................................................. A-1

List of Figures
1–1 ORA Application Infrastructure ............................................................................................... 1-1
2–1 Computing Model Relationships.............................................................................................. 2-1
3–1 Simple Client-Server Architecture............................................................................................ 3-2
3–2 Three-tier Architecture............................................................................................................... 3-2
3–3 N-Tier Architecture..................................................................................................................... 3-3
4–1 Enterprise Grid ............................................................................................................................ 4-5
4–2 Components of Application Grid ............................................................................................. 4-6
4–3 Data Grid...................................................................................................................................... 4-9
4–4 Database and Storage Grids ................................................................................................... 4-11
4–5 Service Grid Pattern................................................................................................................. 4-13
4–6 Evolution of Grid Computing ................................................................................................ 4-14
5–1 Virtualization Approaches ........................................................................................................ 5-2
7–1 Read-Through Cache.................................................................................................................. 7-2
7–2 Refresh-Ahead Cache ................................................................................................................. 7-3
7–3 Write-Through Cache................................................................................................................. 7-3
7–4 Write-Behind Cache.................................................................................................................... 7-4
8–1 ORA Foundation Infrastructure Oracle Technology Mapping............................................ 8-1
8–2 Grid Computing with Oracle Products ................................................................................... 8-3
8–3 Oracle Application Grid Components ..................................................................................... 8-3
8–4 JRockit Architecture.................................................................................................................... 8-5
8–5 Data Grid Using Oracle Coherence .......................................................................................... 8-9
8–6 Oracle Coherence and Oracle TimesTen .............................................................................. 8-10
8–7 Oracle TUXEDO ....................................................................................................................... 8-12
8–8 SALT .......................................................................................................................................... 8-14
8–9 Oracle VM ................................................................................................................................. 8-15

Send Us Your Comments

ORA Application Infrastructure Foundation, Release 3.0


E14479-03

Oracle welcomes your comments and suggestions on the quality and usefulness of this
publication. Your input is an important part of the information used for revision.
■ Did you find any errors?
■ Is the information clearly presented?
■ Do you need more information? If so, where?
■ Are the examples correct? Do you need more examples?
■ What features did you like most about this document?

If you find any errors or have any other suggestions for improvement, please indicate
the title and part number of the documentation and the chapter, section, and page
number (if available). You can send comments to us at [email protected].

Preface

Underpinning Oracle Fusion solutions and infrastructure is a computing platform that
provides reliability, availability, scalability, and performance (RASP) qualities for
enterprise-class computing. Enterprise applications are expected to have high
performance, availability, scalability, and reliability characteristics, and SOA Services
are no exception to that rule. In fact, the highly distributed, heterogeneous world
of SOA introduces additional challenges to these objectives. In order to meet the
rigorous non-functional requirements of enterprise-class applications, SOA
Services require a very robust, flexible, and high-performance foundation on which
they are deployed.
Recent trends in technology also influence the way applications, SOA Services, and
business processes are deployed and consumed. Virtualization and Grid computing
technologies allow SOA Services to be scaled on demand and to exhibit the highest
possible performance. Cloud computing allows dynamic, low-cost deployment on
public and private Cloud infrastructures. Caching technologies allow the
implementation of high-performance distributed systems. Rules engines enable
business agility through the externalization of business policies.
The core set of Oracle Reference Architecture (ORA) documents and the Enterprise
Technology Strategy (ETS) documents provide an overview of the architecture and
infrastructure required for successful implementation of Oracle Fusion technology.
This document covers the architecture of the application infrastructure layer that
provides the RASP-related capabilities to the higher layers of ORA.

Document Purpose
This document describes the concepts and capabilities of the application infrastructure
and defines the platform on which solutions are built. The primary focus of this
document is the middleware stack (Oracle Fusion Middleware) but it also touches
upon a few relevant areas outside the middleware stack.

Audience
This document is primarily intended for Infrastructure Architects responsible for
building next-generation enterprise infrastructure. Enterprise Architects and Project
Architects who want to understand the best way to build applications, processes, and
SOA Services will gather valuable insight from a good understanding of the
capabilities of the ORA application infrastructure.

Document Structure
This document is organized into the following sections.
Chapter 1 - gives an introduction to ORA application infrastructure.
Chapter 2 - gives an overview of the computing foundation concepts.
Chapter 3 - discusses distributed computing concepts.
Chapter 4 - defines grid computing and provides an overview of the grid capabilities
and architectural concepts.
Chapter 5 - discusses the concepts of virtualization and how it plays a key role in the
foundation infrastructure.
Chapter 6 - briefly covers the role of containers in the application infrastructure.
Chapter 7 - gives a brief overview of the data management capabilities and how
caching plays an integral role in the foundation infrastructure.
Chapter 8 - gives an overview of the Oracle products that map to the application
infrastructure layers.
Chapter 9 - provides a summary of this document.
Appendix A - provides a list of documents and URLs for further reading.

How to Use This Document


This document should be read by everyone who is interested in learning about
architecting and building an enterprise-class infrastructure using Oracle Fusion
technology. It is one of the documents in the collection that comprise Oracle Reference
Architecture.
This document can be read from beginning to end or used as a reference. If specific
infrastructure components are not applicable to you at this point in time, you can skip
those parts, but be sure to read about the interdependencies of the technologies and
products to ensure that there are no holes in the architecture.

Related Documents
IT Strategies from Oracle (ITSO) is a series of documentation and supporting collateral
designed to enable organizations to develop an architecture-centric approach to
enterprise-class IT initiatives. ITSO presents successful technology strategies and
solution designs by defining universally adopted architecture concepts, principles,
guidelines, standards, and patterns.

ITSO is made up of three primary elements:
■ Oracle Reference Architecture (ORA) - defines a detailed and consistent
architecture for developing and integrating solutions based on Oracle
technologies. The reference architecture offers architecture principles and
guidance based on recommendations from technical experts across Oracle. It
covers a broad spectrum of concerns pertaining to technology architecture,
including middleware, database, hardware, processes, and services.
■ Enterprise Technology Strategies (ETS) - offer valuable guidance on the adoption
of horizontal technologies for the enterprise. They explain how to successfully
execute on a strategy by addressing concerns pertaining to architecture,
technology, engineering, strategy, and governance. An organization can use this
material to measure their maturity, develop their strategy, and achieve greater
levels of adoption and success. In addition, each ETS extends the Oracle Reference
Architecture by adding the unique capabilities and components provided by that
particular technology. It offers a horizontal technology-based perspective of ORA.
■ Enterprise Solution Designs (ESD) - are industry-specific solution perspectives
based on ORA. They define the high level business processes and functions, and
the software capabilities in an underlying technology infrastructure that are
required to build enterprise-wide industry solutions. ESDs also map the relevant
application and technology products against solutions to illustrate how
capabilities in Oracle’s complete integrated stack can best meet the business,
technical and quality of service requirements within a particular industry.
This document is one of a series of documents that comprise Oracle Reference
Architecture. Underpinning Oracle Fusion solutions and infrastructure is a computing
platform that provides reliability, availability, scalability, and performance (RASP)
qualities for enterprise-class computing. ORA Application Infrastructure describes
these concepts and capabilities and defines the platform on which solutions are built.
Please consult the ITSO web site for a complete listing of ORA documents as well as
other materials in the ITSO series.

Conventions
The following typeface conventions are used in this document:

Convention       Meaning
boldface text    Boldface type in text indicates a term defined in the text, the ORA
                 Master Glossary, or in both locations.
italic text      Italic type in text indicates the name of a document or external
                 reference.
underline text   Underlined text indicates a hypertext link.

In addition, the following conventions are used throughout the Oracle Fusion
Reference Architecture documentation:
“SOA Service” - In order to distinguish the “service” of Service-Oriented Architecture
from the wide variety of “services” within the industry, the term “SOA Service”
(although somewhat redundant) will be used throughout this document to make an
explicit distinction for services that were created as part of an SOA initiative; thus
distinguishing SOA Services from other types of services such as Web Services, the
Java Message Service, telephone service, etc.

1 Introduction

Today's IT infrastructures are facing great demands in terms of reliability, availability,
scalability, and performance (RASP). Business needs for a more agile, flexible, and
responsive infrastructure have driven IT organizations to look at on-demand
computing infrastructure more seriously than ever.
Hardware has become a commodity yet has grown increasingly powerful and capable
of meeting these challenges. Hardware, storage, and memory are getting cheaper and
faster, and network speeds continue to increase. To manage these large, scalable
environments, enterprise management requirements are also becoming more complex.
Architects constantly need to consider how to meet the service-level, performance, and
security requirements of the business. Transaction volumes are increasing every second
of the day, making a case for Extreme Transaction Processing (XTP) systems. At the
same time, companies are cutting costs and IT budgets are shrinking.
In order to turn challenges into opportunities for future-proofing environments,
enterprises are rethinking their SOA Service and application infrastructures to support
the RASP requirements of the new era.

Figure 1–1 ORA Application Infrastructure

This infrastructure for SOA Services and business solutions requires a reliable, highly
available, scalable and high performance foundation that supports the stringent
non-functional requirements and core capabilities of the system. As shown in
Figure 1–1, the foundation infrastructure covers the following layers:

■ Platform:
■ Virtualization: Virtualization is a technique for hiding the physical
characteristics of computing resources from the way in which other systems,
applications, or end users interact with those resources. It creates a layer of
abstraction that allows the underlying resources to be managed independent
of the applications that run on them.
■ Containers: Containers provide a runtime platform for SOA Services and
applications. They provide common services such as transaction support,
management, security, and network communications that can be leveraged by
applications. Application servers (e.g. Oracle WebLogic Server), OLTP
monitors (e.g. Oracle TUXEDO), and Inversion of Control (IoC) containers
(e.g. Spring) are some examples in this category.
■ Computing Foundation:
■ Distributed Computing: Distributed computing allows multiple, autonomous
computers to work in concert to solve a problem or provide a business
solution. Distributed computing is used in the vast majority of modern
enterprise business solutions.
■ Grid Computing: Grid computing provides the ability to pool and share
physical resources. It is a form of distributed computing that allows software
to run on multiple physical machines in order to achieve availability and
scalability requirements. Load can be shifted from one physical machine to
another in order to recover from a failure or to respond to changes in demand.
■ Caching: Caching increases performance by keeping data in memory rather
than requiring the data to be retrieved from external storage. Distributed
caching uses the memory of multiple physical machines to store data while
keeping all instances of the data consistent regardless of physical location.
These models are discussed further in Chapter 7.

1.1 RASP Defined


The ORA application infrastructure provides support for the RASP qualities, namely,
reliability, availability, scalability, and performance. These qualities are defined below:

1.1.1 Reliability
Reliability is the ability of a system or component to perform its required functions
under stated conditions for a specified period of time. Another way to think about
reliability is that it is the percentage of time that an application is able to operate
correctly. For instance, an application may be available, yet unreliable if it cannot fully
provide the capabilities required of it. An example to illustrate high availability but
low reliability is a mobile phone network: While most mobile phone networks have
very high uptimes (referring to availability), dropped calls tend to be relatively
common (referring to reliability).
Several aspects of reliability are important within ORA, particularly message
reliability and transaction reliability.

1.1.1.1 Message Reliability


Services are often made available over a network with possibly unreliable
communication channels. Although techniques for ensuring the reliable delivery of
messages are reasonably well understood and available in some messaging
middleware products today, message reliability is still a problem.
Message reliability is addressed in various ways. The Java Message Service (JMS) offers
ways to transmit messages reliably through message queuing and Store and Forward
(SAF).
SOA addresses message reliability through standards such as WS-ReliableMessaging
and the less widely adopted WS-Reliability (from the OASIS consortium). These
specifications define protocols that enable SOA Services to exchange messages
reliably and interoperably, with specified delivery assurances. They define four basic
delivery assurances:
■ In-order delivery - The messages are delivered in the same order in which they
were sent.
■ At-least-once delivery - Each message that is sent is delivered at least one time.
■ At-most-once delivery - No duplicate messages are delivered.
■ Exactly-once delivery - Each message is delivered exactly once; this combines
at-least-once and at-most-once delivery.
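These assurances can be illustrated in plain Java (a sketch only, not tied to WS-ReliableMessaging or any product): a channel that redelivers messages gives at-least-once delivery, and a receiver that discards duplicates by message ID adds at-most-once, together yielding exactly-once processing. The message IDs and payloads below are made up for the example:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: a channel that may redeliver (at-least-once) plus a
// receiver that drops duplicates by message ID (at-most-once), which
// together give exactly-once processing.
public class DeliveryAssurances {
    static class Receiver {
        private final Set<String> seen = new HashSet<>();  // processed message IDs
        final List<String> processed = new ArrayList<>();

        void receive(String messageId, String payload) {
            if (!seen.add(messageId)) {
                return;                // duplicate: already processed, drop it
            }
            processed.add(payload);
        }
    }

    public static void main(String[] args) {
        Receiver receiver = new Receiver();
        // The unreliable channel delivers msg-1 twice (a retry after a lost ack).
        receiver.receive("msg-1", "order created");
        receiver.receive("msg-1", "order created");   // duplicate is discarded
        receiver.receive("msg-2", "order shipped");
        System.out.println(receiver.processed);       // [order created, order shipped]
    }
}
```

Real implementations track the ID window durably so deduplication survives a restart.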

1.1.1.2 Transaction Reliability


Transaction reliability means the transaction or request operates correctly and either
does not fail or promptly reports any failures to the caller. Exceptions should be
handled gracefully and appropriate status should be conveyed back to the caller.
Transactions should not leave the system in an inconsistent state. The use of
infrastructure that provides atomic transactions helps improve reliability by ensuring
that partial data updates are rolled back when processing exceptions occur.
In a data context, transaction reliability usually means ACID (Atomicity, Consistency,
Isolation, Durability) transactions, which ensure the system performs “all or none”
operations (rollback and recovery), preserving data integrity in case of failures.
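The “all or none” behavior can be sketched in a few lines (an illustration of the pattern only; in practice a transaction manager or the database provides this): a snapshot taken at the start of the operation is restored when any step fails.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of atomic "all or none" behavior: if any step of the transfer
// fails, the snapshot taken at the start is restored (rollback) and the
// failure is reported to the caller.
public class AtomicTransfer {
    private final Map<String, Integer> balances = new HashMap<>();

    public AtomicTransfer() {
        balances.put("A", 100);
        balances.put("B", 0);
    }

    public int balance(String account) { return balances.get(account); }

    public void transfer(String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(balances); // begin
        try {
            balances.put(from, balances.get(from) - amount);
            if (balances.get(from) < 0) {
                throw new IllegalStateException("insufficient funds");
            }
            balances.put(to, balances.get(to) + amount);         // commit path
        } catch (RuntimeException e) {
            balances.clear();
            balances.putAll(snapshot);                           // rollback
            throw e;                                             // report to caller
        }
    }

    public static void main(String[] args) {
        AtomicTransfer t = new AtomicTransfer();
        t.transfer("A", "B", 40);            // succeeds: A=60, B=40
        try {
            t.transfer("A", "B", 500);       // fails mid-way: rolled back
        } catch (IllegalStateException expected) { }
        System.out.println(t.balance("A") + " " + t.balance("B"));  // 60 40
    }
}
```

The failed transfer leaves no partial update behind: the debit applied before the exception is undone by the rollback.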

1.1.2 Availability
Availability is the degree to which a system or component is operational and accessible
when required for use. Availability is generally expressed in terms of the percentage
up time using 9s. For example, five 9s availability means 99.999% up time, which
translates to no more than 5.26 minutes of unplanned downtime per year.
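The downtime figures behind the “nines” follow from simple arithmetic: the minutes in a year multiplied by the unavailable fraction.

```java
// Unplanned downtime per year implied by an availability percentage.
public class Nines {
    static double downtimeMinutesPerYear(double availabilityPercent) {
        double minutesPerYear = 365 * 24 * 60;              // 525,600 minutes
        return minutesPerYear * (1 - availabilityPercent / 100.0);
    }

    public static void main(String[] args) {
        System.out.printf("99.999%% -> %.2f min/year%n", downtimeMinutesPerYear(99.999));
        System.out.printf("99.9%%   -> %.1f min/year%n", downtimeMinutesPerYear(99.9));
    }
}
```

Five 9s yields roughly 5.26 minutes per year; three 9s already allows over 500 minutes.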
In most cases, mission critical and business critical applications are deployed on the
Oracle Fusion stack. This makes availability one of the highest priorities of ORA. The
ORA foundation must ensure that the platform can support the stringent availability
requirements of the solutions and SOA Services that run on top of it. In particular,
SOA is built around the principles of sharing and reuse. This means that the SOA
Services should support consumers with different consumption patterns and service
levels. The availability of the SOA Services is determined based on the aggregate
requirements of the consumer community, and it puts higher expectations on the
availability of the SOA Service itself. The infrastructure must be able to guarantee the
high availability requirements of the SOA Services.

1.1.3 Scalability
Scalability is the ability to seamlessly grow a system to meet increasing capacity
requirements by adding resources, without modifying the application. Today's
businesses demand a high degree of flexibility and agility, so the architecture must be
dynamic enough to keep pace with business change. Most enterprises are also
concerned about capital expense, which has led to the trend toward on-demand
capacity. This drives the need to package and manage resources efficiently and
allocate them dynamically to achieve the scalability requirements.
Linear scalability is the goal of a scalable architecture, but it is difficult to achieve. The
measurement of how well an application scales is called the scaling factor (SF). A
scaling factor of 1.0 represents linear scalability, while a scaling factor of 0.0 represents
no scalability. When planning for extreme scale, the first thing to understand is that
application scalability is limited by any necessary shared resource that does not exhibit
linear scalability.
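Treating the scaling factor as measured speedup divided by ideal (linear) speedup, it can be computed from throughput measurements; the figures below are hypothetical.

```java
// Scaling factor = measured speedup divided by ideal (linear) speedup.
// SF = 1.0 means doubling the nodes doubles throughput; SF = 0.0 means
// extra nodes add nothing.
public class ScalingFactor {
    static double scalingFactor(double throughput1Node, double throughputNNodes, int n) {
        double actualSpeedup = throughputNNodes / throughput1Node;
        return actualSpeedup / n;       // ideal speedup is n
    }

    public static void main(String[] args) {
        // Hypothetical measurements: 1 node does 1,000 tps, 4 nodes do 3,200 tps.
        System.out.println(scalingFactor(1000, 3200, 4));  // 0.8
    }
}
```

A shared resource that saturates (a single database, a lock) is what drags this number below 1.0 as nodes are added.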
Vertical Scalability: Vertical scalability means adding resources to a single node in a
system, typically involving the addition of CPUs or memory to a single computer. It is
also referred to as scaling up.
Horizontal Scalability: Horizontal scalability means adding more nodes to an
operating environment, such as adding a new computer. It is also referred to as scaling
out. Grid infrastructures enable systems to scale out linearly and respond rapidly to
changes in load.

1.1.4 Performance
Performance refers to the responsiveness and throughput of the system. Performance
should be measured in the context of the Service Level Agreements (SLA). The
architecture should ensure that the performance requirements stated in the SLA are
met and there is sufficient room for future growth and contingency. Performance is
generally expressed in terms of the response time, latency, or transactions per second.
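These metrics can be derived from raw request timings; the sketch below uses a simple nearest-rank percentile, one of several common conventions.

```java
import java.util.Arrays;

// Derive common performance metrics from raw per-request latencies.
public class PerfMetrics {
    static double throughputTps(int requests, double elapsedSeconds) {
        return requests / elapsedSeconds;
    }

    // Nearest-rank percentile over a sorted copy of the samples.
    static long percentile(long[] latenciesMs, double p) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[rank - 1];
    }

    public static void main(String[] args) {
        long[] latencies = {12, 15, 11, 240, 14, 13, 16, 12, 15, 14};  // ms
        System.out.println(throughputTps(latencies.length, 2.0));  // 5.0 tps
        System.out.println(percentile(latencies, 95));             // 240
    }
}
```

Note how the single 240 ms outlier dominates the high percentile even though the median stays near 14 ms, which is why SLAs are usually stated in percentiles rather than averages.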
Performance is an important consideration in an Oracle Fusion environment. The
design and architecture should optimize performance to address the challenges at
hand. Each area in ORA has its own challenges and requirements for high
performance. For example, the key challenges for SOA performance in an Oracle
Fusion environment are:
■ SOA is a distributed architecture. The SOA Service components and the
consumers are highly distributed and communicate over the network, which
introduces network latency and other aggregation delays.
■ SOA is built on the principles of loose coupling and location transparency. This
introduces additional layers of abstraction in discovering and consuming SOA
Services.
■ Oftentimes SOA involves intermediaries for mediation, monitoring, and
management. Each intermediary adds an additional performance cost.
■ SOA requires support for heterogeneous platforms. This drives the need for
standardized message formats and protocols that might not be optimal from a
performance standpoint.

1.2 RASP and Foundation Infrastructure


The foundation infrastructure should support and satisfy the RASP requirements of
the enterprise. The table below summarizes various aspects of the RASP qualities.

Reliability
■ Characteristics: Trust, transactional integrity, quality, no single point of failure (SPOF)
■ Realization Approaches: Redundancy, intermediate store and forward (SAF), reliable transports, exception handling, WS-ReliableMessaging, WS-Reliability, JMS
■ Product Mapping: OWLS, OSB, Oracle Advanced Queue, Oracle Coherence, Oracle TimesTen, Oracle Database
■ Considerations: Delivery mode, performance, throughput, transactions, security, standards compliance

Availability
■ Characteristics: Transparent failover, business continuity, minimized downtime, disaster recovery
■ Realization Approaches: Rolling upgrades, HA hardware, HA software, HA design, redundancy, failover infrastructure
■ Product Mapping: OWLS, Oracle Database, Oracle RAC, Oracle Coherence, Oracle TimesTen
■ Considerations: Integrity, consistency, Maximum Availability Architecture

Scalability
■ Characteristics: Low latency, high throughput, capacity on demand, static or dynamic resource growth
■ Realization Approaches: On-demand infrastructure, grid architecture, data grid, hardware upgrades, resource replication, high-speed networks, virtualization
■ Product Mapping: OWLS, Oracle Coherence, Oracle Database, Oracle RAC, Oracle TUXEDO, Oracle VM, Oracle TimesTen
■ Considerations: Multi-tenancy, performance, flexibility

Performance
■ Characteristics: Low latency, fast and predictable response times, fault tolerance, transaction integrity
■ Realization Approaches: Caching, data brokering, high-performance hardware, low-latency OS, JVMs, databases, and TPMs, reliable networks, reliable protocols
■ Product Mapping: Oracle Coherence, Oracle WebLogic Application Grid, JRockit Real Time, Oracle Database, Oracle TimesTen
■ Considerations: Bandwidth, responsiveness, extreme transaction processing

■ Characteristics summarize the key traits of the particular quality. These
characteristics drive the need for that quality attribute.
■ Realization approaches are the ways through which the quality can be realized or
implemented. This document covers most of these technologies and associated
products.
■ Product mapping lists the Oracle products that play a role in achieving the given
quality attribute.
■ Considerations provide the key considerations in selecting the architecture and
technologies to realize the architectural quality attribute.



2 Computing Foundation

Over the past several decades, the field of computing has evolved quite rapidly. The
evolution has seen many paradigms including monolithic architecture, client/server
computing, object oriented computing, web computing, and distributed computing.
Businesses have started demanding a lot more flexibility, performance, and
effectiveness. The role of IT organizations has shifted from business support to
strategic partnership. Businesses required dynamic and cost effective business models
to expand and grow. IT organizations needed to get creative to keep up with the
demands of the business. This trend spawned a series of on-demand and utility
computing models. This means that architects have a lot more choices on the
computing foundation they want to implement. However, a deeper understanding of
these choices is required in order to run the tradeoff analysis and pick the most
appropriate model.
Figure 2–1 shows the various computing models and their relationships to each other.
Each of these models is described briefly below. Some of these models do not have
industry standard definitions and there is a lot of confusion around what these terms
mean and how they relate to each other. The objective of this diagram is to clarify the
definitions and roles of these models.

Figure 2–1 Computing Model Relationships


2.1 Distributed Computing


Distributed computing provides a scalable runtime platform capable of handling
many concurrent users by allowing related components to be spread out but at the
same time enabling them to work in unison. It allows applications to be broken down
into smaller, modular components and be deployed across a distributed infrastructure
that leverages the power and flexibility of networked servers. Layered architecture
enables separation of concerns by defining individual logical layers that can be
deployed independently, taking advantage of the distributed infrastructure.
Distributed architectures allow selective scalability of the layers that require more
capacity to handle the load. This allows efficient use of the hardware and software
resources and optimization of performance by fine-tuning the appropriate layer or
component. In contrast to some of the other models, distributed computing generally
is a CAPEX model where the distributed infrastructure is built in-house for
applications to be deployed.

2.2 On-Demand Computing


On-demand computing is a model that allows capacity or resources to be allocated
dynamically based on need. The idea behind on-demand computing is to drive
efficiency within organizations by allowing them to adopt a pay-as-you-go model. It
is an OPEX model that allows businesses to spend on an as-needed basis.
Several of today's architectures such as Grid computing and Cloud computing are
classified as on-demand computing.

2.3 Utility Computing


Utility computing is an on-demand approach that combines outsourced computing
resources and infrastructure management with a usage-based payment structure. It
covers the packaging of computing resources, such as computation and storage, as a
metered service similar to a physical public utility. Utility computing has the
advantage of a low or no initial cost to acquire hardware as computational resources
are essentially rented.
The focus of utility computing is the business model on which the provision of
computing services is based. The main benefit of utility computing is better
economics. Corporate data centers are typically underutilized. Utility computing
allows companies to only pay for the computing resources they need, when they need
them. Utility computing is very similar to public cloud computing, except perhaps it
doesn't necessarily imply self-service, elastic capacity, or multi-tenancy. It does imply
pay-per-use.

2.4 Grid Computing


Grid computing is a technology architecture that virtualizes and pools IT resources,
such as compute power, storage, and network capacity, into a set of shared services
that can be distributed and re-distributed as needed. Grid computing involves server
virtualization, clustering, and dynamic provisioning.
With Grid computing, groups of independent, modular hardware and software
components can be pooled and provisioned on demand to meet the changing needs of
businesses. Grid computing is really a form of distributed computing and it aims to
deliver flexible and dynamic infrastructures using tiered optimization. It uses
virtualization at various levels of the middleware and database layer to achieve it.

Grid computing distribution is at a more fine-grained level compared to Cloud
computing. Software and applications are replicated at the container (JVM) level in the
corresponding tier. It is interesting to note that Grid computing can also be thought of
as a type of on-demand computing since most Grid computing architectures are
designed to adjust the capacity on demand.

2.5 Cloud Computing


Wikipedia defines Cloud computing as "a style of computing in which dynamically scalable
and often virtualized resources are provided as a service over the Internet. Users need not have
knowledge of, expertise in, or control over the technology infrastructure in the cloud that
supports them."
Cloud computing is often characterized by the following aspects.
■ Virtualized computing resources
■ Seemingly limitless capacity/scalability
■ Dynamic provisioning
■ Multi-tenancy
■ Self-service
■ Pay-for-use pricing
Cloud computing can further be classified into the following three categories based on
the deployment model.
■ Public Cloud
■ Private Cloud
■ Hybrid Cloud
Cloud computing allows the delivery of resources and services using the following
models.
■ Software as a Service
■ Infrastructure as a Service
■ Platform as a Service
Cloud computing is covered in detail in the Cloud computing Enterprise Technology
Strategy.

2.6 Elastic Computing


Elastic computing is another term that generally refers to Cloud computing. The
term was coined from the notion that resources can be dynamically added or
removed, giving the infrastructure capacity an "elastic" characteristic.

2.7 Virtualization
Virtualization is a technique for hiding the physical characteristics of computing
resources from the way in which other systems, applications, or end users interact
with those resources. Virtualization benefits both Grid computing and Cloud
computing. Virtualization can happen at different levels, server, hardware,
middleware, database, container, or services. Grid computing generally uses
middleware and database virtualization, while cloud computing uses server

Computing Foundation 2-3


Virtualization

virtualization. Virtualization plays a key role in reducing complexity and


consolidating servers. It also helps organizations achieve platform uniformity by
creating an abstraction layer on top of the underlying heterogeneous platforms.
Chapter 5 covers Virtualization in detail.

3 Distributed Computing

In distributed computing, a program is split into parts that run simultaneously on
multiple computers communicating over a network. Distributed computing is a form
of parallel computing, but parallel computing most commonly describes program
parts running simultaneously on multiple processors in the same computer.
Both types of processing require dividing a program into parts that can run
simultaneously, but distributed programs often must deal with heterogeneous
environments, network links of varying latencies, and unpredictable failures in the
network or the computers.
Various hardware and software architectures are used for distributed computing. At a
lower level, it is necessary to interconnect multiple CPUs with some sort of network,
regardless of whether that network is printed onto a circuit board or made up of
loosely-coupled devices and cables. At a higher level, it is necessary to interconnect
processes running on those CPUs with some sort of communication system.
Distributed programming typically falls into one of several basic architectures or
categories such as Client-server, three-tier architecture, and N-tier architecture.
In client-server computing, clients connect to the server for data, then format and
visualize that data for the user. The client may handle business logic in addition to the
presentation logic. Figure 3–1 shows a simple client-server architecture.



Figure 3–1 Simple Client-Server Architecture

In the three-tier architecture, business logic is handled in the middle tier, presentation
rendering is handled on the client, and data management is handled in the backend, as
shown in Figure 3–2. This architecture allows multiple clients to access centrally
deployed business logic components, enabling centralized distribution and
management of resources.

Figure 3–2 Three-tier Architecture


N-tier architecture refers to a multi-tier architecture that may involve several layers.
Web architectures are typically N-tier, with a client tier, web tier, application tier, and
data tier. N-tier architectures enable tiered optimization by allowing individual tiers to
be tuned and scaled. A sample N-tier architecture is shown in Figure 3–3.

Figure 3–3 N-Tier Architecture

3.1 Choosing the right architecture


The choice of which topology to use depends on a number of factors. This tradeoff
should consider aspects like flexibility, performance, management, and security.
A simple client-server architecture is suitable for smaller applications that have a
limited number of users. It has fewer moving parts, which may help enhance
application performance. The security model is also much simpler in this case.
However, it introduces challenges with respect to distribution, scalability, and
synchronization. It also requires more powerful client machines, as some of the logic
is processed on the client side.
Three-tier architecture allows the data tier and middle tier to scale independently. It
also allows multiple clients to share the business logic running in the middle tier. This
makes distribution of the application a lot easier. Since security, transaction
management, and connection management are handled in the middle tier, it gives
better control of the resources. Three-tier architecture is more scalable than the simple
client-server model and requires less powerful client side machines. Due to these
characteristics this architecture is suitable for small to medium enterprise
deployments.
More complex, mission critical, and business critical applications benefit from N-tier
architecture. This configuration gives greater flexibility and higher scalability than the
other configurations, allowing selective scaling of the specific layers that require more
compute power. The hardware and software can be customized to suit the specific
needs of each layer to achieve optimal performance. Designed right, this architecture
can overcome the performance overhead introduced by additional network hops by
maximizing throughput through optimal use of resources. Web based applications
and those with demanding end-user requirements benefit from this configuration.


3.2 The Fallacies of Distributed Computing


The fallacies of distributed computing are a set of common but flawed assumptions
made by architects and programmers when first developing distributed applications.
The fallacies are summarized as follows:
■ The network is reliable
■ The network is not always reliable. Network outages and performance issues
should be taken into account when designing distributed systems.
■ Latency is zero
■ Network speeds have improved by an order of magnitude in recent years, yet
network latency cannot be ignored. In tiered architectures, there is a cost
involved in distributing "too much".
■ Bandwidth is infinite
■ Network bandwidths have also improved significantly over the years. At the
same time, applications have become more verbose than ever, requiring more
bandwidth. Given the popularity of XML and its verbosity, bandwidth cannot
be assumed to be infinite.
■ The network is secure
■ The network is not secure unless appropriate security measures are taken.
Transport-level security, message-level security, encryption, and digital
signatures are some ways of enhancing security.
■ Topology doesn't change
■ Infrastructure and applications must be built for change. Requirements and
technology change all the time, resulting in changes to the architecture and
topology.
■ There is one administrator
■ There may be several administrators responsible for monitoring, managing,
troubleshooting, and upgrading the system. Administrators specialize in
specific parts of the system (e.g., OS, application servers, web servers, BPM)
and may not have a full view of the system, so applications should be
designed for easy management and troubleshooting.
■ The network is homogeneous
■ The network cannot be assumed to be homogeneous. An enterprise may have
various types of networks, routers, and protocols with different speeds and
characteristics. So proprietary protocols and network technologies should be
avoided in favor of standard technologies.

3.3 Distributed Computing and Java Enterprise Edition (JEE)


Java Platform, Enterprise Edition or Java EE (JEE) is a widely used platform for server
programming in the Java programming language. It defines standards which provide
functionality to deploy fault-tolerant, distributed, multi-tier Java software, based
largely on modular components running on an application server. The JEE platform
simplifies enterprise applications by basing them on standardized, modular
components, by providing a complete set of services to those components, and by
handling many details of application behavior automatically, without complex
programming.


JEE provides a Java-based distributed computing platform. The Java/JEE standards
like EJB, Servlet/JSP, JDBC, JMS, RMI and the associated design patterns allow
scalable distributed object platforms to be deployed as a sophisticated foundation for
SOA.
Today's enterprises gain competitive advantage by quickly developing and deploying
custom applications that provide unique business services. Whether they are internal
applications for employee productivity or internet applications for specialized
customer or vendor services, quick development and deployment are key to success.
Portability and scalability are also important for long term viability. Enterprise
applications must scale from small working prototypes and test cases to complete 24 x
7, enterprise-wide services, accessible by tens, hundreds, or even thousands of clients
simultaneously.
However, multitier applications are hard to architect. They require bringing together a
variety of skill sets and resources, legacy data and legacy code. In today's
heterogeneous environment, enterprise applications have to integrate services from a
variety of vendors with a diverse set of application models and other standards.
As a single standard that can sit on top of a wide range of existing enterprise systems,
database management systems, transaction monitors, naming and directory services,
and more, the JEE platform breaks the barriers inherent between current enterprise
systems. The unified JEE standard wraps and embraces existing resources required by
multitier applications with a unified, component-based application model. This
enables the next generation of components, tools, systems, and applications for solving
the strategic requirements of the enterprise.
The JEE specification also supports emerging Web Services technologies through
inclusion of the WS-I Basic Profile. WS-I Basic Profile compliance means that the
developers can build applications on the JEE platform as Web services that
interoperate with Web services from non-JEE compliant environments.

3.4 Web Services Standards


Openness is the property of distributed systems such that each subsystem is
continually open to interaction with other systems. Web services protocols are
standards which enable distributed systems to be extended and scaled. In general, an
open system that scales has an advantage over a perfectly closed and self-contained
system.
Web Services are a natural fit for building distributed computing platforms.
■ WSDL, SOAP, and XML promote language and platform independence and
interoperability.
■ UDDI allows services deployed in a distributed infrastructure to be discovered
and consumed.
■ Simple Object Access Protocol (SOAP) enables service binding and invocation
using standards-based protocols such as HTTP and JMS.
■ The platform for SOA should support the WS-* standards described in the ORA
SOA Foundation document.

3.5 Distributed Computing Principles


■ The distributed computing platform must be based on common industry
standards supported by multiple vendors. It must offer standards-based security,
integration, and distributed transaction management.

■ Distributed architecture should promote sharing of resources to maximize
business value.
■ Distributed computing must be chosen as the architecture of choice when the
scalability and availability requirements can be best met by further layering the
system.

4 Grid Computing

Enterprise grid computing is an emerging information technology (IT) architecture
that delivers more flexible, resilient, and lower cost enterprise information systems.
With grid computing, groups of independent, modular hardware and software
components can be pooled and provisioned on demand to meet the changing needs of
businesses.
The accelerating adoption of grid technology is in direct response to the challenges
that IT organizations face with today's rapidly changing and unpredictable business
cycles. IT departments are under pressure to increase operational agility, to establish
and meet IT service levels, and to control costs.
Using enterprise grid computing technology, IT departments can adapt to rapid
changes in the business environment while meeting higher service levels. Enterprise
grid computing has also revolutionized IT economics by both extending the life of
existing systems and exploiting rapid advances in processing power, storage capacity,
network bandwidth, as well as energy and space efficiency.
There are no standard definitions for Grid computing in the industry, but in general
Grid computing refers to the aggregation of multiple, distributed computing
resources, making them function as a single computing resource with respect to a
particular computational task. Grid is a form of virtualization in the sense that it hides
the details of resources and creates a layer that is suitable for use. It is a technology
architecture that virtualizes and pools IT resources, such as compute power, storage,
and network capacity into a set of shared resources that can be provisioned as needed.
Grid computing primarily involves the following three concepts:
■ Server virtualization
■ Clustering
■ Dynamic provisioning
The following list summarizes the key characteristics of Grid infrastructures.
■ Grid integrates and coordinates resources and users that live within different
domains.
■ Grid is built from multi-purpose protocols and interfaces that address such
fundamental issues as authentication, authorization, resource discovery, and
resource access. It is important that these protocols and interfaces be standard and
open.
■ Grid allows its constituent resources to be used in a coordinated fashion to deliver
various qualities of service, relating, for example, to response time, throughput,
availability, and security, and/or co-allocation of multiple resource types to meet
complex user demands, so that the utility of the combined system is significantly
greater than that of the sum of its parts.

It is important to understand the difference between Distributed computing and Grid
computing. Distributed computing is primarily about application design, whereas Grid
computing is about infrastructure design. Distributed design of applications allows
components to be “distributed” over the network for simplicity and manageability of
development and operations. In contrast, Grid computing enables an infrastructure
design that would help “replicate” applications and SOA Services for achieving
greater scalability and agility.

4.1 Drivers for Grid Computing


Today's businesses rely on IT for innovation and competitive advantage. As a result,
IT has become increasingly more complex. The industry has evolved from having a
few mainframes to having thousands of servers and desktops distributed throughout
the enterprise. This change has led to architectures that drive better efficiency. The
evolution of grid computing has had many drivers that include the following.
■ Better Agility and Flexibility: Businesses experience constant change and the
underlying IT infrastructure should be agile enough to support that kind of
change. SOA and BPM are great enablers of agility. The supporting hardware,
storage, and middleware infrastructure should assist SOA and BPM to achieve this
goal by providing an equally agile foundation infrastructure.
■ Lower Operational Cost: Businesses are always under substantial cost pressure,
with rising costs and shrinking margins. International competition is becoming
the norm, and international competitors often have greater cost and labor
advantages over the incumbents. This market trend is forcing organizations to
lower operational and IT costs.
■ Improved Server Utilization: Most enterprises have underutilized data centers, a
result of an emphasis on business continuity planning and serving fluctuating
loads. So there is a move towards improving server utilization. Grid computing
allows companies to lower cost through efficient use of resources.
■ Shared/Dynamic Infrastructure: Most companies have adopted centralized
shared IT infrastructure management for lowering cost. Shared infrastructure was
one of the primary drivers for Grid 1.0. Modern IT organizations demand dynamic
infrastructures that led to the evolution of Grid 2.0. Section 4.7 explains this in
more detail.

4.2 Grid Computing Capabilities


The key capabilities of Grid computing are listed below:
■ Provisioning on demand: Most applications today are tied to specific software
and hardware silos that limit their ability to adapt to changing workloads. This
can be a costly and inefficient use of IT resources because IT departments are
forced to over provision their hardware so that each application can handle the
peak or worst-case workload scenario. Grid computing enables the allocation and
de-allocation of IT resources in a dynamic, on-demand fashion, providing much
greater responsiveness to changing workloads on a global scale.
■ Proactive Monitoring and Management: Grid computing enables an organization
to tie its business requirements, through service level agreements, to its IT
architecture with demonstrable metrics and proactive monitoring and
management. This encourages a “shared service bureau” approach to IT with a
focus on measuring and meeting higher service levels and better alignment
between IT and business goals. In the end, high systems administration overhead,
costly integration projects and runaway budgets can be eliminated.

4-2 ORA Application Infrastructure Foundation


■ Centralized Management: Managing the Grid components centrally is an
essential capability for a successful grid deployment. With grid architecture,
infrastructure and application components are heavily distributed across the
enterprise, hence making management all the more important.
■ Centralized Monitoring: In order to analyze and troubleshoot, the
components of the grid infrastructure must be constantly monitored and the
intelligence gathered should be correlated and presented to improve decision
making.
■ High Availability: Grid architecture eliminates single points of failure and
provides powerful high-availability capabilities through the entire software stack,
protecting valuable information assets and ensuring business continuity.
■ Workload Management and Resource Provisioning: Grid computing practices
focus on operational efficiency and predictability. Easier grid workload
management and resource provisioning puts more power in the hands of IT staff,
enabling IT departments to maintain current staffing levels even as computing
demands continue to skyrocket. Because computing resources can be applied
incrementally when needed, customers enjoy much higher computing and storage
capacity utilization. They can also use a more cost-effective scale-out or “pay as
you grow” procurement strategy. Companies can avoid buying extra hardware or
additional software licenses before they are actually needed. They can also take
advantage of the price performance benefits that come with the rapid growth in
processing power and greater energy efficiency.
■ Clustering: Clustering is a way of achieving grid architecture at the application
and database layers. Clustering allows multiple instances of servers connected to
each other to run applications in a coordinated manner to make the most efficient
use of the resources available. Server instances may join or leave the cluster to
dynamically manage the workload.
■ Virtualization: An enterprise grid needs to provide virtualization capabilities at
multiple levels.
■ Server virtualization: Server virtualization adds grid capabilities at the
hardware and operating system level. Server virtualization allows hardware
resources to be pooled and dynamically allocated to meet the capacity
requirements. Server virtualization is discussed in detail in Chapter 5.
■ Service Virtualization: With SOA Service virtualization, consumers are
insulated from the service infrastructure details such as service endpoint
location, service inter-connectivity, policy enforcement, service versioning,
and dynamic service management information.
■ Application virtualization: Application virtualization allows applications to
be scaled and managed more efficiently. Clustering and load balancing
provide the flexibility to manage the applications in a grid environment.
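The provisioning-on-demand capability described above can be illustrated with a minimal sketch. The pool abstraction, node names, and application names below are illustrative assumptions, not part of any Oracle product API:

```python
class SharedPool:
    """Sketch of on-demand allocation from a shared pool: applications draw
    nodes when their workload grows and return them when it subsides, so the
    same hardware serves many applications over time."""

    def __init__(self, size):
        self.free = ["node%d" % i for i in range(size)]
        self.assigned = {}              # node -> application using it

    def allocate(self, app):
        """Dynamically allocate a node to an application."""
        if not self.free:
            raise RuntimeError("pool exhausted: add capacity or wait")
        node = self.free.pop()
        self.assigned[node] = app
        return node

    def release(self, node):
        """De-allocate: the node returns to the pool for another workload."""
        del self.assigned[node]
        self.free.append(node)
```

In a real grid, the manager would provision actual server or container instances; the essential behavior is that capacity released by one application becomes immediately available to another, avoiding per-application over-provisioning for peak load.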

4.3 Grid computing and SOA


Enterprise grid computing delivers maximum scalability through the ability to add
computing, storage, and network capacity on demand. The ability to “scale out” comes
from clustering standard hardware and software components and virtualizing them to
effectively create one large, virtual computer.
Like grid computing, Service-Oriented Architecture (SOA) applications often consume
services from widely varied sources, greatly reducing silos of disconnected
information and application logic. However, SOA applications can also introduce
more unpredictability in the computing workload as newer, more powerful SOA
Services are introduced. As these SOA Services become increasingly popular, more
and more programmers (and thus programs) will consume them. This may strain
these SOA Services beyond the initial intention of the provider. Increased adoption by
more and more applications can rapidly outpace the original scope of the SOA
Services unless appropriately planned for ahead of time. Creating and following a
Reference Architecture and applying sound service engineering principles are required
to ensure that this does not happen. The practitioner guide, Service Engineering - An Overview,
covers the details of an enterprise class service engineering framework.
This is why grid computing is ideally suited as the underlying software infrastructure
for SOA applications. New SOA Services introduced into a SOA environment require
dynamic allocation of computing power in order to perform and scale predictably.
With a grid computing infrastructure, these SOA Services can get access to a
virtualized pool of compute power and storage on an as needed basis. This can
provide significant cost savings due to reduced server hardware and software licenses
and improved application uptime.
A basic principle of SOA is the decoupling of applications and SOA Services. This
leads to increased changes to existing applications and SOA Services, as well as more
frequent creation of new ones. Supporting many more applications and SOA Services,
whose requirements change almost continuously, means more hardware, more
infrastructure software, and more administration. When added to IT's mandate to
meet stringent business service-level agreements, keep costs low, and implement
environmentally sustainable technologies, these additional support requirements can
stretch IT resources to the limit. To meet all these demands, IT needs to be able to
support rapid application changes, dynamically adjust resource allocation, and
increase the use of shared IT infrastructure. Traditional application architectures -
typically involving islands of large enterprise software, installed or provisioned on
fixed and dedicated hardware stacks - are not keeping up with the demands of this
new world.

4.4 Enterprise Grid


More and more organizations are looking at ways of composing an Enterprise
Application Grid. An Enterprise Grid is able to scale out on multiple levels and
guarantee service availability, while remaining easy to scale out and manage.

Figure 4–1 Enterprise Grid

Enterprise Grids have two primary grid layers: the Database Grid and the
Application Grid, which encompasses the Data Grid.
The Application Grid layer provides the following capabilities:
■ Application virtualization
■ Service virtualization
■ Provisioning on demand
The Data Grid layer, within the Application Grid, provides the following capabilities:
■ Data virtualization services
■ Parallel processing
■ Transaction integrity
The Database Grid layer provides the following capabilities:
■ Guaranteed Quality of Service
■ Scale-out persistence
■ Automatic storage management
These layers are discussed in detail below.

4.5 Application Grid


Application Grid applies the Grid concept to application servers and describes an
architecture in which multiple application server instances work together to provide a
shared, dynamically assignable pool of resources to a set of applications.
Application Grid is an emerging architectural approach for middleware infrastructure
that leverages existing technologies as well as new innovations. It makes infrastructure
more flexible and efficient. Because applications and services rarely hit peak demand
at the same time, pooling, sharing, and dynamically adjusting the allocation of
hardware and infrastructure software resources with an application grid enables IT
shops to be agile and efficient.

4.5.1 Drivers for Application Grid


Application Grid has several market and technical drivers as listed below.
Market Drivers:
■ The need for continuous availability of applications and SOA Services.
■ The need for the ability to grow infrastructure on demand.
■ Fast and agile delivery of new services for competitive advantage. The rate of
change on the business side is faster than the ability of IT to manage, deploy, and
re-engineer solutions.
Technical Drivers:
■ Increasing demands on data response performance, scalability, and transactional
integrity.
■ Extreme and predictable performance requirements of applications and SOA
Services.
■ The need for unlimited, linear scalability.
■ Growing high availability requirements.
■ Increased demand for data center efficiency and utilization.
■ The need for better SOA Service and application management and diagnostics.

4.5.2 Components of Application Grid


Application Grid is an approach, architecture, and set of practices related to
foundational middleware technologies such as application servers and transaction
processing monitors. The idea is to bring to middleware the grid computing techniques
of pooling resources and dynamically adjusting their allocation across demands.

Figure 4–2 Components of Application Grid

Figure 4–2 above shows the components of Application Grid. The primary
components are:
■ Grid Services and Applications: are the SOA Services and applications that are
deployed on the Grid.

■ Data Grid: is the layer that provides in-memory distributed data caching services.
This is an important part of the Application Grid as it provides the ability to
cluster and replicate server instances while maintaining performance. This layer
provides persistence, events, messaging and analytics capabilities to maintain
integrity and consistency of data. Data Grids are explained more in Section 4.5.5.
■ Containers: host the applications, SOA Services and data grid components.
Application servers like Oracle WebLogic, Spring, TUXEDO, and CORBA are
some examples of containers. These containers run on JVMs or natively.
■ Management: Grid control and application/service monitoring and management
are provided by the management components.
■ Development tools: In addition to the application development tools and IDEs,
a Grid will require additional configuration and management tools to provision and
manage the grid environments.

4.5.3 Clustering
The ability to distribute work across nodes in a cluster is the basic enabler for
containers to be assembled in a grid architecture and form the application grid. The
particular qualities of a container's clustering mechanism, such as how quickly and
dynamically nodes can be added to and removed from clusters and how automatically
such adjustments can be made, strongly determine how effective a particular
application grid architecture can be.
Clustering is one of the basic building blocks of grid computing and helps fulfill high
availability and fault tolerance requirements. Clustering is typically done at the
container level. The runtime “container” is the foundational element in middleware. In
the Java world the container is most commonly an application server such as Oracle
WebLogic Server; in the C/C++/COBOL world it is a transaction processing monitor
(TPM) such as Oracle Tuxedo.
There are two broad categories of functionality provided by a container: the functions
invoked explicitly by the contained code, and the functions carried out irrespective of
the contained code. The first category includes the application programming interfaces
(APIs) implemented by the container--in the case of Java application servers these are
the Java Enterprise Edition APIs, such as those supporting transactions, Web services, etc.
The second category includes management, availability, and scaling capabilities that
are independent of the contained code.
The first category, the APIs, is largely about developer productivity (developers
focus on higher-level business logic rather than constantly re-inventing/rewriting
low-level code), reliability (API functionality implemented in the container is mature
and well-tested), and ease of integration (interfaces are standardized). The second
category of container functions is where “enterprise-class” is manifested: this is where
a basic piece of code becomes manageable, scalable, highly available, and
high-performance.
The fundamental mechanism that containers implement to achieve manageable and
reliable performance at scale is clustering. Clustering is the ability to have multiple
instances of the container (“nodes”) be grouped together and contain identical copies
of code and/or data. The nodes may each run on a different physical server or in some
more complex configuration.
A cluster can be used most simply as a high availability mechanism in which all nodes
in the cluster are identical, and in the event that one node fails, its workload is picked
up and carried on by another node in the cluster.

A more sophisticated use of container clustering is for “scaling out”. In this case the
cluster has nodes containing identical copies of the application code but workload is
“load balanced” across the nodes. Each node will be serving different users or sessions
or subsets of transaction work and thus have different data or “state”.
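The two clustering uses just described, high availability and scale-out, can be sketched with a hypothetical dispatcher that load-balances requests across identical nodes and lets surviving nodes absorb a failed node's work. Node names and the failure model are illustrative assumptions:

```python
class Cluster:
    """Minimal sketch of container clustering: identical nodes behind a
    round-robin dispatcher; a failed node simply leaves the cluster and its
    workload is carried on by the remaining members."""

    def __init__(self, nodes):
        self.nodes = list(nodes)    # healthy members holding identical code
        self._next = 0

    def dispatch(self):
        """Load-balance the next request across surviving nodes."""
        if not self.nodes:
            raise RuntimeError("no surviving nodes in cluster")
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

    def fail(self, node):
        """On failure the node leaves the cluster; peers absorb its work."""
        self.nodes.remove(node)
```

A real container additionally replicates session state so the work picked up by a surviving node can continue where the failed node left off.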

4.5.4 Architectural issues addressed


Dynamic Scalability: With an Application Grid, the allocation of machines to
applications is dynamic since it becomes easier to bring both new machines and new
applications into service. With a stovepipe under an application, increasing capacity
typically means adding another application server/Operating System/machine stack
and then putting a mechanism in place to load-balance. This causes inefficiency
because you don't get linear scaling as doubling the number of servers doesn't double
the number of transactions per second or concurrent users. By contrast application
grid-enabled application servers support clustering that scales much better.
Improved hardware utilization: An Application Grid also helps improve hardware
efficiency because excess capacity can be redirected to applications that need it most.
By sharing and pooling resources, an application grid allows the total compute
resources required to be less than the sum of all the applications' peak demands. Since
few applications hit their peak loads at the same time in most environments, shared
resources can be moved from lower-demand applications to those with higher
demand. Continuous, automated, dynamic adjustment of resources is one of the
primary capabilities of the Application Grid architecture.
Improved Quality of Service: An Application Grid enables a higher quality of service,
with faster response times and higher reliability that come from its ability to
parallelize computation and replicate data across distributed nodes. Application
Grid architecture reduces interruptions from network problems or Java garbage
collection, allows more computation per unit of time, and improves resiliency by
eliminating single points of failure. An Application Grid also provides tools to
manage a collection of machines in an aggregated way, enabling faster administrative
response and reducing human error while automating failover.

4.5.5 Data Grid


An essential component of an Application Grid is data grid infrastructure. Because
more SOA Services and applications are now creating, reading, updating, and deleting
(CRUD) operational data, the ability to establish concurrency control, transactional
integrity, and response performance is more important and more challenging than
ever. Significantly more data is generated with CRUD operations, and the amount is
unpredictable. The ability to dynamically scale the enterprise data repository and to
ensure reliable availability of data SOA Services, even when the database reaches full
capacity or becomes unavailable, is critical.
A Data Grid is a system composed of multiple servers that work together to manage
information and related operations such as computations in a distributed
environment. An In-Memory Data Grid is a Data Grid that stores the information in
memory to achieve very high performance, and uses redundancy by keeping copies of
that information synchronized across multiple servers to ensure the resiliency of the
system and the availability of the data in the event of server failure.
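The redundancy behavior described above can be sketched as follows. The two-server model and method names are illustrative assumptions rather than any product's API:

```python
class InMemoryDataGrid:
    """Sketch of in-memory redundancy: every write is synchronously copied
    to a backup server, so the failure of the primary loses no data."""

    def __init__(self):
        self.primary = {}
        self.backup = {}

    def put(self, key, value):
        self.primary[key] = value
        self.backup[key] = value    # keep the redundant copy synchronized

    def get(self, key):
        return self.primary[key]

    def fail_primary(self):
        # The backup already holds a synchronized copy: promote it.
        # (A real grid would also elect a fresh backup at this point.)
        self.primary, self.backup = self.backup, {}
```
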
One of the important benefits of SOA is the opening of legacy applications and their
data stores to much wider use. An associated challenge, however, is that the legacy
asset may not be architected in a way that supports the transactional demands
resulting from that wider use. By providing a buffer against the legacy data store, a
data grid can serve as a caching layer that scales linearly to extreme levels. IT does not
have to re-architect the legacy asset; the Data Grid enables IT to offer broader access
with high service levels.
When choosing an SOA strategy, corporations must rely on solutions that ensure data
availability, reliability, performance, and scalability. They must also avoid “weak link”
vulnerabilities that can sabotage SOA strategies. A data grid infrastructure, built with
clustered caching, addresses these concerns. It provides a framework for improved
data access that can create a competitive edge, improve the financial performance of
corporations, and sustain customer loyalty.

Figure 4–3 Data Grid

As shown in Figure 4–3, data grids provide distributed data persistence services
through various backend interfaces such as JPA, SDO, XML, Web Services, etc. They
provide data grid services to various consumers such as composite applications
(Web 2.0), SOA Services, JEE applications, and standalone frameworks.
One of the hardest and most underestimated parts of building large scale grid based
applications is persistence integration, mapping the application model to a backend
database and maintaining transactional integrity. In Java EE 5.0, this has formally
been standardized with the Java Persistence API (JPA), which is a natural complement
to the Data Grid services, where clearly data eventually has to come to rest in a
persistent store.

4.5.5.1 Architectural issues addressed


Data Grids address a number of architectural issues. The key ones are summarized
below.
■ Low response times: Data Grid achieves low response times for data access by
keeping the information in-memory and in the application object form, and by
sharing that information across multiple servers. In other words, applications and
SOA Services may be able to access the information that they require without an
additional network hop and without any data transformation step such as Object
Relational mapping (ORM). Data Grid also avoids Single Point of Bottleneck
(SPOB) by partitioning and spreading out information across the grid, with each
server being responsible for managing its share of the total set of information.

■ High throughput: Data Grid enhances performance and makes it possible to
achieve high levels of throughput. This is typically achieved by optimizing access
to partitioned data. By partitioning the information, as servers are added each one
assumes responsibility for its fair share of the total set of information, thus
load-balancing the data management responsibilities into smaller and smaller
portions.
■ Predictable scalability: By using dynamic partitioning to eliminate bottlenecks
and achieving predictably low latency regardless of the number of servers, the
Data Grid provides predictable scalability of applications. While certain
applications can use Data Grid to achieve linear scalability, that is largely
determined by the nature of the application, and thus varies from application to
application.
■ Eliminate bottlenecks: Data Grids eliminate bottlenecks by queuing up
transactions that have occurred in memory and asynchronously writing the end
result to a system of record. This is particularly appropriate in systems that have
extremely high rates of change due to the processing of many small transactions,
particularly when only the end result needs to be made persistent.
■ Continuous availability: Data Grids replicate data across the grid and fail over to
the secondary instance when the primary is not available. When the primary
server fails, the secondary becomes the primary and a new secondary is
identified.
■ Information reliability: In a distributed environment, reliability is one of the
primary concerns. The rightful owner should be responsible for managing the
master copy of the data, and the Data Grid ensures that information is owned by a
specific server until that server fails. Data ownership is transparently transitioned
over to the secondary server in the case of primary failure.

4.6 Database Grid


Database and storage are the most heavily accessed components of most architectures.
They are accessed frequently by system components and user components alike.
The DBMS should be extremely scalable and highly performant to ensure that it
doesn't become the bottleneck. Similarly, storage should be fault tolerant and reliable
to ensure the integrity and consistency of data. Databases and storage combined with
grid technology provide a very powerful solution to data management and storage
requirements.

Figure 4–4 Database and Storage Grids

As shown in Figure 4–4 above, the Database Grid is a hybrid that contains elements of
the application (DBMS) and storage grids. Like an application grid, it deploys DBMS
code redundantly on multiple servers (or nodes), which break up the workload based
on an optimized scheme and execute tasks in parallel against common data resources.
If any node fails, its work is taken up by a surviving node to ensure high availability.
The storage for the database is managed in a manner consistent with a storage grid.
Storage is made available on a range of disks, with data managed in an optimized way
to ensure scalability and high availability while ensuring that disks may be added or
removed in a manner that is transparent to the user and has no definitional effect on
the application or database (that is, the application or database does not need to be
changed to refer to different files on different volumes when disks are added or
removed); the application or user is insulated from the exact nature of the physical
storage layout.
Database clusters and automatic storage management capabilities provide:
■ Scalability, including the ability to build clusters of database servers with low-cost
hardware that do the kind of work once reserved for expensive symmetrical
multiprocessing (SMP) systems
■ High availability through automatic failover so that if one node in the cluster fails,
then the other nodes can take up the work.
■ Flexibility, since resources, including both systems and storage, can be assigned to
the database as needed, and removed if necessary, without reconfiguring the
database on either the server or the storage so that there is no need for excess
capacity and unused processor power or storage
In order to be effective in a grid environment, a Database Grid must provide the
following capabilities.
■ Workload Management: Workload management helps to manage and distribute
the load to provide peak performance and high availability and ensures
applications always receive the necessary processing resources to meet defined
service levels. This requires separation of services from underlying physical
details. The database should also provide clustering, connection load balancing,
and failover features along with management and notification services.
■ Automatic Storage Management (ASM): In a grid environment, storage
management might get very complex. This means that any manual steps should
be reduced in favor of automation. The task of planning, initializing, allocating,
and managing many disks for several databases, if not a single large database,
becomes unwieldy. The database should handle file system, volume, and disk
management activities in addition to the database administration function to
reduce the complexity for DBAs. ASM creates a single pool of shared storage that
can be provisioned on demand and automatically managed to ensure space
utilization is optimized and that I/O bottlenecks are avoided.
■ In-Memory Database Cache: In-memory Database Cache Grid provides
horizontal scalability in performance and capacity. A Cache Grid consists of a
collection of In-Memory Database Caches (IMDB Cache) that collectively manage
an application’s cached data. Cached data is distributed between the grid
members and is available to the application with location transparency and
transactional consistency across all grid members. Online addition and removal of
cache grid members are performed without any service interruption to the
application.

4.6.1 Service Grid Pattern


The Service Grid pattern addresses the question of how deferred SOA Service state can
be scaled and kept fault tolerant. Accessing state data from backend databases and
other information sources can be very expensive. This effect is multiplied for
enterprise SOA Services running in a SOA environment, as the SOA Services are
consumed more frequently by multiple departments.
There may also be scenarios where the same state information is needed by multiple
SOA Services. In order to solve these issues, the pattern suggests deferring the state
data to a collection of stateful system SOA Services that form a grid that is responsible
for state replication, redundancy, and supporting infrastructure.
A Service Grid establishes replicated instances of stateful grid SOA Services across
different server machines, resulting in increased scalability and reliability of state data.

Figure 4–5 Service Grid Pattern

Figure 4–5 above shows the Service Grid pattern in which the Data Grid is caching the
data in memory. P and B represent primary and backup nodes respectively. The SOA
Services get and update the data from the primary node but the Data Grid replicates
the data to the backup node. Any updates to the data are also propagated back to the
source through the DB grid. The SOA Service providers get the data from the data grid
that fronts the data source.
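A minimal sketch of the pattern: service instances hold no conversational state of their own, deferring it to the grid, so any instance can continue work that another started. The class and method names below are hypothetical:

```python
class ServiceGrid:
    """Deferred-state store shared by all service instances (sketch).
    In the full pattern each entry is also replicated to a backup node."""

    def __init__(self):
        self.state = {}     # state lives in the grid, not in any instance


class CartService:
    """A stateless SOA Service instance that defers its state to the grid."""

    def __init__(self, grid):
        self.grid = grid

    def add(self, session, item):
        # Read and update the session's state through the grid.
        self.grid.state.setdefault(session, []).append(item)
        return list(self.grid.state[session])
```

Because a second instance sees the state written by the first, the failure of any one instance does not lose the conversation; a replacement instance simply resumes against the same grid-held state.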

4.7 Evolution of Grid Architecture


The premise of earlier versions of Grid architecture was to pool and share computing
power. The evolution of Grid architecture is primarily driven by the need to manage
information more efficiently. Data Grids and Application Grids evolved to meet the
extreme transaction processing requirements of the information era. Figure 4–6 below
demonstrates the evolution of Grid, starting from a distributed architecture to a
mature Grid. Organizations are at different points of this evolution spectrum based on
their IT maturity. Similarly vendors provide products with capabilities at different
levels of this evolution spectrum based on the maturity of the products and
technology.

Figure 4–6 Evolution of Grid Computing

Dedicated infrastructures that were built to serve the needs of individual business
units demanded high availability and scalability. This was achieved through
clustering and redundancy. The popularity of SOA, shared infrastructures, and a focus
on server efficiency paved the way for early Grid architecture. It brought in several
benefits including enhanced flexibility of the infrastructure, improved server
utilization, storage consolidation, and management efficiency. Mature Grid
infrastructure is geared towards real-time infrastructures that support policy based
service level automation. This requires a dynamic infrastructure that can seamlessly
adjust itself based on the demand conditions.
The table below shows how Grid is transforming as it matures.

Early Grid                               Mature Grid

Flexible infrastructure                  Dynamic infrastructure
Optimization within a tier               End-to-end optimization
Unbreakable                              Always online
Specialized deployments                  Standard deployment
Value for some                           Value for all
Persistent storage for database data     Persistent storage for all data

These key characteristics of mature Grid are discussed in detail below.


Dynamic Infrastructure: is the kind of infrastructure that can dynamically adapt to
changing business requirements and environments. It uses policy-based automation to
meet defined service level objectives. It allows fast provisioning and de-provisioning
of resources as needed. Dynamic infrastructure also enables end-to-end application
Quality of Service (QoS) management.
End-to-End Optimization: means sharing and optimization of resources across the
entire stack. Service level objectives are specified end-to-end, from web server through
storage. It provides the ability to move applications and workloads around the entire
grid and the tools to manage the entire infrastructure. Tiers in this architecture
communicate and negotiate with each other to deliver service level objectives
efficiently.
Always Online: means more than protection from failures. It means zero unplanned
and planned downtime. This will require online patching, upgrades, application
changes and real-time system health monitoring. It should provide the ability to
rejuvenate application components proactively and conduct root cause analysis after
eviction of problem nodes.
Standard Deployment: Grid becomes the standard infrastructure for all applications
and all tiers that include database, application servers/applications, web servers, and
storage. From the perspective of the applications, Grid is a scalable and highly
available platform that behaves like and is managed like a large single server.
Applications need not be grid-aware to run on the grid infrastructure.
Value for All: Grid optimizes resources for larger as well as smaller applications. Grid
virtualizes within and across resources to optimize applications of any size. Server
virtualization is a key new component of Grid that complements existing capabilities.
Virtualization makes this possible by enabling both aggregation and disaggregation of
resources. Large applications can dynamically scale across many resources to meet
service level objectives, and small applications can share resources using server
virtualization technology.
Persistent Storage for All Data: Initial Grid implementations provided persistent
storage for database data and database files, while a mature Grid includes a storage
grid that manages all types of data and files. Data Grids and Application Grids make it
possible to enhance performance through distributed caching while preserving data
integrity.

4.8 Grid Management


Grid architectures bring several benefits to the enterprise, but unless managed
effectively, those benefits won't be realized. Managing a grid can be
challenging, but with the right tools it is achievable. The key in grid
management is to create a unified management infrastructure that can monitor and
manage all layers of the grid. The resources must be constantly monitored and
automatically provisioned based on the current demand conditions. It would also be
very valuable to correlate the events happening across various layers to troubleshoot
and ensure performance. Individual tools and consoles to monitor and manage the
components of the grid will be a management nightmare and will not scale in large
enterprises. A grid management infrastructure removes such shortcomings by
consolidating the management tasks.
A grid management infrastructure provides the following management capabilities:
■ Service Level Management: is the ability to specify and monitor the service-level
agreements on key business transactions to ensure that the applications and SOA
Services meet the performance and availability requirements of the clients.
■ Policy Management: Ability to define and deploy policies to automatically
manage the resources in the grid and the capability to identify any policy
violations.
■ Deployment and Provisioning Management: Deploying application components
and provisioning underlying resources is an important function of a grid manager.
■ Consolidated Logging and Reporting: provide the ability to correlate and
troubleshoot the system.

■ System Monitoring and Alerts: Grid management should provide dashboard
capabilities to monitor the various components of the system and generate alerts
based on the health of those components.
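The policy-driven monitoring and provisioning cycle described above can be sketched in a few lines. This is an illustrative toy, not Oracle Enterprise Manager's implementation; the class name, thresholds, and load model are all hypothetical assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of policy-based grid management: a manager polls node
// utilization and provisions or releases nodes to keep average load within a
// policy band. Names and thresholds are illustrative, not an Oracle API.
class GridManager {
    static final double SCALE_UP = 0.80;   // provision when average load exceeds 80%
    static final double SCALE_DOWN = 0.30; // release when average load falls below 30%

    final List<Double> nodeLoads = new ArrayList<>();

    GridManager(double... initialLoads) {
        for (double l : initialLoads) nodeLoads.add(l);
    }

    double averageLoad() {
        return nodeLoads.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
    }

    // One monitoring cycle: evaluate the policy and adjust capacity accordingly.
    String evaluatePolicy() {
        double avg = averageLoad();
        if (avg > SCALE_UP) {
            nodeLoads.add(0.0);                      // provision an idle node from the pool
            return "SCALE_UP";
        } else if (avg < SCALE_DOWN && nodeLoads.size() > 1) {
            nodeLoads.remove(nodeLoads.size() - 1);  // return a node to the pool
            return "SCALE_DOWN";
        }
        return "OK";
    }
}
```

A real grid manager would also correlate events across layers and raise alerts on policy violations; this sketch shows only the provisioning decision itself.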

4.9 Grid Computing Principles


■ Business solutions must be location independent; e.g., they must not be hard-coded
for a specific physical server or network address.
■ Business solutions must be highly available, such that the failure of a physical
machine will not impact uptime or performance.
■ Business solutions must be scalable to meet peak load requirements.
■ Infrastructure must support clustering of server processes for high availability,
failover, and scalability.

4.10 Cloud Computing


Grid architecture, as discussed above, is related to Cloud computing in that grid
architectures are likely to be included in Cloud architectures, especially when
delivering Platform-as-a-Service via the Cloud. Cloud computing is covered in detail
by the Cloud Enterprise Technology Strategy.



5 Virtualization

Virtualization is a technique for hiding the physical characteristics of computing
resources from the way in which other systems, applications, or end users interact
with those resources. It creates a layer of abstraction that allows the underlying
resources to be managed independent of the applications that run on them.
Virtualization provides a flexible and highly dynamic foundation that ORA, SOA, and
grid computing require. Virtualization can help ORA in the following ways.
■ It can create a layer of uniformity on top of disparate underlying systems to
deploy and manage SOA Services. This helps administrators deal with similar
systems rather than multiple, different systems. In large enterprise deployments,
the skill set and cost benefits derived from this uniformity are huge.
■ It can create multiple virtual machines in a consolidated server environment to
enhance the scalability and modularity of enterprise deployments. This makes
server operations much simpler and helps administrators focus more on
business activity monitoring than on everyday technical challenges.
■ SOA promotes the notion of centralized, shared service environments where a
collection of SOA Services are centrally deployed and managed. This is a great
opportunity for administrators to optimize resources and reduce cost. With
virtualization they can move underlying resources around to support various SOA
Services based on their current usage to ensure that service-level agreements are
met most efficiently.
Figure 5–1 shows two approaches to virtualization. The server virtualization approach
creates multiple logical virtual machines on top of a single hardware platform. This gives
administrators the flexibility to deal with right-sized logical machines, leading to
administrative and cost benefits. The server pooling, or consolidation, approach abstracts
the complexities of an underlying pool of servers by creating logical machines that
aggregate the physical servers. In addition to expanding capacity, this
approach promotes uniformity over the underlying heterogeneous systems.


Figure 5–1 Virtualization Approaches

Virtualization can be done at different levels such as hardware level, software level,
and service level. The basic idea is to encapsulate the details of the underlying
resources so that the services requiring those resources can be run independent of the
resources.

5.1 Server Virtualization


Server virtualization enables a single physical server to house multiple operating
environments. Virtualization emulates the physical presence of a server so that more
than one server, for example a Windows print server and a Linux file server, can be
hosted on a single computer. Each of the software server environments is referred to
as a virtual machine. Virtual machines are independent operating environments
comprised of the underlying operating system and all required server software and
data.
Virtual machines appear to their guest operating systems as independent systems, but are
actually simulated by the host system. Virtualization, in effect, decouples software
from the hardware on which it runs. As a result, virtualization provides a method for
managing systems and resources by function rather than by locations or how they are
organized.
There are various methods of server virtualization that include software level,
hardware level, Operating System (OS) Subsets, and Paravirtualization. These
methods are compared in the table below.

Software level
  Concept: Thick software layer (full hardware emulation)
  Effect on the operating system: Unmodified
  Performance: Poor
  Suitable for: Smaller, non-critical applications.

Hardware level
  Concept: Thin software layer (hypervisor)
  Effect on the operating system: Unmodified
  Performance: Good
  Suitable for: Enterprise applications and SOA Services that require stability.

OS Subsets
  Concept: Shared OS context
  Effect on the operating system: Heavily modified
  Performance: Better
  Suitable for: Enterprise applications and SOA Services that require high performance
  and a uniform runtime platform.

Paravirtualization
  Concept: Thin software layer (paravirtualization layer) that partitions the physical server
  Effect on the operating system: Small modifications
  Performance: Better
  Suitable for: Mission critical, distributed applications and SOA Services that require
  fault tolerance and independence.


5.1.1 Software Level Virtualization


Server virtualization can be implemented entirely at the software level, where the
hardware is fully emulated. This enables the operating systems to run unmodified, but
operating system calls to hardware are trapped and simulated by a thick software
layer. Typically, performance is relatively poor with this model and driver support is
very limited.

5.1.2 Hardware Level


Server virtualization can be implemented at the hardware level, again with an
unmodified operating system where a very thin software layer, known as the
hypervisor, controls the use of resources. One of the key benefits of hardware-level
virtualization is that applications do not need to be modified or re-certified;
therefore it is considered the least disruptive method.

5.1.3 Operating System Subsets


Another way to implement server virtualization is to create subsets of the operating
system. These are not really fully individual virtual machines with their own
operating systems, but virtual environments that share some amount of operating
system context with each other. Performance is generally better than other methods
because a large amount of resources can be shared between environments. However,
the operating systems must be heavily modified for this model. Because this model
does not create a full virtual server abstraction, creating different environments by
creating operating system subsets is not true server virtualization.

5.1.4 Paravirtualization
Server virtualization can also be implemented by a thin paravirtualization software
layer, which requires small modifications to the operating system. This layer partitions
the physical server into separate areas on which the virtual machines then run.
Computing resources from the underlying server are viewed as a pool of resources
that can then be shared amongst the virtual machines. With the exception of sharing
these computing resources, each virtual machine is independent.
Problems with an application on one virtual machine do not affect other virtual
machines on the same hardware platform. With paravirtualization, virtual machines
are similar to separate physical servers. Each has its own distinct hostname, IP
address, and configuration, and each virtual machine is managed independently of the
others.

5.2 Service Virtualization


Applying the principles of virtualization to SOA, service virtualization helps insulate
service infrastructure details such as service endpoint location, service
inter-connectivity, policy enforcement, service versioning, and dynamic service
management information from service consumers.
The SOA infrastructure discussed in the ORA SOA Infrastructure document provides the
basic building blocks required for SOA Service virtualization.
■ The Enterprise Service Bus (ESB) adds a layer of abstraction by providing
mediation and routing capabilities to hide the details of the backend SOA Service
implementation.
■ The Service Registry provides location transparency and lookup services.


■ The Service Management infrastructure provides service management, endpoint
virtualization, policy centralization and enforcement, mediation, load balancing,
and policy-based routing capabilities. Managing the service endpoints allows the
physical deployment of the SOA Service to change without affecting the client
code.
■ Clustering and load balancing capabilities allow the capacity to adjust
dynamically based on the current load conditions.
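The core of service virtualization, a stable logical name resolved to a changeable physical implementation, can be sketched as follows. `ServiceRegistry` and its methods are hypothetical illustrations, not the ORA SOA infrastructure or any ESB product's API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of service virtualization: consumers invoke a stable
// logical service name; a registry maps that name to the current physical
// implementation, so the backend can move or be re-versioned without any
// change to client code.
class ServiceRegistry {
    private final Map<String, Function<String, String>> endpoints = new HashMap<>();

    // Bind (or rebind) a logical service name to a physical implementation.
    void bind(String logicalName, Function<String, String> impl) {
        endpoints.put(logicalName, impl);
    }

    // Mediation layer: look up and invoke, hiding the endpoint from the caller.
    String invoke(String logicalName, String request) {
        Function<String, String> impl = endpoints.get(logicalName);
        if (impl == null) throw new IllegalStateException("No endpoint for: " + logicalName);
        return impl.apply(request);
    }
}
```

Rebinding the same logical name to a new implementation models redeploying or re-versioning a SOA Service: consumers keep calling `invoke("CustomerService", ...)` unchanged.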

5.3 Virtual Machine (VM) Images or Templates


An important concept with respect to virtualization is VM Images or VM Templates.
VM Images allow “freeze-drying” of a software installation and configuration for easy
and fast provisioning. This is an essential building block for enterprise grids and
enterprise clouds. The idea of cloning the software configuration not only simplifies
the deployment but also contributes to improving the agility of the business.
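The "freeze-drying" idea can be sketched in a few lines: a template captures a software configuration once, and every provisioned instance is a fast copy of it. The `VmTemplate` class and its configuration keys are hypothetical, not Oracle VM's template format.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of template-based provisioning: the template holds a
// frozen software installation/configuration, and provisioning clones it,
// with only instance identity (here, the hostname) differing per clone.
class VmTemplate {
    private final Map<String, String> config = new TreeMap<>();

    VmTemplate with(String key, String value) {
        config.put(key, value);
        return this;
    }

    // Provision an instance: copy the frozen configuration, set its identity.
    Map<String, String> provision(String hostname) {
        Map<String, String> instance = new TreeMap<>(config);
        instance.put("hostname", hostname);
        return instance;
    }
}
```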

5.4 Virtualization Principles


■ Infrastructure must provide the ability to execute multiple runtime environments
on a single physical machine.
■ Virtualization must be chosen as a technology only when the business benefits
are clearly justified.
■ Uniform virtual machines must be created to standardize the development and
operational environments.
■ The type of virtualization (hardware, software, paravirtualization, etc.) must be
chosen based on the Total Cost of Ownership (TCO) to meet the required service levels.

5.5 Cloud Computing


Virtualization and Cloud computing are related in that virtualization as discussed
above is a common underlying capability required to provide Cloud computing.
Cloud computing is covered in detail by the Cloud Enterprise Technology Strategy.



6 Containers

Containers play a key role in the foundation architecture. They provide the common
capabilities required for enterprise deployments so that the application developers can
focus on the business logic specific to their domain. Containers create a foundation
that is both scalable and extensible. This allows the higher layers to leverage the
foundation capabilities and build value-added services on top of them.
Containers provide several capabilities that include the following:
■ Transaction Support
■ A transaction is a unit of activity in which many updates to resources are
made atomically. The details of how a transaction is handled can be externalized
from the application logic and provided as a container capability.
■ Security Support
■ A central point through which access to data and portions of the application
itself can be managed is considered a security benefit, passing responsibility
for authentication and authorization away from the potentially insecure client
layer without exposing the database layer.
■ Scalability and Performance
■ Containers provide scalability capabilities to grow the capacity of applications.
Most containers support clustering of servers to scale the system. They also
allow optimization of deployments to boost the performance of applications
and SOA Services.
■ Thread Management
■ Containers manage threads to enable concurrent processing of requests.
■ Data and Code Integrity
■ By centralizing business logic on an individual or small number of server
machines, updates and upgrades to the application for all users can be
guaranteed. There is no risk of old versions of the application accessing or
manipulating data in an older, incompatible manner.
■ Centralized Configuration
■ Changes to the application configuration, such as a move of database server,
or system settings, can be done centrally.
■ Connection and Session Management
■ Containers manage the client connections and sessions and provide
transaction timeout capability.
■ Abstraction


■ Containers shield the applications from the low level details of the hardware
and operating systems. They provide a uniform interface layer on top of the
hardware and operating systems to promote interoperability and portability.
Containers are available for several programming languages and platforms. Some of
the examples of containers include JEE application servers, CORBA servers,
transaction monitors like TUXEDO, and lightweight framework containers like Spring.
Containers may run applications, SOA Services, batch programs, and other types of
business solutions.
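The division of labor described above, plain business logic surrounded by container-supplied infrastructure, can be sketched with transaction demarcation as the example capability. `MiniContainer` is a hypothetical toy, not the JEE, TUXEDO, or Spring API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of a container capability: the business logic is a plain
// function, and the container wraps it with transaction demarcation so the
// application code never handles begin/commit/rollback itself.
class MiniContainer {
    final List<String> log = new ArrayList<>();

    <T> T runInTransaction(Supplier<T> businessLogic) {
        log.add("BEGIN");
        try {
            T result = businessLogic.get();
            log.add("COMMIT");            // success: the container commits
            return result;
        } catch (RuntimeException e) {
            log.add("ROLLBACK");          // failure: the container rolls back
            throw e;
        }
    }
}
```

A real container applies the same wrapping pattern to security checks, connection management, and thread management, which is why the principles below insist that developers not write this infrastructure code themselves.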

6.1 Principles and Best Practices


■ Developers must not write infrastructure code. Commercial Off The Shelf (COTS)
software should be used for delegating security management, transaction
management, and other infrastructure tasks.
■ Applications must use the container APIs to access the underlying resources
instead of raw resource APIs.
■ Containers must support open standards.
■ The code deployed on containers must be portable to other similar containers.



7 Data Caching Infrastructure

The foundation infrastructure for ORA must include robust and scalable data caching
capabilities. Unless designed properly, data access can become a bottleneck in
enterprise architectures due to the high volume and frequency of messages
transmitted. In addition, almost every component of the enterprise technology stack
depends on databases to manage information. Data management is a very broad area
and, as such, is beyond the scope of this document. This section focuses on data
caching without delving into the underlying data persistence.

7.1 Data Caching Infrastructure


The caching foundation provides the building blocks necessary to duplicate backend
data near the processing zone, avoiding the cost of accessing that data over the network.
Caching provides several benefits, including scalability and performance. In a
distributed architecture the caching function is more complex, as data needs to be kept
synchronized and consistent. A good caching infrastructure should provide
distributed caching functionality along with real-time update propagation.

7.1.1 Caching concepts


Cache hit and cache miss: If the data is found in the cache when a client looks for it, it is
called a cache hit; if the data is not found in the cache, it is called a cache miss. In the
case of a cache miss, the data is fetched from the backend store, returned to the client,
and stored in the cache.
Hit rate or ratio: The percentage of lookups that result in a cache hit, out of the total
number of lookups, is called the hit ratio.
Read-only caches store data for query purposes. Read-only caches are shared among
all users and therefore offer greater performance benefit. However, objects read from a
read-only cache should not be modified.
Read/write caches store data for both read and update operations. If there is an
intention to use objects for retrieval and modification, a read/write cache is
recommended.
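The hit/miss bookkeeping defined above can be made concrete with a small counting cache; the class and method names are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of hit-ratio accounting: each lookup is counted as a
// hit or a miss, and the hit ratio is hits divided by total lookups.
class CountingCache {
    private final Map<String, String> data = new HashMap<>();
    private int hits, misses;

    void put(String key, String value) { data.put(key, value); }

    String get(String key) {
        if (data.containsKey(key)) { hits++; return data.get(key); }
        misses++;
        return null;  // a real cache would now fetch from the backend store
    }

    double hitRatio() {
        int total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```

Two hits out of four lookups yields a hit ratio of 0.5; operations teams track exactly this number to size caches.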

7.1.2 Caching Modes


There are several approaches to caching data. These modes are discussed in this
section.


7.1.2.1 Read-Through Cache


In a read-through cache, if the data is not found in the cache, it is fetched from the
backend data source, placed in the cache, and finally returned to the caller as shown in
Figure 7–1. The fetching and update happens synchronously within the transaction.
The backup cache is also updated to reflect the change in the cached data.

Figure 7–1 Read-Through Cache
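The read-through flow, check the cache, and on a miss fetch from the backend, populate, and return, can be sketched as follows. `ReadThroughCache` and its loader are illustrative names, not the API of any particular caching product.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of a read-through cache: on a miss, the cache itself
// synchronously loads the value from the backend store, keeps a copy, and
// returns it to the caller.
class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> backendLoader;
    int backendReads = 0;

    ReadThroughCache(Function<String, String> backendLoader) {
        this.backendLoader = backendLoader;
    }

    String get(String key) {
        String value = cache.get(key);
        if (value == null) {                   // cache miss
            backendReads++;
            value = backendLoader.apply(key);  // synchronous fetch from the store
            cache.put(key, value);             // populate for subsequent hits
        }
        return value;
    }
}
```

Only the first read of a key touches the backend; repeated reads are served from the cache.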

7.1.2.2 Refresh-Ahead Cache


In the refresh-ahead scenario, the cache is automatically and asynchronously refreshed
before its expiration. The asynchronous refresh is only triggered when an object that is
sufficiently close to its expiration time is accessed. If the object is accessed after its
expiration time, a synchronous read from the cache store will be performed to refresh
its value. Figure 7–2 shows the refresh-ahead cache process.
This approach has performance benefits as the data is ready to be served in most cases
when the client request arrives.


Figure 7–2 Refresh-Ahead Cache

7.1.2.3 Write-Through Cache


In a write-through cache, every write to the cache causes a synchronous write to the
backend store as shown in Figure 7–3. In this approach, the data is updated in the
backend data store, then the primary cache, all within the scope of the transaction.
Then the backup cache is also updated to maintain consistency of data.

Figure 7–3 Write-Through Cache


7.1.2.4 Write-Behind Cache


In a write-back or write-behind cache, writes are not immediately mirrored to the
store. Instead, the cache tracks which of its entries have been modified, and the data
in those entries is written back to the backend store when it is evicted from
the cache. This process is shown in Figure 7–4.

Figure 7–4 Write-Behind Cache
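The deferred-write behavior can be sketched with a dirty-entry set that is flushed later; `WriteBehindCache` and its `flush` trigger are hypothetical simplifications (real products flush asynchronously, for example on eviction or a timer).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a write-behind cache: writes touch only the cache
// and mark the entry dirty; the backend store is updated later, when the
// dirty entries are flushed.
class WriteBehindCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();
    final Map<String, String> backendStore = new HashMap<>();

    void put(String key, String value) {
        cache.put(key, value);
        dirty.add(key);        // note: the backend store is NOT written yet
    }

    // Deferred write: push all dirty entries to the backend store.
    void flush() {
        for (String key : dirty) backendStore.put(key, cache.get(key));
        dirty.clear();
    }
}
```

The trade-off is visible in the sketch: writes are fast because the backend is untouched, but data marked dirty is at risk until the flush completes.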

7.1.3 Cache Topologies


Several cache topologies exist in practice, but the most predominant ones are
replicated, partitioned, and near topologies.
■ Replicated: In this mode, each machine contains a full copy of the dataset, so
read access is local and very fast.
■ Partitioned or Distributed: Each machine contains a unique partition of the
dataset in this mode. Adding machines to the cluster will increase the capacity of
the cache. Both read and write access involve network transfer and
serialization/de-serialization.
■ Near: In this mode, each machine contains a small local cache which is
synchronized with a larger partitioned cache, optimizing read performance. There
is some overhead involved with synchronizing the caches.
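The partitioned topology rests on every cluster member computing the same owner for a given key. A minimal sketch uses plain modulo hashing for clarity; real data grids (and the Near topology built on top of them) use smarter partition assignment to limit rebalancing when nodes join or leave. The class name is hypothetical.

```java
// Hypothetical sketch of partitioned ownership: each key is owned by exactly
// one node, derived deterministically from its hash, so adding nodes adds
// both storage capacity and throughput.
class PartitionedCache {
    private final int nodeCount;

    PartitionedCache(int nodeCount) { this.nodeCount = nodeCount; }

    // Every member of the cluster computes the same owner for a key.
    int ownerOf(String key) {
        return Math.floorMod(key.hashCode(), nodeCount);
    }
}
```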

7.1.4 Caching Data Access Patterns


This section briefly describes some of the caching data access patterns.

7.1.4.1 Data Access Distribution


When caching a large dataset, typically a small portion of that dataset will be
responsible for most data accesses. The 80-20 rule applies here. Roughly 80% of
operations are against a 20% object subset. Obviously the most effective return on
investment will be gained by caching the most active objects (20%); caching the
remaining 80% will provide only a minor improvement while requiring a significant
increase in resources.
However, if every object is accessed equally often (for example in sequential scans of
the dataset), then caching will require more resources for the same level of
effectiveness. In this case, achieving 80% cache effectiveness would require caching
80% of the dataset versus 20%. In practice, almost all non-synthetic (benchmark) data
access patterns are uneven, and will respond well to caching subsets of data.
In cases where a subset of data is active, and a smaller subset is particularly active,
Near caching can be very beneficial.

7.1.4.2 Cluster-Node Affinity


Near cache technology should transparently take advantage of cluster-node affinity.
With cluster-node affinity, the requests are always served locally. This improves
performance by avoiding unnecessary round trips. This topology is particularly useful
when used with a sticky load-balancer.

7.1.4.3 Read/Write Ratio and Data Sizes


Generally the following cache topologies are best for the following use cases:

Replicated cache: Small amounts of read-heavy data (for example, metadata).
Partitioned cache: Large amounts of read/write data (for example, large data caches).
Near cache: Similar to Partitioned, but with further benefits for read-heavy tiered
access patterns and "sticky" data access. Depending on the synchronization method
(expiry, asynchronous, synchronous), the worst-case performance may range from
similar to a Partitioned cache to considerably worse.

7.1.4.4 Interleaving Cache Reads and Writes


Interleaving refers to the number of cache reads between each cache write. The
Partitioned cache is not affected by interleaving as it is designed for 1:1 interleaving.
The Replicated and Near caches by contrast are optimized for read-heavy caching, and
prefer a read-heavy interleave. This is because they both locally cache data for
subsequent read access. Writes to the cache will force these locally cached items to be
refreshed, a comparatively expensive process. Note that with the Near cache
technology, worst-case performance is still similar to the Partitioned cache; the loss of
performance is relative to best-case scenarios.



8 Product Mapping View

This section maps Oracle products to the ORA application infrastructure architecture
elements laid out in the previous section.

Figure 8–1 ORA Foundation Infrastructure Oracle Technology Mapping

■ Caching Layer
■ Oracle Coherence
■ Oracle TimesTen
■ Grid Layer
■ Oracle Application Grid
■ Oracle Enterprise Manager - Grid Control
■ Oracle Exadata Storage
■ Oracle RAC
■ Distributed Layer
■ Oracle WebLogic Server (OWLS)
■ Oracle TUXEDO
■ Oracle SALT
■ Oracle Database
■ Containers Layer
■ Oracle WebLogic Server (OWLS)


■ Oracle TUXEDO
■ Virtualization Layer
■ Oracle VM
■ Oracle JRockit
Oracle Maximum Availability Architecture (MAA) provides guidelines for achieving
high availability at every layer of the Oracle stack. It is Oracle's best-practices
blueprint, based on proven Oracle high availability technologies and
recommendations. The goal of MAA is to achieve the optimal high availability
architecture at the lowest cost and complexity. The principles around which the Fusion
Middleware Maximum Availability Architecture has been designed include:
■ Process Management and death detection
■ If a process dies, detect it quickly and restart it if possible.
■ Redundancy
■ Provide redundant components that can take over in case of a planned or
unplanned downtime.
■ Connection management
■ Make sure that tiers load balance their outbound connections and respond
accordingly to failures in the other tiers.

8.1 Grid Computing with Oracle Products


Oracle's grid computing architecture is built upon a stack of Oracle products as shown
in Figure 8–2. WebLogic server clustering provides the application virtualization
capabilities in the application layer. It serves as the SOA Service and application
development platform that is highly scalable and reliable. Coherence data grid
provides the in-memory cache and replication required for extreme transaction
processing. Oracle RAC and Active Data Guard provide the capabilities for the
database grid layer. Automatic storage management (ASM) and Exadata storage
solution provide the storage grid capabilities. Oracle Enterprise Manager Grid Control
provides the management capabilities required to manage enterprise-class grid
deployments.


Figure 8–2 Grid Computing with Oracle Products

8.2 Oracle Application Grid (OAG)


OAG is an implementation of the application grid technology described in Section 4.5.
It is composed primarily of the components shown in Figure 8–3.

Figure 8–3 Oracle Application Grid Components

These components are listed below.


■ Oracle WebLogic Server
■ Scalable and robust application server that provides an array of clustering
capabilities
■ Full support for JEE with powerful extensions
■ Oracle Coherence Grid Edition
■ No single point of bottleneck or failure due to the distributed architecture


■ Supports linear scalability of hundreds of servers by design


■ Offers Java, .NET, C++ Support
■ It provides capabilities for persistence, events, messaging and analytics.
■ JRockit Real Time
■ JVM with unprecedented deterministic garbage collection
■ Zero coding requirements
■ Superior runtime application performance tooling
■ WebLogic Operations Control
■ Multi-domain management
■ Policy based automation
■ Application level virtualization
■ Oracle Enterprise Manager Diagnostics Pack for Middleware (for Application
Health)
■ Real time visibility and proactive monitoring of SOA Services and applications
■ Predictive alerts, detailed and real time root cause diagnosis
■ Availability of historical data for application server and components usage,
load, performance, etc.
■ Oracle Development Tools
■ JDeveloper and Eclipse are the development tools for application
development and deployment.
With the increasing importance of middleware in modern architectures, virtualization
and clustering in the middle tier is also critical for the continuous operation,
predictable high performance, and scale-out of enterprise applications.

8.3 Oracle WebLogic Server (OWLS)


Oracle WebLogic Server (OWLS) provides a JEE based foundation platform for the
application grid. It provides superior scalability and clustering features required for
deploying mission critical grid infrastructure. Coupled with JRockit Real Time, OWLS
provides high performance and predictable latency for the components and
applications deployed on it. The coherence data grid is deployed on the OWLS
platform.
More information on OWLS can be found in the documentation listed in the further
reading section, Appendix A.

8.4 Oracle JRockit Real Time (JRRT)


Oracle JRockit Real Time (JRRT) provides lightweight, front-office infrastructure for
low latency, event-driven applications. For companies in highly-competitive
environments where performance is key and every millisecond counts, JRRT provides
the first Java-based, real-time computing infrastructure.


Figure 8–4 JRockit Architecture

Figure 8–4 shows the architecture of JRockit. The components shown in the figure are
described below.
■ I/O handles communication with files, databases, and network.
■ Memory Management is concerned with things like garbage collection, when the
JVM reclaims unused memory, and finding the optimal heap size for an
application.
■ Threads Management schedules threads, handles synchronization and locks.
■ The Java Model takes care of Java-specific areas like reflection and class
loading.
■ In Code Generation, the JVM translates the Java code to native code that runs
directly on the target platform. The JRockit JVM will also detect and perform
possible optimizations of the application code.
■ The External Interfaces and Monitoring/Management components are used to get
information directly from inside the JVM and to control some of its runtime
features; they are used by, among others, JRockit Mission Control.
JRockit Mission Control (JRMC) is a multifunctional tool suite that allows users to
manage, monitor, and profile their applications. Designed for low overhead, it can be
used in production environments. Oracle JRockit Mission Control consists of three
tools: the management console, the runtime analyzer, and the memory leak
detector.

8.4.1 Deterministic Garbage Collection (GC)


Some applications require very high throughput, fast response times, and
predictable performance. With JRRT, one can request a pause-time SLA as low as 1 ms
to achieve predictable latency. The key objective of JRRT is a deterministic GC pause
time. Finer control of GC behavior adds workload to the VM, and optimizing for
low latency with determinism typically reduces overall throughput. General JRockit
optimizations benefit both latency and throughput, but the impact on throughput is
highly dependent on the application. It is important to understand that what is offered
is determinism in pause times, not maximum throughput.


8.5 Oracle Coherence


Coherence is an essential ingredient for building reliable, high-scale, clustered
applications. The term clustering refers to the use of more than one server to run an
application, usually for reliability and scalability purposes. Coherence provides all of
the necessary capabilities for applications to achieve the maximum possible
availability, reliability, scalability, and performance. Virtually any clustered
application will benefit from using Coherence.
One of the primary uses of Coherence is to cluster an application's objects and data. In
the simplest sense, this means that all of the objects and data that an application
delegates to Coherence are automatically available to and accessible by all servers in
the application cluster. None of the objects or data will be lost in the event of server
failure.
By clustering the application's objects and data, Coherence solves many of the difficult
problems related to achieving availability, reliability, scalability, performance,
serviceability, and manageability of clustered applications.

8.5.1 Coherence and RASP


The RASP qualities have been discussed in Section 1.1. This section discusses how
Coherence helps fulfill the RASP qualities of the ORA applications.

8.5.1.1 Availability
Coherence is used to achieve high availability in several different ways:
■ Supporting Redundancy in Java Applications
■ Coherence makes it possible for an application to run on more than one server,
which means that the servers are redundant. Coherence enables redundancy
by allowing an application to share, coordinate access to, update, and receive
modification events for critical runtime information across all of the redundant
servers.
■ Enabling Dynamic Cluster Membership
■ Coherence tracks exactly what servers are available at any given moment.
When the application is started on an additional server, Coherence is instantly
aware of that server coming online, and automatically joins it into the cluster.
This allows redundancy (and thus availability) to be dynamically increased by
adding servers.
■ Exposing Knowledge of Server Failure
■ Coherence reliably detects most types of server failure in less than a second,
and immediately fails over all of the responsibilities of the failed server
without losing any data. Consequently, server failure does not impact
availability.
■ Part of availability management is Mean Time To Recovery (MTTR), which
is a measurement of how much time it takes for an unavailable application to
become available. Since server failure is detected and handled in less than a
second, and since redundancy means that the application is available even
when that server goes down, the MTTR due to server failure is zero from the
point of view of application availability, and typically sub-second from the
point of view of a load-balancer re-routing an incoming request.
■ Eliminating Other Single Points Of Failure (SPOFs)


■ Coherence provides insulation against failures in other infrastructure tiers. For
example, Coherence write-behind caching and Coherence distributed parallel
queries can insulate an application from a database failure.
■ Providing Support for Disaster Recovery (DR) and Contingency Planning
■ Coherence can insulate against failure of an entire data center, by clustering
across multiple data centers and failing over the responsibilities of an entire
data center.

8.5.1.2 Reliability
Coherence is explicitly built to achieve very high levels of reliability. For example,
server failure does not impact in-flight operations, since each operation is atomically
protected from server failure, and will internally re-route to a secondary node based
on a dynamic pre-planned recovery strategy. In other words, every operation has a
backup plan ready to go!
Coherence is designed based on the assumption that failures are always about to
occur. Consequently, the algorithms employed by Coherence are carefully designed to
assume that each step within an operation could fail due to a network, server,
operating system, JVM or other resource outage. An example of how Coherence plans
for these failures is the synchronous manner in which it maintains redundant copies of
data; in other words, Coherence does not gamble with the application's data, and that
ensures that the application will continue to work correctly, even during periods of
server failure.

8.5.1.3 Scalability
Coherence provides several capabilities designed to help SOA Services and
applications achieve linear scalability. Coherence helps to solve the scalability problem
by targeting obvious bottlenecks, and by completely eliminating bottlenecks whenever
possible. It accomplishes this through a variety of capabilities, including:
■ Distributed Caching
■ Coherence uses a combination of replication, distribution, partitioning, and
invalidation to reliably maintain data in a cluster in such a way that regardless
of which server is processing, the data that it obtains from Coherence is the
same. In other words, Coherence provides a distributed shared memory
implementation, also referred to as Single System Image (SSI) and Coherent
Clustered Caching.
■ Partitioning
■ Partitioning refers to the ability for Coherence to load-balance data storage,
access and management across all of the servers in the cluster.
■ Coherence accomplishes failover without data loss by synchronously
maintaining a configurable number of copies of the data within the cluster.
■ Coherence prevents loss of data even when multiple instances of the
application run on a single physical server within the cluster.
■ Partitioning supports linear scalability of both data capacity and throughput.
■ Session Management
■ This capability is provided by the Coherence*Web module, which is a built-in
feature of Coherence. Coherence*Web provides linear scalability for HTTP
Session Management in clusters of hundreds of production servers.

Product Mapping View 8-7



■ Coherence*Web has the same latency regardless of the size of the cluster since
all HTTP session read operations that cannot be handled locally are spread out
evenly across the rest of the cluster, and all update operations are likewise
spread out evenly across the rest of the cluster. The result is linear scalability
with constant latency, regardless of the size of the cluster.
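The partitioning behavior described above can be illustrated with a small sketch. This is a toy model, not the Coherence implementation, and all names are hypothetical: each key hashes to a fixed partition, each partition is owned by exactly one of the servers currently in the cluster, so adding servers spreads storage and load across more machines.

```python
class PartitionedCache:
    """Toy model of a partitioned cache: keys hash to a fixed number of
    partitions, and partitions are spread evenly across the current servers."""

    def __init__(self, partition_count=257):
        self.partition_count = partition_count
        self.servers = []        # names of servers currently in the cluster
        self.storage = {}        # partition id -> {key: value}

    def add_server(self, name):
        # In a real grid, joining a server triggers a rebalance of partitions.
        self.servers.append(name)

    def _partition(self, key):
        return hash(key) % self.partition_count

    def owner(self, key):
        # Each partition is owned by exactly one server at a time.
        return self.servers[self._partition(key) % len(self.servers)]

    def put(self, key, value):
        self.storage.setdefault(self._partition(key), {})[key] = value

    def get(self, key):
        return self.storage.get(self._partition(key), {}).get(key)

cache = PartitionedCache()
cache.add_server("server-1")
cache.add_server("server-2")
cache.put("order:42", {"total": 99.5})
assert cache.get("order:42") == {"total": 99.5}
assert cache.owner("order:42") in ("server-1", "server-2")
```

Because every key maps deterministically to one partition and one owner, both data capacity and request throughput grow roughly linearly with the number of servers.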

8.5.1.4 Performance
Coherence provides a large number of capabilities designed to eliminate operations
that would otherwise incur high latencies.
■ Replication and Near caching
■ Replication ensures that a desired set of data is up-to-date on every single
server in the cluster at all times. Replication allows operations running on any
server to obtain the data that they need locally, at basically no cost, because
that data has already been replicated to that server.
■ To eliminate the latency associated with partitioned data access, near caching
maintains frequently- and recently-used data from the partitioned cache on
the specific servers that are accessing that data, and it keeps that data coherent
with event-based invalidation. In other words, near caching keeps the
most-likely-to-be-needed data near to where it will be used, thus providing
good locality of access, yet backed up by the linear scalability of partitioning.
■ Write-Behind
■ Since the transactional throughput in the cluster is linearly scalable, the cost
associated with data changes can be a fixed latency, typically in the range of a
few milliseconds, and the total number of transactions per second is limited
only by the size of the cluster.
■ Coherence provides a Write-Behind capability, which allows the application to
change data in the cluster, and those changes are asynchronously replayed to
the application's database.
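The write-behind pattern described above can be sketched as follows. This is a simplified, single-process model, not Coherence's implementation, and the names are illustrative: writes update the in-memory map immediately and are replayed asynchronously to a backing store by a background thread.

```python
import queue
import threading

class WriteBehindCache:
    """Toy write-behind cache: puts update the in-memory map immediately
    and are replayed to the backing store by a background thread."""

    def __init__(self, backing_store, flush_interval=0.05):
        self.cache = {}
        self.backing_store = backing_store
        self.pending = queue.Queue()
        self.flush_interval = flush_interval
        self._stop = threading.Event()
        self._writer = threading.Thread(target=self._flush_loop, daemon=True)
        self._writer.start()

    def put(self, key, value):
        self.cache[key] = value          # fast, in-memory update
        self.pending.put((key, value))   # queued for asynchronous replay

    def get(self, key):
        return self.cache.get(key)

    def _flush_loop(self):
        while not self._stop.is_set():
            try:
                key, value = self.pending.get(timeout=self.flush_interval)
                self.backing_store[key] = value   # the "database" write
            except queue.Empty:
                pass

    def close(self):
        # Drain any remaining writes, then stop the writer thread.
        while not self.pending.empty():
            key, value = self.pending.get()
            self.backing_store[key] = value
        self._stop.set()
        self._writer.join()

db = {}
cache = WriteBehindCache(db)
cache.put("a", 1)
assert cache.get("a") == 1   # visible immediately in the cache
cache.close()
assert db["a"] == 1          # replayed to the backing store
```

The caller pays only the in-memory cost at write time; the slower store write happens later, which is why write-behind decouples application latency from database latency.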

8.5.2 Data Grids Using Coherence


Oracle Coherence establishes in-memory "data grids" for Java and .NET applications to
access objects in-memory that are distributed across multiple physical machines in the
middle tier. This enhances the processing capability of middle-tier application servers
and provides horizontal scalability, high-availability, and predictable, high
performance. Coherence provides this high performance because in-memory
processing in the middle tier reduces network overhead and minimizes reading and
writing of data to disk. This approach of matching the data demand with the data
supply is shown in Figure 8–5 below.


Figure 8–5 Data Grid Using Oracle Coherence

The Coherence architecture has been shown to scale linearly as additional nodes are
added. High-availability is achieved by storing copies of the data on different servers
in the data grid to avoid a single point of failure in case an individual middle-tier
system crashes or is taken offline for maintenance.
Oracle Coherence's grid architecture enables additional application instances to be
started on the fly. Oracle Coherence is designed for lights-out management, which
provides the ability to expand and contract the grid almost instantaneously in
response to changing demand.

8.6 Oracle TimesTen


Oracle TimesTen in-memory database empowers applications with instant
responsiveness and very high throughput for performance-critical functions in
real-time enterprises and industries. With Oracle TimesTen, applications are able to
access, capture, or update information many times faster while using standard
relational database technology and familiar programming interfaces. Real-time data
replication between servers delivers high availability and ensures that applications are
continuously available.
Oracle TimesTen delivers real-time performance by changing the assumptions about
where data resides at runtime. By managing data in memory and optimizing data
structures and access algorithms accordingly, database operations execute with
maximum efficiency, achieving dramatic gains in responsiveness and throughput,
even compared with a fully cached, disk-based relational database management
system (RDBMS). Oracle TimesTen In-Memory Database libraries are also embedded
within applications, eliminating context switching and unnecessary network
operations, further improving performance.
Following the standard relational data model, SQL, JDBC, and ODBC are used to
access Oracle TimesTen In-Memory Databases. The use of SQL to shield applications
from system internals allows databases to be altered or extended without impacting
existing applications. New services can be quickly added into a production
environment simply by adding application modules, tables, and columns. As with any
mainstream RDBMS, a cost-based optimizer automatically determines the fastest way
to process queries and transactions. Any developer familiar with Oracle Databases or
SQL interfaces can be immediately productive developing real-time applications with
Oracle TimesTen.
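The access pattern described above, standard SQL through a standard driver API, can be illustrated with Python's DB-API against an in-memory SQLite database standing in for TimesTen. The table and data are hypothetical; TimesTen itself is accessed the same way through ODBC or JDBC.

```python
import sqlite3

# Stand-in for a TimesTen connection: the point is the access pattern
# (standard SQL through a standard driver API), not the engine itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quotes (symbol TEXT PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO quotes VALUES ('ORCL', 31.50)")

# The application sees only SQL; the engine's in-memory internals are hidden,
# so tables can be altered or extended without changing application code.
row = conn.execute(
    "SELECT price FROM quotes WHERE symbol = ?", ("ORCL",)
).fetchone()
assert row[0] == 31.50
```

Because the interface is plain SQL, any developer familiar with relational databases can write against an in-memory engine without learning a new programming model.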

8.6.1 Oracle In-Memory Database Cache


Oracle In-Memory Database Cache is an Oracle Database product option that includes
the Oracle TimesTen In-Memory Database, and is used as a database cache at the
application tier to cache data and reduce the workload on the Oracle database. It also
provides the connection and transfer of data between the database and the TimesTen
cache, and it facilitates the capture and processing of high-volume event flows into
a TimesTen database and subsequent transfer of data into an Oracle database.
Oracle data is cached in a TimesTen database by defining a cache grid and then
creating cache groups. A cache group in a TimesTen database can cache a single Oracle
table or a group of related Oracle tables.
A cache grid is a set of distributed TimesTen in-memory databases that work together
to cache data from an Oracle database and guarantee cache coherence among the
TimesTen databases. A grid consists of one or more in-memory database grid members
that collectively manage the application data using the relational data model. The
members of a grid cache data from a single Oracle database. Each grid member is
backed by either a standalone TimesTen database or an active standby pair.

8.7 Oracle TimesTen and Coherence


It is important to understand the distinction between Oracle TimesTen and Oracle
Coherence and when one should be used over the other. Oracle TimesTen is more of a
database grid component, whereas Oracle Coherence is more of an application grid
component. Figure 8–6 below shows where these products should be applied. Oracle
TimesTen is used for accessing data through a familiar SQL interface on single-node
deployments. Coherence offers search and aggregation capabilities across clusters and
grids, and offers the superior scale-out features required for mission-critical enterprise
applications.

Figure 8–6 Oracle Coherence and Oracle TimesTen


8.8 Oracle Exadata Storage Server


The Oracle Exadata Storage Server is a storage product highly optimized for use with
the Oracle database and is the storage building block of the Oracle Database Machine.
It uses a massively parallel architecture to speed up Oracle data warehouses by
offloading data-intensive query processing from Oracle Database servers and doing the
processing closer to the data. Simple to deploy and manage, the Oracle Exadata
Storage Server provides unlimited I/O scalability and mission-critical reliability in
addition to extremely fast query processing for SOA and data warehouse applications.
The Oracle Exadata Storage Server Software enables the Exadata Storage Server to
quickly process database queries and only return the relevant rows and columns to the
database server. By pushing SQL processing to the Oracle Exadata Storage Server all
the disks can operate in parallel, reducing database server CPU consumption while
consuming much less bandwidth to move data between storage and database servers.
The Oracle Exadata Storage Server returns a query result set rather than entire tables,
eliminating network bottlenecks and freeing up database server resources.

8.8.1 Building Storage Grids with Exadata


Oracle Exadata Storage Servers can be installed into a standard rack and are connected
to database servers. Oracle Exadata Storage Servers have dual links which provide
connectivity many times faster than traditional storage or server networks. Further,
Oracle's interconnect protocol uses direct data placement to ensure very low CPU
overhead by directly moving data from the wire to database buffers with no extra data
copies being made.
Oracle Exadata Storage Servers are architected to scale out to any level of
performance. To achieve higher performance and greater storage capacity, additional
Oracle Exadata Storage Servers are added to the system. Scaling out is easy, and as
more Oracle Exadata Storage Servers are added, capacity and performance increase
linearly. This, coupled with the faster interconnect and the reduction in data
transferred due to offload processing, yields very large performance improvements.

8.9 Oracle Enterprise Manager (OEM) Grid Control


In the Oracle environment, the responsibilities for maintaining complex grids
naturally fall within the realm of Oracle Enterprise Manager (OEM) with its Grid
Control functionality. The Oracle database and application server help manage
individual instances of these products through self-managing capabilities.
The Oracle database includes a built-in intelligent management infrastructure that
monitors and diagnoses internal performance and availability. The database's internal
manager tracks which SQL statements are consuming the most resources, where the
bottlenecks are, and how resources such as storage and memory are being used.
Similar management capabilities reside within Oracle Application Server, and together
these automation building blocks enable Grid Control to simplify the management of
clustered, mid-tier grid environments.
Grid Control provides extensive middle-tier management and monitoring capabilities
in one integrated tool that spans the entire grid environment. Its capabilities include
multisystem management, provisioning and configuration management, automated
administration, policy-driven standardization across sets of systems, and end-to-end
diagnostics.
Grid Control views the availability and performance of the grid infrastructure as a
unified whole rather than as isolated storage units, databases, and application servers.


It allows hardware nodes, databases, and application servers to be grouped into
single logical entities and managed as one unit. Grid Control provides a
simplified, centralized management framework for managing enterprise resources and
analyzing a grid's performance. Grid administrators can manage the complete grid
environment via a Web browser throughout the entire system's software lifecycle, front
to back, from any network location.
In terms of its monitoring capabilities, Grid Control provides administrators with
proactive tools, letting them create representative transactions that give them a
window into actual performance across the grid. It generates alerts that are based on
the application's actual performance, not just that of individual components such as
the database, application server, HTTP server, or network routers. A key attribute of
Grid Control is that it's designed for multitier, heterogeneous environments with an
ability to reach across all the tiers of resources that affect the environment.
Grid Control is but one aspect of Oracle Enterprise Manager. The ORA Management
and Monitoring document describes a complete monitoring and management
architecture.

8.10 Oracle TUXEDO and Service Architecture Leveraging Tuxedo (SALT)


Oracle Tuxedo provides a solid foundation for application services, with strong
reliability and transaction integrity, ultra-high performance, linear scalability, and
configuration-based deployment. As the distributed transaction-processing platform
of choice, it provides the operational backbone for large mission-critical systems.
Oracle Tuxedo keeps these systems up and running even when deploying new
application services, scaling server configurations to handle additional workload, or
failing over within or across data centers.

Figure 8–7 Oracle TUXEDO

Oracle Tuxedo provides a service-oriented infrastructure for efficiently routing,
dispatching, and managing requests, events, and application queues across system
processes and application services. With virtually limitless scalability, it manages peak
transaction volumes efficiently, improving business agility and letting IT organizations
quickly react to changes in business demands and throughput. Oracle Tuxedo
optimizes transactions across multiple databases and ensures data integrity across all
participating resources, regardless of access protocol. The system tracks transaction

participants and supervises the X/Open XA two-phase commit protocol, ensuring
that all transaction commits and rollbacks are properly handled.
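The two-phase commit protocol that a transaction manager such as Tuxedo supervises can be sketched in miniature. This is a toy coordinator, not Tuxedo's implementation; class and method names are illustrative.

```python
class Participant:
    """Toy transaction participant: votes in the prepare phase, then
    commits or rolls back as directed by the coordinator."""

    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (vote).
    if all(p.prepare() for p in participants):
        # Phase 2a: unanimous yes vote -> everyone commits.
        for p in participants:
            p.commit()
        return "committed"
    # Phase 2b: any no vote -> everyone rolls back.
    for p in participants:
        p.rollback()
    return "rolled_back"

ok = [Participant("db1"), Participant("db2")]
assert two_phase_commit(ok) == "committed"

bad = [Participant("db1"), Participant("db2", can_commit=False)]
assert two_phase_commit(bad) == "rolled_back"
```

The coordinator never commits unless every resource has voted yes in the prepare phase, which is what guarantees that all participants reach the same outcome.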
Oracle Tuxedo's SOA support lets enterprises develop composite (or hybrid)
end-to-end solutions that combine the availability and scalability of Oracle Tuxedo
with the extensibility of Java. As shown in Figure 8–7, Oracle Tuxedo applications can
be extended to:
■ Java clients via Oracle Tuxedo Jolt
■ Microsoft .NET clients via Oracle Tuxedo .NET workstation client
■ Web services and Service Component Architecture (SCA) support via Oracle
Service Architecture Leveraging Tuxedo (SALT).
■ Bidirectional Enterprise JavaBeans integration with Oracle WebLogic Server
■ Heterogeneous, mediated messaging with Oracle Service Bus
Oracle Tuxedo supports many different connectivity and interoperability standards, so
that applications and services can be used across heterogeneous environments. The
domains architecture supports interoperability among different messaging and
transaction-processing applications running in separate environments, networks,
geographic locations, and companies as well as across application server platforms,
including Oracle WebLogic Server and Oracle Service Bus and mainframes running
IBM CICS or IMS TM.
Figure 8–8 shows the architecture of Oracle SALT. With Oracle SALT, Oracle Tuxedo
services can transparently call external Web services as if calling another Oracle
Tuxedo service, and external applications can transparently call Tuxedo services
through standard Web service interfaces. In addition to basic Web services protocols,
Oracle SALT complies with most primary Web services specifications, including
WS-Addressing, WS-Security, and WS-ReliableMessaging. Oracle SALT also provides
a Service Component Architecture (SCA) container. SCA programming provides
component reuse, multi-container support, and the ability to focus on business logic.
Support for the SCA programming model on top of Tuxedo enables more effective
management of the service lifecycle, including systematic reuse of existing services in
SCA-based composite applications and runtime discovery of service signatures. SCA
support also makes Tuxedo-based applications (commonly written in C/C++ and
COBOL) interoperable with other enterprise applications written in Java, .NET, and
other languages, a significant benefit to customers.


Figure 8–8 SALT

Oracle SALT can be integrated with Oracle Service Registry and Oracle Enterprise
Repository to publish Tuxedo services metadata for broad access within the enterprise,
enabling their use in Oracle BPEL PM, Oracle Business Rules, and Oracle Service Bus
as well as any third party users of the Registry and Repository.

8.11 Oracle VM
Oracle VM is server virtualization software that fully supports both Oracle and
non-Oracle applications and delivers efficient performance. Oracle VM offers
scalable, low-cost server virtualization. Consisting of open source server software and
an integrated Web browser-based management console, Oracle VM provides an
easy-to-use graphical interface for creating and managing virtual server pools, running
on x86 and x86-64-based systems, across an enterprise.


Figure 8–9 Oracle VM

Oracle VM Templates deliver rapid software deployment and eliminate installation
and configuration costs by providing pre-installed and pre-configured software
images. Oracle VM and Fusion middleware products combine the benefits of server
clustering and server virtualization technologies, delivering integrated clustering,
virtualization, storage, and management for grid computing.
Users can create and manage Virtual Machines (VMs) that exist on the same physical
server but behave like independent physical servers. Each virtual machine created
with Oracle VM has its own virtual CPUs, network interfaces, storage and operating
system. With Oracle VM, users have an easy-to-use browser-based tool for creating,
cloning, sharing, configuring, booting and migrating VMs.
The components of Oracle VM as shown in Figure 8–9 are:
■ Oracle VM Manager: Provides the user interface, which is a standard ADF
(Application Development Framework) web application, to manage Oracle VM
Servers. It manages the virtual machine lifecycle, including creating virtual machines
from installation media or from a template, and deleting, powering off, uploading,
deploying, and live-migrating virtual machines. It also manages resources, including
ISO files, virtual machine templates, and sharable hard disks.
■ Oracle VM Server: A self-contained virtualization environment designed to
provide a lightweight, secure, server-based platform for running virtual machines.
Oracle VM Server is based upon an updated version of the underlying Xen
hypervisor technology, and includes Oracle VM Agent.
■ Oracle VM Agent: Installed with Oracle VM Server. It communicates with Oracle
VM Manager for management of virtual machines.
■ Hypervisor: Oracle VM Server is architected such that the hypervisor (also called
the Virtual Machine Monitor) is the only fully privileged entity in the system, but
is also extremely small and tightly written. It controls only the most basic
resources of the system, including CPU and memory usage, privilege checks, and
hardware interrupts.


■ Domains: Most of the responsibility for hardware detection in an Oracle VM Server
environment is passed to the management domain, referred to as domain zero or
dom0. Domains other than the management domain are referred to as domU.
These domains are unprivileged domains with no direct access to the hardware or
device drivers.

8.11.1 Oracle VM Templates


Oracle VM Templates provide an innovative approach to deploying a fully configured
software stack by offering pre-installed and pre-configured software images. Use of
Oracle VM Templates eliminates installation and configuration costs and reduces
ongoing maintenance costs, helping organizations achieve faster time to market
and lower cost of operations. Oracle VM Templates of many key Oracle products are
available for download, including Oracle Database, Enterprise Linux, Fusion
Middleware, and many more. Oracle also provides tools like Oracle VM Template
Builder to create VM templates for third party software and applications.

8.12 Oracle database and Oracle Real Application Clusters (RAC)


Persisting and managing the data is a fundamental requirement of any infrastructure.
Large, mission-critical deployments that utilize grid infrastructure require
high-performance databases with very high availability, since data access must not
become a bottleneck. Oracle Database is a proven, market-leading database that
offers sophisticated clustering and management features.
Oracle Real Application Clusters (RAC) provides scalability and high availability at the
database tier. The database tier can be horizontally scaled by adding database server
instances that access the storage grid. The RAC instances can be configured for load
balancing or failover based on the specific needs of the application.
Fast Application Notification (FAN) is a feature of Oracle Real Application Clusters
(RAC) that further differentiates it for high availability and scalability. FAN enables
the automated recovery of applications when cluster components fail. The RAC HA
framework provides notifications of any change in the cluster configuration.
Applications can subscribe to events and react quickly so their users can immediately
take advantage of additional resources and are unaffected (or minimally affected) by a
reduction in available resources.
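The FAN subscription model can be sketched as a simple publish/subscribe loop. This is a toy illustration, not the actual FAN API, and all names are hypothetical: the cluster publishes up/down events, and subscribed applications adjust their connection targets immediately instead of waiting for failed requests to time out.

```python
class ClusterNotifier:
    """Toy FAN-style notifier: applications subscribe to cluster events and
    react immediately when an instance comes up or goes down."""

    def __init__(self):
        self.subscribers = []
        self.instances = set()

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def _publish(self, event, instance):
        for callback in self.subscribers:
            callback(event, instance)

    def instance_up(self, instance):
        self.instances.add(instance)
        self._publish("UP", instance)

    def instance_down(self, instance):
        self.instances.discard(instance)
        self._publish("DOWN", instance)

# An application keeps its connection targets current via notifications.
targets = set()
notifier = ClusterNotifier()
notifier.subscribe(
    lambda event, inst: targets.add(inst) if event == "UP" else targets.discard(inst)
)
notifier.instance_up("rac-node-1")
notifier.instance_up("rac-node-2")
notifier.instance_down("rac-node-1")
assert targets == {"rac-node-2"}
```

The push model is the key point: the application learns of a configuration change the moment it is published, rather than discovering it through failed connections.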

8.12.1 Oracle Automatic Storage Management (ASM)


Automatic Storage Management (ASM) provides a vertical integration of the file
system and volume manager, purpose-built for Oracle database files. ASM
provides the following benefits:
■ ASM distributes I/O load across all available resources to optimize performance
while removing the need for manual I/O tuning (spreading out the database files
avoids hotspots).
■ ASM helps DBAs manage a dynamic database environment by allowing them to
grow the database size without having to shut down the database to adjust the
storage allocation.
■ ASM virtualizes storage to a set of disk groups and provides redundancy options
to enable a high level of protection.
■ ASM facilitates non-intrusive storage configuration changes with automatic
rebalancing. It spreads database files across all available storage to optimize
performance and resource utilization.
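The even-spreading and rebalancing behavior described above can be illustrated with a toy round-robin striping function. This is not ASM's actual algorithm, and the names are hypothetical: when a disk is added, a rebalance spreads the same extents evenly across the larger disk set.

```python
def rebalance(extents, disks):
    """Toy ASM-style spread: stripe extents round-robin across all disks,
    so no single disk becomes a hotspot."""
    layout = {disk: [] for disk in disks}
    for i, extent in enumerate(extents):
        layout[disks[i % len(disks)]].append(extent)
    return layout

extents = list(range(12))
before = rebalance(extents, ["disk1", "disk2", "disk3"])
assert all(len(owned) == 4 for owned in before.values())

# Adding a disk triggers an automatic, even redistribution of the same data.
after = rebalance(extents, ["disk1", "disk2", "disk3", "disk4"])
assert all(len(owned) == 3 for owned in after.values())
```

Because every disk holds an equal share before and after the change, I/O load stays balanced without any manual tuning, which is the benefit the bullets above describe.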

9 Summary

Businesses are increasingly looking to IT to provide the capabilities needed to stay
competitive and agile in the marketplace. SOA, BPM, BI, and other technologies enable
IT to meet the functional demands of the business. Business expansion and growth
require agile and high performance systems that can respond quickly to the needs of
the business. ORA Application Infrastructure provides the capabilities required to
build a flexible and dynamic infrastructure with the cost benefits of the on-demand
model.
The foundation infrastructure must be designed with the needs of the business in
mind to achieve the specific benefits expected. Among other benefits, it may help
lower costs and improve time to market. Technology strategies like SOA, BPM,
MDM, and BI can take advantage of the grid infrastructure to add on-demand
capabilities and enhance performance. Caching technology and data grids can
improve infrastructure performance many times over through better response times
and throughput.
Building next-generation IT requires companies to lay out a rich and flexible
foundation on which technologies and solutions can be deployed. Take the current and
future needs of the business into account when designing systems, and ensure that the
infrastructure can meet those demands and challenges. Oracle offers the products and
technologies required to build the enterprise foundation and solutions efficiently and
cost-effectively.

A Further Reading

The IT Strategies From Oracle series contains a number of documents that offer
insight and guidance on many aspects of technology. In particular, the following
documents pertaining to the SOA infrastructure may be of interest:

A.1 Related Documents


ORA SOA Foundation - The SOA Foundation document presents important basic
concepts of SOA that are instrumental to building applications for a SOA
environment. It covers topics including the components of a service, service layering,
service types, the service model, composite applications, invocation patterns, and
standards that apply to SOA.
ORA Integration - The ORA Integration document examines the most popular and
widely used forms of integration, putting them into perspective with current trends
made possible by SOA standards and technologies. It offers guidance on how to
integrate systems in the Oracle Fusion environment, bringing together modern
techniques and legacy assets.
ORA Security - The ORA Security document describes important aspects of security
including identity, role, and entitlement management, authentication, authorization,
and auditing (AAA), and transport, message, and data security.

A.2 Other Resources and References


In addition, the following materials and sources of information relevant to SOA
Infrastructure may be useful:
■ Oracle Grid Computing - An Oracle whitepaper, May 2008
■ http://soapatterns.org - SOA Patterns - Describes a list of SOA patterns
that includes the "Service Grid" pattern discussed in this document.
■ http://www.oracle.com/appserver/docs/data-grids-soa-whitepaper.pdf - Data Grids and Service Oriented Architecture, An Oracle White Paper
■ http://en.wikipedia.org/wiki/Cloud_computing - Cloud Computing in
Wikipedia
■ http://www.oracle.com/technology/tech/cloud/index.html - Oracle
Cloud Computing Center
■ http://aws.amazon.com/ec2/ - Amazon Elastic Compute Cloud (EC2)
■ http://www.gridtalk.org/briefings/gridsandclouds.pdf - Grids and
Clouds: The new computing


■ http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing - Fallacies of distributed computing
■ http://en.wikipedia.org/wiki/Software_as_a_service - Software As
A Service
■ http://www.oracle.com/technologies/virtualization/index.html -
Oracle Virtualization
■ http://www.oracle.com/ondemand/collateral/virtualization-oracle-vm-wp.pdf - Oracle virtualization white paper
■ Grid computing with Oracle database 11g - IDC Whitepaper
■ Next-Generation Grid-Enabled SOA: Not Your MOM's Bus, Chappell & Berry, SOA
Magazine, Jan 2008
■ A move to cloud computing should involve SOA and BPM; Kavis, Mike;
SearchCIO.com
■ The Grid 2, Second Edition: Blueprint for a New Computing Infrastructure, Foster and
Kesselman
■ http://www.oracle.com/database/exadata.html - Oracle Exadata
■ http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm - Oracle Maximum Availability Architecture - MAA
■ http://en.wikipedia.org/wiki/Rete_algorithm - Rete algorithm for
rules engines.
■ http://www.oracle.com/technology/products/ias/business_rules/pdf/businessWhitepaper.pdf - Oracle Business Rules whitepaper



Glossary

Please refer to the ORA Master Glossary for a comprehensive list of glossary terms.

CAPEX
Capital expenditures or CAPEX are expenditures creating future benefits. A capital
expenditure is incurred when a business spends money either to buy fixed assets or to
add to the value of an existing fixed asset with a useful life that extends beyond the
current year.

OPEX
An operating expenditure or OPEX is an ongoing cost for running a product,
business, or system. In contrast to the CAPEX model, the OPEX model incurs
expenses as the services are rendered.

Service Oriented Architecture (SOA)


Service Oriented Architecture (SOA) is an IT strategy for constructing
business-focused, software-intensive systems from loosely coupled, interoperable
building blocks (called Services) that can be combined and reused quickly, within and
between enterprises, to meet business needs.

Universal Description, Discovery and Integration (UDDI)


Universal Description, Discovery and Integration (UDDI) is a platform-independent,
XML-based, open industry initiative, sponsored by the Organization for the
Advancement of Structured Information Standards (OASIS), that enables businesses to
publish service listings, discover each other, and define how services and software
applications interact over the Internet.

