D80153GC30 sg2

Authors
Al Saganich
Tom Eliason
Serge Moiseev
Mark Lindros
TJ Palazzolo

Technical Contributors and Reviewers
Joe Greenwald
Bill Bell
Elio Bonazzi
Tom McGinn

Publishers
Pavithran Adka
Srividya Rameshkumar
Veena Narasimhan

Disclaimer
This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice
If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United States Government, the following notice is applicable.

3003062020
Contents
1 Course Introduction
Course Objectives 1-2
Target Audience 1-4
Course Prerequisites 1-5
Introductions and Setting Expectations 1-6
Course Schedule 1-7
Major Upgrade 3-7
Quiz 3-8
Agenda 3-9
What Is a Rolling Upgrade? 3-10
Multiple Installation and Domain Locations 3-11
Leverage WebLogic Clusters to Avoid Down Time 3-12
Quiz 3-13
Agenda 3-14
Rolling Upgrade Process: Overview 3-15
Backup 3-16
Shutdown 3-17
SQL Scripts and LDAP Data 4-25
Creating Windows Start Menu Entries 4-26
Creating an Extension Template 4-27
Using a Custom Template with the Configuration Wizard 4-29
Post Domain Creation Tasks 4-35
Using a Custom Extension Template with the Configuration Wizard 4-36
Summary 4-38
Practice 4-1 Overview: Creating and Using a Custom Domain Template 4-39
Jython 6-6
Using Jython 6-7
Variable Declaration 6-8
Conditional Expressions 6-9
Loop Expressions 6-10
I/O Commands 6-11
Exception Handling 6-12
Quiz 6-13
Agenda 6-14
WLST Modes 6-15
WLST Example 6-16
Creating an LDAP Authentication Provider 6-50
Modifying a Domain Offline 6-51
Deploying an Application 6-52
Creating a Dynamic Cluster 6-53
Scaling Up a Dynamic Cluster 6-55
Dynamic Cluster Scaling 6-56
Scaling Down a Dynamic Cluster 6-57
Script Interceptors 6-58
Quiz 6-60
Agenda 6-61
Some FMW Commands 6-62
Common Workflows: Modify a Configuration Using an Edit 7-29
Common Workflows: Deploy/View/Undeploy an Application 7-30
Common Workflows: Start/View/Stop Servers 7-31
Common Workflows: Monitor Servers 7-32
Summary 7-33
Quiz 7-34
Practice 7-1 Overview: Creating and Managing Servers Using REST 7-35
9 Application Work Managers
Objectives 9-2
Agenda 9-3
WebLogic Server Threads 9-4
Monitoring a Server Thread Pool 9-5
Monitoring Server Threads 9-6
Stuck Thread Handling 9-7
Configuring Stuck Thread Handling 9-8
Application Stuck Thread Handling 9-9
Quiz 9-10
Agenda 9-11
Configuring New Groups 10-14
Configuring Group Memberships 10-15
Configuring New Roles 10-16
What Is Role Mapping? 10-17
Configuring Role Mapping 10-18
Configuring Roles Using WLST 10-19
Configuring New Policies 10-20
Configuring Policies Using WLST 10-21
Security Configuration Sources 10-22
Configuring Sources Using WLST and weblogic.Deployer 10-23
Deployment Descriptor Security Example: weblogic.xml 10-24
Backing Up a Domain Configuration 11-14
Recovery of the Administration Server Configuration 11-15
Restarting an Administration Server on a New Computer 11-16
Quiz 11-17
Agenda 11-18
Java Transaction API (JTA) Review 11-19
What Is Service Migration? 11-20
Service Migration Prerequisites 11-21
Service Migration Architecture: Database Leasing 11-22
Service Migration Architecture: Consensus Leasing 11-23
What Is a Migratable Target? 11-24
Data Source Interceptor 12-12
Practice 12-1 Overview: Controlling a Data Source 12-13
Agenda 12-14
Oracle Real Application Clusters (RAC): Overview 12-15
Oracle GridLink for RAC 12-16
Agenda 12-17
Multi Data Sources 12-18
Multi Data Source Architecture 12-19
Comparison of GridLink and Multi Data Sources 12-20
Failover Option 12-21
Load Balancing Option 12-22
13 Diagnostic Framework
Objectives 13-2
Agenda 13-3
WebLogic Diagnostics Framework (WLDF) 13-4
WLDF Architecture 13-5
Diagnostic Archives 13-6
Configuring Server Diagnostic Archives 13-7
Diagnostic Modules 13-8
Dynamic Diagnostic Modules 13-10
Resource Descriptors 13-11
Creating a Diagnostic Module 13-12
WLST: Example 13-13
WLST Commands for WLDF 13-14
Quiz 13-15
Agenda 13-16
What Is a Diagnostic Image? 13-17
Capturing a Server Diagnostic Image 13-18
WLST: Example 13-19
Quiz 13-20
Agenda 13-21
What Is a Harvester? 13-22
Metric Collectors 13-23
Configuring a Metric Collector 13-24
WLST: Example 13-25
Coherence Container: Benefits 14-19
Coherence Cluster 14-20
Managed Coherence Server 14-21
Quiz 14-22
Summary 14-23
Practice 14-2 Overview: Configuring Managed Coherence Servers 14-24
Using an Existing Deployment Plan 15-38
WLST createPlan 15-39
weblogic.PlanGenerator 15-40
Using Plan Generator 15-41
Oracle Enterprise Pack for Eclipse 15-42
Managing Deployment Plans 15-46
Summary 15-47
Practice 15-1 Overview: Creating and Using a Deployment Plan 15-48
Agenda 16-33
Oracle Linux Container Services 16-34
Host Resource Requirements 16-35
Time Synchronization and Request Forwarding for Kubernetes 16-36
Kubernetes Port Use 16-37
Docker Container Registry 16-38
Oracle Registry Mirror Servers 16-39
Using Private Docker Registry 16-40
Local Container Registry 16-41
Consider Using minikube Tool 16-42
Summary 16-43
Summary A-38
[Optional] Practice Appendix A-1 Overview: Using Production Redeployment A-39
C Oracle Cloud
Agenda C-2
What Is Cloud? C-3
What Is Cloud Computing? C-4
History – Cloud Evolution C-5
Components of Cloud Computing C-6
Characteristics of Cloud C-7
Cloud Deployment Models C-8
Cloud Service Models C-9
Industry Shifting from On-Premises to the Cloud C-13
Oracle IaaS Overview C-15
Oracle PaaS Overview C-16
Oracle SaaS Overview C-17
Summary C-18
D Oracle Java Cloud Service Overview
Objectives D-2
Introducing Java Cloud Service: Your platform for running business applications in the cloud D-3
Java Cloud Service: Three Options D-4
Java Cloud Service Main Use Cases D-5
Java Cloud Service Feature: Provisioning D-6
Java Cloud Service Feature: Patching D-7
Java Cloud Service Feature: Backup / Restore D-8
Java Cloud Service Feature: Scaling D-9
Oracle Coherence Option: Data Caching & Scaling D-10
• Disaster recovery:
– Ensures application availability after the loss of an entire data center
– Addresses catastrophic failures, such as natural disasters
– May or may not guarantee zero down time during the recovery process
• A common disaster recovery implementation involves deploying multiple data center
sites in an active/passive fashion:
– The production (active) site handles all application traffic.
– The standby (passive) site is used only in the event of a disaster.
Enterprise deployments need protection from unforeseen disasters and natural calamities. One protection
solution involves setting up a standby site at a geographically different location from the production site. The
standby site may have equal or fewer services and resources compared to the production site. Application
data, metadata, configuration data, and security data are replicated to the standby site on a periodic basis.
The standby site is normally in passive mode; it is started when the production site is not available. This
deployment model is sometimes referred to as an active/passive model. This model is normally adopted
when the two sites are connected over a WAN and network latency does not allow clustering across the two
sites.
[Slide diagram: Clients reach a global load balancer, which sends traffic to the production site's load balancer; the standby site's load balancer remains passive, and the two sites synchronize data.]
Clients access the production site during normal operation. During disaster recovery, clients access the
standby site. Ideally, this change should be seamless from the client’s perspective, although practically
there may be a very brief interruption in service, depending on the implementation details.
For disaster recovery to be effective, the production and standby sites must be synchronized to some
degree. At the very least, a version of the same application must be deployed to both sites.
Symmetric: Both the topology and hardware are identical to the production site.
Partially symmetric: The topology is the same as the production site, but the hardware is different (for example, configurations with a different number of machines).
Asymmetric: The standby site has fewer resources than the production site.
A disaster recovery configuration that is completely identical across tiers on the production site and standby
site is called a symmetric site. A site can be completely symmetric or partially symmetric. In a completely
symmetric site, the production site and standby site are identical in all respects. That is, they have identical
hardware, operating systems, load balancers, middleware instances, applications, and databases. The
same port numbers are used for both sites as well.
In a partially symmetric site, the production site and standby site are identical in topology, but not in
hardware. That is, they have the same number of middleware instances, applications, and databases on
each site, but the hardware or operating system is not identical. For example, you can have ten machines
on the production site and eight machines on the standby site. It is recommended but not required to have
identical hardware and operating systems on the production and standby sites when planning a disaster
recovery site.
In an asymmetric topology, the standby site has fewer resources than the production site. Typically, the
standby site in an asymmetric topology has fewer hosts, load balancers, Fusion Middleware instances, and
applications than the production site. It is important to ensure that an asymmetric standby site has sufficient
resources to provide adequate performance when it assumes the production role. Otherwise, it may become
oversaturated and unavailable.
[Slide diagram: Production and standby sites connected over a corporate WAN (Ethernet), showing the Fusion Middleware tier, the network, and storage replication between the sites.]
Oracle’s recommended disaster recovery strategy facilitates data protection in two main ways. You should
replicate the storage of one system to the other to protect the middleware product binaries, configurations,
metadata files, and application data that reside on the file system. Oracle Data Guard protects the Oracle
database, which may or may not be running on Exadata machines. This database contains Oracle Fusion
Middleware Repository data, as well as customer data.
Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or
more standby databases to enable production Oracle databases to survive disasters and data corruptions. It
maintains these standby databases as copies of the production database. Then, if the production database
becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby
database to the production role, minimizing the down time associated with the outage. Data Guard can be
used with traditional backup, restoration, and cluster techniques to provide a high level of data protection
and data availability.
Oracle Active Data Guard, an option built on the infrastructure of Data Guard, allows a physical standby
database to be open read-only while changes are applied to it from the primary database. Currently, Oracle
Fusion Middleware does not support configuring Oracle Active Data Guard for its database repositories.
• Place all data on shared storage; only use compute node local storage for the OS.
• Bind to and use common host names instead of IP addresses.
• Use separate DNS servers and/or hosts files on each site.
• Replicate any external resources on which WebLogic processes depend (LDAP,
database, and so on).
In a disaster recovery topology, the host names used for intra-component and inter-component communication need to be the same. Typically, the site where Oracle Fusion Middleware is installed first dictates the host names used. The standby site, instantiated later, should be configured to resolve these host names to the local standby site IP addresses. Therefore, it is important to plan the host names for the production site and standby site. It is also very important that the configuration at all levels uses only host names. When configuring each component, use host name–based configuration instead of IP-based configuration, unless the component requires you to use IP-based configuration. For example, instead of setting the listen address of an Oracle Fusion Middleware component to a specific IP address such as 192.168.10.33, use a host name such as wlsvhn1.mycompany.com that resolves to 192.168.10.33.
[Slide example: Each site's hosts file maps the common host names used across both sites to site-specific addresses, and adds an entry for the replication channel of the other site's storage appliance. For example, the production site defines 10.200.2.50 bakrepl.mycompany.com, and the standby site defines 10.100.2.50 prodrepl.mycompany.com.]
In a disaster recovery topology, the production site host names must be resolvable to the IP addresses of
the corresponding peer systems at the standby site. This can be set up by creating a host name alias in the
/etc/hosts file (the second entry shown in the slide). Create host name aliases for all the hosts on the
production and standby sites. This example includes a WebLogic administration server, a WebLogic
managed server, and a proxy server. Also, add entries for the replication channel on each storage
appliance. The examples in this lesson assume that a symmetric disaster recovery site is being set up,
where the production site and standby site have the same number of hosts and servers. Each host at the
production site has a peer host at the standby site and the peer hosts are configured the same. For
example, hosts at one site use the same port numbers as their counterparts at the other site.
Answer: b
• A domain must have exactly one instance of WebLogic Server acting as the
administration server. An administration server is part of exactly one domain.
• The administration server is:
– The central point through which you configure and manage all domain resources
– Solely in charge of the domain’s configuration. It distributes configuration changes to
other servers in the domain
– An instance of WebLogic Server and, therefore, a fully functional Java Enterprise
Edition application server
All domains contain a special server called the administration server. You use the administration server to
configure and manage all the domain resources. Any other WebLogic Servers in the domain are called
managed servers.
In most domains, the applications are deployed to the managed servers. The administration server is used
only for domain configuration and management.
Because an administration server is an instance of WebLogic Server, it can perform any task of a Java
Enterprise Edition application server. Applications can be deployed and run on the administration server.
For simplicity, a development domain often contains only the administration server. Developers deploy and
test their applications on the administration server.
When you first start a managed server, it must be able to connect to the administration server to retrieve a
copy of the configuration. Subsequently, you can start a managed server even if the administration server is
not running.
If a managed server cannot connect to the administration server during its start up, then it uses the locally
cached configuration information. A managed server that starts without synchronizing its configuration with
the administration server is running in Managed Server Independence (MSI) mode. By default, MSI mode is
enabled. However, a managed server cannot start in MSI mode the very first time it is started, because the
locally cached configuration does not yet exist.
The failure of an administration server does not affect the operation of managed servers in the domain, but it
does prevent you from changing the domain’s configuration. If an administration server fails because of a
hardware or software failure on its host computer, other server instances on the same computer may be
similarly affected.
If an administration server becomes unavailable while the managed servers in the domain are running, then
those managed servers continue to run. Periodically, the managed servers attempt to reconnect to the
administration server. When the connection is successful, the configuration state is synchronized with that of
the administration server.
For clustered managed server instances, the load balancing and failover capabilities supported by the
domain configuration continue to remain available.
Under Domain > Configuration > General > Advanced, you can enable the automatic backup of the
configuration at the domain level. Each startup of the administration server creates two files in the domain
directory: config-booted.jar and config-original.jar. In addition, each saved change of the
configuration file makes a backup named configArchive/config-n.jar, where n is a sequential
number. The Archive Configuration Count attribute limits the number of retained configuration JARs, so that
in the example shown, there are never more than two kept: the most recent backup and the one immediately
before that. Older backups are automatically deleted. If you made a series of mistakes, this provides a very
easy way to return to a previous recent configuration. However, be aware that a typical configuration change
requires clicking the Activate Changes button a few times, and each one then cycles the stored JARs.
You may want to set a higher number such as 10 or 20 for the Archive Configuration Count depending on:
• The available disk space
• The need for backup and restoration
• The time taken for backup and restore activity
Note: Even if you use the configuration backup feature, it is always a good practice to manually make a
backup of a known working configuration at an important milestone.
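The same settings can also be scripted with WLST. The following is a minimal sketch, not a definitive procedure: the credentials and URL are hypothetical, and the ConfigBackupEnabled and ArchiveConfigurationCount attribute names should be verified against your WebLogic Server release.

# WLST sketch: enable automatic configuration backups for the domain
connect('weblogic', 'Welcome1', 't3://localhost:7001')   # hypothetical credentials and URL
edit()
startEdit()
cd('/')                                  # the domain MBean
cmo.setConfigBackupEnabled(true)         # archive config.xml each time changes are activated
cmo.setArchiveConfigurationCount(10)     # keep the 10 most recent configArchive/config-n.jar files
save()
activate()
disconnect()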
Managed Server Independence (MSI), which is enabled by default, reduces the urgency to fix the outage.
The administration server is required only for making changes to the active configuration; it is not required
for the normal operation of the managed servers as long as the managed servers are in Managed Server
Independence Enabled mode, which is the default. This allows you time to recover the administration server
without any service outages. As shown in the screenshot, the heartbeat interval between the
administration server and the managed servers is, by default, one minute. After four minutes of not
hearing from the administration server, the managed servers become independent. After the administration
server is fixed, the heartbeats start up again and the managed servers deactivate their independence, but
MSI is still enabled for a future event. These times can all be changed to suit your particular environment.
[Slide example: OLD: AdminServer1 at admin.example.com (192.168.0.1). NEW: AdminServer1 at admin.example.com (192.168.0.2). The name is reassigned via DNS or a virtual IP.]
If a hardware crash prevents you from restarting the administration server on the same computer, you can
recover the management of the running managed servers as follows:
1. Install the Oracle WebLogic Server software on the new computer designated as the replacement
administration server.
2. Make your application files available to the new administration server by copying them from backups
or by using a shared disk. Your files must be available in the same relative location on the new file
system as on the file system of the original administration server.
3. Make your configuration and security files available to the new administration computer by copying
them from backups or by using a shared disk. These files are located under the directory of the
domain being managed by the administration server.
4. Restart the administration server on the new computer.
When the administration server starts, it communicates with the already-running managed servers via a
Node Manager and informs the servers that the administration server is now running on a different IP
address.
Note: You cannot have two administration servers at the same time, both claiming ownership of the same
managed servers. This is not a warm standby; this must be a cold standby. The original administration
server must be stopped or dead for the backup administration server to contact the managed servers.
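If Node Manager is configured on the replacement machine, step 4 can be performed remotely with WLST. This is only a sketch: the credentials, host, Node Manager port, domain name, domain directory, and server name shown are placeholders for your own values.

# WLST sketch: restart the administration server through Node Manager on the new host
nmConnect('nmadmin', 'Welcome1', 'newhost.example.com', '5556',
          'base_domain', '/u01/domains/base_domain')      # hypothetical Node Manager connection values
nmStart('AdminServer')                                     # name of the administration server
nmDisconnect()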
Answer: d
You can have only one administration server at a time. To achieve high availability, you configure the
administration server on a virtual host so that, if the machine the administration server runs on fails,
you can fail it over to another host in the domain. The administration server is configured to listen on a
virtual IP that can be assigned to any of the backup hosts. The benefit of using a virtual host and virtual IP
is that you do not need to add a third machine; if failover occurs, the virtual host can be mapped to a
surviving host in the domain by "moving" the virtual IP.
WebLogic Server’s implementation of JTA provides the following support for transactions. It:
• Creates a unique transaction identifier when a client application initiates a transaction.
• Supports an optional transaction name describing the business process that the transaction
represents. The transaction name makes statistics and error messages more meaningful.
• Works with the WebLogic Server infrastructure to track objects that are involved in a transaction and,
therefore, must be coordinated when the transaction is ready to commit.
• Notifies the resource managers (typically, databases) when they are accessed on behalf of a
transaction. Resource managers lock the accessed records until the end of the transaction.
• Orchestrates the two-phase commit when the transaction completes, which ensures that all the
participants in the transaction commit their updates simultaneously. WebLogic Server coordinates
the commit with any databases that are being updated by using the Open Group’s XA protocol. Most
relational databases support this standard.
[Slide diagram: A cluster of three servers, each hosting pinned services such as a JMS server, a JTA service, and a custom service; after a failure, the services of one server (for example, JMS Server 2 and JTA Service 2) are migrated to another server in the cluster.]
Service-level migration in WebLogic Server is the process of moving the pinned services from one server
instance to a different server instance that is available within the cluster. The migration framework provides
tools and infrastructure for configuring and migrating targets, and, in the case of automatic service migration,
it leverages WebLogic Server’s health monitoring subsystem to monitor the health of services hosted by a
migratable target.
High availability is achieved by migrating a migratable target from one clustered server to another when a
problem occurs on the original server. You can also manually migrate a migratable target for scheduled
maintenance or you can configure the migratable target for automatic migration.
JTA services are singleton services, and, therefore, are not active on all server instances in a cluster. To
ensure that singleton JTA services do not introduce a single point of failure for dependent applications in the
cluster, WebLogic Server can be configured to automatically or manually migrate them to any server
instance in the migratable target list.
The transaction service automatically attempts to recover transactions on system startup by parsing all
transaction log records for incomplete transactions and completing them.
Within an application, you can also define a custom singleton service that can be used to perform tasks that
you want to be executed on only one member of a cluster at any given time.
When using consensus leasing, the member servers maintain leasing information in-memory, which
removes the requirement of using a database. But this type of leasing also requires that you use Node
Manager to control servers within the cluster. It also requires that all servers that are either migratable or
that could host a migratable target must have a Node Manager associated with them. The Node Manager is
required to get health monitoring information about the member servers involved.
Node Manager must be running on every machine hosting managed servers within the cluster only if
pre/post-migration scripts are defined. If pre/post-migration scripts are not defined and the cluster uses the
database leasing option, then Node Manager is not required.
To migrate the JTA Transaction Recovery Service from a failed server in a cluster to another server in the
same cluster, the backup server must have access to the transaction log (TLOG) records from the failed
server. Transaction log records are stored in the default persistent store for the server or in a shared
database.
Leasing is the process WebLogic Server uses to manage services that are required to run on only one
member of a cluster at a time. Leasing ensures exclusive ownership of a
cluster-wide entity. Within a cluster, there is a single owner of a lease. Additionally, leases can fail over in
case of server failure. This helps to avoid having a single point of failure.
Setting a cluster’s Migration Basis to database leasing requires that the Data Source For Automatic
Migration option be set with a valid JDBC data source. It also implies that there is a table created in the
database that the managed servers will use for leasing.
To accommodate service migration requests, each server also performs basic health monitoring on
migratable services that are deployed to it. A server also has a direct communication channel to the leasing
system, and can request that the lease be released (thus triggering a migration) when bad health is
detected.
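For reference, the Migration Basis and leasing data source can also be set with WLST in an existing edit session. This is a minimal sketch under stated assumptions: the cluster name cluster1 and the data source name LeasingDS are placeholders, and the attribute names should be checked against your release.

# WLST sketch: configure database leasing for a cluster (assumes an existing connection to the admin server)
edit()
startEdit()
cd('/Clusters/cluster1')
cmo.setMigrationBasis('database')                          # 'database' or 'consensus'
cmo.setDataSourceForAutomaticMigration(getMBean('/JDBCSystemResources/LeasingDS'))
save()
activate()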
[Slide diagram: Node Manager instances check the health of the servers in the cluster; when a server fails, its migratable services are migrated to another server.]
Setting a cluster’s Migration Basis to consensus leasing means that the member servers maintain leasing
information in-memory, which removes the requirement of using a database. However, this version of
leasing requires that you use Node Manager to control servers within the cluster. In other words, all servers
that are migratable, or which could host a migratable target, must have a Node Manager associated with
them. The Node Manager is required to get health monitoring information about the member servers
involved.
A migratable target:
• Defines a group of servers within a cluster, along with a migration policy
• Identifies a primary or “preferred” server
• Can identify a list of candidate servers to use for migration
• Allows you to group pinned resources that should be migrated together
Configured clusters only!
[Slide diagram: A migratable target within a cluster, identifying a preferred server.]
A migratable target is a special target that can migrate from one server in a cluster to another. As such, a
migratable target provides a way to group migratable services that should move together. When the
migratable target is migrated, all services hosted by that target are migrated.
To configure a JMS service for migration, it must be deployed to a migratable target. A migratable target
specifies a set of servers that can host a target. Specifically it defines a preferred server for the services and
an ordered list of candidate backup servers should the preferred server fail. Only one of these servers can
host the migratable target at any one time.
After a service is configured to use a migratable target, the service is independent from the server member
that is currently hosting it. For example, if a JMS server with a deployed JMS queue is configured to use a
migratable target, the queue is decoupled from the server that hosts it. In other words, the queue is always
available when the migratable target is hosted by any server in the cluster.
You must target your JMS service to the same migratable target that your custom persistent store is also
targeted to. If no custom store is specified for a JMS service that uses a migratable target, a validation
message will be generated by the Administration Server, followed by failed JMS server deployment.
Moreover, a server configured in such a way will not boot.
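A migratable target can also be created with WLST. The sketch below is illustrative only: the names MigratableTarget-1, cluster1, server1, and server2 are assumptions, it presumes an existing connection to the administration server, and the migration policy values are explained in the notes that follow.

# WLST sketch: create a migratable target with a preferred server and candidate list
import jarray
from weblogic.management.configuration import ServerMBean

edit()
startEdit()
mt = cmo.createMigratableTarget('MigratableTarget-1')
mt.setCluster(getMBean('/Clusters/cluster1'))
mt.setUserPreferredServer(getMBean('/Servers/server1'))
mt.setConstrainedCandidateServers(jarray.array(
    [getMBean('/Servers/server1'), getMBean('/Servers/server2')], ServerMBean))
mt.setMigrationPolicy('exactly-once')    # or 'manual' (the default) or 'failure-recovery'
save()
activate()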
When a migratable target uses the manual policy (the system default), an administrator can manually
migrate pinned migratable services from one server instance to another in the cluster, either in response to
a server failure or as part of regularly scheduled maintenance.
The failure recovery policy indicates that the service will start only if its preferred server is started. If an
administrator manually shuts down the preferred server, either gracefully or forcibly, then services will not
migrate. However, if the preferred server fails due to an internal error, services will be automatically
migrated to another candidate server. If such a candidate server is unavailable (due to a manual shutdown
or an internal failure), the migration framework will first attempt to reactivate the service on its preferred
server. If the preferred server is not available at that time, the service will be migrated to another candidate
server.
The exactly-once policy indicates that if at least one server in the candidate list is running, then the service
will be active somewhere in the cluster, even if servers are shut down (either gracefully or forcibly). It is
important to note that this value can lead to target grouping. For example, if you have five exactly-once
migratable targets and only boot one server in the cluster, all five targets will be activated on that server.
The failure recovery option is recommended for most types of clustered JMS applications.
[Slide diagram: A cluster whose servers each run a JTA service and store their transaction logs (TLOG1, TLOG2, TLOG3) on shared storage or in a database, along with the JTA participants (resource managers).]
The JTA Transaction Recovery Service is designed to gracefully handle transaction recovery after a crash.
You can manually migrate the Transaction Recovery Service from an unhealthy server instance to a healthy
server instance, with the help of the server health monitoring services. In this manner, the backup server
can complete transaction work for the failed server.
By default, the transaction manager uses the default persistent store to store transaction log files. To enable
migration of the Transaction Recovery Service, you must either configure JTA to use a highly available
database, or configure the default persistent store so that it maintains its data files on a persistent storage
solution that is available to other servers in the cluster.
JMS services can be migrated independently of the JTA Transaction Recovery Service. However, because
the JTA Transaction Recovery Service provides the transaction control of the other subsystem services, it is
usually migrated along with them. This ensures that the transaction integrity is maintained before and after
the migration of the subsystem services.
[Slide diagram: After a failure, Server 1 reads TLOG2 and TLOG3 from shared storage or the database to recover transactions on behalf of the failed servers.]
When JTA has automatic migration enabled, a server will shut itself down if the JTA subsystem reports itself
as unhealthy (FAILED). For example, if any I/O error occurs when accessing the TLOG, then JTA health
state will change to FAILED.
A server can perform transaction recovery for multiple failed servers. While recovering transactions for other
servers, the backup server continues to process its own transactions.
For transactions for which a commit decision has been made, but the second phase of the two-phase
commit process has not completed, the Transaction Recovery Service completes the commit process. For
transactions that the transaction manager has prepared with a resource manager (transactions in phase one
of the two-phase commit process), the Transaction Recovery Service must try to recover the transaction for
each resource manager and eventually resolve (by either committing, rolling back, or forgetting) all
transaction IDs identified during the recovery process.
If the backup server finishes recovering the TLOG transactions before the primary server is restarted, it will
initiate an implicit migration of the Transaction Recovery Service back to the primary server. If the backup
server is still recovering the TLOG transactions when the primary server is restarted, the backup server will
initiate an implicit migration of the Transaction Recovery Service to the primary server.
WebLogic Server provides the ability to migrate the Transaction Recovery Service at both the server level
and the service level, either manually or automatically. Automatic service migration of the Transaction
Recovery Service leverages the Health Monitoring subsystem to monitor the health of the service
hosted by a migratable target. When the primary server fails, the migratable service framework automatically
migrates the Transaction Recovery Service to a backup server. When using the automatic service migration
feature, you must configure the Migration Basis for the cluster, enable automatic migration, and optionally
specify whether you will be using any pre- or post-migration scripts. In the example, a pre-migration script,
mount_disk.py, ensures that a shared disk is mounted on the surviving server. Such a script must be
stored in the DOMAIN_HOME/bin/service_migration directory. Conversely, a post-migration script
stored in the same directory, umount_old.py, ensures that a SAN disk is dismounted from the
failed server after the migration has occurred.
A server’s transaction manager is not assigned to a migratable target like other pinned services. Instead,
manual and automatic JTA is simply configured for each individual server in the cluster. This is because the
transaction manager has no direct dependencies on other pinned resources when contrasted with services
such as JMS.
Select each server in the cluster. In the JTA Migration Configuration section of the server's Configuration >
Migration tab, configure automatic migration of the JTA Transaction Recovery Service by selecting the
Automatic JTA Migration Enabled check box.
You may also want to restrict the potential servers to which you can migrate the Transaction Recovery
Service. For example, there may be cases when all servers do not have access to the current server’s
transaction log files. If no candidate servers are chosen, any server within the cluster can be chosen as a
candidate server. From the Candidate Servers Available box, select the managed servers that can access
the JTA log files. They become valid Candidate Servers when you move them into the Chosen box.
You must include the original server in the list of chosen servers so that you can manually migrate the
Transaction Recovery Service back to the original server, if need be. The administration console enforces
this rule.
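The equivalent per-server configuration can be scripted. The sketch below assumes that each server exposes a JTAMigratableTarget child MBean with a migration policy and candidate server list; the bean path, attribute names, and server names here are assumptions to verify against your release before use, and an existing connection to the administration server is presumed.

# WLST sketch (attribute names assumed): configure migration of the JTA Transaction Recovery Service
import jarray
from weblogic.management.configuration import ServerMBean

edit()
startEdit()
cd('/Servers/server1/JTAMigratableTarget/server1')
cmo.setMigrationPolicy('failure-recovery')            # migrate automatically after a server failure
cmo.setConstrainedCandidateServers(jarray.array(
    [getMBean('/Servers/server1'), getMBean('/Servers/server2')], ServerMBean))   # servers that can access the TLOG
save()
activate()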
Answer: a
Answer: a, c
Answer: c
Some types of WebLogic resources are not replicated and instead are pinned to a specific
server, including:
• JMS servers, Store and Forward Agents, Singleton services
• Persistent stores
• Transaction logs (TLOGs)
In a WebLogic Server cluster, most subsystem services are hosted homogeneously on all server instances
in the cluster, enabling transparent failover from one server to another. In contrast, pinned services, such as
JMS-related services and the WebLogic transaction recovery service, are hosted on individual server
instances within a cluster. For these services, the WebLogic Server migration framework supports failure
recovery through service migration and whole server migration.
JMS-related services are singleton services, and therefore are not active on all the server instances in a
cluster. Instead, they are pinned to a single server in the cluster to preserve data consistency. However, this
approach potentially introduces a single point of failure in your application.
Choose one of the following options to ensure HA for JMS and to avoid orphaned messages: service
migration or whole server migration.
Server and service migration are capabilities that are available only to configured clusters that do not use
cluster-targeted JMS servers. Service migration was previously not supported with dynamic clusters, but it is
supported for dynamic clusters as of WebLogic Server 12.2.1.
WebLogic Server provides the capability to migrate clustered server instances. A clustered server that is
configured to be migratable can be moved in its entirety from one machine to another, at the command of an
administrator, or automatically, in the event of failure. The migration process makes all of the services
running on the server instance available on a different machine, but not the state information for the
singleton services that were running at the time of failure.
When a migratable server becomes unavailable for any reason (for example, if it hangs, loses network
connectivity, or its host machine fails), migration is automatic. Upon failure, a migratable server is
automatically restarted on the same machine if possible. If the migratable server cannot be restarted on the
machine where it failed, it is migrated to another machine. In addition, an administrator can manually initiate
migration of a server instance.
Node Manager is used by the Administration Server or a stand-alone Node Manager client to start and stop
migratable servers and is invoked by the cluster master to shut down and restart migratable servers, as
necessary.
Server migration has the following additional requirements:
• There is no built-in mechanism for transferring files that a server depends on between machines.
Using a disk that is accessible from all machines is the preferred way to ensure file availability. If you
cannot share disks between servers, you must ensure that the contents of domain_dir/bin are
copied to each machine.
• You cannot create network channels that have different listen addresses on a migratable server.
• Although migration works when servers are not time-synchronized, time-synchronized servers are
recommended in a clustered environment.
[Slide diagram: A cluster of three migratable servers, each with its own Node Manager. Every server renews its lease with the leasing service, and the cluster master periodically checks the lease status of the other servers.]
The example cluster contains three managed servers, all of which are migratable. Each managed server
also runs on its own machine. A fourth machine is also available as a backup, if one of the migratable
servers fails. Node Manager is running on the backup machine and on each machine with a running
migratable server.
All managed servers in the cluster obtain a migratable server lease from the leasing service. They also
periodically renew their leases in the lease table, proving their health and liveness. Because Server 1 starts
up first, it also obtains a cluster master lease, whose responsibilities include monitoring the lease table.
[Slide diagram: The cluster master on Server 1 checks the lease table, detects that Server 2's lease has expired, and has Node Manager restart Server 2 on the backup machine, where it obtains a new lease.]
1. The machine that hosts Server 2 fails. On its next periodic review of the lease table, the cluster
master detects that Server 2’s lease has expired.
2. The cluster master tries to contact the Node Manager on Server 2's machine to restart Server 2, but
fails because the entire machine is unreachable. Alternatively, if Server 2's lease had expired
because it was hung, but its machine was still reachable, the cluster master would use that Node
Manager to restart Server 2 on the same machine.
3. The cluster master contacts the Node Manager on the backup machine, which is configured as an
available host for migratable servers in the cluster. The Node Manager then starts Server 2.
4. Server 2 starts up and contacts the Administration Server to obtain its configuration and finally
obtains a migratable server lease.
Node Manager must be running and configured to allow server migration. The Java version of Node
Manager can be used for server migration on Windows or UNIX. The SSH version of Node Manager can be
used for server migration on UNIX only. Refer to the Node Manager Administrator's Guide in the WLS
documentation for the available configuration and security options.
Answer: a, f
• Review:
– JDBC
– Data Source
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
• Review: Connection Testing
• Proxy Data Sources
• Creating a Multi Data Source
The JDBC API is the standard way for Java code to work with SQL databases. It is modeled after Open
Database Connectivity (ODBC), so developers familiar with ODBC find it easy to use.
The value of JDBC lies in the fact that an application can access virtually any relational database and run on
any platform with a Java Virtual Machine (JVM). That is, with JDBC, it is not necessary to write one program
to access a Sybase database, another to access an Oracle database, another to access an IBM DB2
database, and so on. You can write a single program by using the JDBC API. Because the application is
written in Java, you need not write different applications to run on different platforms, such as Windows and
Linux.
JDBC accomplishes database connections by using a driver mechanism that translates JDBC calls to native
database calls. Although most available drivers are fully written in Java (Type 4 drivers), and are thus
platform independent, some drivers (Type 2) use native libraries and are targeted to specific platforms.
A data source is a Java object targeted to and managed by one or more instances of
WebLogic Server. A deployed data source has connections to a particular database in its
connection pool, ready to go for applications running on those servers.
1. An application looks up the data source in a server's resource tree by using the JNDI API.
2. It asks the data source for a connection.
3. It uses the connection (which uses a driver) to access the database.
4. When finished, it closes the connection.
[Slide diagram: Application code running in WebLogic Server obtains a connection from the data source's connection pool, which uses a JDBC driver to access the database.]
A data source is a Java object managed by WebLogic Server and used by application code to obtain a
database connection. Retrieving a database connection from a data source is better than getting a
connection directly from the database for two reasons:
• Connections in a data source’s connection pool have already been created. Therefore, the
application does not have to wait for connection creation.
• All database-specific information moves out of the application code and into the WebLogic Server
configuration, making the code more portable and robust.
Data sources can be created by using one of the WebLogic Server administration tools. A data source is
configured with a connection pool that will contain connections to a particular database. It is also targeted to
one or more instances of WebLogic Server.
For an application to use one of the connections in a data source’s connection pool, first the application
looks up the data source in the server’s resource tree. The API used is the Java Naming and Directory
Interface (JNDI). After the data source is retrieved, the application asks it for a database connection. The
data source gives the application one of the connections not currently being used, from its pool of
connections. The application uses that connection to access the database. When the application is finished
with the connection, it closes it. Rather than actually being closed, however, the connection is returned to
the connection pool for another application to use.
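The four steps above look roughly like the following sketch, written in Jython (the same language WLST uses) against the standard JNDI and JDBC APIs. The JNDI name jdbc/MyDS, the server URL, and the SQL query are assumptions for illustration.

# Jython sketch: look up a data source over JNDI and use a pooled connection
from java.util import Hashtable
from javax.naming import Context, InitialContext

env = Hashtable()
env.put(Context.INITIAL_CONTEXT_FACTORY, 'weblogic.jndi.WLInitialContextFactory')
env.put(Context.PROVIDER_URL, 't3://host1.example.com:7001')   # hypothetical server URL

ctx = InitialContext(env)            # 1. look up the data source in the server's JNDI tree
ds = ctx.lookup('jdbc/MyDS')         #    (a javax.sql.DataSource)
conn = ds.getConnection()            # 2. ask the data source for a pooled connection
try:
    stmt = conn.createStatement()    # 3. use the connection to access the database
    rs = stmt.executeQuery('SELECT COUNT(*) FROM employees')
    rs.close()
    stmt.close()
finally:
    conn.close()                     # 4. "close" returns the connection to the pool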
• Data source objects retrieved by applications via JNDI can be of two types:
– Non-XA
– XA
• XA data sources:
– Will automatically participate in distributed or “global” transactions initiated by the
application
– Typically require the underlying JDBC driver to support XA
– Can use a non-XA driver in certain scenarios
One of the most fundamental features of WebLogic Server is transaction management. Transactions are a
means to guarantee that database changes are completed accurately. WebLogic Server protects the
integrity of your transactions by providing a complete infrastructure for ensuring that database updates are
done accurately, even across a variety of resource managers. If any one of the operations fails, the entire
set of operations is rolled back.
If you use global or XA transactions in your applications, you should use an XA JDBC driver to create
database connections in the JDBC data source. If an XA driver is unavailable for your database, or you
prefer not to use an XA driver, you should enable support for global transactions in the data source. You
should also enable support for global transactions if your applications meet any of the following criteria:
• They use the EJB container in WebLogic Server to manage transactions.
• They include multiple database updates within a single transaction.
• They access multiple resources, such as a database and the Java Messaging Service (JMS), during
a transaction.
• They use the same data source on multiple servers (clustered or nonclustered).
• Review
• Managing Data Sources
– Why Manage Data Sources?
– Suspend a Data Source
– Resume a Data Source
– Configure Data Source Interceptors
• GridLink Data Source Review
• Multi Data Sources
• Review: Connection Testing
You may need to manage the connection pool of your data source for several reasons:
• Database maintenance
• Troubleshooting
• Production issues
The available control operations are Suspend, Resume, Reset, Shrink, Stop, and Start.
[Slide diagram: an application using a data source deployed to WebLogic Server.]
There are several operations that you can perform on a data source. The following commands enable you to
control your data source to keep your applications running smoothly:
• Suspend: Marks the data source as disabled, so applications cannot use connections from the pool
• Resume: Marks the data source as enabled, so applications can use connections from the pool
• Reset: Closes and re-creates all database connections in a connection pool
• Shrink: Shrinks the connection pool to the greater of minCapacity or the number of connections in
use
• Stop: Shuts down a data source and associated connections. This can be a graceful or forced
shutdown.
• Start: Re-initializes a data source that was previously shut down
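These operations are also exposed on each data source's runtime MBean, so they can be scripted. A minimal WLST sketch follows; the credentials, server name, and data source name are assumptions, and the operation names should be checked against the JDBCDataSourceRuntimeMBean reference for your release.

# WLST sketch: control a data source instance through its runtime MBean
connect('weblogic', 'Welcome1', 't3://localhost:7001')    # hypothetical credentials and URL
domainRuntime()                                            # domain runtime tree on the admin server
cd('ServerRuntimes/server1/JDBCServiceRuntime/server1/JDBCDataSourceRuntimeMBeans/MyDataSource')
print cmo.getState()        # for example: Running, Suspended, Shutdown
cmo.suspend()               # graceful suspend (connections preserved); forceSuspend() destroys them
cmo.resume()                # re-enable the data source
cmo.shrink()                # shrink the pool to minCapacity or the number of connections in use
cmo.reset()                 # close and re-create all pooled connections
disconnect()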
To suspend a data source instance by using the administration console, perform the following steps:
1. Select Services > Data Sources.
2. Select a data source from the list of available data sources.
3. Click the data source’s Control tab.
4. Select the target where the data source instance resides that you want to suspend.
5. Click Suspend and choose to suspend gracefully or forcefully:
- Suspend > Suspend: Marks the data source instance as disabled and blocks new
connection requests. All connections are preserved and are viable again when the data
source is resumed.
- Suspend > Force Suspend: Marks the data source instance as disabled and destroys all
connections. All transactions are rolled back. The data source attempts to re-create the
connections in preparation for a subsequent resume operation.
6. Ensure that the data source is in the Suspended state.
To resume a suspended data source instance by using the administration console, perform the following
steps:
1. Select Services > Data Sources.
2. Select a data source from the list of available data sources.
3. Click the data source’s Control tab.
4. Select the target of the suspended data source.
5. Click the Resume button to return the data source to a Running state:
- If the data source was suspended gracefully, all connections are preserved and are usable
again.
- If the data source was suspended forcefully, all clients must reserve new connections to
perform work.
6. Ensure that the data source is in the Running state.
During dynamic cluster scale-up operations, new dynamic server instances in a dynamic cluster are started.
For data sources targeted to the dynamic cluster, scaling can result in the creation of additional
connections to the database. Scaling these connections may result in exceeding database capacities.
A data source interceptor intercepts a dynamic cluster scale-up operation to validate that the scaling
operation will not overload the database.
To configure a data source interceptor:
1. Lock the configuration.
2. In the Domain Structure pane, navigate to Diagnostics > Interceptors.
3. In the Interceptors pane, select the Datasource Interceptors tab and then click New.
4. Enter the name, connection quota and priority of the interceptor. Interceptors are run in priority order,
highest first.
5. Enter a URL pattern for the interceptor. Patterns can be based on existing URLs.
6. Click Finish to create the interceptor.
7. Activate changes.
• Review
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
• Review: Connection Testing
• Proxy Data Sources
• Creating a Multi Data Source
Oracle RAC:
• Supports multiple Oracle database servers for greater scalability
• Relies on database servers having access to a shared and highly available storage
device
[Slide diagram: A JDBC driver connecting to multiple Oracle RAC nodes that share a highly available storage device.]
Oracle Real Application Clusters (RAC) is software that enables users on multiple machines to access a
single database with increased reliability. RAC is made up of two or more Oracle database instances
running on two or more clustered machines that access a shared storage device via cluster technology. To
support this architecture, the machines that host the database instances are linked by a high-speed
interconnect to form the cluster. This interconnect is a physical network used as a means of communication
between the nodes of the cluster. Cluster functionality is provided by the operating system or compatible
third-party clustering software.
Because every RAC node in the cluster has equal access and authority, the loss of a node may impact
performance, but does not result in down time.
[Slide diagram: A GridLink data source connecting WebLogic Server to the nodes of an Oracle RAC cluster.]
A single GridLink data source provides connectivity between WebLogic Server and an Oracle database
service that has been targeted to an Oracle RAC cluster. This type of data source automatically adjusts the
distribution of work based on the current performance metrics reported by each RAC node, such as CPU
usage, availability, and response time. If this capability is disabled, GridLink data sources instead use a
round-robin, load-balancing algorithm to allocate connections to RAC nodes.
A GridLink data source implements Oracle’s Fast Connection Failover (FCF) pattern, which:
• Provides rapid failure detection
• Aborts and removes invalid connections from the connection pool
• Performs graceful shutdown for planned and unplanned Oracle RAC node outages
• Adapts to changes in topology, such as adding or removing a node
• Distributes runtime work requests to all active Oracle RAC instances, including those rejoining a
cluster
XA affinity ensures that all the database operations performed on a RAC cluster within a global transaction
are directed to the same RAC instance. This increases performance and also helps ensure data integrity
after a failure.
• Review
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
– Multi Data Source Architecture
– Comparison of GridLink and Multi Data Sources
– Multi Data Source Failover Option
– Multi Data Source Load Balancing Option
• Review: Connection Testing
• To avoid a single point of failure and to achieve greater scalability, many enterprises
employ multiple database servers.
• A multi data source:
– Is a pool of data sources
– Is used by applications exactly like a standard data source
– Transparently provides load balancing or failover across the member data sources
– Can be XA or non-XA
A multi data source is an abstraction around a group of data sources that provides load balancing or failover
processing between the data sources associated with the multi data source. Multi data sources are bound to
the JNDI tree or local application context just like data sources are bound to the JNDI tree. Applications look
up a multi data source on the JNDI tree just like they do for data sources, and then request a database
connection. The multi data source determines which data source to use to satisfy the request depending on
the algorithm selected in the multi data source configuration: load balancing or failover.
All data sources used by a multi data source to satisfy connection requests must be deployed on the same
servers and clusters as the multi data source. A multi data source always uses a data source deployed on
the same server to satisfy connection requests. Multi data sources do not route connection requests to other
servers in a cluster or in a domain. To deploy a multi data source to a cluster or server, you select the server
or cluster as a deployment target. When a multi data source is deployed on a server, WebLogic Server
creates an instance of the multi data source on the server. When you deploy a multi data source to a cluster,
WebLogic Server creates an instance of the multi data source on each server in the cluster.
[Slide diagram: A multi data source containing Data Source A and Data Source B, each using a JDBC driver to reach one of two synchronized databases.]
A multi data source can be thought of as a pool of data sources. Multi data sources are best used for
failover or load balancing between nodes of a highly available database system, such as redundant
databases. Multi data sources do not provide any synchronization between databases. It is assumed that
database synchronization is handled properly outside of WebLogic Server so that data integrity is
maintained.
You create a multi data source by first creating data sources, then creating the multi data source by using
the administration console or the WebLogic Scripting Tool (WLST), and then assigning the data sources to
the multi data source.
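A WLST version of that procedure is sketched below. It is illustrative only: the names MultiDS, DataSourceA, DataSourceB, the JNDI name, and the cluster name are assumptions, the member data sources are assumed to exist already, and an existing connection to the administration server is presumed.

# WLST sketch: create a multi data source from two existing member data sources
import jarray
from java.lang import String

edit()
startEdit()
mds = cmo.createJDBCSystemResource('MultiDS')
res = mds.getJDBCResource()
res.setName('MultiDS')
params = res.getJDBCDataSourceParams()
params.setJNDINames(jarray.array([String('jdbc/MultiDS')], String))
params.setAlgorithmType('Load-Balancing')        # or 'Failover'
params.setDataSourceList('DataSourceA,DataSourceB')
mds.addTarget(getMBean('/Clusters/cluster1'))    # target to the same cluster as the member data sources
save()
activate()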
The data source member list for a multi data source supports dynamic updates. You can remove a database
node and corresponding data sources without redeployment. This capability provides you the ability to shut
down a node for maintenance or shrink a cluster.
Some examples of database replication technologies include Oracle Streams, Oracle Golden Gate, Oracle
Data Guard, IBM InfoSphere, Sybase Replication Server, and MySQL Replication.
GridLink and multi data sources are used for different scenarios:
GridLink data source: Used only for Oracle RAC databases. Provides high availability for access to a
database that is viewed as a single database.
Multi data source: Used for any database that supports replication. Provides high availability for multiple
databases that are synchronized using another technology.
• The multi data source uses the first member data source to handle all connection
requests.
• During a connection request, the other member data sources are tried in succession if
the current data source:
– Becomes unavailable
– Has no unused connections (optional)
– Is suspended by the administrator
• If a connection fails when in use, the application must still handle it programmatically.
The multi data source failover algorithm provides an ordered list of data sources to use to satisfy connection
requests. Normally, every connection request to this kind of multi data source is served by the first data
source in the list. If a database connection test fails and the connection cannot be replaced, or if the data
source is suspended, then a connection is sought sequentially from the next data source on the list.
JDBC is a highly stateful client-DBMS protocol, in which the DBMS connection and transactional state are
tied directly to the socket between the DBMS process and the client (driver). Therefore, it is still possible for
a connection to fail after being reserved, in which case your application must handle the failure. WebLogic
Server cannot provide failover for connections that fail while being used by an application. Any failure while
using a connection requires that the application code handle it, such as restarting the transaction.
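Because WebLogic Server cannot fail over a connection that is already in use, client code typically retries the unit of work when it catches a SQLException, reserving a fresh connection for the retry. A minimal Jython sketch of that pattern follows; the data source variable ds, the retry count, and the SQL statement are assumptions for illustration, and real code would also restart any surrounding transaction.

# Jython sketch: retry a unit of work if the in-use connection fails
from java.sql import SQLException

def run_unit_of_work(ds, attempts=3):
    for attempt in range(attempts):
        conn = ds.getConnection()          # a new reservation may come from another member data source
        try:
            try:
                stmt = conn.createStatement()
                stmt.executeUpdate("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
                stmt.close()
                return                     # success
            except SQLException:
                if attempt == attempts - 1:
                    raise                  # give up after the last attempt
        finally:
            conn.close()                   # always return the connection to the pool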
• The multi data source selects member data sources to satisfy connection requests using
a round-robin scheme.
• Data source failure is handled in the same way as described for the Failover option.
[Slide diagram: A multi data source distributing connection requests across Data Source A and Data Source B in round-robin fashion.]
The load balancing option is suitable for scenarios where a single database resource may not be able to
keep up with the load of the system. The multi data source balances the load across databases to evenly
distribute the workload across a scaled out data tier.
Connection requests to a load-balancing multi data source are served from any data source in the list. The
multi data source selects member data sources to satisfy connection requests using a round-robin scheme.
When the multi data source provides a connection, it selects a connection from the data source listed just
after the last data source that was used to provide a connection. Multi data sources that use the load
balancing algorithm also fail over to the next data source in the list if a database connection test fails and
the connection cannot be replaced, or if the data source is suspended.
Answer: b
• Review
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
• Review: Connection Testing
• Proxy Data Sources
• Creating a Multi Data Source
To provide failover, multi data sources require that all member data sources be configured to
use connection testing.
Test connections periodically.
Data sources rely on the Test Reserved Connections feature to know when database connectivity is lost.
Testing reserved connections must be enabled and configured for all the data sources within the multi data
source. WebLogic Server will test each connection before giving it to an application. With the failover
algorithm, the multi data source uses the results from connection test to determine when to fail over to the
next data source in the multi data source. After a test failure, the data source attempts to re-create the
connection. If that attempt fails, the multi data source fails over to the next data source.
• Test Frequency: You can enable periodic background connection testing by entering the number of
seconds between periodic tests.
• Test Reserved Connections: Select this check box to test the database connection before giving it
to your application when your application requests a connection from the data source.
• Test Table Name: Enter the name of a small table to use in a query to test database connections.
The standard query is select count(*) from <table>. Most database servers optimize this
SQL to avoid a full table scan, but it is still a good idea to use the name of a table that is known to
have few rows, or even no rows. If you prefer to use a different query as a connection test, enter
SQL followed by a space and the SQL code that you want to use to test database connections.
• Seconds to Trust an Idle Connection Pool: This specifies the number of seconds that WebLogic
trusts that an already reserved idle connection is valid before performing another connection test.
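These connection-testing attributes can also be set with WLST on the data source's connection pool parameters bean. The following is a minimal sketch only; the data source name, test table, and values are illustrative, and the bean path follows the naming pattern used by the other WLST examples in this course:
edit()
startEdit()
poolParams = getMBean('/JDBCSystemResources/DataSourceA/JDBCResource/DataSourceA/JDBCConnectionPoolParams/DataSourceA')
# Test each connection before it is given to an application
poolParams.setTestConnectionsOnReserve(true)
# Run background tests of unused connections every 120 seconds
poolParams.setTestFrequencySeconds(120)
# Small table used by the connection test query
poolParams.setTestTableName('DUAL')
# Skip retesting connections that were validated within the last 10 seconds
poolParams.setSecondsToTrustAnIdlePoolConnection(10)
save()
activate(block='true')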
• Review
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
• Review: Connection Testing
• Proxy Data Sources
• Creating a Multi Data Source
(Diagram: within a domain, a proxy data source routes requests to the data source associated with each partition.)
In WebLogic Server Multitenant environments, data sources are replicated for each partition.
A proxy data source provides a mechanism for access to a data source associated with a partition or tenant.
It enables access to a data source without the need to have naming conventions such as context names,
partitions, or tenants. Proxy data sources simplify the administration of multiple data sources by providing a
lightweight mechanism for accessing a data source associated with a partition or tenant. Applications often
need to quickly access a data source by name without needing to know the naming conventions, context
names (partitions or tenants), and so on. A proxy data source provides access to the underlying data
sources. All of the significant processing happens in the data sources to which it points. That is, the
underlying data sources actually handle deployment, management, security, and so on.
To create a proxy data source after clicking Lock & Edit in the Change Center, perform the following tasks:
1. In the Domain Structure tree, expand Services and then select Data Sources.
2. Above (or below) the table of data sources, click the New drop-down list and select Proxy Data
Source.
3. On the first page of the data source creation wizard, enter or select the following information and
then click Next:
- Name: The configuration name for this data source
- JNDI Name: The JNDI “binding name” for this data source. Applications look up the data
source on the server’s JNDI tree by this name. The name can include contexts by placing
dots in the string. Note that the name and JNDI name can be different.
- Switching Properties: Enter the switching properties to be passed to the switching callback
method for the Proxy data source. This value is dependent on the requirements of the
switching callback. The format of the proxy switching properties is
partition1=datasource1;partition2=datasource2;...;default=datasource.
4. Select the servers or clusters to which the data source should be deployed and then click Finish. If
no servers are targeted, the data source is created, but not deployed. You will need to target it later.
As usual, to confirm the changes, in the Change Center, click Activate Changes.
You can monitor a variety of statistics for each data source instance in your domain, such as the current
number of database connections in the connection pool, current number of connections in use, and the
longest wait time for a database connection.
To view the statistics of a Proxy JDBC Data source:
1. In the Domain Structure tree, expand Services and then select Data Sources.
2. On the Summary of Data Sources page, click the Proxy Data Source name.
3. Select the Monitoring: Statistics tab. Statistics of the deployed instance of the Proxy Data source are
displayed.
• Review
• Managing Data Sources
• GridLink Data Source Review
• Multi Data Sources
• Proxy Data Sources
• Review: Connection Testing
• (Optional) Creating a Multi Data Source
1. In the Domain Structure panel, expand Services and then select Data Sources.
2. Click New > Multi Data Source.
3. Enter or select the following information and click Next:
- Name: Enter a unique name for this JDBC multi data source.
- JNDI Name: Enter the JNDI path to where this JDBC data source will be bound. Applications
look up the data source on the JNDI tree by this name when reserving a connection. To
specify multiple JNDI names for the multi data source, enter each on a separate line.
- Algorithm Type: Select an algorithm option:
Failover: The multi data source routes connection requests to the first data source in the list; if the request
fails, the request is sent to the next data source in the list, and so forth.
Load-Balancing: The multi data source distributes connection requests evenly to its member data sources.
4. Select the servers or clusters on which you want to deploy the multi data source. The targets that
you select will limit the data sources that you can select as part of the multi data source. You can
select only data sources that are deployed to the same targets as the multi data source. Click Next.
5. Select one of the following options and click Next. The option that you select limits the data sources
that you can select as part of the multi data source in a later step. Limiting data sources by JDBC
driver type enables the WebLogic Server transaction manager to properly complete or recover global
transactions that use a database connection from a multi data source:
- XA Driver: The multi data source will use only data sources that use an XA JDBC driver to
create database connections.
- Non-XA Driver: The multi data source will use only data sources that use a
non-XA JDBC driver to create database connections.
6. Select the data sources that you want the multi data source to use to satisfy connection requests.
Then use the supplied arrow buttons to reorder the chosen data source list as desired and click
Finish. For convenience, you can also create new data sources at this time using the “Create a New
Data Source” button.
(Screen callouts: change member data sources; change the algorithm option.)
1. In the Domain Structure panel, expand Services, and then select Data Sources.
2. Click the existing multi data source.
3. Edit any of the following fields, and then click Save. In general, changes take effect after you
redeploy the multi data source or restart the server:
- JNDI Name: Same field as when creating a new multi data source
- Algorithm Type: Same field as when creating a new multi data source
- Failover Request if Busy: For multi data sources with the failover algorithm, enables the
multi data source to fail over connection requests to the next data source if all connections in
the current data source are in use
- Test Frequency Seconds: The number of seconds between when WebLogic Server tests
unused connections (requires that you specify a test table name). Connections that fail the
test are closed and reopened to reestablish a valid physical connection. If the test fails again,
the connection is closed. In the context of multi data sources, this attribute controls the
frequency at which WebLogic Server checks the health of data sources it had previously
marked as unhealthy. When set to 0, the feature is disabled.
edit()
startEdit()
jdbcSystemResource = cmo.create('HRMSMultiDataSource',
'JDBCSystemResource')
jdbcResource = jdbcSystemResource.getJDBCResource()
jdbcResource.setName('HRMSMultiDataSource')
jdbcResourceParams = jdbcResource.getJDBCDataSourceParams()
jdbcResourceParams.setJNDINames(['jdbc.hr.HRMSDS'])
jdbcResourceParams.setAlgorithmType('Failover')
jdbcResourceParams.setDataSourceList('DataSourceA,DataSourceB')
save()
activate(block='true')
CreateMultiDataSource.py
The same JDBCSystemResourceMBean used to configure standard data sources is also used to configure
multi data sources. This MBean has a child MBean of type JDBCDataSourceBean, which in turn has a
child of type JDBCDataSourceParamsBean. Several of these parameters are applicable only to multi data
sources, including:
• AlgorithmType
• DataSourceList
• FailoverRequestIfBusy
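As an illustration, these parameters can be changed on an existing multi data source with WLST. This is a sketch only; it reuses the HRMSMultiDataSource name from the creation example, and the member names and values are illustrative:
edit()
startEdit()
params = getMBean('/JDBCSystemResources/HRMSMultiDataSource/JDBCResource/HRMSMultiDataSource/JDBCDataSourceParams/HRMSMultiDataSource')
# Keep the ordered failover algorithm
params.setAlgorithmType('Failover')
# Fail over requests when all connections in the current member are busy
params.setFailoverRequestIfBusy(true)
# Replace the ordered member list (a comma-separated list of data source names)
params.setDataSourceList('DataSourceA,DataSourceB')
save()
activate(block='true')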
The data source member list for a multi data source supports dynamic updates. This allows environments to
add and remove database nodes and corresponding data sources without restarting the server or losing
connectivity to other member data sources. For example, situations may arise in which you need to grow
and shrink your multi pool in response to throughput, or shut down multi pool nodes for maintenance. You
can do all the required steps in one configuration edit session.
To improve performance when a data source within a multi data source fails, WebLogic Server automatically
disables the data source when a pooled connection fails a connection test. After a data source is disabled,
WebLogic Server does not route connection requests from applications to the data source. Instead, it routes
connection requests to the next available data source listed in the multi data source.
After a data source is automatically disabled because a connection failed a connection test, the multi data
source periodically tests a connection from the disabled data source to determine when the data source (or
underlying database) is available again. When the data source becomes available, the multi data source
automatically re-enables the data source and resumes routing connection requests to the data source,
depending on the multi data source algorithm and the position of the data source in the list of included data
sources. Frequency of these tests is controlled by the Test Frequency Seconds attribute of the multi data
source.
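You can observe this behavior at run time by checking the state of each member data source. The following is a minimal WLST sketch that assumes a connection to a running server; it simply lists each data source runtime with its state and current connection count:
serverRuntime()
# List every data source runtime on this server with its state
for ds in cmo.getJDBCServiceRuntime().getJDBCDataSourceRuntimeMBeans():
    print ds.getName(), ds.getState(), ds.getActiveConnectionsCurrentCount()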
Answer: c
Answer: c, d, e
Diagnostic Framework
After completing this lesson, you should be able to configure the WebLogic Diagnostic
Framework (WLDF) to monitor a domain.
• General Architecture
– WebLogic Diagnostics Framework (WLDF)
– Diagnostic Archives
– Diagnostics Modules
• Diagnostic Images
• Harvesters
• Policies and Actions
• WLDF provides a generic framework to gather and analyze WebLogic runtime data for
monitoring and troubleshooting.
• Use WLDF to:
– Capture a snapshot of key server metrics for distribution to support personnel
– Capture metrics at specific code points in WebLogic or your application
– Periodically record selected MBean attributes
– Send notifications when attributes meet certain conditions
WLDF consists of several components that work together to collect, archive, and access diagnostic
information about a WebLogic Server instance and the applications it hosts.
WLDF
Image Capture
MBeans
Harvester
Data
Metric Collectors Archive
Logs
Actions
Policies
Code
Data creators generate diagnostic data, which is consumed by the logger and the harvester. Those
components coordinate with the archive to persist the data, and they coordinate with the watch and
notification subsystem to provide automated monitoring. The data accessor interacts with the logger and the
harvester to expose current diagnostic data and with the archive to present historical data. MBeans make
themselves known as data providers by registering with the harvester. Collected data is then exposed to
both the watch and notification system for automated monitoring and to the archive for persistence.
The instrumentation system creates monitors and inserts them at well-defined points in the flow of code
execution within the JVM. These monitors trigger events and publish data directly to the archive. They can
also take advantage of policies and actions.
Diagnostic image capture support gathers the most common sources of the key server state used in
diagnosing problems. It packages that state into a single artifact, which can be made available to support
technicians.
The past state is often critical in diagnosing faults in a system. This requires that the state be captured and
archived for future access, creating a historical archive. In WLDF, the archive meets this need with several
persistence components. Both events and harvested metrics can be persisted and made available for
historical review.
• Collected MBean metrics and events are recorded in the server’s diagnostic archives:
– WLDF file store (<server>/data/store/diagnostics, by default)
– WLDF JDBC store
• Recorded data can be limited using size- or age-based retirement policies.
Module1 Module2
Harvested Data Archive
Harvester Harvester
Instrumentation Instrumentation
The Archive component of the WebLogic Diagnostic Framework (WLDF) captures and persists all data
events, log records, and metrics collected by WLDF from server instances and applications running on
them. You can access archived diagnostic data in online mode (that is, on a running server). You can also
access archived data in offline mode using the WebLogic Scripting Tool (WLST). You configure the
diagnostic archive on a per-server basis.
For a file-based store, WLDF creates a file to contain the archived information. The only configuration option
for a WLDF file-based archive is the directory where the file will be created and maintained. The default
directory is <domain>/servers/<server>/data/store/diagnostics. When you save to a
file-based store, WLDF uses the WebLogic Server persistent store subsystem.
To use a JDBC store, the appropriate tables must exist in a database, and JDBC must be configured to
connect to that database. The wls_events table stores data generated from WLDF Instrumentation
events. The wls_hvst table stores data generated from the WLDF Harvester component. Refer to the
WebLogic Configuring Diagnostic Archives documentation for the required schema.
WLDF includes a configuration-based, data-retirement feature for periodically deleting old diagnostic data
from the archives. You can configure size-based data retirement at the server level and age-based
retirement at the individual archive level.
(Screen callout: the Directory setting applies only to the file store option.)
Log files are archived as human-readable files. Events and harvested data are archived in binary format, in
a WebLogic persistent store or in a database.
1. In the left pane, expand Diagnostics and select Archives.
2. Click the name of the server for which you want to configure diagnostic archive settings.
3. Select one of the following archive types from the Type list:
- Select File Store to persist data to the file system. If you choose this option, enter the
directory in the Directory field.
- Select JDBC to persist data to a database. If you choose this option, select the JDBC data
source from the Data Source list. You must first configure JDBC connectivity to use this
option.
4. Select or deselect Data Retirement Enabled to enable or disable data retirement for this server
instance, respectively. In the Preferred Store Size field, enter a maximum data file size, in
megabytes. When this size is exceeded, enough of the oldest records in the store are removed to
reduce the size of the store below the maximum. In the Store Size Check Period field, enter the
interval, in minutes, between the times when the store will be checked to see if it has exceeded the
preferred store size.
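The same archive settings can be scripted against the server's ServerDiagnosticConfig MBean. The following is a minimal WLST sketch; the server name and the size and period values are illustrative:
edit()
startEdit()
diagCfg = getMBean('/Servers/server1/ServerDiagnosticConfig/server1')
# Use the file-based store (the alternative is 'JDBCArchive' with a data source)
diagCfg.setDiagnosticDataArchiveType('FileStoreArchive')
# Enable size-based data retirement for this server
diagCfg.setDataRetirementEnabled(true)
diagCfg.setPreferredStoreSizeLimit(100)   # megabytes
diagCfg.setStoreSizeCheckPeriod(1)        # minutes
save()
activate(block='true')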
A diagnostic module is used to contain the configuration of WLDF components and is used
to target that configuration to servers and clusters.
• Built-in modules
• User-created system modules
(Diagram: a diagnostic system module containing Instrumentation and Harvester components, targeted to a server.)
To configure and use the Instrumentation, Harvester, and Watch and Notification components at the server
level, you must first create a system resource called a “diagnostic system module.” System modules are
globally available for targeting to servers and clusters configured in a domain. In WebLogic 12c, you can
target multiple diagnostic system modules to any given server or cluster.
There are two types of diagnostic modules:
• Built-in: These are diagnostic modules that are part of the WebLogic product:
- Production mode domains have a built-in module enabled by default.
- Development mode domains have all built-in modules disabled by default.
- Modules provide for low overhead historical server performance metrics.
- There are three built-in modules:
- Low: Captures the most important data from key WebLogic MBean attributes (default)
- Medium: Captures additional attributes captured by Low and data from other MBeans
- High: Captures more verbose data than Medium and from a larger number of MBeans
- Built-in modules can be cloned to customize a new module based on a built-in module.
When a server hosting a dynamic diagnostic module shuts down, the module reverts to whatever is
contained in the configuration when the server is restarted.
<wldf-resource
xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-diagnostics/1.0/weblogic-diagnostics.xsd">
<name>MyDiagnosticModule</name>
<instrumentation>
<!-- Configuration for instrumentation monitors -->
</instrumentation>
<harvester>
<!-- Configuration for harvesting metrics -->
</harvester>
A resource descriptor contains the actual configuration of WLDF components and features within an XML
file. There are two types of resource descriptors:
• Configured:
- Is part of the domain configuration stored in $DOMAIN_HOME/config/diagnostics
- Is referenced by the config.xml file
- Can be deployed, activated, and deactivated dynamically
• External:
- Is external to the domain configuration
- Is not referenced by the config.xml file
- Is only deployed, activated, and deactivated dynamically, and can be administered only by
WLST
- Resides in memory of the server until the server is shut down
- Never gets persisted to the domain configuration
(Screen callouts: 1. Configure resources. 2. Target the module to one or more servers.)
serverConfig> enableSystemResource('external-wldf')
serverConfig> listSystemResourceControls()
Command: destroySystemResourceControl
Description: Destroys an externally created diagnostic resource descriptor, removing it from the server's memory
Answer: b
Archived data can also be accessed in offline mode by using WLST.
• General Architecture
• Diagnostic Images
– What Is a Diagnostic Image?
– Capturing a Diagnostic Image
• Harvesters
• Policies and Actions
(Diagram: applications running on a server; the captured diagnostic image is sent to Oracle Support for analysis.)
Diagnostic image capture support gathers the most common sources of the key server state used in
diagnosing problems. It packages that state into a single artifact, which can be made available to support
technicians.
The past state is often critical in diagnosing faults in a system. You use the diagnostic image capture
component of WLDF to create a diagnostic snapshot, or dump, of a server’s internal runtime state at the
time of the capture. This information helps support personnel analyze the cause of a server failure. You can
capture an image manually by using the console or WLST, or generate one automatically as part of a watch
notification.
Because the diagnostic image capture is meant primarily as a post-failure analysis tool, there is little control
over what information is captured. It includes the server’s configuration, log cache, JVM state, work
manager state, JNDI tree, and most recently harvested data. The image capture subsystem combines the
data files produced by the different server subsystems into a single zip file.
If Java's Flight Recorder feature is enabled on your JVM, the diagnostic image will also contain the
recording file, which you can view with Java Mission Control (JMC). If enabled, WebLogic can record its own
events to this JVM file, which you can also browse with JMC.
(Screen callout: the location of the diagnostic image file (.zip).)
Perform the following steps to capture a server image by using the console:
1. In the left pane, expand Diagnostics and select Diagnostic Images.
2. Select the server for which you want to capture a diagnostic image, and click Capture Image.
3. Enter a new directory in the Destination Directory field, or accept the default directory. If you change
the directory for this image capture, it does not change the default directory for capturing images for
this server when they are captured by other means, such as a watch notification. Then click OK.
serverRuntime()
wldfCapture = getMBean('WLDFRuntime/WLDFRuntime/WLDFImageRuntime/Image')
wldfCapture.captureImage('logs/diagnostic_images',30)
CreateImage.py
Additional WLDF WLST examples can be found in the WLDF guide in the product documentation.
Refer to the following MBeans for more information about MBeans related to the diagnostic framework:
• WLDFResourceBean
• WLDFHarvesterBean (a subclass of WLDFResourceBean)
• WLDFHarvestedTypeBean (a component of WLDFHarvesterBean )
• WLDFInstrumentationBean (a subclass of WLDFResourceBean )
• WLDFInstrumentationMonitorBean (a component of WLDFInstrumentationBean )
• WLDFWatchNotificationBean (a subclass of WLDFResourceBean )
• WLDFWatchBean (a component of WLDFWatchNotificationBean )
• WLDFNotificationBean (abstract; subclasses exist for each notification type )
• WLDFRuntimeMBean
• WLDFImageRuntimeMBean (a component of WLDFRuntimeMBean )
• WLDFControlRuntimeMBean
• WLDFSystemResourceControlRuntimeMBean
Answer: c
• General Architecture
• Diagnostic Images
• Harvesters
– What Is a Harvester?
– Configuring Metric Collection
• Policies and Actions
The Harvester is a WLDF component that you can configure to capture WebLogic runtime
MBean data.
(Diagram: the Harvester uses metric collectors to gather MBean attribute values and writes the collected data to the archive.)
Harvesting metrics is the process of gathering data that is useful for monitoring the system state and
performance. Metrics are exposed to WLDF as attributes on qualified MBeans. The Harvester gathers
values from selected MBean attributes at a specified sampling rate. Therefore, you can track potentially
fluctuating values over time. For custom MBeans, the MBean must be currently registered with the JMX
server.
You can configure the Harvester to harvest data from named MBean types, instances, and attributes. If only
a type is specified, data is collected from all attributes in all instances of the specified type. If only a type and
attributes are specified, data is collected from all instances of the specified type.
The sample period specifies the time between each cycle. For example, if the Harvester begins execution at
time T, and the sample period is I, the next harvest cycle begins at T+I. If a cycle takes A seconds to
complete and if A exceeds I, then the next cycle begins at T+A. If this occurs, the Harvester tries to start the
next cycle sooner, to ensure that the average interval is I.
WLDF allows for the use of wildcards (*) in type names, instance names, and attribute specifications. WLDF
also supports nested attributes using a dot delimiter, as well as complex attributes such as arrays and
maps. WLDF watch expressions also support similar capabilities.
Metrics are configured and collected in the scope of a diagnostic system module targeted to one or more
server instances. Therefore, to collect metrics, you must first create a diagnostic system module.
1. Click the name of the module for which you want to configure metric collection. Then click
Configuration > Collected Metrics.
2. To enable or disable all metric collection for this module, select or deselect the Enabled check box.
To set the period between samples, enter the period (in milliseconds) in the Sampling Period field.
To define a new collected metric, click New.
3. From the MBean Server Location drop-down list, select either DomainRuntime or ServerRuntime.
Then click Next. Select an MBean that you want to monitor from the MBean Type list. Then click
Next again.
4. In the Collected Attributes section, select one or more attributes from the Available list and move
them to the Chosen list (default is all attributes). Click Next.
5. In the Collected Instances section, select one or more instances from the Available list and move
them to the Chosen list (default is all instances). Click Finish.
6. Click Save.
module = cmo.createWLDFSystemResource('JMSDebugModule')
module.addTarget(getMBean('/Servers/serverA'))
harvester = getMBean('/WLDFSystemResources/JMSDebugModule/WLDFResource/JMSDebugModule/Harvester/JMSDebugModule')
harvester.setSamplePeriod(300000)
harvester.setEnabled(true)
harvestType = harvester.createHarvestedType('weblogic.management.runtime.JMSServerRuntimeMBean')
harvestType.setEnabled(true)
harvestType.setHarvestedAttributes(['MessagesHighCount'])
Additional WLDF WLST examples can be found in the WLDF guide in the product documentation.
Refer to the following MBeans for more information about MBeans related to the diagnostic framework:
• WLDFResourceBean
• WLDFHarvesterBean (a subclass of WLDFResourceBean)
• WLDFHarvestedTypeBean (a component of WLDFHarvesterBean )
• WLDFInstrumentationBean (a subclass of WLDFResourceBean )
• WLDFInstrumentationMonitorBean (a component of WLDFInstrumentationBean )
• WLDFWatchNotificationBean (a subclass of WLDFResourceBean )
• WLDFWatchBean (a component of WLDFWatchNotificationBean )
• WLDFNotificationBean (abstract; subclasses exist for each notification type )
• WLDFRuntimeMBean
• WLDFImageRuntimeMBean (a component of WLDFRuntimeMBean )
Answer: b
• General Architecture
• Diagnostic Images
• Harvesters
• Policies and Actions
Policies:
• Identify situations used to capture data
• Are used to analyze log records, data events, and harvested metrics
• Include expressions, alarms, and action handlers
Actions:
• Are executed when an associated policy expression evaluates to true
• Include support for scaling dynamic clusters and for interacting with JMX, JMS, SMTP, SNMP, REST, and so on
(Diagram: actions can invoke REST endpoints (POST, PUT, DELETE, for example by using cURL from the command line) and can interact with JMS, SNMP, SMTP, and WLST.)
A policy identifies a situation that you want to trap for monitoring or diagnostic purposes. You can configure policies to analyze log records, data events, and harvested metrics.
A policy includes:
• A policy expression (with the exception of calendar-based policies)
• An alarm setting
• One or more action handlers
You can configure policies to enable elasticity in dynamic clusters; for example, to automatically scale a
dynamic cluster up or down by a specific number of server instances. Policies support two categories of
elasticity:
• Calendar-based scaling: Scaling operations on a dynamic cluster that are executed on a particular
date and time
• Policy-based scaling: Scaling operations on a dynamic cluster that are executed in response to
changes in demand
• Smart Rules: Use a set of prepackaged functions and configurable parameters that allow complex policy expressions to be built from combinations of functions and parameters.
• Calendar Based: Are scheduled at a specific time, after a duration of time, or at timed intervals.
• Collected Metrics: Use WLDF-provided beans and functions within policy expressions based on a variety of MBeans, including WebLogic Server Runtime, Domain Runtime, and JVM platform MBeans.
• Server Log: Use various server variables to define policy expressions. Fields include SERVER, MACHINE, SUBSYSTEM, and so on.
• Event Data: Use various event data generated by instrumentation and stored in the event archive.
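A policy (historically called a watch) can be created with WLST in a module's WatchNotification bean. The following is a sketch only; the module name and the collected-metrics rule expression are illustrative assumptions, and the policy still needs one or more actions attached to be useful:
edit()
startEdit()
wn = getMBean('/WLDFSystemResources/ClusterManager/WLDFResource/ClusterManager/WatchNotification/ClusterManager')
policy = wn.createWatch('HighStuckThreads')
policy.setRuleType('Harvester')                  # collected-metrics policy
# Illustrative expression: trigger when any stuck threads are reported
policy.setRuleExpression('wls.runtime.serverRuntime.threadPoolRuntime.stuckThreadCount > 0')
policy.setEnabled(true)
# Attach one or more existing actions with policy.addNotification(<action bean>)
save()
activate()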
An action is an operation that is executed when a policy expression evaluates to true. WLDF supports the
following types of diagnostic actions, based on the delivery mechanism:
To create an action:
1. Log in to the WebLogic console and, in the Change Center, click Lock & Edit.
2. Expand Diagnostics > Diagnostic Modules.
3. On the Summary of Diagnostic Modules page, click the name of the module to house the new policy.
4. On the Settings for [Module Name] page, select Configuration > Policies and Actions > Policies.
5. Select the Actions tab and click New.
8. Click Finish when you are done creating your new action.
9. In the Change Center, click Activate Changes.
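Actions can also be created with WLST in the module's WatchNotification bean. The following sketch creates an SMTP (email) action; the module name, mail session JNDI name, and recipient are illustrative assumptions:
edit()
startEdit()
wn = getMBean('/WLDFSystemResources/ClusterManager/WLDFResource/ClusterManager/WatchNotification/ClusterManager')
# Create an SMTP (email) action
mailAction = wn.createSMTPNotification('notifyAdmins')
mailAction.setMailSessionJNDIName('jndi/myMailSession')
mailAction.setRecipients(['admin@example.com'])
mailAction.setSubject('WLDF policy triggered')
mailAction.setEnabled(true)
save()
activate()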
Elastic actions:
• Are used to increase or decrease the size of a dynamic cluster
• Are triggered when policy guidelines are met
• Are associated 1:1 with a policy (only one elastic scaling action per policy, although the policy may have other, non-scaling actions)
• Are configured within domain-scope diagnostic system modules
• Are configured with scale-up and scale-down actions inside wldf-resources
Elastic actions are actions used to scale up or down a dynamic cluster when certain policy conditions are
met, such as load or demand. The scale up and scale down actions are used to add or remove running
dynamic server instances from a dynamic cluster during scaling operations. Elastic actions can be
associated with policies. As a policy’s conditions are met, the elastic action is triggered and the scaling
operation begins. This type of scaling is called policy-based scaling. Only one elastic action can be
assigned to a policy. However, any number of non-scaling actions can be assigned to a policy, and elastic
actions can be used in conjunction with those non-scaling actions.
You can configure a scale up action in WLST using code similar to the following, where wn refers to the diagnostic module's WatchNotification bean (for example, obtained with getMBean('/WLDFSystemResources/ClusterManager/WLDFResource/ClusterManager/WatchNotification/ClusterManager')):
edit()
startEdit()
scaleUp = wn.createScaleUpAction('scaleUp')
scaleUp.setScalingSize(1)
scaleUp.setClusterName('DynamicCluster')
save()
activate()
Scale up actions add dynamic server instances to the specified dynamic cluster.
<wldf-resource>
<name>ClusterManager</name>
<watch-notification>
<!-- One or more policy configurations -->
<scale-up-action>
<name>scaleUp</name>
<cluster-name>DynamicCluster</cluster-name>
<scaling-size>1</scaling-size>
</scale-up-action>
<!-- Other action configurations -->
</watch-notification>
This slide shows a scale up XML example. The scale-up-action is specified within a watch-notification element, which itself is within a wldf-resource element. The scale up element includes the name of the dynamic cluster to scale as well as the scaling-size, which is the number of server instances to add.
Scale down shuts down and removes dynamic server instances in the specified dynamic
cluster.
<wldf-resource>
<name>ClusterManager</name>
<watch-notification>
<!-- One or more policy configurations -->
<scale-down-action>
<name>scaleDown</name>
<cluster-name>DynamicCluster</cluster-name>
<scaling-size>1</scaling-size>
</scale-down-action>
<!-- Other action configurations -->
</watch-notification>
This slide shows a scale down XML example. The scale-down-action is specified within a watch-notification element, which itself is within a wldf-resource element. The scale down element, much like scale up, includes the name of the cluster as well as the scaling-size.
Similar configuration performed by using WLST might resemble:
startEdit()
scaleDown=wn.createScaleDownAction('scaleDown')
scaleDown.setScalingSize(1)
scaleDown.setClusterName('DynamicCluster')
save()
activate()
Answer: c
In this lesson, you should have learned how to configure the WebLogic Diagnostic
Framework (WLDF) to monitor a domain.
• Coherence Overview
• Coherence*Web Session Replication
• Managed Coherence Servers
Coherence:
• Provides a distributed, in-memory caching solution for Java
• Is based on a grid of cache servers or nodes
• Automatically distributes or partitions cached data across the grid based on the number
of servers
• Implements replication for high availability
• Includes a decentralized management and monitoring model based on JMX
• Supports a proxy architecture to provide connectivity beyond the Coherence cluster
• Supports additional languages such as .NET and C++
One of the primary uses of Oracle Coherence is to cluster an application’s objects and data. In the simplest
sense, this means that all the objects and data that an application delegates to Coherence are automatically
available to and accessible by all servers in the application cluster. None of the objects or data will be lost in
the event of server failure. By clustering the application’s objects and data, Coherence solves many of the
difficult problems related to achieving availability, reliability, scalability, performance, serviceability, and
manageability of clustered applications.
Oracle Coherence is a JCache-compliant, in-memory caching and data management solution for clustered
Java EE applications and application servers. Coherence makes sharing and managing data in a cluster as
simple as on a single server. It accomplishes this by coordinating updates to the cached data by using
clusterwide concurrency control, replicating and distributing data modifications across the cluster, and
delivering notifications of data modifications to any servers that request them. Developers can easily take
advantage of Coherence features by using the standard Java Collections API to access and modify data,
and by using the standard JavaBean event model to receive data change notifications.
Coherence provides a clusterwide view of management information through the standard JMX API, so that
the entire cluster can be managed from a single server. The information provided includes cache sizes
along with hit and miss rates.
(Diagram: applications connect over the network to a grid of cache servers. Data is replicated and partitioned across the cache server nodes, and the replicated data provides a backup for failover. There is no data replication or partitioning on the application nodes.)
Partitioning refers to the ability of Coherence to load-balance data storage, access, and management
across all the servers in the cluster. For example, when using Coherence data partitioning, if there are four
servers in a cluster, each will manage 25% of the data. And if another server is added, each server will
dynamically adjust so that the five servers will manage 20% of the data. This data load balancing will occur
without any application interruption and without any lost data or operations. Similarly, if one of those five
servers were to fail, each of the remaining four servers would readjust to managing 25% of the data. Once
again, there is no data loss, including the 20% of the data that was being managed on the failed server.
While the partitioning feature dynamically load balances data evenly across the entire server cluster,
replication ensures that a desired set of data is always available and up-to-date at all times in the cluster.
Replication allows operations running on any server to obtain the data that they need locally, at basically no
cost, because that data has already been replicated to that server. The only downside of partitioning is that
it introduces latency for data access, and in most applications the data access rate far outweighs the data
modification rate. To eliminate the latency associated with partitioned data access, Coherence can employ
“near caching.” Frequently and recently used data from the partitioned cache are maintained on the specific
servers that are accessing that data within a near cache, and this local data is kept up-to-date by using
event-based invalidation.
• Coherence Overview
• Coherence*Web Session Replication
– Types of Session Persistence
– Coherence*Web
– Coherence*Web and WebLogic Clusters
– Coherence*Web Session Failover
– Configuring Coherence*Web in WebLogic
• Managed Coherence Servers
WebLogic Server supports several strategies so that the session information is not lost when
a server fails:
• In-memory session persistence
• File session persistence
• JDBC (database) session persistence
• Coherence*Web session persistence
Java web application components, such as Servlets and JavaServer Pages (JSPs), can maintain data on
behalf of clients by storing objects in an HttpSession. An HttpSession is available on a per-client basis.
After an instance of WebLogic Server creates a session for a client, it also writes a cookie to the client’s web
browser. This cookie indicates which server has this client’s session. The cluster proxy checks this cookie
on subsequent client requests, and routes the client to the instance of WebLogic Server that has the client’s
session.
If the server that has the client’s session fails, when the client makes their next request, they cannot be
routed to that server. The proxy chooses another server. That server does not have the client’s session, so
information about the client is lost.
To provide transparent failover for web applications, replication or persisted, shared access to the
information in each HttpSession object must be provided. This is accomplished in WebLogic Server by
using in-memory persistence, file system persistence, database persistence, or Coherence*Web
persistence. Each of these options is configured in the web application’s WebLogic deployment descriptor,
weblogic.xml. Each option has its own configurable parameters that are also entered in weblogic.xml.
Note that in-memory and JDBC persistence have two options: synchronous and asynchronous. The
asynchronous option persists data in batches to improve cluster performance.
(Diagram: cache servers connected by the cluster network.)
Coherence*Web is not a replacement for WebLogic Server's in-memory HTTP state replication services.
However, Coherence*Web should be considered when an application has large HTTP session state
objects, when running into memory constraints due to storing HTTP session object data, or if you have an
existing Coherence cluster and want to offload HTTP Session storage to a Coherence cluster.
Coherence*Web is configured with local storage disabled. This means that although the WebLogic server
instance is a member of the Coherence cluster, it is not a data storage member of the cluster. Storage-
disabled members rely on other Coherence cluster members to manage and replicate data.
By default, Coherence*Web creates a single HTTP session across all web applications for each client and
scopes the session attributes to each web application. This means that if a session is invalidated in one web
application, that same session is invalidated for all web applications in WebLogic Server that are using
Coherence*Web. If you do not want this behavior, a potential work-around is to edit the <cookie-path>
element of the weblogic.xml descriptor.
(Diagram: a router load balances requests across a web tier of Java EE or servlet containers running Coherence*Web. Current sessions are stored in an in-memory Coherence data grid for session state.)
The diagram in the slide illustrates that if a server were to fail, if you lose the entire application server or a
JVM, Coherence still maintains the data. Therefore, you do not lose any session state information. Using
your existing infrastructure, the user session fails over to one of the other application servers, or the one that
is surviving. Therefore, no data is lost. The end user may not even notice that there is an outage. The user
session continues without the user even noticing that anything went wrong because Coherence manages
the data.
One common use case for Coherence clustering is to manage user sessions (conversational state) in the
cluster. This capability is provided by the Coherence*Web module, which is a built-in feature of Oracle
Coherence. Coherence*Web provides linear scalability for HTTP Session Management in clusters of
hundreds of production servers. It can achieve this linear scalability because, at its core, it is built on Oracle
Coherence dynamic partitioning.
<weblogic-web-app>
<session-descriptor>
<persistent-store-type>
coherence-web
</persistent-store-type>
</session-descriptor>
</weblogic-web-app>
weblogic.xml
Answer: c
• Coherence Overview
• Coherence*Web Session Replication
• Managed Coherence Servers
– What Are Managed Coherence Servers?
– What Are Coherence Container Applications?
– What Are the Benefits of the Coherence Container?
– Installing WebLogic with Coherence Support
– Coherence Grid Archives
– Coherence Cluster
Traditionally, Coherence has been deployed as a JAR incorporated into a Java application, along with a tier of stand-alone cache server JVMs. For example, applications that embedded Coherence within a WAR were referred to as "clients," and the stand-alone cache servers were referred to as "servers." The life cycles of the clients and cache servers were managed separately, often manually. Application development and deployment in this model was a complicated process involving many moving parts and requiring custom management processes.
The WebLogic Server Coherence Container for applications provides tight integration between WebLogic Server and Coherence and eliminates the difficulty of managing Coherence instances and applications in a WebLogic environment. This integration allows for simplified and streamlined development and management environments for distributed applications. The Coherence Container allows end users to build a new archive type (GAR) that can be deployed and managed via standard WebLogic Server methods. Using the Coherence Container:
• Developers can now streamline their build processes for Coherence applications
• Operations departments can now standardize deployment of Coherence instances and Coherence servers
Coherence applications are packaged into Grid Archive or GAR files. In general, a GAR contains all the
artifacts required to service a particular cache or invocation request and mirrors, to a high degree, the
structure of other Java EE artifacts, such as web archives. Specifically, a GAR contains:
• A deployment descriptor, coherence-application.xml, which defines the location of a cache
configuration as well as a variety of other content such as POF configuration. The deployment
descriptor may include references to several other optional elements, such as lifecycle handlers.
• A cache configuration, referenced from the deployment descriptor
• Java classes representing the application. These classes typically include the entities and other
application classes, POF support, entry processors, event handlers, and similar classes.
• A set of optional library JAR files
The entire set of classes is encapsulated into a JAR known as a Coherence Grid Archive, which mirrors
similar constructs for web applications, enterprise applications, and JAR files in general.
GAR files are also deployable directly to stand-alone Coherence (without WebLogic present at all),
reinforcing their JAR-like structure.
• Shared library: GAR resources are available to all WARs and EARs located on the server.
(Diagram: GAR packaging options on a Coherence server, including a GAR packaged inside an EAR with other modules, such as EJBs and a WAR, and a stand-alone GAR.)
With WebLogic Server 12c, Coherence is provided and installed with WebLogic Server. Both WebLogic
Server and Coherence provide services to each other.
WebLogic Server provides several services to Coherence, including:
• Coherence cluster, member, and application management and scoping, separate from WebLogic's own: Applications can be scoped as stand-alone, WAR, EAR, or shared library deployments. Coherence also provides a cache-level scoping mechanism, much like namespaces, for caches and data.
• Packaging: Available on the system classpath, simplifying packaging
• Management: Coherence is now a first-class member of the cluster and fully supported by the
console and WLST.
• Instance support: Full support for creating and managing Coherence instances
• Application support: Full support for application development and runtime life cycle
• Cluster support: Full support for defining and managing Coherence Clusters
Coherence clusters:
• Share configuration and overrides
• Have one or more deployed Coherence applications
(Diagram: a data-tier WebLogic cluster of storage-enabled Coherence servers, each running a deployed GAR, and a second WebLogic cluster of Coherence servers running EARs that contain GARs.)
Coherence clusters are a mechanism for logically grouping sets of managed Coherence servers. In general,
Coherence clusters consist of multiple managed Coherence servers, which together form a distributed data
grid. Coherence clusters are different from WebLogic Clusters in several ways. First and foremost, Coherence clusters often span multiple WebLogic clusters and are intended to represent Coherence servers that logically form a single data grid. Additionally, Coherence clusters use different clustering protocols and ports, and they are configured separately from WebLogic Clusters. Managed Coherence servers can
be associated with a Coherence cluster or with a WebLogic Cluster, which itself is associated with a
Coherence cluster.
Typically, a Coherence cluster is associated with one or more WebLogic Clusters, which together represent
a tiered architecture, typically having data, web, and proxy tiers. In the sample shown, two WebLogic
Clusters, a data-tier cluster and a web-tier cluster, are defined. These two WebLogic Clusters are bound
together into a single Coherence cluster. Typically, such a cluster shares a set of Coherence configurations
and applications. In the example shown, the same Coherence GAR is deployed to each Coherence
managed server, but packaged differently. On the data tier, the application is packaged as a simple GAR
and deployed stand-alone. On the web tier, the same GAR is packaged as a component of an EAR and
accessible from any other Java EE application within that EAR.
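A Coherence cluster resource can be created and targeted with WLST. The following is a sketch only; the Coherence cluster and WebLogic cluster names are illustrative:
edit()
startEdit()
# Create the Coherence cluster system resource
coh = cmo.createCoherenceClusterSystemResource('CoherenceCluster')
# Bind both WebLogic clusters to the same Coherence cluster
coh.addTarget(getMBean('/Clusters/DataTierCluster'))
coh.addTarget(getMBean('/Clusters/WebTierCluster'))
save()
activate()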
(Diagram: managed Coherence servers grouped into a Coherence cluster.)
Managed Coherence servers are server instances that are associated with a Coherence cluster. These
servers work together and collectively form a Coherence cluster. Members of a Coherence cluster are
historically referred to as Coherence instances, and are referred to as managed Coherence servers within a
WebLogic installation. However, managed Coherence servers are not the same as Coherence servers.
Managed Coherence servers are specific to the WebLogic Server Coherence Container integration. The use
of Coherence servers, a resource defined in the previous ActiveCache integration, is deprecated. Managed
Coherence servers conceptually are no different from earlier Coherence server instances running
stand-alone, but there are differences. Some of the differences include:
• Managed Coherence servers that are managed within a WebLogic Server domain must not join an
external Coherence cluster comprised of stand-alone JVM cluster members.
• Stand-alone JVM cluster members cannot be managed within a WebLogic Server domain.
• The administration server is typically not used as a managed Coherence server in a production
environment.
Managed Coherence servers are distinguished by their role in the cluster, and can be:
• Storage enabled: A managed Coherence server that is responsible for storing data in the cluster.
Coherence applications are packaged as Grid Archives (GARs) and are deployed on storage-
enabled managed Coherence servers.
• Storage disabled: A managed Coherence server that is not responsible for storing data and is used
to host Coherence applications (cache clients). A Coherence application GAR is packaged within an
EAR and deployed on storage-disabled managed Coherence servers.
• Proxy: A managed Coherence server that is storage disabled and allows external clients (non-
cluster members) to use a cache. A Coherence application GAR is deployed on managed
Coherence proxy servers.
Answer: b, c
• Review:
– Java EE applications
– Deployment
• Server staging modes
• Deploying an application to multiple environments
• Java EE deployment descriptors and annotations
• Deployment plans
• Deployment plan tools
(Diagram: clients access the application through the web container and EJB container, which use Java APIs to reach back-end systems.)
1. The developer develops the application.
2. The administrator uses a tool, such as the admin console, to communicate with the admin server and deploy the application.
3. The archive is uploaded to the admin server and the deployment is targeted.
4. The deployment is pushed out to target servers. The deployed application is "turned on" to start servicing client requests.
1. First, an application must be developed, tested, and packaged (usually as an application archive).
2. The application archive file is placed somewhere that an administrator in charge of deployment can
access it. A deployment tool, like the administration console, is used to communicate with the
administration server of the domain, which is in charge of any configuration changes, including
application deployment.
3. The deployment tool gives the administration server access to the application, and allows the
deployment administrator to target a server (or servers) on which to run the application. The
administration server updates the domain’s configuration.
4. The administration server pushes the application’s code out to the target managed server (or
servers). After the application is “activated” (told to start servicing requests), it becomes accessible to
clients.
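The same process can be driven from WLST rather than the administration console. The following is a minimal sketch; the credentials, URL, archive path, and target name are placeholders:
# Connect to the administration server
connect('weblogic', 'Welcome1', 't3://adminhost:7001')
# Upload the archive, target it, and start it on the cluster
progress = deploy(appName='benefits', path='/share/apps/benefits.war', targets='cluster1')
progress.printStatus()
disconnect()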
• Review
• Server staging modes
– Staged mode example
– No stage mode example
– External stage mode example
– Configuring the staging mode
• Deploying an application to multiple environments
• Java EE deployment descriptors and annotations
• Deployment plans
WebLogic provides you with control over whether, where, and by whom application files are copied
before being deployed. All applications that are targeted to the managed servers can be copied by the
administration server to the managed server before being prepared. This is called staging, and the files are
staged to a configurable staging area for each server. There are three kinds of staging:
• stage: (default) Files are copied to the preconfigured staging directory for preparation and
activation.
• nostage: Files are deployed from a static location.
• external stage: Files are copied by a user or a third-party tool (for example, JDeveloper or
Eclipse) before deployment.
Applications can be deployed from the source location by changing the configuration (StagingMode). If the
managed servers are running on a machine other than the administration server, it means that either the
source location is on a file system that is accessible to the managed server (StagingMode = nostage) or
the user or third-party tool performs the copy before the deployment request is issued (StagingMode =
external stage).
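The staging behavior can also be chosen per deployment instead of relying on the server default, by passing the stageMode argument to the WLST deploy command. A sketch with placeholder names:
# Deploy without copying files; all targets read the archive from shared storage
deploy(appName='bigApp', path='/nfs/apps/bigApp.ear', targets='cluster1', stageMode='nostage')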
The administration server first copies the deployment files to the staging directories of target servers.
(Diagram: the admin server on Machine A and Managed Server 1 on Machine B. 1: Distribute. 2: Copy files to server. 3: Deploy. 4: Deploy to server.)
When a server is configured with the stage staging mode, it causes the administration server to first
distribute deployment files to the server’s configured stage folder so the files are available to the server for
the deployment process. This staging mode is best used for deploying small to medium-sized applications to multiple
WebLogic Server instances or a cluster. If applications are very large in size, then other staging modes may
provide a better deployment experience. The stage mode deployment process involves:
1. The first aspect of the deployment process causes the administration server to execute the
distribution process, which copies the deployment's files to the stage folder that is configured for the
target server.
2. The administration server copies the files to the stage folder of the server instance. This is done by
sending the files to the server instance and the instance places the files in its staging folder.
3. After the distribution phase of deployment is complete, the administration server executes the
deployment process on all targeted servers.
4. The target server receives the deployment command, performs its internal preparation of the
deployment, and performs the actual deployment of the application on the server.
The administration server does not copy deployment files. All servers deploy the same physical copy of files from a location that is accessible to all servers.
(Diagram: the admin server on Machine A and Managed Server 1 on Machine B. 1: Deploy the application from the shared location.)
When a server is configured with the nostage staging mode, the administration server only issues the
deployment command to all targeted servers because the files are already accessible to all servers
involved. When the nostage staging mode is used in conjunction with exploded deployment files,
WebLogic automatically detects changes to web components and refreshes the deployment to reflect the
latest state. This staging mode is best used for deploying:
• To a single server domain because all files would be local to the administration server by default
• To a cluster on a multihomed machine to avoid multiple copies of the same files on one machine
• Very large applications to multiple targets to avoid multiple copies that take up a lot of disk space
and to leverage the infrastructure of a shared file system for faster deployments
• Exploded deployments to refresh any changes
• Applications that require dynamic updates of deployment descriptors
The nostage mode deployment process involves the following steps:
1. The administration server executes the deployment process on all targeted servers.
2. The target server receives the deployment command, performs its internal preparation of the
deployment and the actual deployment of the application on the server using the files available on
shared storage.
The administration server does not copy deployment files. The administrator must ensure that all files are distributed and available to all servers before deployment.
(Diagram: the admin server on Machine A and Managed Server 1 on Machine B. 1: Copy files to the staging folder. 2: Deploy. 3: Deploy to server.)
When a server is configured with the external stage staging mode, the administration server does not
take responsibility for distributing deployment files to targeted servers. The WebLogic administrator must
ensure that all deployment files are distributed before performing a deployment. File distribution can take
place in the form of running a script, using a third-party tool, manually copying files, and so on. All that
matters is that the files are distributed before performing a deployment. When using the external stage
staging mode, the administration server requires a copy of the deployment files, which it uses for validation
purposes during deployment. WebLogic supplies the -noversion option to disable this requirement, but
this also renders production redeployment impossible.
This staging mode is best used for deployments:
• Where an administrator requires or wants manual control over file distribution
• Where other mechanisms used in operations procedures manage file distribution, such as scripts or
third-party systems
• That do not require dynamic updating or partial redeployment
The external stage mode deployment process involves:
1. The WebLogic administrator performs the operational process that copies deployment files to the
stage folder that is configured for the target server.
2. The WebLogic administrator performs a deployment operation on WebLogic. The administration
server executes the deployment process on all targeted servers.
3. The target server receives the deployment command, performs its internal preparation of the
deployment, and performs the actual deployment of the application on the server.
Override the default server staging mode during deployment.
The server staging mode specifies a staging mode for a server when no staging mode is indicated at the
time of deployment. The default staging mode for the administration server is nostage, and the default
staging mode for managed servers is stage. You can configure the staging mode of a server using the
administration console by selecting a server's Configuration > Deployment tab. This page provides three
fields for configuring the server's staging mode:
• Staging Mode: This is where you specify the default staging mode used by this particular server.
• Staging Directory Name: This is the relative path to the folder where deployment files are stored for
this server before deployment.
• Upload Directory Name: When an application is deployed by a remote client that is not running on the administration server's machine (for example, WLST), the deployment files are first uploaded to this folder on the administration server. The value is a path relative to the administration server and is used before the staging and deployment process begins.
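As a minimal WLST sketch (the application name, path, and target are placeholders), the server default can be overridden at deployment time by passing the stageMode argument, which accepts 'stage', 'nostage', or 'external_stage':
> deploy(appName='ShoppingCart',
         path='/shared/deployments/ShoppingCart.war',
         targets='cluster1',
         stageMode='nostage')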
Answer: b
• Review
• Server staging modes
• Deploying an application to multiple environments
– Development to test to production
– Examples of deployment descriptor values to change
• Java EE deployment descriptors and annotations
• Deployment plans
• Deployment plan tools
Applications are deployed to several different environments during a full development life
cycle:
The full development life cycle of a WebLogic application involves some variation of development, test, and
production environments. Whether there are multiple levels of any of these environments, such as a separate
development environment for each developer plus an integrated environment where components from different
developers are combined, is irrelevant. The main point is that the application begins in a development phase,
graduates to a testing environment, and then graduates again into a live production environment. Each
environment presents a new ecosystem in which the application is deployed, and the application must adapt
in order to function in each one. This means that the application's configuration properties must change
from one environment to the next.
• Review
• Server staging modes
• Deploying an application to multiple environments
• Java EE deployment descriptors and annotations
– Java EE deployment descriptors
– Java EE annotations
– appmerge and appc
• Deployment plans
• Deployment plan tools
Each Java EE module type has its own deployment descriptors:
• EJB module (JAR): ejb-jar.xml and weblogic-ejb-jar.xml
• Web application (WAR): web.xml and weblogic.xml
• Enterprise application (EAR): application.xml and weblogic-application.xml, packaging web (WAR) and EJB (JAR) modules
• Application client (JAR): weblogic-appclient.xml
Java EE deployment descriptors configure Java EE application features. These configuration settings are
different depending on the type of component the descriptor represents. Deployment descriptors are useful
because they are external to the code of an application and provide a way for developers to include
configuration settings along with their applications. The drawbacks to deployment descriptors include:
• Sometimes difficult to learn and format because each is defined by an XML schema that is not visibly
available when editing files. Administrators are not typically familiar with an application’s code and it
is possible that they lack the knowledge required to make descriptor changes directly.
• Difficult to modify because components are typically provided in the form of a packaged archive file
with the descriptors contained within the archive. Modification requires unpacking the archive,
modifying the descriptors, and repacking the archive correctly. Descriptors almost always require
some modification during the full development life cycle when they move from development to test,
and then from test to production.
package com.examples;

import javax.annotation.Resource;
import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.sql.DataSource;

// Annotations replace the need for entries in ejb-jar.xml and weblogic-ejb-jar.xml.
@Stateless
@Local(MyEJBInterface.class)
public class MyEJB implements MyEJBInterface {
    // The container injects the data source bound at this JNDI name.
    @Resource(mappedName = "jdbc/employeeDatabase")
    private DataSource myDS;
    ...
}
MyEJB.java
A Java annotation is a way of adding metadata to Java source code that is also available to the program at
run time. Annotations are an alternative to deployment descriptors, which were required by J2EE 1.4 and
earlier. In fact, Java EE metadata annotations now make standard deployment descriptors optional.
Annotations are processed by WebLogic during deployment, just like deployment descriptors. Developers use
annotations to simplify development by specifying within the Java class how the application component
behaves in the container. Annotations make development easier, but how do they affect administrators?
• Developers are no longer required to create deployment descriptors to configure their components.
As a result, some configuration hooks that would have previously been present in the descriptors of
components passed from developers to administrators are no longer available by default.
• Administrators have no visibility into what configuration is contained within the annotations of Java
source files by default. This is because components passed to administrators for deployment do not
include source code, and administrators are not always developers who understand the syntax of
annotations.
These situations are avoided by following some best practices and leveraging some tools that make the
configuration provided by annotations visible to administrators.
WebLogic provides tools to debug annotations and deployment descriptors: the appmerge and appc utilities, their -writeInferredDescriptors flag, and the metadata-complete descriptor attribute. These can be used together to troubleshoot configuration problems in your deployments.
WebLogic’s appmerge and appc tools have a “-writeInferredDescriptors” flag to see how the
container views the configuration of deployments.
These tools process the descriptors and annotations of your component like the container does and write
everything out to descriptors for you to review.
The example in this slide shows how to use “-writeInferredDescriptors” with the appmerge tool.
The “-writeInferredDescriptors” flag automatically sets “metadata-complete” to true in
deployment descriptors.
Setting “metadata-complete” to true in a deployment descriptor instructs annotation processors to ignore
the annotations in your deployments and only use what is contained in deployment descriptors.
The example shows the “metadata-complete” attribute set to true.
As an administrator, you will most likely not execute the appc tool because you do not customarily compile
Java EE programs. You will find appmerge to be a productive tool for making the configuration of your
applications more visible.
Remember to use the -writeInferredDescriptors flag and the metadata-complete attribute with appmerge to see how WebLogic is interpreting your deployments:
$ java weblogic.appmerge \
  -writeInferredDescriptors \
  -library jstl \
  -librarydir myLibPath \
  -plan myPath/plan.xml \
  -output app.debug/app.ear \
  MyEAR.ear
Command Line

<application
  metadata-complete="true">
  ...
</application>
application.xml
WebLogic’s appmerge tool enables you to see how the container views the configuration of your
deployments that include shared libraries. As an administrator, you will find using appmerge to be a
productive tool for seeing how WebLogic processes the configuration properties of your deployments. Note
that now you can include the arguments for shared libraries and a deployment plan. From the Java Servlet
specification:
The web application deployment descriptor contains a metadata-complete attribute on the web-app element. The metadata-complete attribute defines whether the web.xml descriptor is complete, or whether other sources of metadata used by the deployment process should be considered. Metadata may come from the web.xml file, web-fragment.xml files, annotations on class files in WEB-INF/classes, and annotations on classes in jar files in the WEB-INF/lib directory. If metadata-complete is set to "true", the deployment tool only examines the web.xml file and must ignore annotations such as @WebServlet, @WebFilter, and @WebListener present in the class files of the application, and must also ignore any web-fragment.xml descriptor packaged in a jar file in WEB-INF/lib. If the metadata-complete attribute is not specified or is set to "false", the deployment tool must examine the class files and web-fragment.xml files for metadata, as previously specified.
Answer: a
• Review
• Server staging modes
• Deploying an application to multiple environments
• Java EE deployment descriptors and annotations
• Deployment plans
– What is a deployment plan?
– Using deployment plans for different environments
– Deployment plan example
– Staging deployment plans
A deployment plan is an XML document that is used to define an application’s deployment configuration for
a specific Oracle WebLogic Server environment, such as development, test, or production. A deployment
plan resides outside of an application’s archive file and contains deployment properties that override an
application’s existing Java EE and Oracle WebLogic Server deployment descriptors. Use deployment plans
to easily change an application’s Oracle WebLogic Server configuration for a specific environment without
modifying the existing deployment descriptors. Multiple deployment plans can be used to reconfigure a
single application for deployment to multiple, differing Oracle WebLogic Server domains or servers.
Any external resources required by the application are subject to change when the application is deployed
to a different environment. For example, the Java Naming and Directory
Interface (JNDI) names of the data sources that are used in your development environment can be different
from those used in testing or production. Exposing those JNDI names as variables makes it easy for people
deploying applications to use the available resources or create the required resources when deploying the
application.
Certain tuning parameters that are acceptable in a development environment are unacceptable in a
production environment. For example, it may suffice to accept default or minimal values for EJB caching on
a development machine, whereas a production cluster would need higher levels of caching to maintain
acceptable performance. To deploy the application to a new environment, an administrator simply creates or
uses a new deployment plan as necessary.
Slide diagram: MyEJB.jar contains the deployment descriptor weblogic-ejb-jar.xml and is deployed to three environments: (1) Oracle WebLogic Server Development, which uses DevDataSource; (2) Oracle WebLogic Server Testing, which uses QADataSource via QAPlan.xml (overriding the myresource variable); and (3) Oracle WebLogic Server Production, which uses GADataSource via ProductionPlan.xml (overriding the myresource and IdleTimeout variables).
1. Development: A developer develops and creates both Java EE and WebLogic Server deployment
descriptors to configure the application for repeated deployments to the development environment.
The development server uses a simple Derby database for development, named
“DevDataSource,” and the weblogic-ejb-jar.xml descriptor identifies the resources for the
application.
2. Testing: The developer packages the application into an archive file and delivers it to the
administrator in the QA team. The testing environment uses a different data source named
“QADataSource.” At this point, the embedded deployment descriptors provide a configuration that is
valid for the development environment used by the developer, but is not valid for the testing
environment where the application must be deployed for testing. To deploy the application, the
administrator of the testing environment generates a deployment plan QAPlan.xml to override the
data source name configured in the application’s embedded deployment descriptors.
3. Production: Similarly, when the application is released into production, the administrator of the
staging or production environment creates or uses another deployment plan to configure the
application. The production deployment plan ProductionPlan.xml again overrides the
application deployment descriptors to identify a new JDBC data source “GADataSource” that is
used by the production environment. For this environment, the deployment plan also defines tuning
parameters to make better use of the additional resources that are available in the production
domain.
XPath to value:
/persistence/persistence-unit[name="AuctionPU"]/jta-data-source
This slide shows an example of a deployment descriptor for a Java Persistence API (JPA) component,
called persistence.xml. This file contains the configuration of a data source JNDI name used to access
the database connections required for the related component. The current value contained within the
descriptor that is packaged within the deployed archive is jdbc/AuctionDB.
For those who may not be familiar with XPath, this slide shows the syntax used to pinpoint the element that
contains the value of the JNDI name of the data source in this file. Following from left to right in the XPath
statement, the first element is persistence. The next element is persistence-unit, but it also
contains criteria to distinctly identify which persistence-unit element using [name="AuctionPU"].
The next element is jta-data-source, which contains the target value you want to change.
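The descriptor contents are not reproduced on this page, but a minimal sketch of the persistence.xml fragment that this XPath expression targets (using only the element names and values given above) would be:
<persistence>
  <persistence-unit name="AuctionPU">
    <jta-data-source>jdbc/AuctionDB</jta-data-source>
    ...
  </persistence-unit>
</persistence>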
This slide shows a deployment plan that is used to override the jdbc/AuctionDB value set in the original
persistence.xml file shown in the previous slide to jdbc/AuctionDBTest.
The basic elements in a deployment plan serve the following functions:
• deployment-plan: Encapsulates the deployment plan’s contents
• application-name: Corresponds to the deployment name for the application or module
• variable-definition: Defines one or more variable elements. Each variable element defines
the name of a variable that is used in a plan and a value to assign, which can be null. In this
example, the variable is PersistenceUnit_AuctionPU_jtaDataSource_13620081523747
with a value of jdbc/AuctionDBTest, which is the new JNDI name of the data source for the
AuctionPU persistence unit defined in the application's persistence.xml file. The variable name
of this variable is matched with the name of a variable assignment defined in the module-override section below.
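A sketch of the corresponding plan fragment, assembled from the variable name and value described above (the application name is a placeholder, and the namespace declaration and surrounding module-override details are assumptions):
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan">
  <application-name>SimpleAuctionWebApp</application-name>
  <variable-definition>
    <variable>
      <name>PersistenceUnit_AuctionPU_jtaDataSource_13620081523747</name>
      <value>jdbc/AuctionDBTest</value>
    </variable>
  </variable-definition>
  <module-override>
    ...
    <variable-assignment>
      <name>PersistenceUnit_AuctionPU_jtaDataSource_13620081523747</name>
      <xpath>/persistence/persistence-unit[name="AuctionPU"]/jta-data-source</xpath>
    </variable-assignment>
    ...
  </module-override>
</deployment-plan>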
• -plannostage: Does not copy the deployment plan; instead, it is left in a fixed location
• -planexternal_stage: Does not copy the deployment plan; a separate copy step is required prior to deployment
$ java weblogic.Deployer \
-adminurl http://host01.example.com:7001 -username weblogic \
-password Welcome1 -deploy -targets cluster1 \
-plan /plans/prod/plan.xml \
-planstage /deployments/AuctionDbLib.war Command Line
Planstage is the default if no deployment plan staging options are specified. Deployment plan staging allows
you to specify different staging options for applications and deployment plans.
Answer: c
Remember that deployment plans are not packaged within an application, so that the same application can
be reused in multiple deployment environments.
• Review
• Server staging modes
• Deploying an application to multiple environments
• Java EE deployment descriptors
• Deployment plans
• Deployment plan tools
– Creating a deployment plan
– weblogic.PlanGenerator
– Using plan generator
Although it is not officially part of WebLogic, OEPE is an Oracle plug-in for Eclipse that leverages WebLogic
tools and the Eclipse framework to create deployment plans for applications. Developers may use OEPE
tools to create plan templates to make it easy for administrators to change settings in their applications.
To create a deployment plan for a deployed application that does not already have a deployment plan, make
a configuration change to the deployed application using the administration console. When you make a
persisted configuration change to a deployed application that does not have an existing deployment plan,
the console automatically creates a deployment plan for you and prompts you for the location in which to
save it.
weblogic.PlanGenerator is a Java-based deployment configuration tool that is intended for developers
who want to export portions of a deployment configuration into a deployment plan. This utility can generate
a brand new plan or can append to an existing one. By default, weblogic.PlanGenerator writes an
application’s deployment plan to a file named Plan.xml in the application’s root directory. The syntax for
invoking weblogic.PlanGenerator is the following:
java weblogic.PlanGenerator [options] [application]
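For example, here is a minimal sketch of generating an initial plan (the paths are placeholders; -all, which exports all editable deployment configuration properties, is one of several export options):
$ java weblogic.PlanGenerator \
  -all \
  -plan /plans/MyApplication_plan.xml \
  /appRelease/MyApplication.ear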
You can generate a deployment plan with the administration console using the following
steps:
1. Prepare the deployment files.
2. Install the application archive.
3. Save configuration changes to a deployment plan.
The application was originally deployed with various parameters (for example, a Session Invalidation
Interval of 60 seconds).
1. Select an installed application that you want to modify, and make a change of the Session Invalidation
Interval setting from 60 to 90.
2. When you click Save, the system prompts you for a new or existing deployment plan into which to
save this.
3. There was no deployment plan originally. So it creates a new one called Plan.xml.
This is a partial snippet of what was created. Notice the session invalidation in particular.
<?xml version='1.0' encoding='UTF-8'?>
<deployment-plan . . .>
  <application-name>SimpleAuctionWebAppDbSec.war</application-name>
  <variable-definition>
    <variable>
      <name>SessionDescriptor_invalidationIntervalSecs</name>
      <value>90</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>SimpleAuctionWebAppDbSec.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>weblogic-web-app</root-element>
      <uri>WEB-INF/weblogic.xml</uri>
      <variable-assignment>
        <name>SessionDescriptor_invalidationIntervalSecs</name>
        :
The applications that you receive for deployment may come with varying levels of configuration information.
If you have an existing deployment plan for an application, simply prepare the application and place the
deployment plan in the plan subdirectory of the application root. Then install the application. The
administration console automatically uses a deployment plan named plan.xml in the /plan subdirectory
of an application root directory if one is available. If multiple plans are available for your application, they are
placed in their own /plan subdirectories (for example, /plan1 and /plan2), and the administration
console cannot identify them. Therefore, config.xml must specify the plan that you want to use.
After you install a new application and an existing deployment plan, the administration console validates the
deployment plan configuration against the target servers and clusters that were selected during installation.
If the deployment plan contains empty (null) variables, or if any values configured in the deployment plan
are not valid for the target server instances, you must override the deployment plan before you deploy the
application. You can also configure tuning parameters to better suit the target environment in which you are
deploying the application. The changes you make to the application’s configuration are saved to a new
deployment plan.
If you have a valid deployment plan that fully configures an application for the environment in which you are
deploying, you can use either the administration console, weblogic.Deployer, or WLST to deploy an
application with a deployment plan to use for deployment.
Note: A deployment plan that you use with the weblogic.Deployer utility must be complete and valid for
your target servers. weblogic.PlanGenerator does not allow you to set or override individual
deployment properties when it creates a plan.
You can use the administration console to specify a deployment plan for your application.
1. In the left pane, click Deployments.
2. In the right pane, select the check box next to the application for which you want to specify a
deployment plan. Click Update.
3. Click Change Path next to “Deployment plan path” to browse to the desired deployment plan. Click
Next, and then click Finish.
> deploy(appName='MyApp',
path='/apps/MyApp.ear',
targets='cluster1',
createPlan='true')
WLST Command Line
This example shows WLST deploying an application and creating an initial deployment plan for the
application.
• Enables you to generate basic WebLogic configuration extensions for applications that
have only Java EE deployment descriptors
• Enables you to:
– Create an initial plan
– Create a new plan based on an existing plan
– Control which components are exported to a plan
$ java weblogic.PlanGenerator \
-useplan /plans/MyApplication_template.xml \
-root /appRelease/MyApplication
Perform the following steps to create a deployment plan for an application in Eclipse:
1. Right-click your application project and select New > Other to open the Select a wizard dialog box.
2. In the dialog box, expand Oracle > WebLogic > Configuration, select Oracle WebLogic Deployment
Plan, and click Next.
3. On the File Name and Location page, select the parent folder location and the file name to use for
the plan file, and click Next.
4. On the Target Application and Options page, make your desired selections to control how your
deployment plan is created. In the example on this slide, you are using
weblogic.PlanGenerator to create your plan and instruct it to only create plan entries for
resources that are part of your application.
Because deployment plans are different for each target environment and stored externally
from application archives:
• Plans should be managed using a source code control system (SCCS). SCCS provides
built-in:
– File versioning
– Change management and before-and-after file comparisons
• Each plan should be stored separately.
Deployment plans are different for each environment during the full development life cycle of an application.
Each environment that an application is deployed to requires its own unique deployment plan unless all of
the settings are going to remain identical. Typically, applications are packaged up and deployed as archives
and deployment plans are used as an externally available configuration tool to conveniently modify the
application’s settings. Because of this, deployment plans are not part of the application itself and must be
managed separately.
• Container Services
• Docker Fundamentals
• Kubernetes Primer
• Oracle WebLogic Operator for Kubernetes
• Oracle Linux Container Services
Slide diagram: web clients connect through a load balancer to WebLogic/FMW instances running in clusters on multiple hosts, which in turn connect to back-end systems and databases.
1. Waterfall design and development resulted in monolithic applications that were deployed to
physical servers in specific data centers.
2. The multitier model became available with the advent of Agile development processes. It became possible
to deploy to virtual machines, and the resulting resource consolidation allowed for capital investment
savings and paved the way to the IaaS cloud model.
3. Containers allow not only consolidation, but also standardization of environments and more flexible
deployment capabilities, because these standard execution environments and features are shared. They
provide more deployment, distribution, and maintenance flexibility, enable a DevOps approach, and are
suitable for both on-premises and cloud-based setups.
Request forwarding for options 1 and 2 was typically handled by external load-balancing functionality.
With the container model, request routing is also more flexible and may be done using a variety of methods.
Ingress controllers are more typical in the latter case.
• Container Services
• Docker Fundamentals
• Kubernetes Primer
• Oracle WebLogic Operator for Kubernetes
• Oracle Linux Container Services
• Container Services
• Docker Fundamentals
• Kubernetes Primer
• Oracle WebLogic Operator for Kubernetes
• Oracle Linux Container Services
• Nodes are workers that run the containers (the application); each Node runs a Kubelet.
• A Pod is a Kubernetes abstraction.
• A Pod represents a group of:
– One or more application containers
– Some shared resources for those containers: storage resources and a unique network IP
• Each Pod is tied to the Node where it is scheduled, and remains there until termination (determined by its restart policy) or deletion.
– In case of a Node failure, identical Pods are scheduled on other available Nodes.
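As a quick sketch (assuming kubectl is installed and configured against your cluster), you can list the Nodes and see which Node and IP each Pod was scheduled to:
# List the worker (and master) nodes in the cluster
$ kubectl get nodes
# List pods, including the node and pod IP for each one
$ kubectl get pods -o wide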
• Container Services
• Docker Fundamentals
• Kubernetes Primer
• Oracle WebLogic Operator for Kubernetes
• Oracle Linux Container Services
A Docker image contains the entire WebLogic domain file system, and the image cannot be modified directly.
There must be a mechanism to customize domain configuration details without creating a new Docker image,
so the overrides have to be external.
CI/CD stands for Continuous Integration/Continuous Delivery.
Configuration overrides are applied by the operator's domain introspection job.
Documentation: https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/configoverrides
• Container Services
• Docker Fundamentals
• Kubernetes Primer
• Oracle WebLogic Operator for Kubernetes
• Oracle Linux Container Services
At the time this presentation was created, Oracle Linux Container Services for use with Kubernetes
release 1.12 was current. Version 1.12 adds support for highly available multi-master clusters
(kubeadm-ha-setup), CoreDNS, and other enhancements.
Using root-level access on a Linux UEKR5 host run the following commands:
# yum-config-manager --enable ol7_addons
# yum-config-manager --disable ol7_UEKR4
# yum-config-manager --enable ol7_UEKR5
# yum update
Reboot the system after the update before continuing.
Docker installed from the ol7_preview repository is not supported by Oracle and must be uninstalled;
the ol7_preview repository should also be disabled.
Install Docker using:
# yum install docker-engine
Enable Docker service to start automatically upon system boot; do it prior to configuring Kubernetes:
# systemctl enable docker
# systemctl start docker
• Kubernetes Master Node communicates with all Worker Nodes using predefined ports.
• Worker Nodes access Master Node via TCP port 6443.
• On each node running Kubernetes and running firewalld use following commands:
# firewall-cmd --add-masquerade --permanent
# firewall-cmd --add-port=10250/tcp --permanent
# firewall-cmd --add-port=8472/udp --permanent
• Additional port on the Master Node for API server access:
# firewall-cmd --add-port=6443/tcp --permanent
• Restart firewall or reboot the systems where these rules were added or modified.
Oracle hosts regional mirror servers that are available to both Oracle Cloud and on-premises users.
• Use mirror sites for docker login to improve performance and reduce bandwidth
• Still need to login and accept terms at https://container-registry.oracle.com
• A Docker registry may be run locally within your private network by deploying a private registry server, which is itself a Docker container.
• Registry listens by default inside container on port 5000.
– Examples below use external/public port 5008.
– Add --restart=always to ensure container starts automatically on system boot.
# docker run -d -p 5008:5000 --name registry-private registry:2
• The registry's internal listening address and port may be changed using the REGISTRY_HTTP_ADDR environment variable:
# docker run -d -p 5008:5001 -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 --name registry-private registry:2
• Registry access is over an HTTPS connection by default. A private registry may be secured with your own TLS certificates.
• Systems running Docker behind corporate firewalls may not be able to connect to Oracle
Container Registry or any mirrors directly.
• Set up a local Docker Container Registry on one of your internal hosts as an alternative
– that host must have access to Oracle-managed registry to download latest images.
• Use kubeadm-registry.sh to pull images from an Oracle registry and store/push into
your local one
– Ensure that local registry images have the exact same tags as in Oracle registry.
# kubeadm-registry.sh --to your.registry.host:5008 --from \
container-registry.oracle.com/kubernetes --version 1.12.5
• On each host intended to run Docker containers, add the KUBE_REPO_PREFIX environment variable pointing to your local registry.
Production Redeployment
After completing this lesson, you should be able to deploy versioned applications
(production redeployment).
Slide diagram: an existing client's connections and HTTP session are associated with the MyApp web application; when MyApp is redeployed in place (REDEPLOY), the original application is replaced and the client's session is lost.
WebLogic maintains the user state for clients that use web applications in HTTP session objects. When the
associated web application is redeployed, the undeployment aspect of the process effectively eliminates the
HTTP sessions associated with that deployment.
1. A client using a web browser is using the MyApp web application and has an associated HTTP
session on the server.
2. The application is redeployed, which involves undeploying the original MyApp application and
replacing it with the updated MyApp application. The associated HTTP session for the existing client
is lost.
3. The client browser makes another request to the application, which is routed to the newly deployed
MyApp application. However, there is no HTTP session for this client and all session state is lost.
4. Depending on the situation, the client may have to start his or her session over to reestablish state
with a new HTTP session that is associated with the newly deployed MyApp web application.
• In-place redeployment: Completely replaces the existing application with the new application. Best for upgrading applications that can tolerate client interruptions.
• Partial redeployment of static files: Immediately replaces files such as HTML, JSP, and images. Best for updating static web files that do not affect application clients.
• Partial redeployment of Java EE modules: Completely replaces existing modules within an enterprise application (EAR). Best for replacing a component of an enterprise application that can tolerate client interruptions.
• Production redeployment: Deploys a new version of the application alongside the existing version so that clients are transitioned without interruption.
Note
Production redeployment is also called side-by-side deployment. This lesson focuses mainly on the last
redeployment strategy: production redeployment.
Production redeployment is the ability to deploy multiple versions of the same application at
the same time to avoid disrupting existing or new users.
Slide diagram: a managed server hosting two versions of the application side by side: Version 1 (retiring) and Version 2 (active).
Completely avoids:
• Scheduling application down time
• Setting up redundant servers to host new application versions
• Managing client access to multiple application versions manually
• Retiring older versions of an application manually
Slide diagram: the production redeployment process. Application V1, whose MANIFEST.MF specifies Weblogic-Application-Version: V1, is deployed and continues to serve existing clients. Application V2 is then redeployed alongside V1; new clients are routed to V2 while existing clients finish their work against V1, after which V1 is retired.
• Graceful client transition: Waits for all existing client sessions to end, and then automatically retires the old version.
• Timeout: Keeps the existing version running for a configured timeout period, and then automatically retires the old version.
• Forced: An administrator can force work to stop and cause the original application version to retire. This is the equivalent of doing an in-place deployment.
During a graceful retirement for an application deployed on a cluster, each cluster member maintains its
own set of HTTP sessions. Client activity varies across cluster members, so some members may retire the
old version of the application before others. In a failover scenario, the client fails over to the
server where its secondary HTTP session resides. If the existing version of the application is still active on
that server, the request works normally. If the existing version has already been retired on that server,
failover fails and the client must reestablish its session with the server.
Slide diagram: the administration console connects to the admin server on host1:8001, while dedicated SSL administration ports (host1:9001 on the admin server and host2:9001 on the managed server) carry administration traffic.
While maintaining or troubleshooting a production server, it is often desirable to disable all incoming
application requests. However, a server's default network configuration means that all traffic runs on the
same port. Therefore, if that port were closed, remote administration tools such as the console or WLST
would also be unable to connect to the server.
WebLogic Server supports a domain in which all servers use a separate SSL port that accepts only
administration requests. The use of dedicated administration ports enables you to:
• Start a server in standby state: This enables you to administer a server while its other network
connections are unavailable to accept client connections.
• Separate administration traffic from application traffic in your domain: In production
environments, separating the two forms of traffic ensures that critical administration operations (such
as starting and stopping servers, changing a server’s configuration, and deploying applications) do
not compete with high-volume application traffic on the same network connection.
• Administer a deadlocked server instance: If you do not configure an administration port,
administrative commands such as THREAD_DUMP and SHUTDOWN will not work on deadlocked server
instances.
Production redeployment can be used with the -distribute command to prepare a new version of an
application for deployment. For more information, refer to Distributing a New Version of a Production
Application.
Distributing an application prepares it for deployment by copying its files to all target servers and validating
the files.
You can start a distributed application in Administration mode. Access to the application is then restricted to
a configured administration channel.
WebLogic prepares the new application version, which is deployed in administration mode and made
available via a configured administration channel, for deployment. However, the older version of the
application is not automatically retired by WebLogic when the new version of the application is distributed
and deployed in administration mode. The older version of the application remains active to process both
new and existing client requests.
The new application version can either be undeployed or started after it has been completely tested via an
administration channel. WebLogic routes new client connections to the updated application after starting the
application and begins retiring the older application version.
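A minimal weblogic.Deployer sketch of this workflow (the admin URL, credentials, application name, version, and path are placeholders) distributes the new version and then starts it in administration mode:
$ java weblogic.Deployer -adminurl http://host01.example.com:7001 \
  -username weblogic -password Welcome1 \
  -distribute -appversion 1.0GA \
  -targets cluster1 /prod/deployments/myApp/GA1.0/MyApp.war
$ java weblogic.Deployer -adminurl http://host01.example.com:7001 \
  -username weblogic -password Welcome1 \
  -start -adminmode -appversion 1.0GA -name myApp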
Slide diagram: rolling back by redeploying application Version 1 (REDEPLOY) while Version 2 is currently active.
Note: Rolling back is technically not always an actual rollback procedure. In some cases, it is merely a new
production redeployment task whereby the version and files of the new version are really the same as the
original existing version. WebLogic has no knowledge of whether or not the version information is sequential
or has any meaning. It only requires that the version strings are different.
Rolling an application deployment back to a previous version is possible regardless of the current state of
the side-by-side deployment:
• If the new version is in administration mode, the administrator can stop the new version so the
existing version remains in place.
• If the new version is active and the existing version is retiring, the administrator can redeploy the
existing version, which will cause the two versions to switch places. The new version changes to a
retiring state and the existing version becomes active. The administrator can also force work to stop
for the retiring version to ensure that clients revert to using the original version quickly.
• If the new version is active and the existing version is retired, the administrator can redeploy the
existing version, which will again cause the two versions to switch places.
Answer: b
Answer: d
This slide shows a summary of the tasks required to perform production redeployment on WebLogic.
Note: WebLogic supports deploying only two versions of an application at a time. You must undeploy or
retire a version of an application before deploying a subsequent version.
You configure application version information by using the application’s manifest file or
specifying options during deployment:
Note: The best practice is to include version data in the manifest file.
You should include version information in the application’s manifest file because it is easier to track and
maintain the versions of your applications. Otherwise, you could deploy an older version of an application
accidentally.
WLST:
> deploy('myApp',
'/prod/deployments/myApp/beta1.0/MyApp.war',
'myCluster',
versionIdentifier='1.0beta')
Command Line
Configure the manifest file of the updated application with the new version and redeploy the
application:
WebLogic recognizes that
this string is different.
Manifest-Version: 1.0
Weblogic-Application-Version: 1.0
MANIFEST.MF
If a timeout is not specified during production redeployment, the default behavior is to gracefully retire the
existing application version after all client work has completed.
Administration console:
This slide and the following slide show an example of redeploying an application using the administration
console. The first image shows that the existing application version is selected and the Update button is
clicked to update the deployment. The second image shows that the Change Path button is clicked to
choose the updated source files for the new application version. Next is clicked to view the application’s
settings, including its version information.
The image in the slide shows the version information associated with the new version of the application
before deployment. Clicking Finish completes the process.
WLST:
> redeploy('myApp',
'',
appPath='/prod/deployments/myApp/GA1.0/MyApp.war',
versionIdentifier='1.0GA',
retireGracefully='true')
Command Line
Note
You can also specify adminMode='true' as an option to cause the redeployment to start in Admin mode.
This slide shows using the administration console to start a distributed new version of an application in
Admin mode. The first image shows the state of the two versions of the application before starting the new
version in Admin mode. The second image shows that the existing version of the application is in Active
mode and the new version is in Admin mode.
> distributeApplication('/prod/deployments/myApp/GA1.0/MyApp.war',
'',
'myCluster',
versionIdentifier='1.0GA')
WLST Command Prompt
> startApplication('myApp',
versionIdentifier='1.0GA',
retireGracefully='true') Command Line
This slide shows using the administration console and WLST to move an application from Admin mode to
Active mode. The first image shows the state of the two versions of the application before the new
version (currently in Admin mode) is started. The second image shows that the existing version of the
application is in Retired mode and the new version is in Active mode.
To roll back to the retired application version, perform a second -redeploy command and
specify the source files for that version:
Answer: b
Production redeployment supports only HTTP clients and RMI clients. Specific coding is required to handle
the reconnection to the new RMI client version. Your development and design team must ensure that
applications using production redeployment are not accessed by an unsupported client. WebLogic does not
detect when unsupported clients access the application and does not preserve unsupported client
connections during production redeployment.
Production redeployment is not supported for:
• Stand-alone EJB or RAR modules. If you attempt to use production redeployment with such
modules, WebLogic Server rejects the redeployment request. To redeploy such modules, remove
their version identifiers and explicitly redeploy the modules.
• Applications that use Java Transaction Service (JTS) drivers:
- The LoggingLastResource global transaction optimization is not permitted for JDBC
application modules.
- When deploying application-scoped JDBC resources, the EmulateTwoPhaseCommit
feature is not supported for multiple versions.
• Applications that obtain JDBC data sources via the DriverManager API. To use production
redeployment, an application must use JNDI to look up data sources.
• Applications that include EJB 1.1 container-managed persistence (CMP) EJBs. To use production
redeployment with applications that include CMP EJBs, use EJB 2.x CMP instead of EJB 1.1 CMP.
In this lesson, you should have learned how to deploy versioned applications (production
redeployment).
Exalogic is an integrated hardware and software system designed to provide a complete platform for a wide
range of application types and widely varied workloads. Exalogic is intended for large-scale, performance-
sensitive, mission-critical application deployments. It combines Oracle Fusion Middleware and Sun
hardware to enable a high degree of isolation between concurrently deployed applications, which have
varied security, reliability, and performance requirements. Exalogic enables customers to develop a single
environment that can support end-to-end consolidation of their entire applications portfolio.
Exalogic hardware is preassembled and delivered in standard 19” 42U rack configurations. The main
hardware components of a single Exalogic X3-2 machine Full Rack are the following:
• 30 Sun Server X3-2 (formerly Sun Fire X4170) compute nodes
• One dual controller Sun ZFS Storage 7320 appliance with 20 disks
• Four Sun Network QDR InfiniBand Gateway Switches
• One Sun Datacenter InfiniBand Switch 36
• One 48-port Cisco Catalyst 4948 Ethernet management switch
• Two redundant 24 kVA power distribution units
Oracle Exalogic:
• Combines Sun storage and x86 servers by using a high-speed InfiniBand network
• Is specifically engineered to host Oracle Fusion Middleware software
• Includes optimizations to Oracle Fusion Middleware software that result in huge
performance gains
• Supports Oracle Enterprise Manager for centralized configuration, monitoring, and
diagnostics
• Provides an integrated IaaS cloud solution
You can connect up to eight Exalogic machines, or a combination of Exalogic machines and Oracle Exadata
database machines, together without the need for any external switches. If more than eight racks are
required to be connected on the same InfiniBand fabric, Oracle offers a choice of several high-capacity data
center switches, which enable the creation of Exalogic clouds comprising hundreds of racks and tens of
thousands of processors.
Exalogic is designed to fully leverage an internal InfiniBand fabric that connects all of the processing,
storage, memory, and external network interfaces within an Exalogic machine to form a single, large
computing device. Each Exalogic machine is connected to the customer's data center networks via 10 Gb
Ethernet (external traffic) and 1 Gb Ethernet (management traffic) interfaces.
Oracle Enterprise Manager Grid Control with Oracle WebLogic Server Management Pack Enterprise
Edition's capabilities include Exalogic specific management tools to monitor the Oracle software deployed in
the Exalogic environment. If using Solaris as your Exalogic operating system, you can also use Oracle
Enterprise Manager Ops Center to provide configuration and management capabilities for the Exalogic
hardware components.
Moving data between applications over a traditional network can consume a lot of time and drain precious
server resources. With traditional network technologies, data exchanges traverse the operating systems on
both the source and destination servers, resulting in excessive application latency due to operating system
calls, buffer copies, and interrupts.
InfiniBand, which today delivers 40 Gb per second connectivity with application-to-application latency as low
as 1 microsecond, has become a dominant fabric for high-performance enterprise clusters. Its ultra-low
latency and near-zero CPU utilization for remote data transfers make InfiniBand ideal for high-performance
clustered applications.
InfiniBand also provides a direct channel from the source application to the destination application,
bypassing the operating systems on both servers. InfiniBand’s channel architecture eliminates the need for
OS intervention in network and storage communication. This frees server memory bandwidth and CPU
cycles for application processing.
In addition to carrying all InfiniBand traffic, the Sun Network QDR InfiniBand Gateway Switch enables all
InfiniBand attached servers to connect to an Ethernet LAN by using standard Ethernet semantics. No
application modifications are required for applications written to use standard Ethernet.
An Exalogic machine includes compute nodes, a Sun ZFS Storage 7320 appliance, as well as equipment to
connect the compute nodes to your network. The network connections allow the servers to be administered
remotely, enable clients to connect to the compute nodes, and enable client access to the storage
appliance. Additional configuration, such as defining multiple virtual local area networks (VLANs) or
enabling routing, may be required for the switches to operate properly in your environment and is beyond
the scope of the installation service.
There are up to five networks for an Exalogic machine. Each network must be on a distinct and separate
subnet from the others. The Exalogic management network connects to your existing management network
and is used for administrative work for all components of the Exalogic machine. It connects ILOM, compute
nodes, server heads in the storage appliance, and switches connected to the Ethernet switch in the Exalogic
machine rack. This management network is in a single subnet. Do not use the management network
interface (ETH0/NET0) on compute nodes for client or application network traffic. Cabling or configuration
changes to these interfaces on Exalogic compute nodes is not permitted.
Slide diagram: within the rack, the compute nodes and the storage appliance connect to the InfiniBand switches over 40 Gb links and to the management switch over 1 Gb links; clients connect from outside the rack.
Exalogic hardware is preassembled and delivered in standard 19” 42U rack configurations. Each Exalogic
configuration is a unit of elastic cloud capacity balanced for compute-intensive workloads. Each Exalogic
configuration contains several hot-swappable compute nodes along with a clustered, high-performance disk
storage subsystem. The hardware also includes a high-bandwidth InfiniBand fabric to connect every
individual component within the configuration as well as to externally connect additional Exalogic or Exadata
Database Machine racks. In addition, each configuration includes multiple 10 Gb Ethernet ports for
integration with the data center's service network, along with 1 Gb Ethernet ports used for integration with
the data center’s management network. All Exalogic configurations are fully redundant at every level and
are designed with no single point of failure.
All device management ports are connected to your local data center management network by using a
Cisco Catalyst 4948 switch, which is a built-in component of an Exalogic rack. This switch offers 48 ports of
wire-speed 10/100/1000BASE-T with 4 alternative wired ports that can accommodate optional 1000BASE-X
Small Form-Factor Pluggable (SFP) optics. Reliability and serviceability are delivered with optional internal
AC or DC 1 + 1 hot-swappable power supplies and a hot-swappable fan tray with redundant fans.
• SDP (Sockets Direct Protocol): Applications communicate directly with the IB fabric, bypassing the operating system's TCP/IP stack.
• EoIB (Ethernet over InfiniBand): Applications within an IB fabric communicate with external Ethernet networks.
IP over InfiniBand (IPoIB) enables applications on separate devices to communicate with each other over a
private InfiniBand fabric by using native IB protocols and without the overhead of Ethernet. For example, a
compute node on one Exalogic rack may communicate with a database on an Exadata rack. However,
applications must support the SDP protocol in order to use SDP instead of the host operating system's
default TCP/IP stack.
The InfiniBand switches also act as gateways to connect to external Ethernet networks. They support eight
10 Gb Ethernet ports. Exalogic compute nodes can communicate through these ports by using Ethernet
over InfiniBand (EoIB). Each port is represented on the compute nodes as a VNIC. This allows that node's
IB connection to appear like any other Ethernet NIC to both the operating system and to the external
Ethernet network.
Slide diagram: a compute node's InfiniBand interfaces (IB0, IB1) are bonded as bond0 for IPoIB and SDP traffic on the private IB network, and as VNIC-backed bond1 for EoIB traffic through the gateway switches, alongside the eth0 management interface.
By default, each Exalogic compute node is configured with one bonded EoIB interface for one external LAN
(client network), and is named bond1. The Cisco Ethernet management switch is connected to the NET0
port of compute nodes, the NET0 port of the storage appliance, and also the management ports of the
InfiniBand gateway switches. On the compute nodes, this connection is represented on the operating
system by an “eth” network interface, such as eth0. The compute nodes are configured at the time of
manufacturing to use sideband management only. Therefore, the MGMT (or ILOM) port is not connected,
but ILOM is accessible from NET0.
Slide diagram: client traffic reaches the servers on the compute nodes through a proxy on the client network, while the compute nodes communicate with each other over the private IB network.
• Project: Default administrative and file system settings for a collection of shares
• Share: A file system mount point, access protocols, access rights, and other settings
• Pool: A group of underlying storage devices from which shared file systems allocate space
The appliance is based on the ZFS file system. ZFS groups the underlying storage devices into pools.
Shared file systems then allocate disk space from these pools. Before creating file systems, you must first
configure storage on the appliance. After a storage pool is configured, you do not have to statically size file
systems, although this behavior can be achieved by using quotas and reservations.
Each storage node can have any number of pools, and each pool can be assigned ownership independently
in a cluster. While an arbitrary number of pools is supported, creating multiple pools with the same
redundancy characteristics owned by the same cluster head is not advised. Doing so results in poor
performance, suboptimal allocation of resources, artificial partitioning of storage, and additional
administrative complexity.
The storage appliance exports file systems as shares, which are managed in this section of the appliance.
All file systems are grouped into projects. A project defines a common administrative control point for
managing shares. All shares within a project can share common settings, and quotas can be enforced at the
project level in addition to the share level. Projects can also be used solely for grouping logically related
shares, so their common attributes (such as accumulated space) can be accessed from a single point.
Slide diagram: an example project and share layout on the storage appliance, with projects such as FMW12c-1 and HR containing shares for domains (/hrdomain), Middleware (/wlserver, /jdk), nodemanager, recovery, and apps.
Most Exalogic specific WLS optimizations are not enabled by default. Some require a simple configuration
check box. But others require multiple configuration steps. For example, WLS clusters can be configured to
further improve server-to-server communication. First, you can enable multiple replication channels, which
improve network throughput among cluster members. Second, you can enable InfiniBand support via the
Sockets Direct Protocol, which reduces CPU utilization because network traffic bypasses the TCP stack.
When creating a network channel, there is not a specific protocol option available for internal cluster
replication traffic. Instead, you must configure a channel that supports the T3 protocol and specify the name
of that channel as part of a cluster’s replication settings.
1. Click Clusters and then select a cluster.
2. Click the Configuration > Replication tab.
3. Set the Replication Channel to the name of the custom network channel that you created on each
member of the cluster. As a result, these channels will be used for replication traffic instead of the
default channels on each server.
startWebLogic.sh:
WLS can use SDP to bypass the operating system and communicate directly with the InfiniBand fabric for
session replication.
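The slide's startWebLogic.sh snippet is not reproduced here; as noted later in this lesson, SDP support requires a JVM start argument. A minimal sketch (assuming your start script passes JVM arguments through JAVA_OPTIONS):
# Added to startWebLogic.sh so the JVM can use SDP on the IB fabric
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.net.preferIPv4Stack=true"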
• A single replication channel cannot use all of the available bandwidth on the IB network.
• For convenience, simply specify a port range and multiple channels will be created and
used automatically.
When members of a cluster need to communicate and replicate HTTP session data for high availability, they
use an internal Java protocol called T3. Because Exalogic uses InfiniBand, establishing individual T3
connections does not take full advantage of the available bandwidth. Instead, WLS uses parallel or
“multiplexed” connections for this inter-cluster communication.
Multiple replication channels do not need to be configured individually on each clustered server instance.
Only one replication channel with an explicit IP address needs to be configured for each server, and a
replication port range can be specified for each server. For example, the range 7001-7010 creates 10
replication channels with ports 7001 to 7010 for the given server. These channels inherit all the properties
of the configured replication channel except the listen port. The names of these channels are derived from
the name of the configured replication channel, with a numeric suffix (1, 2, 3, ...) added to each name. If
you specify a range of 10 ports, 10 additional channels are created. The public ports are the same as the
listen ports for these additional channels.
As replicated Java objects are passed across the InfiniBand network, they must be serialized and
deserialized by WLS. To avoid unnecessary processing, WLS deserializes a replicated object only if the
server from which it originated has failed.
Exabus consists of unique software, firmware, and device drivers, and is built on Oracle's QDR InfiniBand
technology. It includes a low-level Java API for the network communications layer that is specifically
optimized for Exalogic. The Exabus implementation uses RDMA (direct memory I/O) and other optimizations
for low latency communications.
• Threading: Create more threads to take advantage of the compute node's processing power.
• Network I/O: Use Exabus for tighter integration with native IB libraries, and use larger packet sizes to take advantage of the IB throughput.
• File I/O: Use larger buffers to take advantage of the IB connection to the storage appliance.
Traditional I/O employs the use of buffers to read and write data, and as this data is transferred between
operating system memory, JVM memory, and WLS memory (heap), it must be copied from buffer to buffer.
On Exalogic, this file I/O buffering is significantly reduced or eliminated.
InfiniBand supports much higher throughput rates compared to standard Ethernet, so WebLogic
automatically uses larger packet sizes to communicate with the InfiniBand fabric on Exalogic. This includes
external network communication as well as communication with other servers on other compute nodes.
To enable all general optimizations on all servers in a domain, perform the following steps:
1. Access the WebLogic Server console.
2. In the Domain Structure panel, click the name of the domain.
3. In the right panel, select the Enable Exalogic Optimizations check box.
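A WLST sketch of the same change (assuming a connected session and that the domain MBean exposes the ExalogicOptimizationsEnabled attribute):
> edit()
> startEdit()
> cd('/')
> cmo.setExalogicOptimizationsEnabled(true)
> save()
> activate()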
Refer to the Deployment Guide for a complete list of server startup arguments that individually enable or
disable specific enhancements. Here are some examples:
• -Dweblogic.ScatteredReadsEnabled=true/false
• -Dweblogic.GatheredWritesEnabled=true/false
• -Dweblogic.replication.enableLazyDeserialization=true/false
The Oracle Exadata Database Machine is an easy-to-deploy solution for hosting Oracle Database that
delivers the highest levels of database performance. The Exadata Database Machine is composed of
database servers, Oracle Exadata Storage Servers, and an InfiniBand fabric for networking all the
components. It delivers outstanding I/O and SQL processing performance for online transaction processing
(OLTP) and data warehousing.
There are two versions of the Exadata Database Machine. The Exadata Database Machine X2-2 expands
from two 12-core database servers with 192 GB of memory and three Exadata Storage Servers to eight 12-
core database servers with 768 GB of memory and 14 Exadata Storage Servers, all in a single rack. The
Exadata Database Machine X2-8 comprises two
64-core database servers with 2 TB of memory and 14 Exadata Storage Servers in a single rack.
Exadata Smart Flash Cache dramatically accelerates Oracle Database processing by speeding I/O
operations. The Flash provides intelligent caching of database objects to avoid physical I/O operations. The
Oracle database on the Database Machine is the first Flash-enabled database. Exadata storage uses an
advanced compression technology, Exadata Hybrid Columnar Compression, that typically provides 10 times
and higher levels of data compression than a traditional database server.
Slide diagram: an application on an Exalogic compute node communicates with an Exadata database over InfiniBand using IPoIB or SDP.
It is possible to connect as many as eight full racks of Exalogic hardware (or any combination of Exalogic
and Exadata configurations) without the need for any external switches. In cases where more than eight
racks of Exalogic or Exadata hardware are required, Oracle offers a choice of several high-capacity data
center switches that enable the creation of Exalogic clouds comprising hundreds of racks and tens of
thousands of processors.
Edit the default data source configuration and change all occurrences of “TCP” to “SDP” in
the URL.
startWebLogic.sh:
Oracle Net Services provides support for the Sockets Direct Protocol (SDP) for InfiniBand high-speed
networks. For example, InfiniBand can be used to connect an Exalogic machine to an Exadata machine.
SDP is characterized by short-distance, high-performance communications between multiple server
systems.
Simply connecting the machines with InfiniBand cables is not sufficient—WebLogic and RAC will still
communicate by using TCP (IPoIB). First, configure an SDP address in the listener.ora file on the
database server. Then edit your GridLink data source by using the WLS console. On the Configuration >
Connection Pool tab, locate the URL field. Replace all instances of the text PROTOCOL=TCP with
PROTOCOL=SDP. Finally, use the Control tab to restart the data source.
Note that in order to use SDP, you will also need to add a command-line argument when starting your
WebLogic Servers: -Djava.net.preferIPv4Stack=true. For example, edit startWebLogic.sh.
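As a sketch (the host, port, and service name are placeholders), a GridLink data source URL after the substitution might look like:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=SDP)(HOST=exadb01.example.com)(PORT=1522))
  (CONNECT_DATA=(SERVICE_NAME=auction.example.com)))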
Oracle Cloud
An Overview
Slide diagram: the evolution of computing toward the cloud:
• Silos computing: basic computing with dedicated physical hardware
• Grid computing: solving large problems with parallel computing
• Utility computing: offering computing resources as a metered service
• SaaS computing: network-based subscription to applications
• Cloud computing: next-generation Internet computing and next-generation data centers
Slide diagram: cloud deployment models, including hybrid cloud (an arrangement of two or more clouds) and community cloud.
Slide diagram: cloud service models. With SaaS, the provider manages the application and targets end users ("ready to wear"); with PaaS, the provider manages the platform while the consumer manages the application and its customizations.
Designed for large enterprises, allowing them to scale their computing, networking, and storage systems
up into the cloud rather than expanding their physical infrastructure.
• Allows large businesses and organizations to run their workloads, replicate their networks, and
back up their data in the cloud
Delivers modern cloud applications that connect business processes across the enterprise.
• The only cloud integrating ERP, HCM, EPM, and SCM
• Seamless coexistence with Oracle's on-premises applications
Slide: Java Cloud use cases include dev/test in the cloud, new application development, migrating applications to the cloud, and strategic outsourcing recapture. Key features:
• Fully automated and on-demand: do it yourself without IT
• Each managed server runs on a separate virtual machine
• Zero downtime during scaling: keep customers happy
• Scale data capacity and processing up or down on demand